
Article Contents

1. Introduction: what is meant by impact?
2. Why evaluate research impact?
3. Evaluating research impact
4. Impact and the REF
5. The challenges of impact evaluation
6. Developing systems and taxonomies for capturing impact
7. Indicators, evidence, and impact within systems
8. Conclusions and recommendations


Assessment, evaluations, and definitions of research impact: A review


Teresa Penfield, Matthew J. Baker, Rosa Scoble, Michael C. Wykes, Assessment, evaluations, and definitions of research impact: A review, Research Evaluation , Volume 23, Issue 1, January 2014, Pages 21–32, https://doi.org/10.1093/reseval/rvt021


This article aims to explore what is understood by the term ‘research impact’ and to provide a comprehensive assimilation of the available literature and information, drawing on global experience to understand the potential for existing methods and frameworks of impact assessment to be implemented in the UK. We then take a more focused look at the impact component of the UK Research Excellence Framework (REF) taking place in 2014, at some of the challenges of evaluating impact, and at the role that systems might play in the future in capturing the links between research and impact, along with the requirements we have for such systems.

When considering the impact that is generated as a result of research, a number of authors and government recommendations have advised that a clear definition of impact is required ( Duryea, Hochman, and Parfitt 2007 ; Grant et al. 2009 ; Russell Group 2009 ). From the outset, we note that the understanding of the term impact differs between users and audiences. There is a distinction between ‘academic impact’ understood as the intellectual contribution to one’s field of study within academia and ‘external socio-economic impact’ beyond academia. In the UK, evaluation of academic and broader socio-economic impact takes place separately. ‘Impact’ has become the term of choice in the UK for research influence beyond academia. This distinction is not so clear in impact assessments outside of the UK, where academic outputs and socio-economic impacts are often viewed as one, to give an overall assessment of value and change created through research.

For the purposes of the REF, impact is defined as ‘an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’.

Impact is assessed alongside research outputs and environment to provide an evaluation of research taking place within an institution. As such, research outputs, for example, knowledge generated and publications, can be translated into outcomes, for example, new products and services, and impacts or added value ( Duryea et al. 2007 ). Although some might find the distinction somewhat marginal or even confusing, this differentiation between outputs, outcomes, and impacts is important, and has been highlighted not only for the impacts derived from university research ( Kelly and McNicoll 2011 ) but also for work done in the charitable sector ( Ebrahim and Rangan 2010 ; Berg and Månsson 2011 ; Kelly and McNicoll 2011 ). The Social Return on Investment (SROI) guide ( The SROI Network 2012 ) suggests that ‘The language varies “impact”, “returns”, “benefits”, “value” but the questions around what sort of difference and how much of a difference we are making are the same’. It is perhaps assumed here that a positive or beneficial effect will be considered as an impact, but what about changes that are perceived to be negative? Wooding et al. (2007) adapted the terminology of the Payback Framework, developed for the health and biomedical sciences, from ‘benefit’ to ‘impact’ when modifying the framework for the social sciences, arguing that the positive or negative nature of a change is subjective and can also change with time. This is commonly illustrated with the drug thalidomide, which was introduced in the 1950s to help with, among other things, morning sickness, but was withdrawn in the early 1960s because of teratogenic effects that resulted in birth defects. Thalidomide has since been found to have beneficial effects in the treatment of certain types of cancer. Clearly the impact of thalidomide would have been viewed very differently in the 1950s compared with the 1960s or today.

When reviewing impact evaluations it is important to consider not only who evaluated the work but also the purpose of the evaluation, in order to determine the limits and relevance of an assessment exercise. In this article, we draw on a broad range of examples, with a focus on methods of evaluation for research impact within Higher Education Institutions (HEIs). As part of this review, we aim to explore the following questions:

What are the reasons behind trying to understand and evaluate research impact?

What are the methodologies and frameworks that have been employed globally to assess research impact and how do these compare?

What are the challenges associated with understanding and evaluating research impact?

What indicators, evidence, and impacts need to be captured within developing systems?

What are the reasons behind trying to understand and evaluate research impact?

Throughout history, the activities of a university have been to provide both education and research, but the fundamental purpose of a university was perhaps described in the writings of the mathematician and philosopher Alfred North Whitehead (1929):

‘The justification for a university is that it preserves the connection between knowledge and the zest of life, by uniting the young and the old in the imaginative consideration of learning. The university imparts information, but it imparts it imaginatively. At least, this is the function which it should perform for society. A university which fails in this respect has no reason for existence. This atmosphere of excitement, arising from imaginative consideration transforms knowledge.’

In undertaking excellent research, we anticipate that great things will come, and as such one of the fundamental reasons for undertaking research is that we will generate and transform knowledge that benefits society as a whole.

One might consider that by funding excellent research, impacts (including those that are unforeseen) will follow; traditionally, assessment of university research has focused on academic quality and productivity. Aspects of impact, such as the value of Intellectual Property, are currently recorded by UK universities through their Higher Education Business and Community Interaction Survey return to the Higher Education Statistics Agency; however, as with other public and charitable sector organizations, showcasing impact is an important part of attracting and retaining donors and support ( Kelly and McNicoll 2011 ).

The reasoning behind the move towards assessing research impact is undoubtedly complex, involving both political and socio-economic factors, but, nevertheless, we can differentiate between four primary purposes.

HEIs overview. To enable research organizations including HEIs to monitor and manage their performance and understand and disseminate the contribution that they are making to local, national, and international communities.

Accountability. To demonstrate to government, stakeholders, and the wider public the value of research. There has been a drive from the UK government, through the Higher Education Funding Council for England (HEFCE) and the Research Councils ( HM Treasury 2004 ), to account for the spending of public money by demonstrating the value of research to tax payers, voters, and the public in terms of socio-economic benefits ( European Science Foundation 2009 ), in effect justifying this expenditure ( Davies, Nutley, and Walter 2005 ; Hanney and González-Block 2011 ).

Inform funding. To understand the socio-economic value of research and subsequently inform funding decisions. By evaluating the contribution that research makes to society and the economy, future funding can be allocated where it is perceived to bring about the desired impact. As Donovan (2011) comments, ‘Impact is a strong weapon for making an evidence based case to governments for enhanced research support’.

Understand. To understand the methods and routes by which research leads to impacts, in order to make the most of research findings and develop better ways of delivering impact.

The growing trend for accountability within the university system is not limited to research and is mirrored in assessments of teaching quality, which now feed into evaluation of universities to ensure fee-paying students’ satisfaction. In demonstrating research impact, we can provide accountability upwards to funders and downwards to users on a project and strategic basis ( Kelly and McNicoll 2011 ). Organizations may be interested in reviewing and assessing research impact for one or more of the aforementioned purposes and this will influence the way in which evaluation is approached.

It is important to emphasize that ‘Not everyone within the higher education sector itself is convinced that evaluation of higher education activity is a worthwhile task’ ( Kelly and McNicoll 2011 ). Once plans for the new assessment of university research were released, the University and College Union ( University and College Union 2011 ) organized a petition calling on the UK funding councils to withdraw the inclusion of impact assessment from the REF proposals. This petition was signed by 17,570 academics (52,409 academics were returned to the 2008 Research Assessment Exercise), including Nobel laureates and Fellows of the Royal Society ( University and College Union 2011 ). Impact assessments raise concerns that research will be steered towards disciplines and topics in which impact is more easily evidenced and which provide economic impacts, which could subsequently lead to a devaluation of ‘blue skies’ research. Johnston (1995) notes that by developing relationships between researchers and industry, new research strategies can be developed. This raises the questions of whether UK business and industry should not themselves invest in the research that will deliver them impacts, and of who will fund basic research if not the government. Donovan (2011) asserts that there should be no disincentive for conducting basic research. By asking academics to consider the impact of the research they undertake, and by reviewing and funding them accordingly, the result may be to compromise research by steering it away from the imaginative and creative quest for knowledge. Professor James Ladyman, at the University of Bristol, a vocal opponent of awarding funding based on the assessment of research impact, has been quoted as saying that ‘…inclusion of impact in the REF will create “selection pressure,” promoting academic research that has “more direct economic impact” or which is easier to explain to the public’ ( Corbyn 2009 ).

Despite the concerns raised, the broader socio-economic impacts of research will be included and will count for 20% of the overall research assessment, as part of the REF in 2014. From an international perspective, this represents a step change in the comprehensiveness with which impact will be assessed within universities and research institutes, incorporating impact from across all research disciplines. Understanding what impact looks like across the various strands of research, and the variety of indicators and proxies used to evidence impact, will be important to developing a meaningful assessment.

What are the methodologies and frameworks that have been employed globally to evaluate research impact and how do these compare?

The traditional form of evaluation of university research in the UK was based on measuring academic impact and quality through a process of peer review ( Grant 2006 ). Evidence of academic impact may be derived through various bibliometric methods, one example of which is the h-index, which incorporates factors such as the number of publications and citations. These metrics may be used in the UK to understand the benefits of research within academia and are often incorporated into the broader perspective of impact seen internationally, for example, within Excellence in Research for Australia and STAR METRICS in the USA, in which quantitative measures are used to assess impact, for example, publications, citations, and research income. These ‘traditional’ bibliometric techniques can be regarded as giving only a partial picture of full impact ( Bornmann and Marx 2013 ), with no link to causality. Standard approaches actively used in programme evaluation, such as surveys, case studies, bibliometrics, econometrics and statistical analyses, content analysis, and expert judgment, are each considered by some ( Vonortas and Link 2012 ) to have shortcomings when used to measure impacts.
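The h-index mentioned above has a precise definition: a researcher has index h if h of their papers have at least h citations each. A minimal sketch of the calculation (our illustration only; it is not code from any of the systems named above):

```python
def h_index(citations: list[int]) -> int:
    """Return the h-index: the largest h such that h of the
    author's papers have at least h citations each."""
    # Rank papers by citation count, descending; the h-index is the
    # last rank at which the citation count still meets the rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts give an h-index of 3,
# because three papers have at least 3 citations each.
print(h_index([10, 8, 3, 1, 0]))  # -> 3
```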

Incorporating assessment of the wider socio-economic impact began with metrics-based indicators such as Intellectual Property registered and commercial income generated ( Australian Research Council 2008 ). In the UK, more sophisticated assessments of impact incorporating wider socio-economic benefits were first investigated within the fields of Biomedical and Health Sciences ( Grant 2006 ), an area of research that needed to justify the significant investment it received. Frameworks for assessing impact have been designed and are employed at an organizational level, addressing the specific requirements of the organization and its stakeholders. As a result, numerous and widely varying models and frameworks for assessing impact exist. Here we outline a few of the most notable models, which demonstrate the contrast in approaches available.

The Payback Framework is possibly the most widely used and adapted model for impact assessment ( Wooding et al. 2007 ; Nason et al. 2008 ), developed during the mid-1990s by Buxton and Hanney, working at Brunel University. It incorporates both academic outputs and wider societal benefits ( Donovan and Hanney 2011 ) to assess outcomes of health sciences research. The Payback Framework systematically links research with the associated benefits ( Scoble et al. 2010 ; Hanney and González-Block 2011 ) and can be thought of in two parts: first, a model that allows the research and subsequent dissemination process to be broken into specific components within which the benefits of research can be studied, and second, a multi-dimensional classification scheme into which the various outputs, outcomes, and impacts can be placed ( Hanney and González-Block 2011 ). The Payback Framework has been adopted internationally, largely within the health sector, by organizations such as the Canadian Institute of Health Research, the Dutch Public Health Authority, the Australian National Health and Medical Research Council, and the Welfare Bureau in Hong Kong ( Bernstein et al. 2006 ; Nason et al. 2008 ; CAHS 2009 ; Spaapen et al. n.d. ). The Payback Framework enables health and medical research and impact to be linked, and the process by which impact occurs to be traced. For more extensive reviews of the Payback Framework, see Davies et al. (2005) , Wooding et al. (2007) , Nason et al. (2008) , and Hanney and González-Block (2011) .

A very different approach, known as Social Impact Assessment Methods for research and funding instruments through the study of Productive Interactions (SIAMPI), was developed from the Dutch project Evaluating Research in Context and has a central theme of capturing ‘productive interactions’ between researchers and stakeholders by analysing the networks that evolve during research programmes ( Spaapen and van Drooge 2011 ; Spaapen et al. n.d. ). SIAMPI is based on the widely held assumption that interactions between researchers and stakeholders are an important pre-requisite to achieving impact ( Donovan 2011 ; Hughes and Martin 2012 ; Spaapen et al. n.d. ). This framework is intended to be used as a learning tool to develop a better understanding of how research interactions lead to social impact, rather than as an assessment tool for judging, showcasing, or even linking impact to a specific piece of research. SIAMPI has been used within the Netherlands Institute for Health Services Research ( SIAMPI n.d. ). ‘Productive interactions’, which can perhaps be viewed as instances of knowledge exchange, are widely valued and supported internationally as mechanisms for enabling impact and are often supported financially, for example, by Canada’s Social Sciences and Humanities Research Council, which aims to support knowledge exchange (financially) with a view to enabling long-term impact. In the UK, the Department for Business, Innovation and Skills provided funding of £150 million for knowledge exchange in 2011–12 to ‘help universities and colleges support the economic recovery and growth, and contribute to wider society’ ( Department for Business, Innovation and Skills 2012 ). While valuing and supporting knowledge exchange is important, SIAMPI perhaps takes this a step further in enabling these exchange events to be captured and analysed. One of the advantages of this method is that less input is required compared with capturing the full route from research to impact. A comprehensive assessment of impact itself is not undertaken with SIAMPI, which makes it a less suitable method where showcasing the benefits of research is desirable or where justification of funding based on impact is required.

The first attempt globally to comprehensively capture the socio-economic impact of research across all disciplines was undertaken for the Australian Research Quality Framework (RQF), using a case study approach. The RQF was developed to demonstrate and justify public expenditure on research, and as part of this framework, a pilot assessment was undertaken by the Australian Technology Network. Researchers were asked to evidence the economic, societal, environmental, and cultural impact of their research within broad categories, which were then verified by an expert panel ( Duryea et al. 2007 ), which concluded that the researchers and case studies could provide enough qualitative and quantitative evidence for reviewers to assess the impact arising from their research. To evaluate impact, case studies were interrogated and verifiable indicators assessed to determine whether research had led to reciprocal engagement, adoption of research findings, or public value. The RQF pioneered the case study approach to assessing research impact; however, with a change in government in 2007, this framework was never implemented in Australia, although it has since been taken up and adapted for the UK REF.

In developing the UK REF, HEFCE commissioned a report from RAND in 2009 to review international practice for assessing research impact and to provide recommendations to inform the development of the REF. RAND selected four frameworks to represent the international arena ( Grant et al. 2009 ). One of these, the RQF, was identified as providing a ‘promising basis for developing an impact approach for the REF’ using the case study approach. HEFCE developed an initial methodology that was then tested through a pilot exercise. The case study approach, recommended by the RQF, was combined with ‘significance’ and ‘reach’ as criteria for assessment. These criteria were also supported by a model developed by Brunel for the ‘measurement’ of impact, which used similar measures defined as depth and spread. In the Brunel model, depth refers to the degree to which the research has influenced or caused change, whereas spread refers to the extent to which the change has occurred and influenced end users. Evaluating impact in terms of reach and significance allows all disciplines of research and types of impact to be assessed side by side ( Scoble et al. 2010 ).

The range and diversity of frameworks developed reflect the variation in the purpose of evaluation, including the stakeholders for whom the assessment takes place, along with the type of impact and evidence anticipated. The most appropriate type of evaluation will vary according to the stakeholder whom we wish to inform. Studies ( Buxton, Hanney and Jones 2004 ) into the economic gains from biomedical and health sciences determined that different methodologies provide different ways of considering economic benefits. A discussion on the benefits and drawbacks of a range of evaluation tools (bibliometrics, economic rate of return, peer review, case study, logic modelling, and benchmarking) can be found in the article by Grant (2006) .

Evaluation of impact is becoming increasingly important, both within the UK and internationally, and research and development into impact evaluation continues, for example, researchers at Brunel have developed the concept of depth and spread further into the Brunel Impact Device for Evaluation, which also assesses the degree of separation between research and impact ( Scoble et al. working paper ).

Although based on the RQF, the REF did not adopt all of the suggestions held within it, for example, the option of allowing research groups to opt out of impact assessment should the nature or stage of research deem it unsuitable ( Donovan 2008 ). In 2009–10, the REF team conducted a pilot study involving 29 institutions, which submitted case studies to one of five units of assessment (in clinical medicine, physics, earth systems and environmental sciences, social work and social policy, and English language and literature) ( REF2014 2010 ). These case studies were reviewed by expert panels and, as with the RQF, they found that it was possible to assess impact and develop ‘impact profiles’ using the case study approach ( REF2014 2010 ).

From 2014, research within UK universities and institutions will be assessed through the REF; this will replace the Research Assessment Exercise, which has been used to assess UK research since the 1980s. Differences between these two assessments include the removal of indicators of esteem and the addition of assessment of socio-economic research impact. The REF will therefore assess three aspects of research:

Outputs

Impact

Environment

Research impact is assessed in two formats: first, through an impact template that describes the approach to enabling impact within a unit of assessment, and second, through impact case studies that describe the impact taking place following excellent research within a unit of assessment ( REF2014 2011a ). HEFCE initially indicated that impact should merit a 25% weighting within the REF ( REF2014 2011b ); however, this has been reduced to 20% for the 2014 REF. This reduction perhaps reflects feedback and lobbying, for example, from the Russell Group and Million+ Group of universities, who called for impact to count for 15% ( Russell Group 2009 ; Jump 2011 ), and guidance from the expert panels undertaking the pilot exercise, who suggested that during the 2014 REF impact assessment would be in a developmental phase and that a lower weighting for impact would therefore be appropriate, with the expectation that this would be increased in subsequent assessments ( REF2014 2010 ).

The quality and reliability of impact indicators will vary according to the impact we are trying to describe and link to research. In the UK, evidence and research impacts will be assessed for the REF within research disciplines. Although the range of impacts derived from research in different disciplines is likely to vary, one might question whether it makes sense to compare impacts within disciplines when the range of impact can vary enormously, for example, from business development to cultural changes or saving lives. An alternative approach was suggested for the RQF in Australia, where it was proposed that types of impact be compared rather than impacts from specific disciplines.

Providing advice and guidance within specific disciplines is undoubtedly helpful. It can be seen from the panel guidance produced by HEFCE to illustrate impacts and evidence that impact and evidence are expected to vary according to discipline ( REF2014 2012 ). Why should this be the case? Two areas of research impact, health and biomedical sciences and the social sciences, have received particular attention in the literature by comparison with, for example, the arts. Reviews and guidance on developing and evidencing impact in particular disciplines include the London School of Economics (LSE) Public Policy Group’s impact handbook (LSE n.d.), a review of the social and economic impacts arising from the arts produced by Reeves (2002) , and a review by Kuruvilla et al. (2006) on the impact arising from health research. Perhaps it is time for a generic guide based on types of impact rather than research discipline?

What are the challenges associated with understanding and evaluating research impact?

In endeavouring to assess or evaluate impact, a number of difficulties emerge, and these may be specific to certain types of impact. Given that the type of impact we might expect varies according to research discipline, impact-specific challenges present us with the problem that an evaluation mechanism may not fairly compare impact between research disciplines.

5.1 Time lag

The time lag between research and impact varies enormously. For example, the development of a spin-out can take place in a very short period, whereas it took around 30 years from the discovery of DNA before technology was developed to enable DNA fingerprinting. In developing the RQF, The Allen Consulting Group (2005) highlighted that defining a time lag between research and impact was difficult. In the UK, the Russell Group universities responded to the REF consultation by recommending that no time lag be put on the delivery of impact from a piece of research, citing examples such as the development of cardiovascular disease treatments, which take between 10 and 25 years from research to impact ( Russell Group 2009 ). To be considered for inclusion within the REF, impact must be underpinned by research that took place between 1 January 1993 and 31 December 2013, with impact occurring during an assessment window from 1 January 2008 to 31 July 2013. However, there has been recognition that this time window may be insufficient in some instances, and architecture has been granted an additional 5-year period ( REF2014 2012 ); why only architecture has been granted this dispensation is not clear, when similar cases could be made for medicine, physics, or even English literature. A recommendation from the REF pilot was that panels should be able to extend the time frame where appropriate; this, however, poses difficult decisions when submitting a case study to the REF, as to what the view of the panel will be and whether, if deemed inappropriate, the case study will be rendered ‘unclassified’.

5.2 The developmental nature of impact

Impact is not static; it will develop and change over time, and this development may be an increase or decrease in the current degree of impact. Impact can be temporary or long-lasting. The point at which assessment takes place will therefore influence the degree and significance of that impact. For example, following the discovery of a new potential drug, preclinical work is required, followed by Phase 1, 2, and 3 trials, before regulatory approval is granted and the drug is used to deliver potential health benefits. Clearly there is the possibility that the potential new drug will fail at any one of these phases, but each phase can be classed as an interim impact of the original discovery work en route to the delivery of health benefits. The time at which an impact assessment takes place will therefore influence the degree of impact observed. If impact is short-lived and has come and gone within an assessment period, how will it be viewed and considered? Again, the objective and perspective of the individuals and organizations assessing impact will be key to understanding how temporary and dissipated impact will be valued in comparison with longer-term impact.

5.3 Attribution

Impact is derived not only from targeted research but from serendipitous findings, good fortune, and complex networks interacting and translating knowledge and research. The exploitation of research to provide impact occurs through a complex variety of processes, individuals, and organizations, and therefore attributing the contribution made by a specific individual, piece of research, funding, strategy, or organization to an impact is not straightforward. Husbands-Fealing suggests that, to assist the identification of causality for impact assessment, it is useful to develop a theoretical framework to map the actors, activities, linkages, outputs, and impacts within the system under evaluation, showing how later phases result from earlier ones. Such a framework should be not linear but recursive, including elements from contextual environments that influence and/or interact with various aspects of the system. Impact is often the culmination of work within and spanning research communities ( Duryea et al. 2007 ). Concerns over how to attribute impacts have been raised many times ( The Allen Consulting Group 2005 ; Duryea et al. 2007 ; Grant et al. 2009 ), and differentiating between the various major and minor contributions that lead to impact is a significant challenge.

Figure 1, replicated from Hughes and Martin (2012) , illustrates how the ease with which impact can be attributed decreases with time, whereas the impact, or effect of complementary assets, increases. This highlights the problem that it may take a considerable amount of time for the full impact of a piece of research to develop, but that because of this time, and the increasing complexity of the networks involved in translating the research and interim impacts, the impact becomes more difficult to attribute and link back to a contributing piece of research.

Figure 1. Time, attribution, impact. Replicated from Hughes and Martin (2012).

This presents particular difficulties for research disciplines conducting basic research, such as pure mathematics, where the impact of research is unlikely to be foreseen. Research findings will be taken up in other branches of research and developed further before socio-economic impact occurs, by which point attribution becomes a huge challenge. If this research is to be assessed alongside more applied research, it is important that we are able to at least determine the contribution of basic research. It has been acknowledged that outstanding leaps forward in knowledge and understanding come from immersion in a background of intellectual thinking: that ‘one is able to see further by standing on the shoulders of giants’.

5.4 Knowledge creep

It is acknowledged that one of the outcomes of developing new knowledge through research can be ‘knowledge creep’, where new data or information become accepted and are absorbed over time. This is particularly recognized in the development of new government policy, where findings can influence policy debate and policy change without recognition of the contributing research ( Davies et al. 2005 ; Wooding et al. 2007 ). This is particularly problematic within the social sciences, where informing policy is a likely impact of research. In putting together evidence for the REF, impact can be attributed to a specific piece of research if it made a ‘distinctive contribution’ ( REF2014 2011a ). The difficulty then is how to determine what the contribution has been in the absence of adequate evidence, and how to ensure that research resulting in impacts that cannot be evidenced is still valued and supported.

5.5 Gathering evidence

Gathering evidence of the links between research and impact is a challenge not only where that evidence is lacking. The introduction of impact assessments with the requirement to collate evidence retrospectively poses difficulties because evidence, measurements, and baselines have, in many cases, not been collected and may no longer be available. Looking forward, we will be able to reduce this problem, but identifying, capturing, and storing evidence in such a way that it can be used in the decades to come is a difficulty that we will need to tackle.

Collating the evidence and indicators of impact is a significant task that is being undertaken within universities and institutions globally. Decker et al. (2007) surveyed more than 6,000 researchers at top US research institutions during 2005 and found that, on average, more than 40% of their time was spent on administrative tasks. It is desirable that the administrative load placed on researchers is limited, and therefore, to assist the tracking and collating of impact data, systems are being developed through numerous international projects, including STAR METRICS in the USA, the European Research Council (ERC) Research Information System, and Lattes in Brazil ( Lane 2010 ; Mugabushaka and Papazoglou 2012 ).

Ideally, systems within universities internationally would be able to share data, allowing direct comparisons, accurate storage of information developed in collaborations, and transfer of comparable data as researchers move between institutions. To achieve compatible systems, a shared language is required. The Common European Research Information Format (CERIF) was developed for this purpose and first released in 1991; a number of projects and systems across Europe, such as the ERC Research Information System ( Mugabushaka and Papazoglou 2012 ), are being developed to be CERIF-compatible.
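To illustrate why a shared format matters, here is a minimal sketch of exporting an internal publication record as a CERIF-style XML fragment that another compatible system could ingest. The element names are modelled loosely on the CERIF vocabulary and should be treated as illustrative, not as a reproduction of the specification:

```python
import xml.etree.ElementTree as ET

def to_cerif_like_xml(record: dict) -> str:
    """Serialize an internal publication record into a CERIF-style
    XML fragment. Element names here are modelled loosely on the
    CERIF vocabulary; check them against the current specification
    before using this for real data exchange."""
    publ = ET.Element("cfResPubl")  # a research publication entity
    ET.SubElement(publ, "cfResPublId").text = record["id"]
    title = ET.SubElement(publ, "cfTitle", cfLangCode="en")
    title.text = record["title"]
    ET.SubElement(publ, "cfResPublDate").text = record["date"]
    return ET.tostring(publ, encoding="unicode")

print(to_cerif_like_xml({"id": "pub-0001",
                         "title": "Assessing research impact",
                         "date": "2014-01-01"}))
```

The design point is that the exporting and importing systems agree on the vocabulary, not on each other's internal database schemas.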

In the UK, there have been several Jisc-funded projects in recent years to develop systems capable of storing research information, for example, MICE (Measuring Impacts Under CERIF), the UK Research Information Shared Service, and the Integrated Research Input and Output System, all based on the CERIF standard. To allow comparisons between institutions, identifying a comprehensive taxonomy of impact, and the evidence for it, that can be used universally is seen to be very valuable. However, the Achilles’ heel of any such attempt, as critics suggest, is the creation of a system that rewards what it can measure and codify, with the knock-on effect of directing research projects to deliver within the measures and categories that are rewarded.

Attempts have been made to categorize impact evidence and data; for example, the aim of the MICE project was to develop a set of impact indicators to enable impact to be fed into a CERIF-based system. Indicators were identified from documents produced for the REF, by Research Councils UK, in unpublished draft case studies undertaken at King’s College London, or outlined in relevant publications (MICE Project n.d.). A taxonomy of impact categories was then produced onto which impact could be mapped. What emerged on testing the MICE taxonomy ( Cooke and Nadim 2011 ), by mapping impacts from case studies, was that detailed categorization of impact was too prescriptive. Every piece of research results in a unique tapestry of impact, and despite the MICE taxonomy having more than 100 indicators, these did not suffice. It is perhaps worth noting that the expert panels who assessed the pilot exercise for the REF commented that the evidence provided by research institutes to demonstrate impact was ‘a unique collection’. Where quantitative data were available, for example, audience numbers or book sales, these numbers rarely reflected the degree of impact, as no context or baseline was available. Cooke and Nadim (2011) also noted that using a linear-style taxonomy did not reflect the complex networks of impacts that are generally found. The Goldsmith report ( Cooke and Nadim 2011 ) recommended making indicators ‘value free’, enabling the value or quality to be established in an impact descriptor that could be assessed by expert panels. The report concluded that general categories of evidence would be more useful, such that indicators could encompass dissemination and circulation, re-use and influence, collaboration and boundary work, and innovation and invention.
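A minimal sketch of the kind of structure the Goldsmith report points towards: the indicator records, value-free, what occurred, while a separate descriptor carries the claim of value for expert panels to judge. The four category names are the general groupings quoted above; the class layout and example data are our assumptions:

```python
from dataclasses import dataclass

# The four broad evidence groupings recommended by the Goldsmith report.
CATEGORIES = (
    "dissemination and circulation",
    "re-use and influence",
    "collaboration and boundary work",
    "innovation and invention",
)

@dataclass
class Indicator:
    category: str                  # one of CATEGORIES
    evidence: str                  # value-free statement of what occurred
    quantity: float | None = None  # raw number, if any (no value implied)

@dataclass
class ImpactDescriptor:
    claim: str                   # the narrative claim of value or quality
    indicators: list[Indicator]  # value-free evidence panels can weigh

example = ImpactDescriptor(
    claim="The exhibition changed public understanding of river restoration.",
    indicators=[
        Indicator("dissemination and circulation",
                  "Exhibition attendance recorded over six months",
                  quantity=12000.0),
        Indicator("re-use and influence",
                  "Exhibition materials cited in a council planning document"),
    ],
)
```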

While defining the terminology used to describe impact and indicators will enable comparable data to be stored and shared between organizations, we would recommend that any categorization of impacts be flexible, such that impacts arising from non-standard routes can be placed. It is worth considering the degree to which indicators are defined, and providing broader definitions with greater flexibility.

It is possible to incorporate both metrics and narratives within systems, for example, within the Research Outcomes System and Researchfish, currently used by several of the UK research councils to record impacts. Although recording narratives has the advantage of allowing some context to be documented, it may make the evidence less flexible for use by different stakeholder groups (which include government, funding bodies, research assessment agencies, research providers, and user communities), for whom the purpose of analysis may vary ( Davies et al. 2005 ). Any tool for impact evaluation needs to be flexible, such that it enables access to impact data for a variety of purposes (Scoble et al. n.d.). Systems need to be able to capture links between, and evidence of, the full pathway from research to impact, including knowledge exchange, outputs, outcomes, and interim impacts, to allow the route to impact to be traced. This database of evidence needs to establish both where impact can be directly attributed to a piece of research and the various contributions to impact made along the pathway.

Baselines and controls need to be captured alongside change to demonstrate the degree of impact. In many instances, controls are not feasible as we cannot look at what impact would have occurred if a piece of research had not taken place; however, indications of the picture before and after impact are valuable and worth collecting for impact that can be predicted.

It is now possible to use data-mining tools to extract specific data from narratives or unstructured data ( Mugabushaka and Papazoglou 2012 ). This is being done for the collation of academic impact and outputs, for example, by the Research Portfolio Online Reporting Tools, which use PubMed and text mining to cluster research projects, and by STAR METRICS in the USA, which uses administrative records and research outputs and is also being implemented by the ERC using data in the public domain ( Mugabushaka and Papazoglou 2012 ). These techniques have the potential to transform data capture and impact assessment ( Jones and Grant 2013 ), although Mugabushaka and Papazoglou (2012) acknowledge that it will take years to fully incorporate the impacts of ERC funding. For systems to be able to capture a full range of impacts, definitions and categories of impact need to be determined that can be incorporated into system development. To adequately capture the interactions taking place between researchers, institutions, and stakeholders, tools to record these interactions would be very valuable. If knowledge exchange events could be captured, for example, electronically as they occur, or automatically if flagged from an electronic calendar or diary, then far more of these events could be recorded with relative ease, greatly assisting the linking of research with impact.
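A minimal sketch of the calendar idea above: scan an iCalendar export for events the researcher has flagged as knowledge exchange and turn them into structured records. The ‘#KE’ flagging convention and the field layout are our assumptions, and a production system would use a proper iCalendar parser rather than a regular expression:

```python
import re
from dataclasses import dataclass

@dataclass
class KnowledgeExchangeEvent:
    date: str
    summary: str

# Hypothetical convention: researchers prefix flagged calendar entries
# with "#KE". Real .ics files fold long lines, so treat this regex as
# a sketch; a robust system should use a full iCalendar parser.
EVENT_RE = re.compile(
    r"BEGIN:VEVENT.*?DTSTART[^:]*:(\d{8}).*?SUMMARY:#KE (.*?)\r?\n.*?END:VEVENT",
    re.DOTALL,
)

def extract_ke_events(ics_text: str) -> list[KnowledgeExchangeEvent]:
    """Return the knowledge-exchange events flagged in a calendar export."""
    return [KnowledgeExchangeEvent(date, summary.strip())
            for date, summary in EVENT_RE.findall(ics_text)]

sample = (
    "BEGIN:VEVENT\nDTSTART;VALUE=DATE:20130115\n"
    "SUMMARY:#KE Workshop with river managers\nEND:VEVENT\n"
)
print(extract_ke_events(sample))
# -> [KnowledgeExchangeEvent(date='20130115',
#      summary='Workshop with river managers')]
```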

The transition to routine capture of impact data requires not only the development of tools and systems to help with implementation but also a cultural change, so that practices currently undertaken by a few become standard behaviour among researchers and universities.

What indicators, evidence, and impacts need to be captured within developing systems?

There is a great deal of interest in collating terms for impact and indicators of impact. The Consortia for Advancing Standards in Research Administration Information (CASRAI), for example, has put together a data dictionary with the aim of setting standards for the terminology used to describe impact and indicators, terminology that can be incorporated into systems internationally, and this effort seems to be building a certain momentum. A variety of types of indicators can be captured within systems; however, it is important that these are universally understood. Here we address the types of evidence that need to be captured to enable an overview of impact to be developed. In the majority of cases, several types of evidence will be required to provide an overview of impact.

7.1 Metrics

Metrics have commonly been used as a measure of impact, for example, in terms of profit made, number of jobs provided, number of trained personnel recruited, number of visitors to an exhibition, number of items purchased, and so on. Metrics cannot in themselves convey the full impact; however, they are often viewed as powerful and unequivocal forms of evidence. If metrics are available as impact evidence, they should, where possible, be captured along with any baseline or control data. Any information on the context of the data will be valuable to understanding the degree to which impact has taken place.
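A minimal sketch of this point: a metric stored with its baseline and context, so that degree of change rather than a bare count can be reported. The field names and figures are our assumptions, for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: float     # observed value after the research or impact
    baseline: float  # value beforehand, or from a comparable control
    context: str     # what was measured, where, and over what period

    def relative_change(self) -> float:
        """Degree of change relative to the baseline (0.5 means +50%)."""
        return (self.value - self.baseline) / self.baseline

# Illustrative: visitor numbers mean little without the pre-impact baseline.
visits = Metric(name="exhibition visitors per month",
                value=4500, baseline=3000,
                context="Museum X, 12 months either side of the 2012 exhibit")
print(f"{visits.relative_change():+.0%}")  # -> +50%
```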

Perhaps SROI indicates the desire of some organizations to demonstrate the monetary value of investment and impact. SROI aims to provide a valuation of the broader social, environmental, and economic impacts, providing a metric that can be used to demonstrate worth. It has been used within the charitable sector ( Berg and Månsson 2011 ) and also features as evidence in the REF guidance for panel D ( REF2014 2012 ). More details on SROI can be found in ‘A Guide to Social Return on Investment’ produced by The SROI Network (2012) .
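In outline, SROI expresses the present value of the monetized social, environmental, and economic benefits created per unit of investment. A minimal sketch of the headline ratio follows; the discount rate and figures are illustrative, and the SROI guide itself covers further adjustments (such as deadweight, attribution, and drop-off) that this sketch omits:

```python
def sroi_ratio(annual_benefits: list[float], investment: float,
               discount_rate: float = 0.035) -> float:
    """Headline SROI ratio: present value of the monetized benefits
    divided by the investment. Year-1 benefits are discounted once."""
    pv = sum(benefit / (1 + discount_rate) ** year
             for year, benefit in enumerate(annual_benefits, start=1))
    return pv / investment

# Illustrative only: 100k invested, 40k/year of valued benefit for
# three years at a 3.5% discount rate, gives a ratio of roughly 1.12:1.
print(round(sroi_ratio([40_000, 40_000, 40_000], 100_000), 2))  # -> 1.12
```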

Although metrics can provide evidence of quantitative changes or impacts from our research, they are unable to adequately provide evidence of the qualitative impacts that take place and hence are not suitable for all of the impact we will encounter. The main risks associated with the use of standardized metrics are that:

the full impact will not be realized, as we focus on easily quantifiable indicators;

we will focus attention on generating results that enable boxes to be ticked, rather than on delivering real value for money and innovative research;

they risk being monetized or converted into a lowest common denominator in an attempt to compare the cost of a new theatre against that of a hospital.

7.2 Narratives

Narratives can be used to describe impact; they enable a story to be told, allow the impact to be placed in context, and can make good use of qualitative information. They are often written with a reader from a particular stakeholder group in mind and will present a view of impact from a particular perspective. The risk of relying on narratives to assess impact is that they often lack the evidence required to judge whether the research and impact are appropriately linked. Where narratives are used in conjunction with metrics, a more complete picture of impact can be developed, again from a particular perspective but with the evidence available to corroborate the claims made. Table 1 summarizes some of the advantages and disadvantages of the case study approach.

Table 1. The advantages and disadvantages of the case study approach

By allowing impact to be placed in context, narratives answer the ‘so what?’ question that can result from quantitative data analyses; but is there a risk that the full picture will not be presented, in order to show impact in a positive light? Case studies are ideal for showcasing impact, but should they be used to critically evaluate impact?

7.3 Surveys and testimonies

One way in which changes of opinion and user perceptions can be evidenced is by gathering stakeholder and user testimonies or undertaking surveys. These might describe support for and development of research with end users, public engagement and evidence of knowledge exchange, or a demonstration of change in public opinion as a result of research. Collecting this type of evidence is time-consuming, and again, it can be difficult to gather the required evidence retrospectively when, for example, the appropriate user group has dispersed.

The ability to record and log these types of data is important for enabling the path from research to impact to be established, and the development of systems that can capture this would be very valuable.

7.4 Citations (outside of academia) and documentation

Citations (outside of academia) and documentation can be used as evidence to demonstrate the use of research findings in developing new ideas and products, for example. This might include the citation of a piece of research in policy documents, or references to a piece of research within the media. A collation of several indicators of impact may be enough to convince assessors that an impact has taken place. Even where we can evidence changes and benefits linked to our research, understanding the causal relationship may be difficult. Media coverage is a useful means of disseminating our research and ideas, and may be considered alongside other evidence as contributing to, or an indicator of, impact.

The fast-moving developments in the field of altmetrics (or alternative metrics) are providing a richer understanding of how research is being used, viewed, and moved. The transfer of information electronically can be traced and reviewed to provide data on where and to whom research findings are going.

The understanding of the term impact varies considerably and as such the objectives of an impact assessment need to be thoroughly understood before evidence is collated.

While aspects of impact can be adequately interpreted using metrics, narratives, and other evidence, the mixed-method case study approach is an excellent means of pulling all available information, data, and evidence together, allowing a comprehensive summary of the impact within context. While the case study is a useful way of showcasing impact, its limitations must be understood if we are to use it for evaluation purposes. The case study does present evidence from a particular perspective and may need to be adapted for use with different stakeholders. It is time-intensive to both assimilate and review case studies, and we therefore need to ensure that the resources required for this type of evaluation are justified by the knowledge gained. The ability to write a persuasive, well-evidenced case study may itself influence the assessment of impact. Over the past year, a number of new posts have been created within universities for writing impact case studies, and a number of companies now offer this as a contract service. A key concern here is that universities that can afford to employ either consultants or impact ‘administrators’ will generate the best case studies.

The development of tools and systems for assisting with impact evaluation would be very valuable. We suggest that developing systems that focus on recording impact information alone will not provide all that is required to link research to ensuing events and impacts; systems require the capacity to capture any interactions between researchers, the institution, and external stakeholders, and to link these with research findings and outputs or interim impacts to provide a network of data. In designing systems and tools for collating data related to impact, it is important to consider who will populate the database and to ensure that the time and capability required for capturing information are considered. Capturing data, interactions, and indicators as they emerge increases the chance of capturing all relevant information, and tools that enable researchers to capture much of this would be valuable. However, it must be remembered that in the case of the UK REF, only impact based on research that took place within the institution submitting the case study is considered. It is therefore in an institution’s interest to have a process by which all the necessary information is captured, enabling a story to be developed in the absence of a researcher who may have left the employment of the institution. Figure 2 demonstrates the information that systems will need to capture and link.

Research findings including outputs (e.g., presentations and publications)

Communications and interactions with stakeholders and the wider public (emails, visits, workshops, media publicity, etc.)

Feedback from stakeholders and communication summaries (e.g., testimonials and altmetrics)

Research developments (based on stakeholder input and discussions)

Outcomes (e.g., commercial and cultural, citations)

Impacts (changes, e.g., behavioural and economic)

Figure 2. Overview of the types of information that systems need to capture and link.
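Figure 2 implies a linked data model. The sketch below is a minimal illustration of such a network: each record kind mirrors one of the six information types listed above, and backward links allow the route from an impact to the underpinning research to be traced even after a researcher has left. All class and field names are our assumptions, not a specification from the REF or any system named in this article:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One captured item: a finding, interaction, feedback record,
    development, outcome, or impact (the six types in Figure 2)."""
    node_id: str
    kind: str          # e.g. "finding", "interaction", "outcome", "impact"
    description: str
    links: list[str] = field(default_factory=list)  # ids of upstream nodes

def trace_route(target: str, nodes: dict[str, Node]) -> list[str]:
    """Walk the links backwards from an impact to the research findings
    that underpin it, so a story can be assembled later."""
    route, stack = [], [target]
    while stack:
        node = nodes[stack.pop()]
        route.append(node.node_id)
        stack.extend(node.links)
    return route

store = {
    "f1": Node("f1", "finding", "Journal paper on river recovery"),
    "i1": Node("i1", "interaction", "Workshop with agency staff", ["f1"]),
    "o1": Node("o1", "outcome", "Method adopted in agency guideline", ["i1"]),
    "p1": Node("p1", "impact", "Changed monitoring practice", ["o1"]),
}
print(trace_route("p1", store))  # -> ['p1', 'o1', 'i1', 'f1']
```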

Attempting to evaluate impact to justify expenditure, showcase our work, and inform future funding decisions will only prove a valuable use of time and resources if we can take measures to ensure that assessment attempts do not ultimately have a negative influence on the impact of our research. There are areas of basic research where the impacts are so far removed from the research, or so impractical to demonstrate, that it might be prudent to accept the limitations of impact assessment and provide the potential for exclusion in appropriate circumstances.

This work was supported by Jisc [DIINN10].


  • Perspective
  • Open access
  • Published: 04 October 2019

Engaging with research impact assessment for an environmental science case study

  • Kirstie A. Fryirs (ORCID: orcid.org/0000-0003-0541-3384)
  • Gary J. Brierley (ORCID: orcid.org/0000-0002-1310-1105)
  • Thom Dixon (ORCID: orcid.org/0000-0003-4746-2301)

Nature Communications, volume 10, Article number: 4542 (2019)


Subjects: Environmental impact, Research management

An Author Correction to this article was published on 08 November 2019.

Impact assessment is embedded in many national and international research rating systems. Most applications use the Research Impact Pathway to track the inputs, activities, outputs and outcomes of an invention or initiative to assess impact beyond scholarly contributions to an academic research field (i.e., benefits to environment, society, economy and culture). Existing approaches emphasise easy-to-attribute ‘hard’ impacts, and fail to include a range of ‘soft’ impacts that are less easy to attribute, yet are often a dominant part of the impact mix. Here, we develop an inclusive 3-part impact mapping approach. We demonstrate its application using an environmental initiative.


Introduction

Universities around the world are increasingly required to demonstrate and measure the impact of their research beyond academia. The Times Higher Education (THE) World University Rankings now includes a measure of knowledge transfer and impact as an indicator of an institution’s quality, and THE released its inaugural university impact rankings in 2019. With the global rise of impact assessment, most nations adopt a variant of the Organisation for Economic Cooperation and Development (OECD) definition of impact 1 : “the contribution that research makes to the economy, society, environment or culture, beyond the contribution to academic research.” Yet research impact mapping provides benefits beyond just meeting the requirements for assessment 1 . It provides an opportunity for academics to reflect on and consider the impact their research can, and should, have on the environment, our social networks and wellbeing, our economic prosperity and our cultural identities. If considered at the development stage of research practices, the design and implementation of impact mapping procedures and frameworks can provide an opportunity to better plan for impact and create an environment where impact is more likely to be achieved.

Almost all impact assessments use variants of the Research Impact Pathway (Fig. 1) as the conceptual framework and model with which to document, measure and assess environmental, social, economic and cultural impacts of research 1 . This Pathway starts with inputs, followed by activities; outputs and outcomes are produced, and these lead to impact. Writing for Nature Outlook: Assessing Science , Morgan 2 reported on how Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO) mapped impact using this approach. However, the literature contains very few worked examples to guide academics and co-ordinators in the process of research impact mapping. This is particularly evident for environmental initiatives and innovations 3 , 4 .

Here we provide a new, 3-part impact mapping approach that can accommodate non-linearity in the impact pathway and can more broadly include and assess both ‘hard’ impacts, those that can be directly attributed to an initiative or invention, and ‘soft’ impacts, those that can be indirectly attributed to an initiative or invention. We then present a worked example for an environmental innovation called the River Styles Framework, developed at Macquarie University, Sydney, Australia. The River Styles Framework is an approach to analysis, interpretation and application of geomorphic insights into river landscapes as a tool to support management applications 5 , 6 . We document and map how this Framework has shaped, and continues to shape, river management practice in various parts of the world. Through mapping impact we demonstrate how the River Styles Framework has contributed to environmental, social and economic benefits at local, national and international scales. Cvitanovic and Hobday (2018) 3  in Nature Communications might consider this case study a ‘bright spot’ that sits at the environmental science-policy-practice interface and is representative of examples that are seldom documented.
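As a deliberately simplified illustration of the hard/soft distinction drawn above, the sketch below tags impact records by whether they can be directly attributed to an initiative. This is our illustration only, not the authors’ 3-part mapping approach, and the field names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ImpactRecord:
    description: str
    directly_attributable: bool  # can the initiative be linked as the cause?

    @property
    def kind(self) -> str:
        """'hard' impacts are directly attributable to the initiative;
        'soft' impacts (e.g. a change in practice or philosophy) are
        only indirectly attributable, yet often dominate the mix."""
        return "hard" if self.directly_attributable else "soft"

examples = [
    ImpactRecord("Framework adopted in an agency's formal procedures", True),
    ImpactRecord("Gradual shift in how practitioners think about rivers", False),
]
print([(e.kind, e.description) for e in examples])
```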

Fig. 1: The Research Impact Pathway (modified from ref. 2).

This case study is presented from the perspective of the researchers who developed the River Styles Framework, and the University Impact co-ordinator who has worked with the researchers to document and measure the impact as part of ex post assessment 1 , 7 . We highlight challenges in planning for impact, as the research impact pathway evolves and entails significant lag times 8 . We discuss challenges that remain in the mapping process, particularly when trying to measure and attribute ‘soft’ impacts such as a change in practice or philosophy, an improvement in environmental condition, or a reduction in community conflict to a particular initiative or innovation 9 . We then provide a personal perspective of the challenges faced and lessons learnt in applying and mapping research impact so that others, particularly in the environmental sciences and related interdisciplinary fields, can undertake similar exercises for their own research impact assessments.

Brief background on research impact assessment and reporting

Historical reviews of research policy record long-term shifts towards incorporating concerns for research impact within national funding agencies. In the 1970s the focus was on ‘research utilisation’ 10 ; more recently it has been on ‘knowledge mobilisation’ 11 . In each case the aim is to understand the actual manner and pathways through which research becomes incorporated into policy, and through which research has an economic, social, cultural and environmental impact. These circumstances are often far from linear, entailing multiple pathways.

Since the 1980s, higher education systems around the world have been transitioning to performance-based research funding systems (PRFS). The first application of a PRFS in a university context was the first Research Assessment Exercise (RAE) in the United Kingdom in 1986 12 . PRFS are designed to reward and perpetuate the highest quality research, presenting notionally rational criteria with which to support the most intellectually competitive institutions 13 . The United Kingdom’s (UK) RAE was replicated in Australia as the Research Quality Framework (RQF), and more recently as the Excellence in Research for Australia (ERA) assessment. In 2010, 15 countries engaged in some form of PRFS 14 . These frameworks focus almost solely on academic research performance and productivity, rather than on the contribution and impact that research makes to the economy, society, environment or culture.

In the last decade, research policy frameworks have increasingly focused on facilitating national prosperity through the transfer, translation and commercialisation of knowledge 15 , 16 , combined with the integration of research findings into government policy-making 17 . In 2009, the Higher Education Funding Council for England (HEFCE) conducted a year-long review and consultation on the structure of the Research Excellence Framework (REF) 18 . Following this review, in 2010 HEFCE commissioned a series of impact pilot studies in which 29 higher education institutions produced narrative-style case studies. The pilot covered five units of assessment: clinical medicine, physics, earth systems and environmental sciences, social work and social policy, and English language and literature 12 . These pilot studies became the basis of the REF conducted in the UK in 2014 9 , 19 , in which research impact reporting comprised 20% of the overall assessment.

In Canada, the Canadian Academy of Health Sciences (in 2009) and Research Manitoba (from 2014) developed impact frameworks and narrative outputs to evaluate the returns on investment in health research 20 , 21 . Similarly, the UK National Institute for Health Research (NIHR) regularly produces impact synthesis case studies 22 . In Ireland, in 2012, Science Foundation Ireland placed research impact assessment at the core of its scientific and engineering research vision, Agenda 2020 23 . In the United States, in 2016, the National Science Foundation, National Institutes of Health, US Department of Agriculture and US Environmental Protection Agency developed a repository of data and tools for assessing the impact of federal research and development investments 24 . In 2016–2017, the European Union (EU) established a high-level group to advise on how to maximise the impact of the EU’s investment in research and innovation, focussing on the future of funding allocation and the implementation of the remaining years of Horizon 2020 25 . In New Zealand, in 2017, the Ministry of Business, Innovation and Employment released a discussion paper proposing the introduction of an impact ‘pillar’ into the science investment system 26 . In 2020, Hong Kong will include impact assessment in its Research Assessment Exercise (RAE) for the first time 27 . Other countries, including Denmark, Finland and Israel, have scoped the use of research impact assessments of their major research programs as part of the Small Advanced Economies Initiative 28 .

In 2017, the Australian Research Council (ARC) conducted an Engagement and Impact Assessment Pilot (EIAP) 7 . While engagement is not analogous to impact, it is an evidential mechanism that elucidates the potential beneficiaries, stakeholders and partners of academic research 12 , 16 . In addition to piloting narrative-style impact case study reporting, the EIAP characterised and mapped patterns of academic engagement with end-users that create and enable research impact. The 2017 EIAP assessed one selection of disciplines for engagement and another for impact; environmental science was selected for the impact pilot. These pilots became the basis for the Australian Engagement and Impact (EI) assessment in 2018 7 , which ran in parallel with the ERA and from which the case study in this paper is drawn.

Research impact assessment does not just include ex post reporting that feeds into a national PRFS. A large component of academic impact assessment involves ex ante impact reporting in research funding applications. In both the UK and Australia, the perceived merit of a research funding application has been linked in part to its planning for, and potential to deliver, external research impact. In the UK this takes the form of a ‘Pathways to Impact’ statement (used by Research Councils UK); in Australia it is an Impact statement (used by the ARC), with a national interest statement also introduced in 2018. These statements explicitly draw on the ‘pathway to impact’ model, which assumes a simplified, direct and linear relationship between research excellence, research engagement and research impact 29 . Such ex ante impact statements can be difficult for academics, especially early career researchers, if they do not understand the process, nature and timing of impact. The same issue arises in ex post impact reporting and assessment, with many researchers finding it difficult to supply evidence that directly or indirectly links their research to impacts that may have taken decades to manifest 1 , 7 , 8 . The simplified linearity of the Research Impact Pathway model also makes it difficult to adequately represent the transformation of research into impact.

For research impact statements and assessments to be successful, researchers need to understand the patterns and pathways by which impact occurs prior to articulating how their own research project might achieve impact ex ante, or has had impact ex post. The quality of research impact assessment will improve if researchers and funding agencies understand the types and qualities of impact that can reasonably be expected to arise from a research project or initiative.

Given the growing global movement towards both ex ante and ex post research impact assessment and reporting, it is surprising that very few published examples demonstrate how to map research impact. Even in the business, economics and corporate sectors, where impact assessment and reporting are common practice 30 , 31 , 32 , few published examples exist. This hinders researchers and co-ordinators from developing a more critical and nuanced understanding of the pathways to impact model. Mapping impact networks and recording a cartography of impact for research projects and initiatives provides an appropriate basis for such work. This paper provides a new method by which this can be achieved.

The research impact pathway and impact mapping

Many impact assessment frameworks around the world share common characteristics, often structured around the Research Impact Pathway model (Fig. 1 ). This model can be identified in a series of 2009 and 2016 Organisation for Economic Cooperation and Development (OECD) reports that investigated the mechanisms of impact reporting 1 , 33 . The Research Impact Pathway is presented as a sequence of steps by which impact is realised, and it can be visualised for an innovation or initiative using an impact mapping approach. The pathway starts with inputs, which can include funding, staff, background intellectual property and support structures (e.g., administration, facilities). Inputs are followed by activities, the ‘doing’ elements: the work of discovery (i.e., research) and of translation (e.g., courses, workshops, conferences, and processes of community and stakeholder engagement).

Outputs are the results of inputs and activities. They include publications, reports, databases, new intellectual property, patents and inventions, policy briefings, media, and new courses or teaching materials. Inputs, activities and outputs can be planned and somewhat controlled by the researchers, their collaborators and their organisations (universities). Outcomes then occur under the direct influence of the researcher(s) and largely reflect intended results. They may include commercial products and licences, job creation, new contracts, grants or programs, citations of work, new companies or spin-offs, and new joint ventures and collaborations.

Impacts (sometimes called benefits) tend to occur via the uptake and use of an innovation or initiative by independent parties, under indirect (or no) influence from the original researcher(s). Impacts can be ‘hard’ or ‘soft’ and can have intended and unintended consequences. They span four main areas outside academia: environmental, social, economic and cultural. Impacts can include improvements in environmental health and quality of life, changes in industry or agency philosophy and practice, implementation or improvement of policy, improvements in monitoring and reporting, cost savings to the economy or industry, generation of a higher quality workforce, job creation, improvements in community knowledge, better inter-personal relationships and collaborations, beneficial transfer and use of knowledge, technologies, methods or resources, and reduced risk in decision-making.
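Although the paper presents the pathway diagrammatically, its staged structure lends itself to a simple data model. The sketch below is our own illustration, not code from the study; all class, field and example names are assumptions made for exposition.

```python
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    """The five stages of the Research Impact Pathway (Fig. 1)."""
    INPUT = "input"        # funding, staff, background IP, support structures
    ACTIVITY = "activity"  # research, courses, workshops, engagement
    OUTPUT = "output"      # publications, reports, patents, policy briefings
    OUTCOME = "outcome"    # products, licences, grants, collaborations
    IMPACT = "impact"      # environmental, social, economic, cultural change

@dataclass
class PathwayEvent:
    """One dated, evidence-backed entry on an impact map."""
    year: int
    stage: Stage
    description: str
    international: bool = False                   # drawn with a heavier border
    evidence: list = field(default_factory=list)  # reports, URLs, citations

# Hypothetical fragment of a map, loosely echoing the case study timeline
events = [
    PathwayEvent(1997, Stage.INPUT, "Land and Water Australia funding",
                 evidence=["grant record"]),
    PathwayEvent(2005, Stage.OUTPUT, "River Styles book published",
                 evidence=["Brierley & Fryirs (2005)"]),
    PathwayEvent(2006, Stage.IMPACT, "Framework cited in a state river policy",
                 evidence=["policy document naming River Styles"]),
]

# A 'hard' impact requires evidence that names the initiative directly
hard_impacts = [e for e in events if e.stage is Stage.IMPACT and e.evidence]
```

On this model, an entry is admitted to the map only when it carries at least one piece of evidence, mirroring the rule described for Fig. 2 below.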

The challenge: applying the research impact pathway to map impact for a case study

The River Styles Framework 5 , 34 aligns with the UN Sustainable Development Goals Life on Land and Clean Water and Sanitation, which set a 2020 target to “ensure the conservation, restoration and sustainable use of terrestrial and inland freshwater ecosystems and their services” and a 2030 target to urgently “implement integrated water resources management at all levels” 35 .

The River Styles Framework is a catchment-scale approach to the analysis and interpretation of river geomorphology 36 . It is an open-ended, generic approach for use in any landscape or environmental setting. The Framework has four stages (see refs. 5 , 37 , 38 , 39 ): (1) analysis of river types, behaviour and controls; (2) assessment of river condition; (3) forecasting of river recovery potential; and (4) vision setting and prioritisation for decision-making.

River Styles Framework development, uptake, extension and training courses have contributed to a global change in river management philosophy and practice, resulting in improved on-ground river condition, use of geomorphology in river management, and end-user professional development. Using the River Styles Framework has changed the way river management decisions are made and the level of intervention and resources required to reach environmental health targets. This has been achieved through the generation of catchment-scale and regional-level templates derived from use of the Framework 6 . These templates are integrated with other biophysical science tools and datasets to enhance planning, monitoring and forecasting of freshwater resources 6 . The Framework is based on foundation research on the form and function of streams and their interaction with the landscape through which they flow (fluvial geomorphology) 5 , 40 .

The Framework’s structure and coherence are pioneering because of its open-ended, generic approach to river analysis and interpretation. Going well beyond off-the-shelf imported manuals for river management, the Framework has been adopted because of its innovative approach to geomorphic analysis of rivers. It is tailored to the landscape and institutional context of any given place to produce scaffolded, coherent and consistent datasets for catchment-specific decision-making. Through on-ground communication of place-based results, applications of the Framework span local, state, national and international networks and initiatives. The quality of the underlying science has been key to generating the confidence required in industry and government to adopt geomorphology as a core scientific tool to support river management in a range of geographical, societal and scientific contexts 6 .

The impact of this case study spans conceptual use (shaping ways of thinking and alerting policy makers and practitioners to an issue), instrumental use (direct use of research in policy and planning decisions) and capacity building (education, training and development of end-users) 4 , 41 , 42 . The River Styles Framework has led to the establishment of new decision-making processes while also changing philosophy and practice so that on-ground impacts can be realised.

Impact does not occur at just one point in time. Rather, it comes and goes, or builds and is sustained. Representing and measuring this is challenging, particularly for an environmental case study, and especially for an initiative built around a framework, where no traditional ‘product’, ‘widget’ or ‘invention’ is produced 4 . Traditional metrics-based indicators, such as the number of lives saved or the amount of money generated, cannot be deployed for these types of case studies 4 , 9 . It is particularly difficult to unravel the commercial value and benefits of adopting and using an initiative (or framework) that is part of a much bigger, international paradigm shift in river management philosophy and practice.

Similarly, how do you measure the environmental, social, economic or cultural impacts of an initiative whose benefits can take many years (and, in the case of rivers, decades) to emerge, and how do you then attribute those impacts directly to the design, development, use and extension of that initiative in many different places at many different times? For the River Styles Framework, on-ground impacts in terms of improved river condition and recovery are occurring 43 , but other environmental, social and economic benefits may be years or decades away. Impactful initiatives often reshape the contextual setting that frames the next phase of science and management practice, which in turn has further implications for policy and institutional settings, and for societal (socio-cultural) and environmental benefits. This is currently the case in assessing the impact of the River Styles Framework.

The method: a new, 3-part impact mapping approach

Using the River Styles Framework as an environmental case study, Fig. 2 presents a 3-part impact mapping approach comprising (1) a context strip, (2) an impact map, and (3) soft impact intensity strips, which together capture the scope of the impact and the conditions under which it has been realised. This approach provides a template that can be used or replicated by others in their own impact mapping exercises 44 .

Figure 2: The research impact map for the River Styles Framework case study. The map contains three parts: a context strip, an impact map and soft impact intensity strips.

The cartographic approach to mapping impact shown in Fig. 2 provides a mechanism to display a large amount of complex information and interaction in a style that conveys an immediate snapshot of the research impact pathway, its components and associated impacts. The map can be analysed to identify patterns and interactions between components as part of ex post assessment, and as a basis for ex ante impact forecasting.

The 3-part impact map output is produced in an interactive online environment, acknowledging that impact maps are live, open-ended documents that evolve as new impacts emerge and as inputs, activities, outputs and outcomes continue. The map changes when activities, outputs or outcomes that the developers had forgotten, or considered peripheral, later re-appear as having been influential to a stakeholder, community or network not originally considered an end-user. Such activities, outputs and outcomes can be inserted into the live map to broaden its base and deepen understanding of the impact. Clicking on each icon on the map opens a pop-up bubble containing details specific to that component of the case study. This functionality can also be used to journal or archive important information and evidence in the ‘back-end’ of the map; such evidence is often required, or called upon, in research impact assessments. Figure 2 provides only a static reproduction of the map output for the River Styles Framework. The fully worked, interactive River Styles Framework impact map can be viewed at https://indd.adobe.com/view/c9e2a270-4396-4fe3-afcb-be6dd9da7a36 .

Context is a key driver of research impact 1 , 45 . Context can provide goals for research agendas and impact that feed into ex ante assessments, or provide a lens through which to analyse the conditions within which certain impacts emerged and occurred as part of ex post assessment. Part 1 of our mapping approach produces a context strip that situates the case study (Fig. 2 ). This strip documents settings outside academia before, during and throughout the case study. Context can be local, national or global, and examples can be gathered from a range of sources such as reports, the media and personal experience. For the River Styles case study, only key context moments are shown. Context here comprises the constantly changing communities of practice in global river restoration, which are driven (or inhibited) by the environmental setting (coded with a leaf symbol), policy and institutional settings (coded with a building symbol), social and cultural settings (coded with a crowd symbol) and economic settings (coded with a dollar symbol). For most case studies these extrinsic setting categories will be similar, but others can be added to this part of the map if needed.
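Continuing the illustrative sketch above (again our own assumption rather than the authors’ tooling), context-strip entries can be recorded with the same pattern, using the four symbol-coded setting categories named in the text; the example moments are hypothetical.

```python
from dataclasses import dataclass

# Symbol coding for the extrinsic setting categories on the context strip
CONTEXT_SYMBOLS = {
    "environmental": "leaf",
    "policy_institutional": "building",
    "social_cultural": "crowd",
    "economic": "dollar",
}

@dataclass
class ContextMoment:
    """A key moment outside academia that frames the case study."""
    year: int
    category: str     # one of the CONTEXT_SYMBOLS keys; more can be added
    note: str         # e.g., a policy announcement or environmental event
    source: str = ""  # report, media item or personal experience

# Hypothetical entries; real strips draw on reports, media and experience
strip = [
    ContextMoment(1994, "policy_institutional", "national water reform agenda"),
    ContextMoment(2000, "environmental", "prolonged drought in SE Australia"),
]
```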

Part 2 of our mapping approach produces an impact map based on the Research Impact Pathway (Fig. 1 ). This impact map (Fig. 2 ) documents the time-series of inputs (coded with a blue hexagon), activities (green hexagon), outputs (yellow hexagon), outcomes (red hexagon) and impacts (purple hexagon) that occurred for the case study. Heavier-bordered hexagons and intensity strips represent international aspects and uptake. To start, only the primary inputs, activities, outputs and outcomes are mapped. A hexagon appears only when there is evidence that an input, activity, output or outcome has occurred; evidence includes event advertisements, reports, publications, website mentions, funding applications, awards, personnel appointments and communications products.
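The time-series of coded hexagons can be rendered with standard plotting tools. The fragment below is an assumed visualisation, not the authors’ interactive map: it places each evidence-backed PathwayEvent from the earlier sketch on a year-by-stage grid, using matplotlib’s hexagon marker and a heavier edge for international events.

```python
import matplotlib.pyplot as plt

# Colour coding follows the hexagon legend described for Fig. 2
STAGE_COLOURS = {"input": "tab:blue", "activity": "tab:green",
                 "output": "gold", "outcome": "tab:red", "impact": "purple"}
STAGE_ROW = {"input": 0, "activity": 1, "output": 2, "outcome": 3, "impact": 4}

def plot_impact_map(events, outfile="impact_map.png"):
    """Draw each event as a colour-coded hexagon on a year-by-stage grid."""
    fig, ax = plt.subplots(figsize=(10, 3))
    for e in events:
        ax.scatter(e.year, STAGE_ROW[e.stage.value],
                   marker="h", s=400,                 # "h" is a hexagonal marker
                   color=STAGE_COLOURS[e.stage.value],
                   edgecolors="black",
                   linewidths=2.5 if e.international else 0.5)
    ax.set_yticks(range(len(STAGE_ROW)))
    ax.set_yticklabels(STAGE_ROW.keys())
    ax.set_xlabel("Year")
    ax.set_title("Impact map (static sketch)")
    fig.savefig(outfile, dpi=200, bbox_inches="tight")

plot_impact_map(events)  # `events` from the PathwayEvent sketch above
```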

However, in conducting this standard mapping exercise it soon became evident that impacts are difficult to map and attribute, particularly for an initiative with a wide range of both direct and indirect impacts. To address this, our approach distinguishes between ‘hard’ impacts, which can be directly attributed to an initiative or invention, and ‘soft’ impacts, which can only be indirectly attributed. Including soft impacts is critical: they are often an important, and sometimes dominant, part of the impact mix. Both quantitative and qualitative measures and evidence can be used to attribute hard or soft impacts; there is no one-to-one relationship in which hard impacts are measured quantitatively and soft impacts appraised qualitatively.

Hard impacts are represented as purple hexagons in the body of the impact map. For the River Styles Framework, we placed a purple hexagon on the impact map only where the impact can be ‘named’ and where there is ‘hard’ evidence (in the form of a report, policy, strategic plan or citation) that directly mentions, and therefore attributes, the impact to River Styles. Most of these are multi-year impacts; each hexagon is positioned at the first mention.

For many case studies, particularly those with impacts on the environment, society and culture, attributing impact directly to an initiative or invention is not easy or straightforward. To address this, our approach contains a third element, soft impact intensity strips (Fig. 2 ), which recognise, document, capture and map the extent and influence of impact created by an initiative or invention. Each strip is a heat intensity chart (coded as a purple bar of varying intensity), organised under the environmental, social and economic categories often used to measure Triple-Bottom-Line (TBL) benefits in sustainability and research and development (R&D) reporting (e.g., refs. 7 , 46 ). Within these broad categories, soft impacts are classified according to the dimensions of impacts of science used by the OECD 1 : environmental, societal, cultural, economic, policy, organisational, scientific, symbolic and training impacts. Each strip uses different levels of purple shading (matching the purple hexagons in the impact map) to visualise the timing and intensity of soft impacts. For the River Styles Framework, darker purple marks the impacts that have been most impactful, and the shading shows the timing of initiation, growth or step-changes in intensity of each impact, the rise and wane of some impacts, and the longevity of others. A heavy black border notes the timing of internationalisation of some impacts. The heat intensity chart was constructed by quantitatively representing qualitative sentiment in testimonials, interviews, course evaluations and feedback, surveys and questionnaires, acknowledgements and recognitions, documentation of collaborations and networks, use of River Styles concepts, and reports on the development of spin-off frameworks. Quantitative representations of qualitative sentiment were produced using time-series keyword searches and expert judgement; these are just two of the methods by which heat intensity can be measured and assigned 9 .
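One crude way to implement the time-series keyword search component is sketched below. This is a minimal illustration under our own assumptions about keyword sets and thresholds (the published strips also relied on expert judgement): it counts dimension-specific keyword mentions per year across the qualitative sources and buckets the counts into shading levels.

```python
from collections import Counter

# Hypothetical keyword sets for two of the OECD impact dimensions
DIMENSION_KEYWORDS = {
    "policy": {"policy", "plan", "strategy", "regulation"},
    "training": {"course", "workshop", "training", "professional development"},
}

def intensity_series(documents, dimension):
    """Count keyword mentions per year as a proxy for soft-impact intensity.

    `documents` is an iterable of (year, text) pairs drawn from testimonials,
    interviews, course evaluations, surveys and reports.
    """
    keywords = DIMENSION_KEYWORDS[dimension]
    counts = Counter()
    for year, text in documents:
        lowered = text.lower()
        counts[year] += sum(lowered.count(kw) for kw in keywords)
    return dict(sorted(counts.items()))

def shade(count, thresholds=(1, 3, 6)):
    """Bucket a yearly count into shading levels: 0 (none) to 3 (darkest)."""
    return sum(count >= t for t in thresholds)

docs = [(2006, "The new catchment plan cites River Styles policy advice."),
        (2012, "Two training workshops ran alongside the strategy review.")]
for year, n in intensity_series(docs, "policy").items():
    print(year, "-> intensity level", shade(n))
```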

The outcome: impact of the River Styles Framework case study

Figure 2 and its interactive online version present the impact map for the River Styles Framework initiative, and Table 1 documents the detail of the River Styles impact story from pre-1996 to post-2020. The distribution of colour-coded hexagons and the intensity of purple on the soft impact intensity strips in Fig. 2 demonstrate the development and maturation of the initiative and the emergence of its impact.

In the first phase (pre-1996 to 2002), blue input, green activity and yellow output hexagons dominate. The next phase (2002–2005) was an intensive phase of output production (yellow hexagons); it is during this phase that red outcome hexagons appear and intensify. From 2006, purple impact hexagons appear for the first time, representing hard impact outside academia, and soft impacts start to emerge more intensely (Fig. 2 ). 2008–2015 represents a phase of domestic consolidation of yellow outputs, red outcomes and purple impacts, and the start of international uptake. Some of this impact occurred under the direct influence of the developers of the River Styles Framework and some was independent of them (Fig. 1 ). Purple impact hexagons are more numerous during the 2008–2015 period and soft impacts intensify further. 2016–2018 (and beyond) represents a phase of extension into international markets, collaborations and impact (heavier-bordered hexagons and intensity strips; Fig. 2 ), while the domestic impacts that emerged most intensively post-2006 continue in the background. Green activity hexagons re-appear during this period, much as in the 1996–2002 phase, but in an international context: foundational science re-emerges, particularly through new international collaborations, while yellow outputs and red outcomes continue.

For the River Styles case study, the challenge remains one of how to adequately attribute, measure and provide evidence for soft impacts 4 that include:

  • a change in river management philosophy and practice
  • an improvement in river health and conservation of threatened species
  • the provision of an operational Framework that provides a common and consistent approach to analysis
  • the value of knowledge generation and databases for monitoring river health and informing river management decision-making for years to come
  • the integration into, and improvement in, river management policy
  • a change in prioritisation that reduces risk in decision-making and generates cost savings on-the-ground
  • professional development to produce a better trained, higher quality workforce and increased graduate employability
  • the creation of stronger networks of river professionals and a common suite of concepts that enable communication
  • more confident and appropriate use of geomorphic principles by river management practitioners
  • an improvement in citizen knowledge and reduced community conflict in river management practice

Lessons learnt by applying research impact mapping to a real case study

When applying the Research Impact Pathway and undertaking impact mapping for a case study, it becomes obvious that generating and realising impact is not a linear process, is never complete, and in many respects cannot be planned 8 , 9 , 29 . Rather, the pathway has many highways, secondary roads, intersections, some dead ends or cul-de-sacs, and many unexpected detours of interest along the way.

Cycles of input, activity, outputs, outcomes and impact occur throughout the process. There are phases where greater emphasis is placed on inputs and activities, or phases of productivity that produce outputs and outcomes, and there are phases where the innovation or initiative gains momentum and produces a flurry of benefits and impacts. However, throughout the journey, inputs, activities, outputs and outcomes are always occurring, and the impact pathway never ends. Some impacts come and go while others are sustained.

The saying “being in the right place at the right time with the right people” has some truth. Impact can be probabilistically generated ex ante by researchers regularly placing themselves and their outputs in key locations or ‘rooms’, and in ‘moments’ where the chance of non-academic translation is high 47 . Context is also critical 45 . Economic, political, institutional, social and environmental conditions need to come together if an innovation or initiative is to ‘get off the ground’, gain traction and lead to impact (e.g., Fig. 2 ). Ongoing and sustained support is vital: an innovation funded 10 years ago may not receive funding today, and an innovation funded today may not lead to impact unless the right circumstances and support are in place. This is, in part, a serendipitous process that involves the calculated creation of circumstances aligned to evoke the ‘black swan’ event of impact 48 . The ‘black swan’ effect, coined by Nassim Nicholas Taleb, is a metaphor for an unanticipated event that is reinterpreted with the benefit of hindsight, or alternatively an event that exists ‘outside the model’; black swans were presumed by Europeans not to exist until they were encountered in Australia and scientifically described in 1790. Such ‘black swan’ events are a useful device in ex post assessment for characterising the pivotal moments when a research program translates into research impact. While the exact nature of such events cannot be anticipated, researchers who understand how ‘black swan’ events occur in the context of research impact can manufacture scenarios that optimise the probability of provoking one, thereby translating their research into impact, albeit in unexpected ways. One ‘black swan’ event for the River Styles Framework occurred between 1996 and 2002 (Table 1 ). The initial motivation for developing the Framework was the inappropriate use of geomorphic principles, derived elsewhere, to address management concerns for distinctive river landscapes and ecosystems in Australia. Although initial applications and testing of the Framework were local (regional-scale), senior personnel in the original funding agency, Land and Water Australia (blue input hexagon in 1997; Fig. 2 ), advised that we make the principles generic so that the Framework could be used in any landscape setting. The impact of this ‘moment’ only became apparent much later, when the Framework was adopted to inform place-based, catchment-specific river management applications in various parts of the world.

What is often not recognised is the time lag in the research impact process 9 . Depending on the innovation or initiative, this is, at best, a decadal process. Setting the foundations for impact is of critical importance. The ‘gem of an idea’ needs to be translated into a sound program of research, testing (proof of concept), peer review and demonstration. These foundations must generate a level of confidence in the innovation or initiative before uptake occurs. A level of branding may be required to make the innovation or initiative stand out from the crowd. Drivers are required to incentivise academics, both within and beyond their university settings, encouraging them to go outside their comfort zone to apply and translate their research in ‘real-world’ settings. Maintaining passion, patience and persistence throughout the journey is one of the most hidden and unrecognised parts of this process.

Some impacts are not foreseeable and surprises are inevitable. Activities, outputs and outcomes that may initially have seemed like dead ends often re-appear in a different context or network. Other outputs or outcomes take off very quickly and are implemented with immediate impact. Catalytic moments are sometimes required for uptake and impact to be realised 8 . These surprises are particularly obvious when an innovation or initiative enters the independent uptake stage (labelled impact under indirect influence in Fig. 1 ). In this phase the originating researchers, developers or inventors are often absent from, or peripheral to, the impact process. Other people or organisations have the confidence to use the innovation or initiative (as intended or, in some cases, not as intended) and find new ways of taking the impact further. The innovation or initiative takes on a life of its own in a snowball effect. Independent uptake is not easily measured, but it is a critical indicator of impact. Unless the foundations are solid and sound, prospects for sustained impact are diminished.

The maturity and type of impact also vary from place to place and over time. This is particularly the case for innovations and initiatives where local and domestic uptake is strong but international impact lags. Some places may be well advanced on the uptake part of the impact journey, firmly embedding the benefits while developing new extensions, add-ons and spin-offs with further inputs and activities. Elsewhere, uptake will only just have begun, such that outputs and outcomes are the primary focus for now, with the aim of generating impact soon. In some instances, authorities and practitioners are either unaware of the innovation or initiative or are yet to be convinced that it is relevant and useful for their circumstances; in these places the focus is on the input and activity phases necessary to generate outputs and outcomes relevant to their situation and context. Managing this variability while maintaining momentum is critical to creating impact.

Future directions for the practice of impact mapping and assessment

The process of engaging with impact and undertaking impact mapping for an environmental case study has been a reflective, positive but challenging experience. Our example is typical of many of the issues that must be addressed when undertaking research impact mapping and assessment where both ‘hard’ and ‘soft’ impacts are generated. Our 3-part impact mapping approach helps deal with these challenges and provides a mechanism to visualise and communicate research impact to a broad range of scientists and policy practitioners from many fields, including industry and government agencies, as well as to citizens interested in learning about the tangible and intangible benefits that arise from investing in research.

Such impact mapping work cannot be undertaken quickly 44 , 45 . Lateral thinking is required about what research impact really means, moving beyond the perception in academia that outputs and outcomes equal impact 4 , 9 , 12 . This is not the case: the research impact journey does not end at outcomes. The real measure of research impact is when an initiative gains a ‘life of its own’ and is independently picked up and used for environmental, social or economic benefit in the ‘real world’. This is the point at which the original researcher(s) cease to own the entirety of the impact and instead make an ongoing contribution to vastly scaled-up sets of collective impacts that are no longer controlled by any one actor, community or network. Penfield et al. 9 relate this to ‘knowledge creep’, whereby new data, information or frameworks become accepted and are absorbed over time.

Careful consideration of how an initiative is developed, emerges and is used, and of the resulting benefits, is needed to map impact. This process, in its own right, provides solid foundations for future planning and for considering possible (or unforeseen) opportunities to develop the impact further as part of ex ante impact forecasting 1 , 44 . Its value also lies in communicating and teaching others, using worked case studies, what impact can mean, demonstrating how it can evolve and mature, and outlining the possible pathways of impact as part of ex post impact assessment 1 , 44 .

With greater emphasis being placed on impact in research policy and reporting in many parts of the world, it is timely to consider the level of ongoing support required to genuinely capture and assess impact over yearly and decadal timeframes 20 . Creating environments and cultures in which impact can be incubated, nourished and supported aids effective planning, knowledge translation and engagement. Ongoing research is required to consider, more broadly and laterally, what is measured, what indicators are used, and the evidence required to assign attribution. This remains a challenge not just for the case study documented here, but for the process of impact assessment more generally 1 , 9 . Continuous monitoring of impacts (both intended and unintended) is needed. This requires support and systems to gather, archive and track data, whether quantitative or qualitative, and to adequately build evidence portfolios 20 . A keen eye is needed to identify, document and archive evidence that may seem insignificant at the time but can lead to a step-change in impact, or to a re-appearance elsewhere on the pathway.

Impact reporting extends beyond traditional outreach and service roles in academia 16 , 19 . Despite the increasing recognition of the importance of impact and its permeation into academic lives, it is yet to be formally built into many academic and professional roles 9 . To date, the rewards are implicit rather than explicit 44 . Support is required if impact planning and reporting for assessment is to become a new practice for academics.

Managing the research impact process is vital, but it is also important to remain open to new ideas and avenues for creating impact at different stages of the process. It is important to listen and be attuned to developments outside academia, and to learn to live with the creative spark of uncertainty as we expect the unexpected!

Change history

08 November 2019

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

References

1. Organisation for Economic Cooperation and Development (OECD). Enhancing Research Performance through Evaluation, Impact Assessment and Priority Setting (Directorate for Science, Technology and Innovation, Paris, 2009). This is a ‘go-to’ guide for impact assessment in research and development, used in OECD countries.

2. Morgan, B. Income for outcome: Australia and New Zealand are experimenting with ways of assessing the impact of publicly funded research. Nat. Outlook 511, S72–S75 (2014). This Nature Outlook article reports on how Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO) mapped their research programs against impact classes using the Research Impact Pathway.

3. Cvitanovic, C. & Hobday, A. J. Building optimism at the environmental science-policy-practice interface through the study of bright spots. Nat. Commun. 9, 3466 (2018). This Nature Communications paper presents a commentary on the key principles that underpin ‘bright spots’, case studies where science and research have successfully influenced and impacted policy and practice, as a means to inspire optimism in humanity’s capacity to address environmental challenges.

4. Rau, H., Goggins, G. & Fahy, F. From invisibility to impact: recognising the scientific and societal relevance of interdisciplinary sustainability research. Res. Policy 47, 266–276 (2018). This paper uses interdisciplinary sustainability research as a centrepiece for arguing the need for alternative approaches to conceptualising and measuring impact that recognise and capture the diverse forms of engagement between scientists and non-scientists, and the diverse uses and uptake of knowledge at the science-policy-practice interface.

5. Brierley, G. J. & Fryirs, K. A. Geomorphology and River Management: Applications of the River Styles Framework. 398 pp. (Blackwell Publications, Oxford, 2005). This book contains the full River Styles Framework set within the context of the science of fluvial geomorphology.

6. Brierley, G. J. et al. Geomorphology in action: linking policy with on-the-ground actions through applications of the River Styles framework. Appl. Geogr. 31, 1132–1143 (2011).

7. Australian Research Council (ARC). EI 2018 Framework (Commonwealth of Australia, Canberra, 2017). This document and associated website contain the procedures for assessing research impact as part of the Australian Research Council Engagement and Impact process, and the national report, outcomes and impact case studies assessed in the 2018 round.

8. Matt, M., Gaunand, A., Joly, P.-B. & Colinet, L. Opening the black box of impact: ideal type impact pathways in a public agricultural research organisation. Res. Policy 46, 207–218 (2017). This article presents a metrics-based approach to impact assessment, called the Actor Network Theory approach, to systematically code variables used to measure ex post research impact in the agricultural sector.

9. Penfield, T., Baker, M. J., Scoble, R. & Wykes, M. C. Assessment, evaluations, and definitions of research impact: a review. Res. Eval. 23, 21–32 (2014). This article reviews the concepts behind research impact assessment and takes a focussed look at how impact assessment was implemented for the UK’s Research Excellence Framework (REF).

10. Weiss, C. H. The many meanings of research utilization. Public Adm. Rev. 39, 426–431 (1979).

11. Cooper, A. & Levin, B. Some Canadian contributions to understanding knowledge mobilisation. Evid. Policy 6, 351–369 (2010).

12. Watermeyer, R. Issues in the articulation of ‘impact’: the responses of UK academics to ‘impact’ as a new measure of research assessment. Stud. High. Educ. 39, 359–377 (2014).

13. Hicks, D. Overview of models of performance-based research funding systems. In Performance-based Funding for Public Research in Tertiary Education Institutions: Workshop Proceedings, 23–52 (OECD Publishing, Paris, 2010). https://doi.org/10.1787/9789264094611-en (accessed 27 Aug 2019).

14. Hicks, D. Performance-based university research funding systems. Res. Policy 41, 251–261 (2012).

15. Etzkowitz, H. Networks of innovation: science, technology and development in the triple helix era. Int. J. Technol. Manag. Sustain. Dev. 1, 7–20 (2002).

16. Perkmann, M. et al. Academic engagement and commercialisation: a review of the literature on university-industry relations. Res. Policy 42, 423–442 (2013).

17. Leydesdorff, L. & Etzkowitz, H. Emergence of a Triple Helix of university-industry-government relations. Sci. Public Policy 23, 279–286 (1996).

18. Higher Education Funding Council for England (HEFCE). Research Excellence Framework: Second Consultation on the Assessment and Funding of Research (HEFCE, London, 2009). https://www.hefce.ac.uk (accessed 12 Aug 2019).

19. Smith, S., Ward, V. & House, A. ‘Impact’ in the proposals for the UK’s Research Excellence Framework: shifting the boundaries of academic autonomy. Res. Policy 40, 1369–1379 (2011).

20. Canadian Academy of Health Sciences (CAHS). Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research (Canadian Academy of Health Sciences, Ottawa, 2009). This report presents the approach to research impact assessment adopted by the health science sector in Canada using the Research Impact Pathway.

21. Research Manitoba. Impact Framework (Research Manitoba, Winnipeg, 2012–2019). https://researchmanitoba.ca/impacts/impact-framework/ (accessed 3 June 2019).

22. United Kingdom National Institute for Health Research (UKNIHR). Research and Impact (NIHR, London, 2019).

23. Science Foundation Ireland (SFI). Agenda 2020: Excellence and Impact (SFI, Dublin, 2012).

24. StarMetrics. Science and Technology for America’s Reinvestment: Measuring the Effects of Research on Innovation, Competitiveness and Science. Process Guide (Office of Science and Technology Policy, Washington DC, 2016).

25. European Commission (EU). Guidelines on Impact Assessment (EU, Brussels, 2015).

26. Ministry of Business, Innovation and Employment (MBIE). The Impact of Science: Discussion Paper (MBIE, Wellington, 2018).

27. University Grants Committee. Panel-specific Guidelines on Assessment Criteria and Working Methods for RAE 2020 (Government of the Hong Kong Special Administrative Region, Hong Kong, 2018).

28. Harland, K. & O’Connor, H. Broadening the Scope of Impact: Defining, Assessing and Measuring Impact of Major Public Research Programmes, with Lessons from 6 Small Advanced Economies. Public issue version 2 (Small Advanced Economies Initiative, Department of Foreign Affairs and Trade, Dublin, 2015).

29. Chubb, J. & Watermeyer, R. Artifice or integrity in the marketization of research impact? Investigating the moral economy of (pathways to) impact statements within research funding proposals in the UK and Australia. Stud. High. Educ. 42, 2360–2372 (2017).

30. Oliver Schwarz, J. Ex ante strategy evaluation: the case for business wargaming. Bus. Strategy Ser. 12, 122–135 (2011).

31. Neugebauer, S., Forin, S. & Finkbeiner, M. From life cycle costing to economic life cycle assessment: introducing an economic impact pathway. Sustainability 8, 428 (2016).

32. Legner, C., Urbach, N. & Nolte, C. Mobile business application for service and maintenance processes: using ex post evaluation by end-users as input for iterative design. Inf. Manag. 53, 817–831 (2016).

33. Organisation for Economic Cooperation and Development (OECD). Fact Sheets: Approaches to Impact Assessment; Research and Innovation Process Issues; Causality Problems; What is Impact Assessment?; What is Impact Assessment? Mechanisms (Directorate for Science, Technology and Innovation, Paris, 2016).

34. River Styles. https://riverstyles.com (accessed 2 May 2019).

35. United Nations Sustainable Development Goals. https://sustainabledevelopment.un.org (accessed 2 May 2019).

36. Kasprak, A. et al. The blurred line between form and process: a comparison of stream channel classification frameworks. PLoS ONE 11, e0150293 (2016).

37. Fryirs, K. Developing and using geomorphic condition assessments for river rehabilitation planning, implementation and monitoring. WIREs Water 2, 649–667 (2015).

38. Fryirs, K. & Brierley, G. J. Assessing the geomorphic recovery potential of rivers: forecasting future trajectories of adjustment for use in river management. WIREs Water 3, 727–748 (2016).

39. Fryirs, K. A. & Brierley, G. J. What’s in a name? A naming convention for geomorphic river types using the River Styles Framework. PLoS ONE 13, e0201909 (2018).

40. Fryirs, K. A. & Brierley, G. J. Geomorphic Analysis of River Systems: An Approach to Reading the Landscape. 345 pp. (John Wiley and Sons, Chichester, 2013).

41. Meagher, L., Lyall, C. & Nutley, S. Flows of knowledge, expertise and influence: a method for assessing policy and practice impacts from social science research. Res. Eval. 17, 163–173 (2008).

42. Meagher, L. & Lyall, C. The invisible made visible: using impact evaluations to illuminate and inform the role of knowledge intermediaries. Evid. Policy 9, 409–418 (2013).

43. Fryirs, K. A. et al. Tracking geomorphic river recovery in process-based river management. Land Degrad. Dev. 29, 3221–3244 (2018).

44. Kuruvilla, S., Mays, N., Pleasant, A. & Walt, G. Describing the impact of health research: a Research Impact Framework. BMC Health Serv. Res. 6, 134 (2006).

45. Barjolle, D., Midmore, P. & Schmid, O. Tracing the pathways from research to innovation: evidence from case studies. EuroChoices 17, 11–18 (2018).

46. Department of Environment and Heritage (DEH). Triple Bottom Line Reporting in Australia: A Guide to Reporting Against Environmental Indicators (Commonwealth of Australia, Canberra, 2003).

47. Le Heron, E., Le Heron, R. & Lewis, N. Performing research capability building in New Zealand’s social sciences: capacity-capability insights from exploring the work of BRCSS’s ‘sustainability’ theme, 2004–2009. Environ. Plan. A 43, 1400–1420 (2011).

48. Taleb, N. N. The Black Swan: The Impact of the Highly Improbable, 2nd edn (Penguin, London, 2010).

49. Fryirs, K. A. & Brierley, G. J. Practical Applications of the River Styles Framework as a Tool for Catchment-wide River Management: A Case Study from Bega Catchment (Macquarie University Press, Sydney, 2005).

50. Brierley, G. J. & Fryirs, K. A. (eds) River Futures: An Integrative Scientific Approach to River Repair (Island Press, Washington, DC, 2008).

51. Fryirs, K., Wheaton, J., Bizzi, S., Williams, R. & Brierley, G. To plug-in or not to plug-in? Geomorphic analysis of rivers using the River Styles Framework in an era of big data acquisition and automation. WIREs Water https://doi.org/10.1002/wat2.1372 (2019).

52. Rinaldi, M. et al. New tools for the hydromorphological assessment and monitoring of European streams. J. Environ. Manag. 202, 363–378 (2017).

53. Rinaldi, M., Surian, N., Comiti, F. & Bussettini, M. A method for the assessment and analysis of the hydromorphological condition of Italian streams: the Morphological Quality Index (MQI). Geomorphology 180–181, 96–108 (2013).

54. Rinaldi, M., Surian, N., Comiti, F. & Bussettini, M. A methodological framework for hydromorphological assessment, analysis and monitoring (IDRAIM) aimed at promoting integrated river management. Geomorphology 251, 122–136 (2015).

55. Gurnell, A. M. et al. A multi-scale hierarchical framework for developing understanding of river behaviour to support river management. Aquat. Sci. 78, 1–16 (2016).

56. Belletti, B., Rinaldi, M., Buijse, A. D., Gurnell, A. M. & Mosselman, E. A review of assessment methods for river hydromorphology. Environ. Earth Sci. 73, 2079–2100 (2015).

57. Belletti, B. et al. Characterising physical habitats and fluvial hydromorphology: a new system for the survey and classification of river geomorphic units. Geomorphology 283, 143–157 (2017).

58. O’Brien, G. et al. Mapping valley bottom confinement at the network scale. Earth Surf. Process. Landf. 44, 1828–1845 (2019).

59. Sinha, R., Mohanta, H. A., Jain, V. & Tandon, S. K. Geomorphic diversity as a river management tool and its application to the Ganga River, India. River Res. Appl. 33, 1156–1176 (2017).

60. O’Brien, G. O. & Wheaton, J. M. River Styles Report for the Middle Fork John Day Watershed, Oregon. Ecogeomorphology and Topographic Analysis Lab, prepared for Eco Logical Research and Bonneville Power Administration. 215 pp. (Utah State University, Logan, 2014).

61. Marçal, M., Brierley, G. J. & Lima, R. Using geomorphic understanding of catchment-scale process relationships to support the management of river futures: Macaé Basin, Brazil. Appl. Geogr. 84, 23–41 (2017).


Acknowledgements

We thank Simon Mould for building the online interactive version of the impact map for River Styles, and Dr Faith Welch, Research Impact Manager at the University of Auckland, for comments on the paper. The case study documented in this paper builds on over 20 years of foundation research in fluvial geomorphology and on strong and lasting collaboration between researchers, scientists and managers at various universities and government agencies in many parts of the world.

Author information

Authors and affiliations

Department of Environmental Sciences, Macquarie University, Sydney, NSW, 2109, Australia

Kirstie A. Fryirs

School of Environment, University of Auckland, Auckland, 1010, New Zealand

Gary J. Brierley

Research Services, Macquarie University, Sydney, NSW, 2109, Australia

T. Dixon

Contributions

K.F. conceived, developed and wrote this paper. G.B., T.D. contributed to, and edited, the paper. K.F., T.D. conceived, developed and produced the impact mapping toolbox.

Corresponding author

Correspondence to Kirstie A. Fryirs .

Ethics declarations

Competing interests

K.F. and G.B. are co-developers of the River Styles Framework. River Styles foundation research has been supported through competitive grant schemes and university grants. Consultancy-based River Styles short courses taught by K.F. and G.B. are administered by Macquarie University. River Styles contract research is administered by Macquarie University and the University of Auckland. The River Styles trademark expires in May 2020. T.D. declares no conflict of interest.

Additional information

Peer review information Nature Communications thanks Barbara Belletti and Gary Goggins for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Fryirs, K.A., Brierley, G.J. & Dixon, T. Engaging with research impact assessment for an environmental science case study. Nat Commun 10 , 4542 (2019). https://doi.org/10.1038/s41467-019-12020-z


Received : 17 June 2019

Accepted : 15 August 2019

Published : 04 October 2019

DOI : https://doi.org/10.1038/s41467-019-12020-z




Looking both ways: a review of methods for assessing research impacts on policy and the policy utilisation of research

  • Robyn Newson (ORCID: orcid.org/0000-0001-7226-1726)
  • Lesley King
  • Lucie Rychetnik
  • Andrew Milat
  • Adrian Bauman

Health Research Policy and Systems, volume 16, Article number 54 (2018). Open access. Published: 25 June 2018.


Background

Measuring the policy and practice impacts of research is becoming increasingly important. Policy impacts can be measured from two directions: tracing forward from research and tracing backwards from a policy outcome. In this review, we compare these approaches and document the characteristics of studies assessing research impacts on policy and the policy utilisation of research.

Methods

Keyword searches of electronic databases were conducted in December 2016. Included studies were published between 1995 and 2016 in English and reported the methods and findings of studies measuring the policy impacts of specified health research, or research use in relation to a specified health policy outcome, or were reviews of research impact assessment methods. Using an iterative data extraction process, we developed a framework defining the key elements of empirical studies (assessment reason, assessment direction, assessment starting point, unit of analysis, assessment methods, assessment endpoint and outcomes assessed) and then documented the characteristics of the included empirical studies according to this framework.

Results

We identified 144 empirical studies and 19 literature reviews. Empirical studies were derived from two parallel streams of research of equal size, which we termed ‘research impact assessments’ and ‘research use assessments’. Both streams provided insights about the influence of research on policy and utilised similar assessment methods, but approached measurement from opposite directions. Research impact assessments predominantly utilised forward tracing approaches, while the converse was true for research use assessments. Within each stream, assessments focussed on narrower or broader research/policy units of analysis as the starting point for assessment, each with associated strengths and limitations. The two streams differed in their relative focus on the contributions made by specific research (research impact assessments) versus research more generally (research use assessments), and in the emphasis placed on research and the activities of researchers, in comparison to other factors and actors, as influencers of change.

Conclusions

The framework presented in this paper provides a mechanism for comparing studies within this broad field of research enquiry. Forward and backward tracing approaches, with their different ways of ‘looking’, tell different stories of research-based policy change. Combining approaches may provide the best way forward, both linking outcomes to specific research and providing a realistic picture of research influence.


Research evidence has the potential to improve health policy and programme effectiveness, help build more efficient health services and, ultimately, achieve better population health outcomes [ 1 ]. The translation of research evidence into health policy, programmes and services is an ongoing and commonly reported challenge [ 2 ]. If research is not translated, extensive investments in research and development potentially go to waste [ 3 ]. In response, researchers and funding bodies are being asked to demonstrate that funded research represents value for money, not only through the generation of new knowledge but also through contributions to health and economic outcomes [ 4 , 5 ]. Pressures for accountability have also led to a greater focus on evidence-informed policy-making, which calls for policy-makers to make greater use of research in policy decisions so that policies and programmes are more likely to improve population health outcomes [ 1 ].

Consequently, there has been an increasing emphasis on measuring the wider impacts of research [6] (“an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia” [7], p. 4), as well as on understanding how research is used in decision-making processes [1, 8, 9, 10, 11, 12, 13]. This literature review focuses specifically on methods for measuring the impacts of research on public policy, where policy impacts are considered intermediary outcomes between research outputs and longer-term impacts such as population health and socioeconomic changes [1]. Health policy impacts can be defined variously, but encompass indirect or direct contributions of research processes or outputs to the development of new health policy or revisions of existing health policy at various levels of governance [14]. It is proposed that the use of research to inform public policy leads to desired outcomes such as health gains [1]. Policy impacts, however, can be more easily measured and attributed to research than impacts that lie further ‘downstream’ of research outputs [1, 15].

Measuring the policy impacts of research can be approached from two directions – tracing forward from research to identify its impacts on policy and other outcomes, and tracing backwards from a policy outcome (e.g. a policy change or document) to identify whether and how research has been utilised [1, 11, 16, 17]. Several reviews have considered conceptual approaches and methods for assessing research impacts [5, 6, 16, 17, 18, 19, 20, 21, 22] and research utilisation in health policy-making [1, 11]. These reviews identify elements that characterise and differentiate assessment processes (Box 1). Existing reviews discuss empirical applications of forward tracing research impact assessments more often than backward tracing approaches.

In addition, existing reviews have only addressed the relative advantages and disadvantages of forward and backward tracing approaches to a limited degree [1, 11, 16, 17]. Forward tracing approaches are reported to be more common because they allow a more precise focus on specific research, which is important for funding bodies seeking to account for research expenditure [1, 16, 17]. However, this focus creates challenges in attributing any observed changes to the specific research under study [16], because research is usually only one factor amongst many at play in policy decisions [25]. Furthermore, where research is influential, policy decisions are usually based on the synthesis of a broad spectrum of knowledge, rather than the findings of individual studies or a specific programme of work [26]. In addition, it can be difficult to establish what would have occurred in the absence of the research under study (the counterfactual) [27]; there is no ‘control’ state against which to compare outcomes [18]. Examining the context in which policy change occurs therefore becomes important [27, 28]; however, forward tracing assessments have been criticised for failing to address the complexities involved in policy decision-making [17]. Forward tracing assessments are also subject to limitations associated with the timing of assessment, because research impacts can take a long time to occur [25]. Backward tracing approaches, on the other hand, are said to be better suited to understanding the extent and processes through which knowledge, including research, influences policy decisions [11], but they are not always able to identify the influence of specific research, or the relative degree of influence of a particular study, and their other potential limitations in terms of measuring research use are not well documented [23, 24, 29].

In this review of the literature, our aim was to document the extent and nature of studies measuring the impacts of health research on policy and to compare forward and backward tracing approaches to assessment. Firstly, we documented the characteristics of empirical studies drawn from two streams of empirical research, namely studies measuring the impacts of health research on policy and studies examining research utilisation in health policy decisions. Secondly, we developed a descriptive framework (Fig. 1) to allow structured comparisons between assessments. This framework incorporated both the key elements identified in studies described in previous reviews (Box 1) and those emerging from an iterative analysis of studies included in the current review. Thirdly, based on the reported strengths and limitations of the approaches described, we considered what may be gained or lost when different approaches are chosen, and particularly how the direction of assessment may influence the assessment findings. Finally, we sought to identify gaps in the existing literature and areas warranting further investigation. To our knowledge, this paper is the first to systematically analyse these two streams of research in relation to each other.

Fig. 1 Descriptive framework for research impact and research use assessments

Methods

This review of the literature was completed in December 2016 and examines peer-reviewed empirical studies, published between 1995 and 2016 in English, that measured the impacts of health research on policy or research use in health policy decisions. We also examined existing reviews on these topics. Our review questions were as follows:

• What are the core elements of empirical research impact or research use assessments?
• What is the extent and nature of empirical peer-reviewed research in this area of study?
• What are the advantages and disadvantages of different approaches to assessment?
• Where do the gaps in the existing literature lie and which areas warrant further investigation?

Search strategy

The review utilised an iterative process that included several steps. We initially searched electronic databases (Medline, CINAHL, EBM reviews, Embase, Google Scholar) using keyword search terms derived from research impact assessment reviews and empirical studies known to the authors (e.g. research impact, impact assessment, investment return, research payback, payback model, payback framework, societal impact, policy impact, research benefit, health research). Based on the abstracts from this search, we compiled all empirical studies that reported policy impacts in relation to health research, or research use in relation to health policy outcomes, and reviews reporting methods of research impact assessment.

After completing the above process, it was clear that the initial search had identified papers starting with research and measuring its impacts, but had been less successful in identifying papers starting with policy outcomes and measuring research use. Another search of key databases was therefore conducted using ‘research use’ search terms derived from the studies already identified on this topic (e.g. research use, research translation, evidence use, research utilisation, research evidence, evidence-based policy, knowledge utilisation, health policy). This resulted in further relevant studies being added to our list.
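To illustrate how keyword sets of this kind might be combined into Boolean database queries, a minimal sketch follows. The exact query strings used in the searches are not reported above, so the groupings and the AND/OR structure below are assumptions for illustration only.

# Illustrative only: the review's exact query strings are not reported.
impact_terms = ["research impact", "impact assessment", "investment return",
                "research payback", "societal impact", "policy impact",
                "research benefit"]
use_terms = ["research use", "research translation", "evidence use",
             "research utilisation", "knowledge utilisation",
             "evidence-based policy"]
scope_terms = ["health research", "health policy"]

def build_query(stream_terms, scope_terms):
    """OR within each concept block, AND between blocks (structure assumed)."""
    stream = " OR ".join(f'"{t}"' for t in stream_terms)
    scope = " OR ".join(f'"{t}"' for t in scope_terms)
    return f"({stream}) AND ({scope})"

print(build_query(impact_terms, scope_terms))  # initial (impact-stream) search
print(build_query(use_terms, scope_terms))     # follow-up (use-stream) search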

The reference lists of all included studies were then scanned to identify other relevant papers not found during the database search. The full texts of included studies were read to ensure they met the inclusion/exclusion criteria for the review. The search process is shown in Fig.  2 .

Fig. 2 Flow diagram of the literature search process

Inclusion criteria

In relation to our analysis of empirical studies, we only included studies where the research or health policy outcome under study was clearly defined. We excluded studies that did not report on health research or a health policy outcome. Studies that did not report methods in conjunction with results of impact or research use assessments were also excluded. In addition, we excluded studies reporting opinions about research impact or use in general, rather than measuring the impact of specific research or research use in relation to specific policy outcomes. Finally, we excluded studies examining strategies or interventions to improve research translation. As our aim was to define and report key characteristics of studies rather than synthesise study findings, studies were not included/excluded based on study quality.

Data extraction, development of the descriptive framework and categorisation of empirical studies

To analyse the included studies, we prepared a data extraction tool incorporating the key elements described in existing reviews (Box 1). The initial categories were progressively refined during the data extraction and analysis process and integrated into a comprehensive ‘research impact and research use’ assessment framework, the elements of which are described in the Results below. Data extraction was thus iterative, continuing until information from all of the empirical studies had been documented against the final framework. Studies were categorised according to the key elements of the framework based on statements made by the study authors, where possible. Where judgements were required, categorisations were discussed by the authors of this paper until a consensus was reached.
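As a concrete illustration of the extraction tool's structure, a minimal sketch follows. The field names mirror the seven framework elements listed above (assessment reason, direction, starting point, unit of analysis, methods, endpoint, outcomes assessed); the enumerated values and example entries are hypothetical, not the coding scheme actually used.

from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Direction(Enum):
    FORWARD = "tracing forward from research"
    BACKWARD = "tracing backward from a policy outcome"

@dataclass
class ExtractionRecord:
    # One record per included empirical study; fields mirror the framework.
    assessment_reason: str              # e.g. "account for research expenditure"
    direction: Direction
    starting_point: str                 # e.g. "funded research" or "policy outcome"
    unit_of_analysis: str               # e.g. "research project", "policy process"
    assessment_methods: List[str] = field(default_factory=list)
    assessment_endpoint: str = ""       # "policy impact" or "research use"
    outcomes_assessed: List[str] = field(default_factory=list)

# Hypothetical example of a categorised study
example = ExtractionRecord(
    assessment_reason="demonstrate value of research beyond academia",
    direction=Direction.FORWARD,
    starting_point="funded research",
    unit_of_analysis="research project",
    assessment_methods=["survey", "interviews", "document analysis"],
    assessment_endpoint="policy impact",
    outcomes_assessed=["instrumental use", "conceptual use"],
)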

Results

Literature search

An initial review of abstracts in electronic databases against the inclusion criteria yielded 137 papers, 34 of which were excluded after full-text review. Searches of the reference lists of the included papers identified a further 60 studies (Fig. 2). The final number of papers included in this review was therefore 163; 144 were empirical studies reporting the methods and findings of research use or research impact assessments (included in the results that follow) and 19 were reviews of the literature. A full list of the included empirical studies is provided in Additional file 1. To aid the reader in identifying studies cited as examples in the Results, citations to these studies are given as bracketed subscript numbers matching the reference numbers in Additional file 1.

Analysis of empirical studies (n = 144)

Overview of the descriptive framework and included studies

Figure  1 provides a descriptive representation of the empirical studies included in this review. It depicts the two parallel streams of research, namely studies concerned with measuring and understanding the ‘impacts of research’ (research impact assessments) and those concerned with measuring and understanding ‘research use’ in policy decisions (research use assessments). The study starting point defined whether a study was categorised as research impact or research use – research impact assessments usually started with research and traced forward to identify the benefits arising from that research; conversely, research use assessments usually started with a policy outcome and traced backwards to understand whether and how research had been used. There was a small group of ‘intersecting studies’ that drew on elements from both streams of research, and where, occasionally, research impact assessments used backward tracing approaches and research use assessments used forward tracing approaches. Assessments in both streams were based on similar theoretical concepts, utilised similar methods and had similar assessment end-points (i.e. they reported on similar outcomes). However, outcomes were reported from different perspectives depending on the direction of assessment chosen. The unit of analysis utilised in assessments varied across studies overall, ranging from a narrow focus on specific research projects or policy outputs to a broader focus on larger programmes of research or policy processes.
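The stream classification described above can be summarised in a few lines of logic. The sketch below is a simplification for illustration, using hypothetical string labels; as noted, a small number of studies combined both directions and were classed as intersecting.

def classify_study(starting_point: str, directions: set) -> str:
    """Illustrative classification by starting point and tracing direction(s)."""
    if {"forward", "backward"} <= directions:
        return "intersecting study"
    if starting_point == "research":
        return "research impact assessment"   # usually traces forward
    if starting_point == "policy outcome":
        return "research use assessment"      # usually traces backward
    return "unclassified"

print(classify_study("research", {"forward"}))
print(classify_study("policy outcome", {"backward"}))
print(classify_study("research", {"forward", "backward"}))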

Below, we describe the number and nature of the included studies according to the key elements of the framework. Table 1 provides the number of studies categorised by type of assessment, direction of assessment, unit of analysis and methods of assessment. Illustrative examples of the different types of assessments are provided in Table 2. Overall, we identified a similar number of research impact (n = 68; Table 1) and research use assessments (n = 67; Table 1), as well as a small group of intersecting studies drawing on elements of both streams of research (n = 9; Table 1).

The studies originated from 44 different countries. Three quarters (76%; n = 109) were from high-income countries, predominantly the United Kingdom (n = 38), the United States of America (n = 16), Australia (n = 15) and Canada (n = 10). In middle- to low-income countries, more research use studies than research impact studies were completed (n = 22 vs. n = 7, respectively). Most studies (81%; n = 116) were published in the last decade (2006–2016). A wide variety of research types and policy decisions were studied, as summarised in Boxes 2 and 3.

Core elements of the descriptive framework

Key drivers and reasons for assessment

The two streams of research were driven by different factors and conducted for different but related reasons. Research impact assessments were primarily driven by pressures to demonstrate that spending money on research is an appropriate use of scarce resources, while research use assessments were primarily driven by a desire to understand and improve the use of research in policy decisions so that health outcomes could be improved. Research impact assessments were most commonly conducted to demonstrate the value of research beyond the academic setting, to identify factors associated with research impact and to develop impact assessment methods (Table 3; Fig. 1). Research use assessments were most commonly conducted to understand policy processes and the use of research within them (Table 3; Fig. 1). Intersecting studies were influenced by factors consistent with both streams of research.

Direction of assessment

As depicted in Fig. 1, research impact assessments most commonly used forward tracing approaches (n = 61, Table 1; Examples A, F, Table 2), while research use assessments most commonly used backward tracing approaches (n = 63; Examples I–Q, Table 2). However, several groups of studies deviated from this pattern. Firstly, a few research impact assessments used a backward tracing approach (n = 7; Table 1), for example, starting with a group of related policy documents and tracing backwards from these to identify the use of specific research outputs as an indication of research impact [26, 64, 110], or tracing the origins of research (country of origin, research funder, type of research) cited in clinical guidelines to identify research that had been impactful [41, 70, 76, 77] (Example H, Table 2). These backward tracing studies involved a systematic analysis of a group of policy documents from a given policy area, rather than examining single policy documents to corroborate claimed impacts, as was common in forward tracing research impact assessments.

Secondly, there were a few studies whose reasons for assessment were more consistent with the research use group, but which used a forward tracing approach. These studies traced forward from specific research outputs but assessed whether and how these had been used by a specific policy community that had commissioned, or was mandated to consider, the research under study (n = 4, Table 1; Example G, Table 2) [20, 22, 23, 30]. Individual research user and agency characteristics associated with research use were assessed, as well as characteristics of the research itself. Only policy-makers were interviewed or surveyed, which was unusual for forward tracing assessments, and some assessments involved an element of evaluation or audit of the policy-makers’ responsibilities to consider evidence.

Finally, there was a group of studies sitting at the intersection of the two streams of research that utilised a combination of forward and backward tracing approaches (n = 9; Table 1). In some cases, the study authors were explicit about their intention to utilise both forward and backward tracing approaches, aiming to triangulate data from both to produce an overall picture of research impact – for example, tracing forward from a programme of research to identify impacts while also analysing a group of policy documents to identify how the programme of research had influenced policy [11], or tracing forward from the activities of researchers to identify impacts while also analysing a policy process linked to the research [88] (Examples R, S, Table 2). These studies drew mainly on elements consistent with the research impact literature. Other intersecting studies were more difficult to classify, as they focussed on the interface between a specific research output and a specific policy outcome, examining both the research production and dissemination process and the policy decision-making process (Example T, Table 2) [19, 31, 63, 127].

Unit of analysis

The unit of analysis for studies starting with research ranged from discrete research projects, with a defined start, end-point and limited set of findings, to increasingly larger programmes of work representing multiple research studies linked through the researcher, research topic area or research funder. Thus, we classified studies (Fig. 1; Table 4) in terms of whether the unit of analysis was a research project (Examples A, B, Table 2), programme of research (Example D, Table 2), research centre (Example E, Table 2), or portfolio of research (Example F, Table 2). Research projects were the most common unit of analysis (n = 52; Table 1).

The unit of analysis for assessments starting with a policy outcome included (Fig. 1; Table 4) a group of policy documents or the process for developing a specific document or documents (Examples H–J, Table 2), decision-making committees, where the committee itself and the decisions made over a period of time were under study (Examples K–L, Table 2), and policy processes, where a series of events and debate over time, culminating in a decision to implement or reject a course of policy action, was studied (Examples M–Q, Table 2). Policy processes were the most common unit of analysis (n = 49; Table 1).

Several studies compared the impacts of different types of research grants (e.g. project, fellowship, research centre grants) and thus included more than one unit of analysis [14, 55, 141] . The same was true for studies adopting both forward and backwards tracing approaches, where the starting point for assessment was both a specific research project or programme and a specific policy outcome or process [11, 19, 31, 63, 88, 127] .

Theories and conceptual models underpinning assessments

It was common for studies in our sample to draw on existing models and theories of research use and policy-making [1, 11, 12]. These were used to form conceptual frameworks and to organise assessments and discussions around the nature of the research use or impacts identified. As well as drawing on this broad base of literature, studies often utilised a specific framework to structure data collection and analysis and to facilitate comparisons between cases (Fig. 1). Specific frameworks were utilised more often in research impact assessments than in research use assessments (n = 46 vs. n = 23, respectively; Table 1).

The frameworks used in the research impact assessments most commonly provided a structure for examining multiple categories or types of impact, and sometimes included more detailed case studies of how and why impacts occurred. The Payback Framework [30] was the most commonly used framework of this nature (n = 23). The elements and categories of the Payback Framework seek to capture the diverse ways in which impacts arise, including the interactions between researchers and end-users across different stages of the research process and the feedback loops connecting stages [6, 20]. Other similar frameworks included the Research Impact Framework [31], the Canadian Academy of Health Sciences impact framework [32] and frameworks that combined these existing approaches [11, 16, 85]. In addition, some studies used frameworks based on a logic model approach to describe the intended outputs, outcomes and impacts of a specific portfolio of research, sometimes including multiple categories of impact [44, 78, 98, 114] or focussing on policy impacts alone [99, 100]. Finally, there were several examples of studies utilising frameworks based on contribution analysis, an approach to exploring cause and effect by assessing the contribution a programme is making to observed results [33]. Such frameworks emphasise the networks and relationships associated with research production and focus on the processes and pathways that lead to impact rather than on outcomes [27, 33]. Examples included frameworks that prompted the evaluation of research dissemination activities to measure changes in the awareness, knowledge, attitudes and behaviours of target audiences as precursors to impact [88]; models focussed on actor scenarios or productive interactions, prompting examination of the pathways through which actors linked to research, and distal to it, took up the research findings (Contribution Mapping [34]) [57, 69, 87]; and frameworks prompting an analysis of network interactions and flows of knowledge between knowledge producers and users [84]. Most frameworks utilised in research impact assessments depicted a linear relationship between research outputs and impacts (that is, simple or direct links from research to policy), albeit with feedback loops between policy change and knowledge production. Research impact studies rarely utilised frameworks depicting the relationship between contextual factors and research use [84, 133].

By contrast, contextual factors featured strongly in the models and frameworks utilised in the research use assessments examined. Research use frameworks most commonly provided a mechanism for understanding how issues entered the policy agenda or how policy decisions were made. Dynamic and multidirectional interactions occurring between the policy context, actors and the evidence were emphasised, thus providing a structure for examining the factors that were influential in the policy process. Many examples were utilised, including Kingdon’s Multiple Streams Theory [35], Walt and Gilson’s Health Policy Analysis Triangle [36], Dobrow’s framework for context-based evidence-based decision-making [37], Lomas’s framework for contextual influences on the decision-making process [26], and the Overseas Development Institute’s Research and Policy in Development (RAPID) framework [38]. In addition, models provided a structure for analysis according to different stages of the policy process [2, 61, 122] or different types of research use (e.g. conceptual, symbolic and instrumental research use [19], the research use continuum [11]) [4, 61]. Finally, evidence typologies were sometimes used to structure assessments so that the use of research evidence could be compared with the use of information from other sources [9, 143].

Intersecting studies utilised frameworks that focussed on the research–policy interface depicting the links or interactions occurring between researchers and policy-makers during research and policy development [19, 124, 129] . There were also examples of models depicting channels for knowledge diffusion [63, 127] .

Methods of assessment

Data sources

We found that similar data sources were used in both research impact and research use assessments (Fig. 1), including interviews, surveys, policy documents, focus groups/discussion groups, expert panels, literature reviews, media sources, direct observation and bibliometric data. Research impact assessments also utilised healthcare administrative data [64, 78, 119] and routinely collected research impact data (e.g. ResearchFish [39] [43], Altmetrics [40] [12], UK Research Excellence Framework case studies [41] [42]).

Triangulation of data and case studies

Most studies triangulated data from multiple sources, often in the form of case studies (Table 1). Research use assessments were more likely to describe single case studies than research impact assessments, in which multiple case study designs were more common (Table 1). Data was most commonly sourced from a combination of interviews, surveys and document analysis for research impact assessments, and from interviews and document analysis for research use assessments. Research impact assessments often combined a larger survey with a smaller number of case studies to obtain breadth as well as depth of information. Surveys were rarely used in research use assessments [22, 23, 33, 74, 135].

Cases for both research impact and research use studies were most often purposively selected: on the likelihood of impacts having occurred for research impact assessments, and on known research use or to illustrate a point (e.g. delay in research uptake, influence of various actors) for research use assessments. Exceptions included assessments where a whole-of-sample [16, 85, 95, 115] or stratified sampling approach [37, 54, 66, 74, 107, 140] was adopted.

Scoring of impacts and research use

In some research impact and research use assessments, a process for scoring impacts or research use was utilised, usually to compare cases [2, 7, 10, 14, 16, 17, 37, 42, 50, 54, 62, 64, 79, 90, 91, 97, 107, 113, 115, 117, 140, 141] . Examples of the scoring criteria used for each group are provided in Table 5 .
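Table 5 is not reproduced here, and the criteria varied between studies, so the following is purely an illustrative sketch of the general approach: assigning each case an ordinal score according to the strength of evidence of research influence. The levels and evidence flags below are hypothetical.

# Hypothetical ordinal scale; the actual criteria varied between studies.
LEVELS = {
    0: "no evidence of influence on the policy outcome",
    1: "research cited or acknowledged in policy documents",
    2: "research informed policy debate or agenda setting",
    3: "research directly applied in the policy decision",
}

def score_case(evidence: dict) -> int:
    """Return the highest scoring level supported by the recorded evidence."""
    if evidence.get("direct_application"):
        return 3
    if evidence.get("informed_debate"):
        return 2
    if evidence.get("cited_in_documents"):
        return 1
    return 0

cases = {
    "case A": {"cited_in_documents": True, "informed_debate": True},
    "case B": {"direct_application": True},
    "case C": {},
}
for name, evidence in cases.items():
    level = score_case(evidence)
    print(f"{name}: {level} ({LEVELS[level]})")  # compare cases on a common scale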

Study participants

Where respondents were surveyed or interviewed, research impact assessments tended to focus on the perspectives of researchers (Table 1), most commonly questioning researchers about the impacts of their own research, along with end-users directly linked to the research or researchers under study. Research use assessments tended to draw on the views of a wider range of stakeholders (Table 1), and where researchers were interviewed, they were often interviewed as experts/advisors rather than about the role played by their own research.

Data analysis methods used

As most of the data collected in both research impact and research use studies was qualitative in nature, qualitative methods of analysis or basic descriptive statistics were most commonly used. However, some studies employed more complex statistical analyses of quantitative data, for example, logistic or linear regression analyses to determine which variables were associated with research impact [73, 140], research use by policy-makers [20, 22, 23] or policy decision-making [17, 79]. In addition, one study used network analysis to explore the nature and structure of interactions and relationships between the actors involved in policy networks [118].
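As an illustration of the regression-based analyses mentioned above, the sketch below fits a logistic regression on synthetic data. The predictor names (engagement, dissemination) and the data itself are hypothetical, chosen only to show the shape of such an analysis, not to reproduce any included study.

# Illustrative logistic regression on synthetic data; predictors hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
engagement = rng.integers(0, 10, n)     # e.g. count of policy-maker interactions
dissemination = rng.integers(0, 5, n)   # e.g. count of targeted briefings
log_odds = -2.0 + 0.4 * engagement + 0.3 * dissemination
impact = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))  # 1 = impact reported

X = sm.add_constant(np.column_stack([engagement, dissemination]))
result = sm.Logit(impact, X).fit(disp=False)
print(result.summary(xname=["const", "engagement", "dissemination"]))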

Retrospective versus prospective data collection

Most assessments collected data retrospectively, that is, some time after the research findings were available (from 2 to 20 years) for forward tracing assessments, or after a policy outcome had occurred for backward tracing assessments. Prospective data collection (i.e. data collected during and immediately after research completion for forward tracing assessments, or during policy development for backward tracing assessments) was rare [19, 49, 103, 137].

End-point for assessment

Depending on the starting point for assessment, the end-point of assessment was a description of either policy impact or research use (Fig. 1). Intersecting studies reported how specific research was used in relation to a specific policy outcome. Definitions of what constituted a ‘policy impact’ or ‘research’ differed between studies.

Definitions of policy impact

For studies commencing with research, not all studies explicitly defined what was considered a policy impact, instead describing any changes attributable to the research. Where definitions were provided, some required evidence of the explicit application of research in policy decisions; that is, the research directly influenced the policy outcome in some way [16]. There were also examples where incremental steps on the pathway to policy impact, such as changes in policy-makers’ awareness and knowledge, or process measures (e.g. interaction (participation on an advisory committee) or dissemination (presentation of research findings to policy-makers)) [88], were counted as impacts. Here, such process measures were seen as “a more practical way of looking at how research interacts with other drivers to create change” rather than looking “for examples of types of outputs or benefits to specific sectors” ([42] p. 12). In addition, a shift in language and focus from ‘attribution’ to ‘contribution’ was promoted by some authors to convey that research was only one factor amongst many influencing outcomes [57, 69, 87, 88]. Some studies reported policy impacts alone, while others reported multiple categories of impact. Where multiple categories were reported, impacts were not always categorised in the same way, so that what was considered a policy impact in one study would have fallen under a different category in another (e.g. policy impact vs. health services impact) [16, 114, 140].

Definitions of research

Conversely, for studies commencing with a policy outcome, not all studies provided a definition of what constituted ‘research’ in the assessment, instead summarising the relevant scientific literature to provide an overview of the research available to policy-makers [8, 13, 18]. Where definitions were provided, some studies used narrower definitions, such as ‘citable’ academic research only [74], as opposed to broader definitions in which ‘any data’ that played a role in shaping or driving policy change counted as research [4]. Other authors defined a specific type of research to be identified in the assessment (e.g. economic analyses [49, 103, 137], research on patient preferences [130], evidence of effect and efficiency [37, 101]). Most authors of research use studies explicitly recognised that research was only one source of information considered by policy-makers. Some studies explored the use of various types of information (e.g. contextual socio-political information, expert knowledge and opinion, policy audits, syntheses, reviews, economic analyses [9]), as well as research (e.g. scientific literature [9]). In addition, some studies included research within a broader definition of ‘evidence’ alongside other information sources (e.g. counting research study results, findings from monitoring and evaluation studies and population-based surveys, Ministry of Health reports, community complaints and clinical observations as ‘evidence’ used during policy-making [90]). Finally, there were examples of research being distinguished in terms of local and international sources [8, 61].

Common outcomes reported

Despite the differing trajectories of assessment, research impact and research use assessments reported similar types of outcomes (Fig. 1), although the discussion was framed in different ways. For example, qualitative methods were utilised in both research impact and research use assessments to describe the impacts that occurred or how research had been used. Authors from both groups described outcomes in terms of conceptual, symbolic or instrumental uses of research [4, 19, 20, 44, 61, 72, 74, 129, 133, 143], direct/explicit and indirect impacts/uses of research [42, 74, 87], or research use according to the stage of the policy process at which the use occurred [61, 75, 85, 100, 122] (Box 4). Other assessments adopted a quantitative approach, summing impacts or research use across units of analysis to produce an overall measure of impact for a research portfolio or area of research [50, 55, 73, 141], or for policy domains as a benchmark of research use in that policy area [40, 68, 143].

In tackling the question of what is needed to facilitate research utilisation and research impact, both research impact and research use studies reported on the processes and pathways through which research was utilised and the factors associated with research use. Studies from both groups also focussed on the role played by various actors in the policy process. Research impact assessments tended to focus on research and researchers as facilitators of impact, commonly examining the dissemination, engagement activities, networks and other characteristics of specific researchers when considering impact pathways and factors associated with impact. Study participants were usually linked in some way to the research or researchers under study and commonly provided a perspective on the context surrounding the uptake of that research, rather than being asked about the policy context more broadly (e.g. sociocultural, political and economic factors and other information sources influencing the policy process).

In contrast, research use assessments generally examined the role played by a wide range of actors in the policy process (e.g. politicians, policy-makers, service providers, donors, interest groups, communities, researchers) and the links between the research and policy interface (e.g. networks, information exchange activities, capacity-building activities, research dissemination activities, partnerships). Variables associated with policy-makers and policy organisations (e.g. culture, ideologies, interests, beliefs, experience), as well as with the research and researchers, were examined. In addition, assessments tended to adopt a broader approach when examining the policy context, considering research alongside a range of other influencing factors.

Discussion

In this paper, we provide a framework for categorising the key elements of two parallel and sometimes intersecting streams of research – studies assessing the policy impacts of research and studies assessing research use in policy processes. Within the studies examined, research impact assessments were primarily conducted to demonstrate the value of research in terms of producing impacts beyond the academic setting; this information was important for grant funding bodies seeking to account for research expenditure. As such, research impact assessments focussed on research, identifying impacts that could be attributed to specific research projects or programmes and the mechanisms or factors associated with achieving these impacts. Such studies predominantly used forward tracing approaches, with research projects (the most common unit of grant funding) as the unit of analysis. Research use assessments, on the other hand, were conducted with a view to improving policy outcomes by identifying ways in which research use could be enhanced. Here, the assessments most commonly focussed on understanding policy processes and whether and how research was used within them, along with the mechanisms and factors that facilitated research use. Thus, backward tracing approaches predominated, starting with a specific policy outcome and utilising a policy analysis frame to consider the influence of research alongside other factors. The approach to assessment influenced the nature of the findings, so the respective strengths and limitations of each should be considered.

Strengths and limitations of approaches

The main difference between the research impact and research use studies we considered was the relative focus on the influence of ‘specific research’ in relation to a policy outcome. Research impact assessments focused on specific pieces or bodies of research so that observed effects could be linked to grant funding, researchers or research groups [17]. While research projects were most commonly assessed, we encountered examples where the unit of analysis was broadened to include larger programmes of research, in part to overcome problems attributing impacts to single projects within a researcher’s larger body of work. However, this did not overcome the problem of separating the influence of this research from that conducted by others in the same field [46, 47]. Broadening the unit of analysis also created problems in defining the scope of assessment, in terms of where the programme of research started and ended, as research generally builds on earlier work, including the programme’s own [46, 47]. In addition, the larger the programme of research under study, the more diffuse its impacts became, making them more difficult to identify and attribute to individuals or groups of researchers, and certainly to funding bodies [48, 49, 50].

The research use assessments, on the other hand, tended to examine the role played by research in more general terms rather than attempting to determine the contribution made by specific research projects or programmes. Indeed, such assessments often highlighted the relationships between related or conflicting programmes of research, local and international research, and other sources of information (e.g. expert opinion, practice-based knowledge). There were also examples of research use assessments that examined the use of ‘evidence’ without separating the influence of research from other information sources (e.g. scientific research, population surveys, administrative data and reports, community complaints, clinical/expert opinion). These differences raise the question of whether a single research project is a valid unit of analysis [17, 26] and what unit of analysis is most appropriate. While it might be useful to focus on specific research for accountability purposes and ease of measurement, the use of information assimilated from multiple sources is consistently reported as closer to the reality of how knowledge enters the policy debate and contributes to policy outcomes [45].

Different approaches to assessment also give rise to a differential emphasis on the role of research in policy decisions and the relevance of context [27, 42]. The research impact assessments we examined tended to focus on why impacts occurred (or did not occur) and the contextual factors associated with research uptake, rather than adopting a wider frame to examine other factors and information sources that may have been influential. Focusing on research uptake may mean that details of the policy story are missed and the influence of the research overstated [17], whereas research use assessments commonly sought to understand the relationship between the various factors involved in decision-making and the role played by research within this mix. Tracing backwards to examine the policy process in this way is likely to provide a more accurate picture of research influence [11]. However, this depended on the unit of policy analysis chosen for assessment. As policy decisions often build on previous policy decisions, which in turn may have been influenced by research [29], focussing on a narrow aspect of the policy process as a unit of analysis may not capture all of the research considered in reaching an outcome, or the full range of factors that may have influenced the policy decision [51]. In particular, policy documents represent the outputs of policy discussions, or the policy position at a single point in time, so examining research use at this level may mean that research use is missed or that undue emphasis is placed on the influence of cited research [51].

As well as the relative emphasis placed on research, the assessment approach itself may determine the type and nature of impacts or research use identified. For example, it was common for the research impact assessments we examined to seek evidence linking the research in question to the policy outcome (e.g. seeking corroborating testimony from policy-makers or evidence in policy documents). Studies also sometimes sought to quantify the strength of this relationship, or the relative contribution of the research in relation to other factors, by subjectively scoring the extent of research influence on the policy outcome. This focus on demonstrable links between research and policy meant that such assessments were more likely to identify instances where research had been directly applied in policy discussions (instrumental uses) [52]. In addition, the research impact assessments we examined most commonly utilised frameworks suggesting direct and linear links between research and policy (albeit with feedback loops included), thus potentially overlooking indirect or conceptual uses. Finding evidence for indirect influences, such as changes in awareness and understanding of an issue, may be challenging [27]. To better capture indirect as well as direct impacts, some authors propose that research impact should be measured in terms of processes (e.g. interactions, dissemination activities) and the stages of research adoption amongst end-users/stakeholders resulting from these processes (e.g. changes in awareness, understanding, attitudes/perceptions), rather than through outcome-based modes of impact evaluation [27, 34, 42]. This way of thinking about impact helps to identify changes that occur early in the impact pathway and can establish clear links between the research and the contribution it has made [42]; however, it may emphasise ‘potential’ rather than actual impact. It can be argued that actual impact only occurs if a stakeholder uses or applies the research results within a policy debate (e.g. to inform or to encourage/discourage change); that is, if there has been a behavioural change because of the knowledge gained [53].

For research use assessments, the nature of research use reported may vary depending on the type of policy process considered [29]. The studies we examined that assessed specific and discrete policy decisions (for example, committee decisions focussed on making recommendations for practice) tended to emphasise instrumental research use, as there was a requirement or mandate for research to be directly applied in the decision-making process. Studies considering broader policy processes, where events over time were examined, had the potential to identify the many ways in which research could be utilised. The conceptual models adopted in these assessments provided a mechanism for considering how issues entered the policy agenda or how policy decisions were made, without a presumption that research had made a direct contribution to the policy outcome. However, assessments of this nature highlighted the difficulty of determining the influence of research on tacit knowledge, where research use lies within other types of information (e.g. expert knowledge) and stakeholder positions [29]. For example, the research use assessments we examined commonly investigated the influence of other information sources and stakeholders’ positions on policy decisions, but stopped short of investigating whether these sources of influence were themselves informed by research [29]. Identifying hidden or unconscious uses of research will always be challenging for both research use and research impact assessments.

Not only does the overall choice of approach influence the assessment findings, but so do specific methodological choices. Some methodological issues were common to both research impact and research use assessments, for example, issues with the timing of assessment to best capture research impacts or use. In addition, purposeful sampling and the number of case studies conducted influenced how predictive or transferable the assessment findings were [17, 24, 54]. There were also tensions within both streams between the value of utilising the most comprehensive and robust methods of assessment possible and the resources these methods require. Case studies, including interviews with study participants, were considered the gold standard method of assessment but were resource intensive to conduct [55]. Policy case studies were particularly time and resource intensive, requiring careful consideration of historical and contextual influences, hence the predominance of single policy case studies amongst the research use assessments we examined [24]. On the research impact side, methods utilising automated data extraction from policy documents and electronic surveys of researchers have been introduced [6, 56]. Such methods are less resource intensive and offer greater potential for implementation on a wide scale, but there is still limited information available about their validity and reliability [5, 6, 57, 58]. There were also instances where methodological choices differed between the two streams of research, influencing the outcomes of assessments from each group. For example, researchers or end-users directly associated with the research project or programme under study were most commonly interviewed or surveyed in the research impact assessments, whereas the research use assessments we examined often involved a broader cross-section of policy actors and researchers as study participants. These choices provide different perspectives on the role played by research, and thus the method influences the findings.

In essence, the differences between forward and backward tracing assessments highlighted above illustrate how the choices made in assessments alter the phenomenon they aim to examine. In this respect, they resemble other types of evaluation: each assessment process illuminates a particular pathway, perspective or outcome, and a different assessment process would see it differently.

Possibilities for further research

It is likely that the pathways to impact and the degree to which research is utilised will differ across types of research and policy areas [1, 29]. Understanding these differences may help researchers and policy-makers to set appropriate goals for research impact and use, and to identify the most appropriate pathways through which translation could be achieved. However, we identified only a small number of studies comparing the impacts of different types of research (and then only biomedical compared with clinical research) or differences in research use according to policy area. Further studies adopting cross-case comparison approaches to investigate these issues would be useful.

In this review, we encountered a lack of consistency in the definitions and terminology applied across the included studies. This was the case for describing the type of research being assessed and what constituted policy impacts in research impact assessments, as well as for defining and categorising forms of evidence and types of policies in research use assessments. Different conclusions about the extent to which policy-making is informed by research may arise from different views about what constitutes research in research use assessments or, conversely, policy impact in research impact assessments [29]. Moving towards consistent definitions across this area of study would therefore be beneficial [14]. It is also important that authors of future studies are clear about the definitions and ways of thinking about research impact/use applied in assessments, so that comparisons between study findings can be made and limitations made explicit [14].

The two streams of research discussed in this review have developed separately over a similar timeframe. More recently, studies have drawn on elements from both streams. Some of these studies are exemplary in many ways, tracing forward from research and backwards from policy to produce case studies that address common limitations in novel and rigorous ways. There is scope for more research impact assessments to borrow from backward tracing approaches in this way. In addition, very few studies utilising network analyses or applying systems-based theories were identified in this review. Such approaches may also provide a means of exploring these issues [52].

Most of the studies included in this review appeared to be initiated by researchers for researchers, or by research funding bodies. Researchers are now being asked to routinely track the impacts of their own research [6]. This focus on research and researchers places a one-sided emphasis on the role of researchers in getting research into policy; reducing waste from research also requires action from policy-makers. Yet, very few studies investigated the degree to which the decision-making environment supported research use. To address this imbalance, there is scope for policy agencies to develop mechanisms to assess their own requirements and practices for considering research during policy deliberations, as well as to investigate ways of routinely monitoring research use.

Finally, there were very few examples of prospective approaches being utilised in either stream of research examined in this review. These approaches have disadvantages; for example, they may not be practical in terms of the resources required to trace research or policy processes for extended periods, it can be difficult to obtain permission to directly observe policy processes, and respondents may not be as forthcoming about factors of influence at the time they are occurring (e.g. political debates) [15]. However, prospective approaches to assessment may prompt researchers and end-users to think about research translation from the outset of a research project or policy process, and provide opportunities for appropriate and tailored translational interventions to be embedded into work processes [59]. Routine data collection and, in particular, process metrics related to research translation activities could be used to provide feedback about areas requiring attention in order to improve research uptake [59]. With the advent of routine data collection systems, the potential advantages of this approach could be explored in future studies.

Limitations of this review

This review only included English-language publications, and studies from non-English-speaking countries will therefore be under-represented. This may in part explain the high proportion of included studies conducted in high-income countries. The studies included in this review are likely to be broadly representative of the type of studies conducted to date. However, due to our exclusion criteria, we may have missed examples of studies published only in the grey literature, or methodological approaches that have not been empirically tested. For example, we identified only a small number of peer-reviewed publications where a programme of research was the unit of analysis. The preparation of case studies based on a researcher’s programme of research was adopted in both the Australian Research Quality Framework [60] and, more recently, the UK Research Excellence Framework [41]. Reports describing the application and findings of this approach are available in the grey literature [41, 61]. Finally, the first author managed the literature search and inclusion process, as well as extracting primary data from the included articles. This may have introduced some bias, although the other authors of this review were consulted and came to agreement on ambiguous cases. Study authors did not always explicitly describe their studies in terms of the characteristics included in our descriptive framework, and some studies required judgements to be made regarding classification. Our findings in terms of the number of studies within each category should therefore be considered indicative. However, this issue highlights the need for a framework, such as the one we propose, to facilitate clearer communication about what, in fact, studies were seeking to achieve and how they did it.

Conclusions

Herein, we have defined the key characteristics of two research streams with the aim of facilitating structured comparisons between studies. In many ways, the separate and distinct development of these two research streams, and their different approaches to examining the issues, reflect the much-discussed separation of the two domains of research and policy. The descriptive framework introduced in this paper provides a ‘missing link’, showing how these two streams intersect, compare and differ. Our framework offers an integrated perspective and analysis, and can be used by researchers to identify where their own research fits within this field of study and to communicate more clearly what is being assessed, how this is done and the limitations of these choices.

We have shown that the approach to assessment can determine the perceived influence of research on policy, the nature of this influence and our understanding of the relationship between research and policy. As such, the two approaches, forward and backward tracing, essentially tell a different story about how (if at all) research-based policy change happens. In some ways, the assessments construct the phenomenon they aim to measure. For example, forward tracing research impact assessments, with their focus on specific research and the activities of researchers, may emphasise direct influences of research on policy and overstate the influence of research in policy processes. Conversely, research use assessments utilising a backwards tracing analysis tend to paint a more complex picture of assimilated knowledge contributing to policy outcomes alongside other influential factors. Combining aspects of the two approaches may provide the best way forward in terms of linking outcomes to specific research, as well as providing a realistic picture of research influence.

Box 1 Key elements differentiating research impact assessments

• Purpose of assessment [1, 17]
• Type of research or policy assessed [1, 23]
• Direction of analysis (e.g. tracing forwards from research or tracing backwards from a policy outcome) [1, 11, 16, 17]
• Unit of analysis (e.g. whether the analysis starts with a single research project or a broader programme of work) [1, 17]
• Conceptual framework used to organise assessment [1, 6, 11, 16, 17, 18, 19, 20, 23, 24]
• Types of outcomes measured (e.g. type/categories of impact and levels of utilisation) [1, 16, 17]
• Methods used to measure outcomes of interest (e.g. data sources; retrospective or prospective data collection; case studies or other methods; scoring of impacts) [1, 16, 18, 20, 24]
• Strategies to address attribution of impacts to the research in question [16]

Box 2 Summary of the types of research under study where research was the starting point for assessment

There was a high degree of variability in the type of research under study, both between studies and, in some cases, within individual studies. The types of research differed in terms of topic area (e.g. arthritis research, obesity research, asthma research), discipline (e.g. clinical, public health, health services research), where the research lay along the research continuum (e.g. basic to applied research), and whether the research was primary research or a synthesis of research (e.g. health technology assessments and systematic reviews). It was rare for studies to compare impacts between types of research, and only the impacts of biomedical and clinical research were compared in this way [25, 140]. The research under study within individual assessments was most commonly drawn from a single research funder or portfolio of research. Assessments were often commissioned by the government agency [1, 14, 34, 39, 43, 54, 67, 69, 73, 78, 82, 84, 87, 93, 98, 107, 114, 115, 121, 123, 132, 144], charitable group [25, 44, 100, 141] or professional body [112] responsible for funding the research under study.

Box 3 Summary of the types of policies under study where a policy outcome was the starting point for assessment

Assessments starting with a policy outcome examined a wide range of policies. Policies differed in terms of the policy type (e.g. clinical- and practice-based policies, public health policies, financial and structural policies), topic area (e.g. legislative bills relevant to active living, home nurse visiting, immunisation, malaria prevention, health insurance, drug reimbursement decisions), who was responsible for the final policy decision (e.g. parliament/legislative process, committee or expert group, government department or agency, or local health services), the geographical reach of the policy (e.g. international, national, regional/provincial, or local health policy), the stage or stages of the policy process considered in the assessment (e.g. agenda setting, policy formulation, policy implementation), and whether the decision was to proceed or not with the course of policy action (e.g. ‘go’ or ‘no go’ decisions [74]). There were examples of studies comparing research use for different policy types [37, 74, 96, 111, 129, 139], at different levels of policy-making [13, 102, 111, 137], and between different countries [45, 48, 83]. The authors of studies rarely stated whether the assessment had been commissioned by the agency responsible for the policy under study [21, 33, 74, 126, 135].

Box 4 Common ways of describing research use/utilisation

Conceptual: Refers to a more general or indirect form of enlightenment where research has an influence on awareness, understanding or attitudes/perceptions amongst policy-makers [ 29 , 43 ]. Conceptual use of research may influence policy debate (ideas, arguments and criticism), which can then feed forward into policy change [ 44 ]. The link between the research and any policy change is indirect but the influence of the research on policy-makers is still tangible and potentially measurable.

Symbolic: Where research is used to justify a position or specific action already taken for other reasons, or to obtain specific goals based on a predetermined position [ 29 , 44 ]. This is difficult to measure, as policy-makers may not acknowledge or be conscious that they are using research in this way. Identification of this type of research use may therefore rely on judgement of policy-makers’ intent/motivations for using research.

Instrumental: Refers to the explicit application of research to address a policy problem; where research influences issue identification, policy refinement, definition or implementation in a direct and potentially measurable way [ 29 , 44 ]. That is, policy-makers are aware that they are using research in this way and there may be evidence supporting claimed instances of use.

Indirect: Refers to the way in which research may enter the policy environment in a diffuse way [ 44 ]. Indirect use includes the concept of conceptual use, where research results in changes in awareness and knowledge that may subsequently influence policy directions; here, the change is brought about by research and this is recognised by the research user. Indirect use also includes examples where the influence of research may be unseen and unacknowledged: there is no evidence linking decisions to the findings of research, yet a linkage of some sort seems to have existed [ 27 , 29 ]. This type of indirect influence may be exerted on policy decisions through socially shared ‘tacit knowledge’ (e.g. expert opinion, public perception or practice-based knowledge) or through stakeholder positions [ 29 ].

Direct: Refers to the explicit or direct application of research to policy. Sometimes used interchangeably with instrumental use.

Stages of policy development: Research use described according to the stages of policy development, which vary between models but commonly include identification, agenda-setting, consideration of potential actions/policy formulation, implementation and evaluation [ 45 ].
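Because the categories above function as a controlled vocabulary, they are easy to encode when coding extracts from policy documents or interviews. A minimal sketch; the enum values paraphrase the definitions above, and the example extracts and their assigned categories are entirely hypothetical.

```python
from enum import Enum

class ResearchUse(Enum):
    """Controlled vocabulary for coding research use, after Box 4."""
    CONCEPTUAL = "influences awareness, understanding or attitudes"
    SYMBOLIC = "justifies a position already taken for other reasons"
    INSTRUMENTAL = "explicit, direct application to a policy problem"
    INDIRECT = "diffuse, sometimes unacknowledged entry into policy"
    DIRECT = "explicit application; often synonymous with instrumental"

# Hypothetical coding of two interview extracts.
coded_extracts = [
    ("The committee cited the trial results when setting the threshold.",
     ResearchUse.INSTRUMENTAL),
    ("Officials quoted the study to defend a decision made months earlier.",
     ResearchUse.SYMBOLIC),
]
for text, use in coded_extracts:
    print(f"{use.name:<12} | {text}")
```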

Hanney SR, Gonzalez-Block MA, Buxton MJ, Kogan M. The utilisation of health research in policy-making: Concepts, examples and method of assessment. Health Res Policy Syst. 2003;1:2.

Mitton C, Adair CE, McKenzie E, Patten SB, Waye Perry B. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q. 2007;85:729–68.

Chalmers I. Biomedical research: Are we getting value for money? Significance. 2006;3:172–5.

Martin BR. The Research Excellence Framework and the ‘impact agenda’: are we creating a Frankenstein monster? Res Eval. 2011;20:247–54.

Bornmann L. What is societal impact of research and how can it be assessed? A literature survey. J Am Soc Inf Sci Technol. 2013;64:217–33.

Greenhalgh T, Raftery J, Hanney S, Glover M. Research impact: a narrative review. BMC Med. 2016;14:78.

REF 2014 Key Facts.  http://www.ref.ac.uk/2014/media/ref/content/pub/REF%20Brief%20Guide%202014.pdf. Accessed 15 Dec 2016.

Orton L, Lloyd-Williams F, Taylor-Robinson D, O'Flaherty M, Capewell S. The use of research evidence in public health decision making processes: systematic review. PLoS One. 2011;6:e21704.

Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14:2.

Innvaer S, Vist G, Trommald M, Oxman A. Health policy-makers' perceptions of their use of evidence: a systematic review. J Health Serv Res Policy. 2002;7:239–44.

Nutley S, Walter I, Davies H. Using Evidence: How Research Can Inform Public Services. Bristol: Policy Press at the University of Bristol; 2007.

Almeida C, Bascolo E. Use of research results in policy decision-making, formulation, and implementation: a review of the literature. Cad Saude Publica. 2006;22(Suppl):S7–19; discussion S20–33.

Liverani M, Hawkins B, Parkhurst JO. Political and institutional influences on the use of evidence in public health policy. A systematic review. PLoS One. 2013;8:e77404.

Alla K, Hall WD, Whiteford HA, Head BW, Meurk CS. How do we define the policy impact of public health research? A systematic review. Health Res Policy Syst. 2017;15:84.

Lavis J, Ross S, McLeod C, Gildiner A. Measuring the impact of health research. J Health Serv Res Policy. 2003;8:165–70.

Boaz A, Fitzpatrick S, Shaw B. Assessing the impact of research on policy: a literature review. Sci Public Policy. 2009;36:255–70.

Hanney S, Buxton M, Green C, Coulson D, Raftery J. An assessment of the impact of the NHS Health Technology Assessment Programme. Health Technol Assess. 2007;11(53):1–180.

Milat AJ, Bauman AE, Redman S. A narrative review of research impact assessment models and methods. Health Res Policy Syst. 2015;13:18.

Raftery J, Hanney S, Greenhalgh T, Glover M, Blatch-Jones A. Models and applications for measuring the impact of health research: Update of a systematic review for the health technology assessment programme. Health Technol Assess. 2016;20(76):1–254.

Banzi R, Moja L, Pistotti V, Facchini A, Liberati A. Conceptual frameworks and empirical approaches used to assess the impact of health research: An overview of reviews. Health Res Policy Syst. 2011;9:26.

Penfield T, Baker MJ, Scoble R, Wykes MC. Assessment, evaluations, and definitions of research impact: A review. Res Eval. 2013;23(1):21–32.

Thonon F, Boulkedid R, Delory T, Rousseau S, Saghatchian M, Van Harten W, O'Neill C, Alberti C. Measuring the outcome of biomedical research: A systematic literature review. PLoS One. 2015;10(4):e0122239.

Gilson L, Raphaely N. The terrain of health policy analysis in low and middle income countries: a review of published literature 1994-2007. Health Policy Plan. 2008;23:294–307.

Walt G, Shiffman J, Schneider H, Murray SF, Brugha R, Gilson L. 'Doing' health policy analysis: methodological and conceptual reflections and challenges. Health Policy Plan. 2008;23:308–17.

Frank C, Nason E. Health research: measuring the social, health and economic benefits. Can Med Assoc J. 2009;180:528–34.

Lomas J. Connecting research and policy. Can J Policy Res. 2000;1:140–44.

Molas-Gallart J, Tang P, Morrow S. Assessing the non-academic impact of grant-funded socio-economic research: results from a pilot study. Res Eval. 2000;9:171–82.

Morton S. Creating research impact: the roles of research users in interactive research mobilisation. Evid Policy J Res Debate Pract. 2015;11:35–55.

Lavis JN, Ross SE, Hurley JE. Examining the role of health services research in public policymaking. Milbank Q. 2002;80:125–54.

Buxton M, Hanney S. How can payback from health services research be assessed? J Health Serv Res Policy. 1996;1:35–43.

Kuruvilla S, Mays N, Walt G. Describing the impact of health services and policy research. J Health Serv Res Policy. 2007;12(Suppl 1):23–31.

Canadian Academy of Health Sciences. Making an Impact: A preferred Framework and indicators to Measure Returns on Investment in Health Research. Ottawa: Panel on Return on Investment in Health Research. Canadian Academy of Health Sciences; 2009.

Riley BL, Kernoghan A, Stockton L, Montague S, Yessis J, Willis CD. Using contribution analysis to evaluate the impacts of research on policy: Getting to ‘good enough’. Res Eval. 2018;27:16–27.

Kok MO, Schuit AJ. Contribution mapping: A method for mapping the contribution of research to enhance its impact. Health Res Policy Syst. 2012;10:21.

Kingdon JW. Agendas, Alternatives, and Public Policies. Second ed. New York: Longman; 2003.

Walt G, Gilson L. Reforming the health sector in developing countries: the central role of policy analysis. Health Policy Plan. 1994;9:353–70.

Dobrow MJ, Goel V, Upshur RE. Evidence-based health policy: context and utilisation. Soc Sci Med. 2004;58:207–17.

ODI. Briefing Paper: Bridging Research and Policy in International Development: An analytical and practical framework. https://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/198.pdf . Accessed Dec 2016.

ResearchFish. www.researchfish.net . Accessed 15 Dec 2016.

Altmetric. www.altmetric.com . Accessed 15 Dec 2016.

UK REF2014 Case Studies.  http://impact.ref.ac.uk/CaseStudies/ . Accessed 15 Dec 2016.

Morton S. Progressing research impact assessment: A ‘contributions’ approach. Res Eval. 2015;24:405–19.

Meagher L, Lyall C, Nutley S. Flows of knowledge, expertise and influence: a method for assessing policy and practice impacts from social science research. Res Eval. 2008;17:163–73.

Gilson L, McIntyre D. The interface between research and policy: Experience from South Africa. Soc Sci Med. 2008;67:748–59.

Smith KE, Katikireddi SV. A glossary of theories for understanding policymaking. J Epidemiol Community Health. 2013;67:198–202.

Hanney SR, Home PD, Frame I, Grant J, Green P, Buxton MJ. Identifying the impact of diabetes research. Diabet Med. 2006;23:176–84.

Hanney S, Mugford M, Grant J, Buxton M. Assessing the benefits of health research: lessons from research into the use of antenatal corticosteroids for the prevention of neonatal respiratory distress syndrome. Soc Sci Med. 2005;60:937–47.

Hanney S, Packwood T, Buxton M. Evaluating the benefits from health research and development centres: a categorization, a model and examples of application. Evaluation. 2000;6:137–60.

Orians CE, Abed J, Drew CH, Rose SW, Cohen JH, Phelps J. Scientific and public health impacts of the NIEHS Extramural Asthma Research Program: insights from primary data. Res Eval. 2009;18:375–85.

Ottoson JM, Ramirez AG, Green LW, Gallion KJ. Exploring potential research contributions to policy. Am J Prev Med. 2013;44:S282–9.

Bunn F, Kendall S. Does nursing research impact on policy? A case study of health visiting research and UK health policy. J Res Nurs. 2011;16:169–91.

Greenhalgh T, Fahy N. Research impact in the community-based health sciences: An analysis of 162 case studies from the 2014 UK Research Excellence Framework. BMC Med. 2015;13:232.

Samuel GN, Derrick GE. Societal impact evaluation: Exploring evaluator perceptions of the characterization of impact under the REF2014. Res Eval. 2015;24:229–41.

Tulloch O, Mayaud P, Adu-Sarkodie Y, Opoku BK, Lithur NO, Sickle E, Delany-Moretlwe S, Wambura M, Changalucha J, Theobald S. Using research to influence sexual and reproductive health practice and implementation in sub-Saharan Africa: a case-study analysis. Health Res Policy Syst. 2011;9(Suppl 1):S10.

Cohen G, Schroeder J, Newson R, King L, Rychetnik L, Milat AJ, Bauman AE, Redman S, Chapman S. Does health intervention research have real world policy and practice impacts: testing a new impact assessment tool. Health Res Policy Syst. 2015;13:3.

Guthrie S, Bienkowska-Gibbs T, Manville C, Pollitt A, Kirtley A, Wooding S. The impact of the National Institute for Health Research Health Technology Assessment programme, 2003-13: a multimethod evaluation. Health Technol Assess. 2015;19:1–291.

Drew CH, Pettibone KG, Finch FO, Giles D, Jordan P. Automated research impact assessment: a new bibliometrics approach. Scientometrics. 2016;106:987–1005.

Bornmann L, Haunschild R, Marx W. Policy documents as sources for measuring societal impact: how often is climate change research mentioned in policy-related documents? Scientometrics. 2016;109:1477–95.

Searles A, Doran C, Attia J, Knight D, Wiggers J, Deeming S, Mattes J, Webb B, Hannan S, Ling R, et al. An approach to measuring and encouraging research translation and research impact. Health Res Policy Syst. 2016;14:60.

Donovan C. The Australian Research Quality Framework: A live experiment in capturing the social, economic, environmental, and cultural returns of publicly funded research. N Dir Eval. 2008;2008:47–60.

Excellence in Innovation for Australia (EIA) Trial. https://go8.edu.au/programs-and-fellowships/excellence-innovation-australia-eia-trial . Accessed 15 Dec 2016.

Ritter A, Lancaster K. Measuring research influence on drug policy: A case example of two epidemiological monitoring systems. Int J Drug policy. 2013;24:30–7.

This work was supported by funding from the National Health and Medical Research Council of Australia (Grant #1024291).

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Author information

Authors and Affiliations

Sydney School of Public Health, The University of Sydney, Charles Perkins Centre D17, Level 6 Hub, Sydney, NSW, 2006, Australia

Robyn Newson, Lesley King, Andrew Milat & Adrian Bauman

School of Medicine Sydney, University of Notre Dame Australia, 160 Oxford St, Darlinghurst, 2010, Australia

Lucie Rychetnik

Contributions

All authors were involved in the conception and design of this review. RN was responsible for the search strategy design, study retrieval and data extraction. All authors contributed to the analysis and interpretation of data. All authors contributed to the preparation of the final text of the article and approved the final manuscript.

Corresponding author

Correspondence to Robyn Newson .

Ethics declarations

Ethics approval and consent to participate

This study received approval from the Human Research Ethics Committee, University of Sydney (2016/268).

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1: List of included empirical studies. (DOCX 40 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Newson, R., King, L., Rychetnik, L. et al. Looking both ways: a review of methods for assessing research impacts on policy and the policy utilisation of research. Health Res Policy Sys 16 , 54 (2018). https://doi.org/10.1186/s12961-018-0310-4

Received : 14 December 2017

Accepted : 02 April 2018

Published : 25 June 2018

DOI : https://doi.org/10.1186/s12961-018-0310-4

  • Research impact assessment
  • Research impact
  • Research payback
  • Policy impact
  • Research utilisation
  • Research use
  • Health policy
  • Health research
  • Evidence-informed policy


Exploring the application of environmental impact assessment to tourism and recreation in protected areas: a systematic literature review

  • Published: 09 February 2024

  • Gabriela Francisco Pegler (ORCID: orcid.org/0000-0001-6147-6983),
  • Clara Carvalho de Lemos &
  • Victor Eduardo Lima Ranieri

Over the years, concerns regarding the effects of tourism and recreational activities on protected areas have been consistently raised. The establishment of recreation ecology dates as far back as the 1920s and 1930s, marking efforts to address these concerns. Throughout the development of this field, a variety of tools and procedures were proposed for managing and monitoring the impacts of recreation, such as the recreation opportunity spectrum, limits of acceptable change, visitor activity management process, visitor impact management (VIM), visitor experience and resource protection, and the protected area VIM. In addition to these tools, environmental impact assessment (EIA) is a valuable approach for informing decision-making processes and predicting the environmental consequences of activities that may cause significant environmental degradation, thus aligning tourism and recreation with the goals of preserving protected areas. The purpose of this paper is to identify and critically discuss how environmental impact assessment is contributing to improving decision-making and management of public use in protected areas, with a focus on methodological approaches, the extent of its application and reported outcomes. To achieve this, we conducted a systematic literature review and established a preliminary connection between the methodologies for evaluating and monitoring the impacts of public use proposed in the reviewed articles and EIA. Our findings indicate that EIA can contribute in four main ways: firstly, by being applied prior to the implementation of the activity, secondly, by using methods to identify and predict impacts, thirdly, by applying monitoring procedures, and finally, by providing tiered steps to facilitate better decision-making.

This work was supported by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES—Brazil). The datasets generated during and/or analyzed during the current study are available on request from the corresponding authors.

Author information

Authors and Affiliations

São Carlos School of Engineering, University of São Paulo, São Paulo, Brazil

Gabriela Francisco Pegler & Victor Eduardo Lima Ranieri

Institute of Geography, State University of Rio de Janeiro, Rio de Janeiro, Brazil

Clara Carvalho de Lemos

Contributions

All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by GFP, VELR and CCdL. The first draft of the manuscript was written by GFP, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Gabriela Francisco Pegler .

Ethics declarations

Conflict of interest

The authors report there are no competing interests to declare.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Pegler, G.F., de Lemos, C.C. & Ranieri, V.E.L. Exploring the application of environmental impact assessment to tourism and recreation in protected areas: a systematic literature review. Environ Dev Sustain (2024). https://doi.org/10.1007/s10668-024-04532-6

Received : 21 December 2022

Accepted : 13 January 2024

Published : 09 February 2024

DOI : https://doi.org/10.1007/s10668-024-04532-6

  • Visitor use
  • Impact assessment
  • Impact monitoring
  • Decision-making processes
Research Impact Assessment

Associate Professor Samantha Battams

This short paper discusses research impact assessment, including important factors to consider when trying to maximise research impact.

Related Papers

Health Research Policy and Systems

As governments, funding agencies and research organisations worldwide seek to maximise both the financial and non-financial returns on investment in research, the way the research process is organised and funded is coming increasingly under scrutiny. There are growing demands and aspirations to measure research impact (beyond academic publications), to understand how science works, and to optimise its societal and economic impact. In response, a multidisciplinary practice called research impact assessment is rapidly developing. Given that the practice is still in its formative stage, systematised recommendations or accepted standards for practitioners (such as funders and those responsible for managing research projects) across countries or disciplines to guide research impact assessment are not yet available. In this statement, we propose initial guidelines for a rigorous and effective process of research impact assessment applicable to all research disciplines and oriented toward...

Rosa Scoble

In the face of increasing pressure to demonstrate the socio-economic impact of funded research, whether it is funded directly by research councils or indirectly by governmental research block grants, institutions have to tackle the complexity of understanding, tracking, collecting, and analysing the impact of all their research activities. This paper attempts to encapsulate the wider context of research impact by delineating a broad definition of what might be classified as impact. It also suggests a number of different dimensions that can help in the development of a systematic research impact assessment framework. The paper then proceeds to indicate how boundaries and criteria around the definition of impact and these dimensions can be used to refine the impact assessment framework in order to focus on the objectives of the assessor. A pilot project, run at Brunel University, was used to test the validity of the approach and possible consequences. A tool specifically developed for...

Austrian Presidency of the EU Council Conference on the Impact of Social Sciences and Humanities for a European Research Agenda – Valuation of SSH in Mission-oriented Research

Sergio Manrique , Marta Natalia Wróblewska

An interest in the evaluation of research impact – or the influence of scientific research beyond academia – has been observable worldwide. Several countries have introduced national research assessment systems which take into account this new element of evaluation. So far, research on this practice has focused mainly on the practicalities of the different existing policies: the definition of the term ‘research impact’, different approaches to measuring it, their relative challenges and the possible use of such evaluations. But the introduction of a new element of evaluation gives rise not only to challenges of a practical nature, but also to important ethical consequences in terms of academic identity, reflexivity, power structures, distribution of labour in terms of workloads etc. In order to address these questions and the relevant needs of researchers, in this paper we propose a multidimensional model that considers different attributes of research impact: Responsiveness, Accessibility, Reflexivity, Ecology and Adaptability. This holistic, multidimensional model of evaluation, designed particularly for self-assessment or internal assessment, recognises the qualities a project has on these different scales in a broader perspective, rather than offering a simple and single numerical evaluation. This model addresses many of the ethical dilemmas that accompany conducting impact-producing research. To exemplify the usefulness of the proposed model, the authors provide real-life research project assessment examples conducted with the use of the Multidimensional Approach for Research Impact Assessment (MARIA Model).

Research Evaluation

Shahram Sedghi

This article explores the models and frameworks developed on ‘research impact’. We aim to provide a comprehensive overview of the related literature through a scoping study method. The present research investigates the nature, objectives, approaches, and other main attributes of research impact models, and analyses and classifies the models based on their characteristics. Forty-seven studies and 10 reviews published between 1996 and 2020 were included in the analysis. The majority of models were developed for impact assessment and evaluation purposes. We identified three approaches in the models, namely outcome-based, process-based, and those utilising both, among which the outcome-based approach was the most frequently used, with evaluation considered the main objective of this group. The process-based models were mainly adapted from the W.K. Kellogg Foundation logic model and were potentially eligible for impact improvement. We highlighted the scope of processes and other specific features of the recent models. Given the benefits of the process-based approach in enhancing and accelerating research impact, it is important to consider such an approach in the development of impact models. Effective interaction between researchers and stakeholders, knowledge translation, and evidence synthesis are other possible driving forces contributing to achieving and improving impact.


Environmental Impact Assessment, Human Health and the Sustainable Development Goals

N. Krishnankutty, E. R. Boess, L. Kørnøv

1 Unit for Health Promotion Research, University of Southern Denmark, Esbjerg, Denmark
2 COWI, Lyngby, Denmark
3 Danish Centre for Environmental Assessment, Aalborg University, Aalborg, Denmark

Objectives: Developmental processes influence the determinants of health and, consequently, human health itself. Yet the assessment of human health impacts within impact assessment, with the exception of health impact assessment, remains rather vague. The inclusion of Sustainable Development Goal (SDG) indicators in environmental impact assessment (EIA) is an opportunity to better address human health in EIA practice.

Methods: We reviewed the list of health-related targets and indicators for the SDGs as defined by the Institute for Health Metrics and Evaluation (IHME) in Seattle, WA, United States, with the aim of identifying those that could be suggested as outcome indicators within EIA.

Results: Among 42 health-related indicators, we identified 17 that could be relevant for impact assessment procedures and categorized them into three groups: 1) direct health indicators (e.g., under-five mortality); 2) complex indicators (e.g., cancer); and 3) environmental determinant indicators (e.g., mean PM2.5).

Conclusion: All 17 indicators can be employed to improve the quantification of human health impacts and to bring the SDGs into EIA processes. Though our assessment was conducted for Denmark, and the set of suggested indicators could differ in other countries' contexts, the process of identifying them can be generalized.

Introduction

Environmental impact assessment (EIA), typically understood as the project-level assessment of a broad set of environmental impacts, has a long history of success since its first statutory introduction in 1969 [ 1 ]. Since then, most countries have introduced EIA into their legislation and made it a central tool for improving developmental processes and informing decision-makers of their impact on the environment [ 2 ]. Because EIA is a widespread, mandatory form of impact assessment, it represents an important arena for uncovering potential human health impacts as part of a broader concept of the environment. In this way, EIA complements health impact assessment (HIA), which is usually a voluntary assessment focusing on potential health risks and benefits.

Despite the indivisible link between environment and health, impacts on human health were not a focus of EIA in its early decades, and even health determinants (with the exception of environmental determinants) were rarely and narrowly discussed within EIA reports [ 3 ]. A major legislative change to this practice occurred in 2014, when the European Commission, via an amendment to the directive (Directive 2014/52/EU), formally introduced impact on human health as a mandatory impact to assess within EIA [ 4 ]. Denmark, like other European Union countries, implemented the Directive in national legislation, making the assessment of population health impacts mandatory within EIA [ 5 ]. This act created a requirement to address health in greater depth in assessment processes and opened a space for research on tools for the assessment and quantification of health impacts. Yet the historical scope of EIA, coupled with a lack of involvement of health expertise in conducting assessments, led to the recognition that health was included in EIA only to a rather limited extent [ 6 – 8 ]. In 2018–19, a reference document to better address human health in EIA was formulated by a joint venture of the International Association for Impact Assessment (IAIA) and the European Public Health Association (EUPHA) [ 9 ]. This document established that the identification of relevant health impacts, the development of proper indicators to measure them, and the assessment of their impacts is a rather complex task. Despite the guidance provided, the methodological complexities (of both qualitative and quantitative methods) are challenging for the non-health professionals who conduct the screening, scoping and, especially, risk appraisal procedures in EIA [ 10 ].

A Danish innovation project, “Digitally Supported Environmental Assessment for Sustainable Development Goals—DREAMS”, addresses, among other issues, the inclusion of human health in EIA and strategic environmental assessment (SEA), linking the whole process to the United Nations Sustainable Development Goals (SDGs) [ 11 ]. The project aims to explore whether the SDG indicators can be used as target indicators within EIA and SEA. In this article, we focus solely on project-level assessment within EIA.

The SDGs have provided a global framework for sustainable development that is universally applicable to all countries. The 17 goals, 169 targets, and 247 indicators can be perceived not only as an ambitious set of measures to guide development, but also as an opportunity to compare countries and harmonize processes across different contexts. One of the primary issues with their implementation is how best to operationalize the SDGs in national and local developmental processes, including environmental assessment procedures. Some authors have proposed that the SDGs and environmental assessment can mutually benefit each other: the SDGs help provide a sustainable orientation for environmental assessment and bring sustainability objectives into decision-making processes, while environmental assessments simultaneously provide a structured and universally exercised process for measuring SDG fulfilment [ 12 – 14 ]. Yet the need to localize the globally developed SDGs remains a challenge when considering their integration into environmental assessment, and although experimentation in linking SDGs to EIA is beginning to emerge in practice, the integration is predominantly superficial and seemingly disconnected from the potential identified in research [ 12 , 15 ]. Recently published literature [ 16 ] and practical application cases [ 17 , 18 ] have likewise raised issues with the implementation of the SDGs in health impact assessment. There is therefore a need for literature that assists in operationalizing the SDGs and guides practice towards their productive utilization within impact assessment.

In attempts to better operationalize the SDGs and understand their potential function as decision-support tools, conceptual frameworks linking the SDGs with environmental assessment have been developed and published [ 12 , 13 ]. The inclusion of SDG indicators as target indicators within impact assessment processes therefore seems mutually complementary [ 14 ]. Some studies have also suggested that the SDGs address sustainability parameters that, if applied to environmental assessment, may make for more comprehensive assessments that also remain current with sustainability agendas [ 19 , 20 ]. While implying that health is a parameter with potential for improved assessment, few studies have yet specifically focused on elaborating the overlap between health determinants and the SDGs, nor have they addressed SDG indicators as a way to support these assessments [ 17 , 18 ].

The aim of our work and this manuscript is to investigate which SDG indicators can support the assessment of health impacts in EIA processes as measures of final health outcomes related to the assessed project. Our focus is predominantly on Denmark, yet we believe the process generalizes to other contexts as well. Our conceptual framework for the selection of indicators can be described by the simplified pathway in Figure 1.

Figure 1. Framework for the selection of health indicators. Environmental Impact Assessment, Human Health and the Sustainable Development Goals, Denmark, 2021.

Concrete tools or selection criteria for selecting or prioritizing relevant health-related SDGs for EIA are currently unavailable. Our focus in this work is on project-level EIA; not all SDG health-related targets and indicators are therefore applicable in every EIA project, as some SDGs and corresponding indicators may pertain more to strategic development than is addressed through a project-level EIA. Since only the indicators give substance to the content of the SDGs and make contributions measurable, it is necessary to select relevant indicators for EIA through a criteria-based approach, which will also aid in narrowing down the 232 SDG indicators currently developed at the global level.

Criteria to Select Relevant Environmental Health SDG Indicators

Though the SDGs are predominantly designed for countries and regions, the predefined indicators can be used as guidelines for addressing health aspects in EIA. As a first step, all SDG indicators presented by UNSTATS were considered [ 21 ]. The goals, however, are very general and not only applicable to health aspects. In the second step, the health-related targets and indicators for the SDGs were narrowed down to health-related indicators within a specific country, namely Denmark, by examining the availability of indicator data in national statistics. Using the metadata for Denmark defined by the Institute for Health Metrics and Evaluation (IHME) in Seattle, WA, United States [ 22 ] and Statistics Denmark, health targets and indicators were identified. The third step aimed to identify outcome indicators relevant for EIA. Through an internal expert consultation, we identified those indicators which can be linked to developmental processes and therefore used within impact assessment. The internal expert consultation consisted of five experts in the fields of public health, HIA, environmental health, and EIA. The consultation was guided by the following protocol:

  • We reviewed the typology of investment projects subject to EIA and discussed whether a specific type of investment project could have an impact on environmental determinants of health and on the selected health indicators.
  • We appraised whether each indicator addresses a health outcome or an environmental determinant of health that can be used as part of a causal pathway description. Those measuring health outcomes were classified as either direct or complex health outcomes.

Direct health indicators are those that can be used directly in assessment, whereas complex indicators may require further breakdown into more specific health outcomes before being used within EIA. Figure 2 describes the selection flow; a minimal code sketch of this filtering pipeline follows the figure.

Figure 2. Schematic diagram for selecting and linking SDG health indicators with EIA. Environmental Impact Assessment, Human Health and the Sustainable Development Goals, Denmark, 2021.
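To make the flow concrete, the following is a minimal sketch of the three-step filtering logic in Python. It is our own illustration, not part of the DREAMS project: the class, field names, and catalogue entries are all hypothetical.

```python
# Minimal sketch (hypothetical names and entries) of the three-step
# indicator-selection flow: start from all SDG indicators, keep those that are
# health-related and backed by national (Danish) statistics, then keep those
# judged relevant to project-level EIA by expert consultation.

from dataclasses import dataclass

@dataclass
class SDGIndicator:
    code: str                 # e.g., "3.9.1"
    name: str
    health_related: bool      # step 2: health-related target/indicator
    national_data: bool       # step 2: data available in national statistics
    eia_relevant: bool        # step 3: expert judgement on relevance to EIA

def select_for_eia(indicators: list) -> list:
    """Apply the three filtering steps in sequence."""
    step1 = indicators                                          # all UNSTATS indicators
    step2 = [i for i in step1 if i.health_related and i.national_data]
    step3 = [i for i in step2 if i.eia_relevant]
    return step3

# Illustrative use with made-up entries:
catalogue = [
    SDGIndicator("3.9.1", "Mortality attributed to air pollution", True, True, True),
    SDGIndicator("4.1.1", "Proficiency in reading and mathematics", False, True, False),
]
print([i.code for i in select_for_eia(catalogue)])  # -> ['3.9.1']
```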

The criteria-based approach to selecting and linking SDG health indicators within EIA identified a wide range of indicators relating to health aspects, i.e., physical health, well-being, access to safe amenities, and environmental impacts. From among 42 health-related indicators, 17 were identified as relevant and two as potentially relevant for EIA. The indicators considered relevant are listed in Table 1.

Table 1. List of identified indicators. Environmental Impact Assessment, Human Health and the Sustainable Development Goals, Denmark, 2021.

The identified relevant health indicators are categorized to reflect how they relate to project activities in EIA and how the outcome indicators support the consideration of human health in EIA. Indicators are categorized as direct indicators, complex indicators, and environmental determinant indicators. The direct indicators are those affected during either the operational or the construction phase of the developmental activity. When calculating impacts, the direct indicators directly describe the baseline values and allow the impact to be estimated. They are, in most cases, part of national demographic or health statistics, and can often be characterized by a code from the International Classification of Diseases (ICD) [ 23 ]. The complex indicators are characterized either by merging many determinants into one health outcome, covering a group of individual diseases (e.g., cancer), or by being a composite indicator (e.g., DALY). To apply complex indicators within EIA, human health expertise is required to estimate the cumulative impact derived from the selected indicator, which, in some cases, could also be broken down into sub-indicators. The third category is environmental determinant indicators, which describe environmental characteristics of the target area of the assessed activity. Their application to assessing human health impacts within EIA requires linkage to one of the aforementioned direct or complex health indicators via causal pathways; a data-model sketch of these categories is given below.
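The following is a minimal, hypothetical data-model sketch of the three categories and the causal-pathway linkage. The class and field names are our own assumptions, and the example entries are illustrative rather than drawn from Table 1.

```python
# Minimal data-model sketch of the three indicator categories. An environmental
# determinant indicator carries links to the direct or complex health
# indicators it feeds via a causal pathway.

from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Category(Enum):
    DIRECT = "direct"                         # usable as-is; often has an ICD code
    COMPLEX = "complex"                       # needs breakdown (e.g., cancer) or is composite (e.g., DALY)
    ENV_DETERMINANT = "environmental determinant"

@dataclass
class HealthIndicator:
    sdg_code: str
    name: str
    category: Category
    icd_code: Optional[str] = None            # mostly relevant for DIRECT indicators
    causal_links: List[str] = field(default_factory=list)  # SDG codes of linked health indicators

# Illustrative entries only (not the paper's Table 1):
air_mortality = HealthIndicator("3.9.1", "Mortality attributed to air pollution",
                                Category.DIRECT)
mean_pm25 = HealthIndicator("11.6.2", "Mean PM2.5 in cities", Category.ENV_DETERMINANT,
                            causal_links=["3.9.1"])
```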

The two potentially relevant indicators are natural disasters and vulnerability to poverty (SDG indicator 1.5.1) and maternal mortality (3.1.1). The first is considered potentially relevant for EIA depending on the subject of the development activity as well as the geographical and social conditions of the population in the target area. Maternal mortality can be relevant for use in EIA if the activity influences either social conditions, such as education, or the health system and access to health services in a target area. Environmental factors, which could form part of the relevant risk factors for maternal mortality, are directly addressed by the listed indicators.

Addressing human health within EIA processes is a window of opportunity for strengthening the human health agenda within developmental processes at all levels (along the global–local continuum). EIA, in contrast to HIA, is a statutory process in most countries of the world and, as such, links directly both to governance and to economic decision-making (e.g., financial sectors and loans). The inclusion of SDG indicators as outcome indicators in assessment processes could prove mutually complementary: it may better align new projects with the SDGs and, at the same time, offer a more standardized approach to conducting population health assessment, one more encompassing of the international and normative policies defining future development. A conceptual framework presented by Kørnøv et al. [ 12 ] divides SDG integration into various levels, differentiating between non-integration, conservative integration, and radical integration. Drawing upon SDG indicators when measuring impacts on health parameters in EIA would help substantiate SDG integration at, at minimum, the third level through conservative integration, in which SDG indicators support scoping and the definition of significant impacts. However, using SDG-derived health indicators to actively test project impacts, or as elements of decision-making throughout the process, could allow navigation into higher levels of integration.

Another issue where the inclusion of SDG indicators might help address human health impacts within EIA is the availability of data. Impact assessment processes are often restricted by a lack of “ready-to-use” data and require specific data collection, prolonging the time of assessment [ 24 ]. SDG indicator values are collected at the national level and might also be available at a regional or local level. Countries are developing their own data collection frameworks, including surrogate indicators, as in Denmark with, for example, the “Vores Mål” report [ 25 ]. Naturally, these indicators can be used to describe the baseline levels within EIA reports before implementation of the project, and such a set of indicators can be employed in the screening, scoping, and risk appraisal phases of assessment processes; a brief sketch of such a baseline description follows.
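As an illustration only, with made-up values and indicator labels, a baseline description of this kind might compare nationally collected indicator values against modelled post-project estimates during scoping or risk appraisal:

```python
# Sketch (hypothetical values) of using nationally collected SDG indicator
# values to describe baseline levels in an EIA report, and comparing them
# against modelled post-project estimates.

baseline = {            # national/regional statistics before the project
    "3.9.1 mortality attributed to air pollution (per 100,000)": 11.2,
    "11.6.2 mean PM2.5 (ug/m3)": 9.8,
}
projected = {           # modelled values with the project in place
    "3.9.1 mortality attributed to air pollution (per 100,000)": 11.9,
    "11.6.2 mean PM2.5 (ug/m3)": 11.4,
}

for indicator, before in baseline.items():
    after = projected[indicator]
    change_pct = 100.0 * (after - before) / before
    print(f"{indicator}: baseline={before}, projected={after} ({change_pct:+.1f}%)")
```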

Having a standard set of health outcome indicators for the assessment of human health impacts within EIA (but also within other types of impact assessment) could invite discussion of the possibility of standardizing the impact assessment process through the standardization of indicators. The typology of projects subject to EIA is usually described in annexes to national legislation, providing broad but, to some extent, pre-defined types of developmental activity (e.g., transportation infrastructure projects). On the other hand, national health policies (programmes) usually pre-define priority areas for human health measures (e.g., cardiovascular disease, cancer, diabetes). What remains to be done is to link these two ends of impact assessment; a well-defined (standard) set of health outcome indicators could substantially enhance the quality of assessment processes. Such a standard set of indicators could also help with important workforce issues: human health expertise should always be part of the impact assessment process, and a standard set of outcome indicators can better specify what kind of expertise needs to be involved.

Impact assessment processes have become a significant and positive tool for protecting the environment as well as human health, even though the original scope in legislation was oriented towards environmental protection. Recent legislative changes, especially within Europe, have opened a window of opportunity for improved targeting of human health impacts within EIA. At the same time, the global effort towards achieving the SDGs as a guiding policy ambition raises the issue of integrating the SDGs into impact assessment processes. Our short paper outlines the possibilities and potential benefits of such integration at the indicator level and proposes those indicators that may be relevant for consideration in Danish EIA practice.

Author Contributions

GG and NK conceptualized the manuscript and developed the human health part. EB contributed the SDG indicator subject matter, and IL and LK added the environmental assessment part. GG developed the first draft and all co-authors edited it.

Funding

This work was supported by grant 0177-000218 (DREAMS) provided by the Danish Innovation Fund.

Conflict of Interest

ERB was partially employed by COWI.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

