
Open access | Published: 08 April 2024

A case study on the relationship between risk assessment of scientific research projects and related factors under the Naive Bayesian algorithm

Xuying Dong & Wanlin Qiu

Scientific Reports volume 14, Article number: 8244 (2024)


Subjects: Computer science, Mathematics and computing

Abstract

This paper delves into the nuanced dynamics influencing the outcomes of risk assessment (RA) in scientific research projects (SRPs), employing the Naive Bayes algorithm. The methodology involves the selection of diverse SRP cases, gathering data encompassing project scale, budget investment, team experience, and other pertinent factors. The paper advances the application of the Naive Bayes algorithm by introducing enhancements, specifically integrating the Tree-Augmented Naive Bayes (TANB) model. This augmentation serves to estimate risk probabilities for different research projects, shedding light on the intricate interplay and contributions of various factors to the RA process. The findings underscore the efficacy of the TANB algorithm, demonstrating commendable accuracy (average accuracy of 89.2%) in RA for SRPs. Notably, budget investment (regression coefficient: 0.68, P < 0.05) and team experience (regression coefficient: 0.51, P < 0.05) emerge as significant determinants of RA outcomes, whereas the impact of project scale (regression coefficient: 0.31, P < 0.05) is comparatively modest. This paper furnishes a concrete reference framework for project managers, facilitating informed decision-making in SRPs. By comprehensively analyzing the influence of various factors on RA, the paper not only contributes empirical insights to project decision-making but also elucidates the intricate relationships between different factors. The research advocates for heightened attention to budget investment and team experience when formulating risk management strategies. This strategic focus is posited to enhance the precision of RAs and the scientific foundation of decision-making processes.


Introduction

Scientific research projects (SRPs) stand as pivotal drivers of technological advancement and societal progress in the contemporary landscape1,2,3. The dynamism of SRP success hinges on a multitude of internal and external factors4. Central to effective project management, risk assessment (RA) in SRPs plays a critical role in identifying and quantifying potential risks. This process not only aids project managers in formulating strategic decision-making approaches but also enhances the overall success rate and benefits of projects. In a recent contribution, Salahuddin5 provides essential numerical techniques indispensable for conducting RAs in SRPs. Building on this foundation, Awais and Salahuddin6 delve into the assessment of risk factors within SRPs, notably introducing the consideration of activation energy through an exploration of the radiative magnetohydrodynamic model. Further expanding the scope, Awais and Salahuddin7 undertake a study on the natural convection of couple stress fluids. However, RA of SRPs confronts a myriad of challenges, underscoring the critical need for novel methodologies8. Primarily, the intricate nature of SRPs renders precise RA exceptionally complex and challenging. The project's multifaceted dimensions, encompassing technology, resources, and personnel, are intricately interwoven, posing a formidable challenge for traditional assessment methods to comprehensively capture all potential risks9. Furthermore, the intricate and diverse interdependencies among various project factors contribute to the complexity of these relationships, thereby limiting the efficacy of conventional methods10,11,12. Traditional approaches often focus solely on the individual impact of diverse factors, overlooking the nuanced relationships that exist between them, an inherent limitation in the realm of RA for SRPs13,14,15.

The pursuit of a methodology capable of effectively assessing project risks while elucidating the intricate interplay of different factors has emerged as a focal point in SRP management16,17,18. This approach necessitates a holistic consideration of multiple factors, their quantification in contributing to project risks, and the revelation of their correlations. Such an approach enables project managers to more precisely predict and respond to risks. As Marx-Stoelting et al.19 observe, current approaches for the assessment of environmental and human health risks due to exposure to chemical substances have served their purpose reasonably well. Additionally, Awais et al.20 highlight the significance of enthalpy changes in SRP risk considerations, while Awais et al.21 delve into the comprehensive exploration of risk factors in Eyring-Powell fluid flow in magnetohydrodynamics, particularly addressing viscous dissipation and activation energy effects. The Naive Bayesian algorithm, recognized for its prowess in probability and statistics, has yielded substantial results in information retrieval and data mining in recent years22. Leveraging its advantages in classification and probability estimation, the algorithm presents a novel approach for RA of SRPs23. Integrating probability analysis into RA enables a more precise estimation of project risks by utilizing existing project data and harnessing the capabilities of the Naive Bayesian algorithm. This method facilitates a quantitative, statistical analysis of various factors, effectively navigating the intricate relationships between them, thereby enhancing the comprehensiveness and accuracy of RA for SRPs.

This paper seeks to employ the Naive Bayesian algorithm to estimate the probability of risks by carefully selecting distinct research project cases and analyzing multidimensional data, encompassing project scale, budget investment, and team experience. Concurrently, Multiple Linear Regression (MLR) analysis is applied to quantify the influence of these factors on the assessment results. The paper places particular emphasis on exploring the intricate interrelationships between different factors, aiming to provide a more specific and accurate reference framework for decision-making in SRPs management.

This paper introduces several innovations and contributions to the field of RA for SRPs:

Comprehensive Consideration of Key Factors: Unlike traditional research that focuses on a single factor, this paper comprehensively considers multiple key factors, such as project size, budget investment, and team experience. This holistic analysis enhances the realism and thoroughness of RA for SRPs.

Introduction of the Tree-Augmented Naive Bayes Model: The Naive Bayes algorithm is introduced and further improved through the proposal of the tree-augmented Naive Bayes (TANB) model. This algorithm exhibits unique advantages in handling uncertainty and complexity, thereby enhancing its applicability and accuracy in the RA of scientific and technological projects.

Empirical Validation: The effectiveness of the proposed method is not only discussed theoretically but also validated through empirical cases. The analysis of actual cases provides practical support and verification, enhancing the credibility of the research results.

Application of MLR Analysis: The paper employs MLR analysis to delve into the impact of various factors on RA. This quantitative analysis method adds specificity and operability to the research, offering a practical decision-making basis for scientific and technological project management.

Discovery of New Connections and Interactions: The paper uncovers novel connections and interactions, such as the compensatory role of team experience for budget-related risks and the impact of the interaction between project size and budget investment on RA results. These insights provide new perspectives for decision-making in technology projects, contributing significantly to the field of RA for SRPs in terms of both importance and practical value.

The paper is structured as follows: “Introduction” briefly outlines the significance of RA for SRPs, addresses existing challenges within current research, and elucidates the paper's core objectives, with a distinct emphasis on the innovative aspects of this research compared to similar studies; the organizational structure of the paper is also succinctly introduced, providing a brief overview of each section's content. “Literature review” provides a comprehensive review of relevant theories and methodologies in the domain of RA for SRPs; the current research landscape is systematically examined, highlighting the existing status and potential gaps, and shortcomings in previous research are analyzed, laying the groundwork for the paper's motivation and unique contributions. “Research methodology” delves into the detailed methodologies employed in the paper, encompassing data collection, screening criteria, preprocessing steps, and more; the tree-augmented Naive Bayes model is introduced, elucidating specific steps and the purpose behind MLR analysis. “Results and discussion” unfolds the results and discussions based on selected empirical cases, expounding on the representativeness and diversity of these cases and presenting an in-depth analysis of each factor's impact and interaction in the context of RA. “Achievements” and “Limitations and prospects” succinctly summarize the entire research endeavor and propose potential directions for further research and suggestions for improvement, providing a thoughtful conclusion to the paper.

Literature review

A review of RA for SRPs

In recent years, the advancement of SRPs management has led to the evolution of various RA methods tailored for SRPs. The escalating complexity of these projects poses a challenge for traditional methods, often falling short in comprehensively considering the intricate interplay among multiple factors and yielding incomplete assessment outcomes. Scholars, recognizing the pivotal role of factors such as project scale, budget investment, and team experience in influencing project risks, have endeavored to explore these dynamics from diverse perspectives. Siyal et al. 24 pioneered the development and testing of a model geared towards detecting SRPs risks. Chen et al. 25 underscored the significance of visual management in SRPs risk management, emphasizing its importance in understanding and mitigating project risks. Zhao et al. 26 introduced a classic approach based on cumulative prospect theory, offering an optional method to elucidate researchers’ psychological behaviors. Their study demonstrated the enhanced rationality achieved by utilizing the entropy weight method to derive attribute weight information under Pythagorean fuzzy sets. This approach was then applied to RA for SRPs, showcasing a model grounded in the proposed methodology. Suresh and Dillibabu 27 proposed an innovative hybrid fuzzy-based machine learning mechanism tailored for RA in software projects. This hybrid scheme facilitated the identification and ranking of major software project risks, thereby supporting decision-making throughout the software project lifecycle. Akhavan et al. 28 introduced a Bayesian network modeling framework adept at capturing project risks by calculating the uncertainty of project net present value. This model provided an effective means for analyzing risk scenarios and their impact on project success, particularly applicable in evaluating risks for innovative projects that had undergone feasibility studies.

A review of factors affecting SRPs

Within the realm of SRP management, the assessment and proficient management of project risks stand as imperative components. Consequently, a range of studies has been conducted to explore diverse methods and models aimed at enhancing the comprehension and decision support associated with project risks. Guan et al.29 introduced a new risk interdependence network model based on Monte Carlo simulation to support decision-makers in more effectively assessing project risks and planning risk management actions. They integrated interpretive structural modeling methods into the model to develop a hierarchical project risk interdependence network based on identified risks and their causal relationships. Vujović et al.30 provided a new method for research in project management through careful analysis of risk management in SRPs; to confirm the hypothesis, the study focused on educational organizations and outlined specific project management solutions in business systems, thereby improving the business and achieving positive business outcomes. Muñoz-La Rivera et al.31 described and classified the 100 identified factors based on the dimensions and aspects of the project, assessed their impact, and determined whether they were shaping or directly affecting the occurrence of research project accidents. These factors and their descriptions and classifications made significant contributions to improving the security creation of the system and generating training and awareness materials, fostering the development of a robust security culture within organizations. Nguyen et al. concentrated on the pivotal risk factors inherent in design-build projects within the construction industry; effective identification and management of these factors enhanced project success and fostered confidence among owners and contractors in adopting the design-build approach32. Their study offers valuable insights into RA in project management and the adoption of new contract forms. Nguyen and Le delineated risk factors influencing the quality of 20 civil engineering projects during the construction phase33. The top five risks identified encompass poor raw material quality, insufficient worker skills, deficient design documents and drawings, geographical challenges at construction sites, and inadequate capabilities of main contractors and subcontractors. Meanwhile, Nguyen and Phu Pham concentrated on office building projects in Ho Chi Minh City, Vietnam, to pinpoint key risk factors during the construction phase34. These factors were classified into five groups based on their likelihood and impact: financial, management, schedule, construction, and environmental. Findings revealed that critical factors affecting office building projects encompassed both natural elements (e.g., prolonged rainfall, storms, and climate impacts) and human factors (e.g., unstable soil, safety behavior, owner-initiated design changes), with schedule-related risks exerting the most significant influence during the construction phase of Ho Chi Minh City's office building projects. This provides construction and project management practitioners with fresh insights into risk management, aiding in the comprehensive identification, mitigation, and management of risk factors in office building projects.

While existing research has made notable strides in RA for SRPs, certain limitations persist. These studies exhibit limitations in quantifying the degree of influence of various factors and analyzing their interrelationships, thereby falling short of offering specific and actionable recommendations. Traditional methods, due to their inherent limitations, struggle to precisely quantify risk degrees and often overlook the intricate interplay among multiple factors. Consequently, there is an urgent need for a comprehensive method capable of quantifying the impact of diverse factors and revealing their correlations. In response to this exigency, this paper introduces the TANB model, whose unique advantages in the RA of scientific and technological projects are fully realized here. Tailored to address the characteristics of uncertainty and complexity, the model represents a significant leap forward in enhancing applicability and accuracy. In comparison with traditional methods, the TANB model exhibits greater flexibility and a heightened ability to capture dependencies between features, thereby elevating the overall performance of RA. This innovative method emerges as a more potent and reliable tool in the realm of scientific and technological project management, furnishing decision-makers with more comprehensive and accurate support for RA.

Research methodology

This paper centers on the latest iteration of ISO 31000, delving into the project risk management process and scrutinizing the RA for SRPs and their intricate interplay with associated factors. ISO 31000, an international risk management standard, endeavors to furnish businesses, organizations, and individuals with a standardized set of risk management principles and guidelines, defining best practices and establishing a common framework. The paper unfolds in distinct phases aligned with ISO 31000:

Risk Identification: Employing data collection and preparation, a spectrum of factors related to project size, budget investment, team member experience, project duration, and technical difficulty were identified.

RA: Utilizing the Naive Bayes algorithm, the paper conducts RA for SRPs, estimating the probability distribution of various factors influencing RA results.

Risk Response: The application of the Naive Bayes model is positioned as a means to respond to risks, facilitating the formulation of apt risk response strategies based on calculated probabilities.

Monitoring and Control: Through meticulous data collection, model training, and verification, the paper illustrates the steps involved in monitoring and controlling both data and models. Regular monitoring of identified risks and responses allows for adjustments when necessary.

Communication and Reporting: Maintaining effective communication throughout the project lifecycle ensures that stakeholders comprehend the status of project risks. Transparent reporting on discussions and outcomes contributes to an informed project environment.

Data collection and preparation

In this paper, a meticulous approach is undertaken to select representative research project cases, adhering to stringent screening criteria. Additionally, a thorough review of existing literature is conducted and tailored to the practical requirements of SRP management. According to Nguyen et al., these factors play a pivotal role in influencing the RA outcomes of SRPs35. Furthermore, research by He et al. underscored the significant impact of team members' experience on project success36. Therefore, in alignment with the research objectives and supported by the literature, this paper identifies variables such as project scale, budget investment, team member experience, project duration, and technical difficulty as the focal themes. To ensure the universality and scientific rigor of the findings, the paper adheres to stringent selection criteria during the project case selection process. After preliminary screening of SRPs completed in the past 5 years, considering factors such as project diversity, implementation scales, and achieved outcomes, five representative projects spanning diverse fields, including engineering, medicine, and information technology, are ultimately selected. These project cases are chosen based on their capacity to represent various scales and types of SRPs, each possessing a typical risk management process, thereby offering robust and comprehensive data support for the study. The subsequent phase involves detailed data collection on each chosen project, encompassing diverse dimensions such as project scale, budget investment, team member experience, project cycle, and technical difficulty. The collected data undergo meticulous preprocessing to ensure data quality and reliability. The preprocessing steps comprise data cleaning, the imputation of missing values, and the handling of outliers, culminating in the creation of a self-constructed dataset. The dataset encompasses over 500 SRPs across diverse disciplines and fields, ensuring statistically significant and universal outcomes. Particular emphasis is placed on ensuring dataset diversity, incorporating projects of varying scales, budgets, and team experience levels. This comprehensive coverage ensures the representativeness and credibility of the study on RA in SRPs. New influencing factors are introduced to expand the research scope, including project management quality (such as time management and communication efficiency), historical success rate, industry dynamics, and market demand. Detailed definitions and quantifications are provided for each new variable to facilitate comprehensive data processing and analysis. For project management quality, consideration is given to time management accuracy and to the frequency and quality of communication among team members. Historical success rate is determined by reviewing past project records and outcomes. Industry dynamics are assessed by consulting the latest scientific literature and patent information. Market demand is gauged through market research and user demand surveys. The introduction of these variables enriches the understanding of RA in SRPs and opens up avenues for further research exploration.

At the same time, the collected data are integrated and coded in order to apply the Naive Bayes algorithm and MLR analysis. For cases involving qualitative data, this paper uses appropriate coding methods to convert them into quantitative data for processing in the model. For example, for the qualitative feature of team member experience, numerical values are used to represent different experience levels, such as 0 representing beginners, 1 representing intermediate, and 2 representing advanced. The following is a specific sample data set example (Table 1). It shows the processed structured data, and the values in the table represent the specific characteristics of each project.
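To make the encoding step concrete, the sketch below shows one way such a mapping could be implemented in pandas. The column names, level labels, and values are illustrative assumptions, not the study's actual dataset schema:

```python
import pandas as pd

# Hypothetical sample of raw project records; the columns and labels
# are stand-ins for the study's actual schema.
raw = pd.DataFrame({
    "project_scale":     ["small", "large", "medium"],
    "team_experience":   ["beginner", "advanced", "intermediate"],
    "budget_investment": [120.0, 850.0, 430.0],  # already quantitative
})

# Ordinal encodings: 0 = beginner, 1 = intermediate, 2 = advanced, etc.
experience_map = {"beginner": 0, "intermediate": 1, "advanced": 2}
scale_map = {"small": 0, "medium": 1, "large": 2}

encoded = raw.assign(
    team_experience=raw["team_experience"].map(experience_map),
    project_scale=raw["project_scale"].map(scale_map),
)
print(encoded)
```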

Establishment of the Naive Bayesian model

The Naive Bayesian algorithm, a probabilistic and statistical classification method renowned for its effectiveness in analyzing and predicting multi-dimensional data, is employed in this paper to conduct the RA for SRPs. The application of the Naive Bayesian algorithm to RA for SRPs aims to discern the influence of various factors on the outcomes of RA. The Naive Bayesian algorithm, depicted in Fig.  1 , operates on the principles of Bayesian theorem, utilizing posterior probability calculations for classification tasks. The fundamental concept of this algorithm hinges on the assumption of independence among different features, embodying the “naivety” hypothesis. In the context of RA for SRPs, the Naive Bayesian algorithm is instrumental in estimating the probability distribution of diverse factors affecting the RA results, thereby enhancing the precision of risk estimates. In the Naive Bayesian model, the initial step involves the computation of posterior probabilities for each factor, considering the given RA result conditions. Subsequently, the category with the highest posterior probability is selected as the predictive outcome.

Figure 1. Naive Bayesian algorithm process.

In Fig.  1 , the data collection process encompasses vital project details such as project scale, budget investment, team member experience, project cycle, technical difficulty, and RA results. This meticulous collection ensures the integrity and precision of the dataset. Subsequently, the gathered data undergoes integration and encoding to convert qualitative data into quantitative form, facilitating model processing and analysis. Tailored to specific requirements, relevant features are chosen for model construction, accompanied by essential preprocessing steps like standardization and normalization. The dataset is then partitioned into training and testing sets, with the model trained on the former and its performance verified on the latter. Leveraging the training data, a Naive Bayesian model is developed to estimate probability distribution parameters for various features across distinct categories. Ultimately, the trained model is employed to predict new project features, yielding RA results.
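As a minimal illustration of this train/verify workflow, the following sketch fits a Gaussian Naive Bayes classifier (the baseline model described here; the TANB tree step is sketched later) with scikit-learn. The synthetic data and feature layout are assumptions; only the 80/20 split mirrors the paper's setup:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: 500 projects x 5 features
# (scale, budget, experience, cycle, difficulty); 3 risk levels.
X = rng.normal(size=(500, 5))
y = rng.integers(0, 3, size=500)

# 80/20 training/test split, as in the experimental setup.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = GaussianNB().fit(X_train, y_train)  # estimates mu, sigma per class
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("posteriors:", model.predict_proba(X_test[:1]))  # P(Y = j | X)
```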

Naive Bayesian models, in this context, are deployed to forecast diverse project risk levels. Let X symbolize the feature vector, encompassing project scale, budget investment, team member experience, project cycle, and technical difficulty. The objective is to predict the project's risk level, denoted as Y. Y assumes discrete values representing distinct risk levels. Applying the Bayesian theorem, the posterior probability P(Y|X) is computed, signifying the probability distribution of projects falling into different risk levels given the feature vector X. The fundamental equation governing the Naive Bayesian model is expressed as:

$$P(Y \mid X) = \frac{P(X \mid Y)\,P(Y)}{P(X)} \quad (1)$$

In Eq. ( 1 ), P(Y|X) represents the posterior probability, denoting the likelihood of the project belonging to a specific risk level. P(X|Y) signifies the class conditional probability, portraying the likelihood of the feature vector X occurring under known risk level conditions. P(Y) is the prior probability, reflecting the antecedent likelihood of the project pertaining to a particular risk level. P(X) acts as the evidence factor, encapsulating the likelihood of the feature vector X occurring.

The Naive Bayes, serving as the most elementary Bayesian network classifier, operates under the assumption of attribute independence given the class label c, as expressed in Eq. (2):

$$P(x \mid c) = \prod_{i=1}^{d} P(x_{i} \mid c) \quad (2)$$

The classification decision formula for Naive Bayes is articulated in Eq. (3):

$$c^{*} = \arg\max_{c} P(c) \prod_{i=1}^{d} P(x_{i} \mid c) \quad (3)$$

The Naive Bayes model, rooted in the assumption of conditional independence among attributes, often encounters deviations from reality. To address this limitation, the Tree-Augmented Naive Bayes (TANB) model extends the independence assumption by incorporating a first-order dependency maximum-weight spanning tree. TANB introduces a tree structure that more comprehensively models relationships between features, easing the constraints of the independence assumption and concurrently mitigating issues associated with multicollinearity. This extension bolsters its efficacy in handling intricate real-world data scenarios. TANB employs conditional mutual information \(I(X_{i}; X_{j} \mid C)\) to gauge the dependency between attributes \(X_{j}\) and \(X_{i}\), thereby constructing the maximum weighted spanning tree. In TANB, any attribute variable \(X_{i}\) is permitted to have at most one other attribute variable as its parent node in addition to the class variable, expressed as \(|Pa(X_{i})| \le 2\). The joint probability \(P_{con}(x, c)\) undergoes transformation using Eq. (4):

$$P_{con}(x, c) = P(c)\, P(x_{r} \mid c) \prod_{i \ne r} P(x_{i} \mid Pa(x_{i}), c) \quad (4)$$

In Eq. (4), \(x_{r}\) refers to the root node, whose only parent is the class variable, which can be expressed as Eq. (5):

$$P(x_{r} \mid Pa(x_{r}), c) = P(x_{r} \mid c) \quad (5)$$

The TANB classification decision equation is presented in Eq. (6):

$$c^{*} = \arg\max_{c} P(c)\, P(x_{r} \mid c) \prod_{i \ne r} P(x_{i} \mid Pa(x_{i}), c) \quad (6)$$
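For concreteness, the sketch below shows how the TANB structure-learning step could be implemented: empirical conditional mutual information between integer-coded attributes serves as the edge weight, and Prim's algorithm extracts the maximum-weight spanning tree. This is an illustrative reconstruction under the discrete-coding assumption, not the authors' code:

```python
import numpy as np
from itertools import combinations

def cond_mutual_info(xi, xj, c):
    """Empirical I(Xi; Xj | C) for integer-coded discrete arrays."""
    mi = 0.0
    for cv in np.unique(c):
        mask = c == cv
        pc = mask.mean()                       # P(C = cv)
        xi_c, xj_c = xi[mask], xj[mask]
        for a in np.unique(xi_c):
            for b in np.unique(xj_c):
                pab = np.mean((xi_c == a) & (xj_c == b))
                pa = np.mean(xi_c == a)
                pb = np.mean(xj_c == b)
                if pab > 0:
                    mi += pc * pab * np.log(pab / (pa * pb))
    return mi

def tan_parents(X, y):
    """Attribute parent for each feature under TANB (None for the root x_r);
    the class variable is implicitly a parent of every attribute."""
    d = X.shape[1]
    w = np.zeros((d, d))
    for i, j in combinations(range(d), 2):
        w[i, j] = w[j, i] = cond_mutual_info(X[:, i], X[:, j], y)
    parent = [None] * d
    in_tree, remaining = {0}, set(range(1, d))  # feature 0 chosen as root
    while remaining:                             # Prim's algorithm, maximizing
        i, j = max(((i, j) for i in in_tree for j in remaining),
                   key=lambda edge: w[edge])
        parent[j] = i        # heaviest available edge joins j to the tree
        in_tree.add(j)
        remaining.remove(j)
    return parent
```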

In the RA of SRPs, normal distribution parameters, such as the mean (μ) and standard deviation (σ), are estimated for each characteristic dimension (project scale, budget investment, team member experience, project cycle, and technical difficulty). This estimation allows the calculation of posterior probabilities for projects belonging to different risk levels under given feature vector conditions. For each feature dimension \(X_{i}\), the mean \(\mu_{i,j}\) and standard deviation \(\sigma_{i,j}\) under each risk level are computed, where i represents the feature dimension, and j denotes the risk level. Parameter estimation employs the maximum likelihood method, and the specific calculations are as follows:

$$\mu_{i,j} = \frac{1}{N_{j}} \sum_{k=1}^{N_{j}} x_{i,k} \quad (7)$$

$$\sigma_{i,j} = \sqrt{\frac{1}{N_{j}} \sum_{k=1}^{N_{j}} \left(x_{i,k} - \mu_{i,j}\right)^{2}} \quad (8)$$

In Eqs. (7) and (8), \(N_{j}\) represents the number of projects belonging to risk level j, and \(x_{i,k}\) denotes the value of the k-th project in feature dimension i. Finally, under a given feature vector, the posterior probability of a project with risk level j is calculated as Eq. (9):

$$P(Y = j \mid X) = \frac{1}{Z}\, P(Y = j) \prod_{i=1}^{d} P(X_{i} \mid Y = j) \quad (9)$$

In Eq. ( 9 ), d represents the number of feature dimensions, and Z is the normalization factor. \(P(Y=j)\) represents the prior probability of category j . \(P({X}_{i}\mid Y=j)\) represents the normal distribution probability density function of feature dimension i under category j . The risk level of a project can be predicted by calculating the posterior probabilities of different risk levels to achieve RA for SRPs.
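Eqs. (7)-(9) translate almost directly into code. A compact numpy/scipy sketch (variable names are illustrative assumptions) might look as follows:

```python
import numpy as np
from scipy.stats import norm

def fit_gaussians(X, y):
    """Maximum-likelihood mu_{i,j}, sigma_{i,j} per risk level j, Eqs. (7)-(8)."""
    levels = np.unique(y)
    mu    = {j: X[y == j].mean(axis=0) for j in levels}
    sigma = {j: X[y == j].std(axis=0)  for j in levels}
    prior = {j: float(np.mean(y == j)) for j in levels}   # P(Y = j)
    return mu, sigma, prior

def posterior(x, mu, sigma, prior):
    """P(Y = j | x) from Eq. (9); the shared normalizer is the factor Z."""
    scores = {j: prior[j] * np.prod(norm.pdf(x, mu[j], sigma[j]))
              for j in prior}
    z = sum(scores.values())                              # normalization Z
    return {j: s / z for j, s in scores.items()}
```

The predicted risk level is then simply the j with the largest posterior, mirroring the decision rule above.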

This paper integrates the probability estimation of the Naive Bayes model with actual project risk response strategies, enabling a more flexible and targeted response to various risk scenarios. Such integration offers decision support to project managers, enhancing their ability to address potential challenges effectively and ultimately improving the overall success rate of the project. This underscores the notion that risk management is not solely about problem prevention but stands as a pivotal factor contributing to project success.

MLR analysis

MLR analysis is used to validate the hypotheses and to explore in depth the impact of various factors on RA of SRPs. Based on the current state of research, the following research hypotheses are proposed.

Hypothesis 1: There is a positive relationship among project scale, budget investment, and team member experience and RA results. As the project scale, budget investment, and team member experience increase, the RA results also increase.

Hypothesis 2: There is a negative relationship between the project cycle and the RA results. Projects with shorter cycles may have higher RA results.

Hypothesis 3: There is a complex relationship between technical difficulty and RA results, which may be positive, negative, or bidirectional in some cases. Based on these hypotheses, an MLR model is established to analyze the impact of factors, such as project scale, budget investment, team member experience, project cycle, and technical difficulty, on RA results. The form of the MLR model is as follows:

$$Y = \beta_{0} + \beta_{1} X_{1} + \beta_{2} X_{2} + \beta_{3} X_{3} + \beta_{4} X_{4} + \beta_{5} X_{5} + \epsilon \quad (10)$$

In Eq. ( 10 ), Y represents the RA result (dependent variable). \({X}_{1}\) to \({X}_{5}\) represent factors, such as project scale, budget investment, team member experience, project cycle, and technical difficulty (independent variables). \({\beta }_{0}\) to \({\beta }_{5}\) are the regression coefficients, which represent the impact of various factors on the RA results. \(\epsilon\) represents a random error term. The model structure is shown in Fig.  2 .

Figure 2. Schematic diagram of an MLR model.

In Fig.  2 , the MLR model is employed to scrutinize the influence of various independent variables on the outcomes of RA. In this specific context, the independent variables encompass project size, budget investment, team member experience, project cycle, and technical difficulty, all presumed to impact the project’s RA results. Each independent variable is denoted as a node in the model, with arrows depicting the relationships between these factors. In an MLR model, the arrow direction signifies causality, illustrating the influence of an independent variable on the dependent variable.

When conducting MLR analysis, it is necessary to estimate the parameters \(\beta\) in the regression model. These parameters determine the relationship between the independent and dependent variables. Here, the Ordinary Least Squares (OLS) method is applied to estimate these parameters. The OLS method is a commonly used parameter estimation method aimed at finding parameter values that minimize the sum of squared residuals between model predictions and actual observations. The steps are as follows. Firstly, based on the general form of an MLR model, it is assumed that there is a linear relationship between the independent and dependent variables; this relationship can be represented by a linear equation, which includes the regression coefficients β and the independent variables X. For each observation, the difference between its predicted and actual values is calculated, which is called the residual. The residual \(e_{i}\) can be expressed as:

$$e_{i} = Y_{i} - \hat{Y}_{i} \quad (11)$$

In Eq. (11), \(Y_{i}\) is the actual observation value, and \(\hat{Y}_{i}\) is the value predicted by the model. The goal of the OLS method is to adjust the regression coefficients \(\beta\) to minimize the sum of squared residuals of all observations. This can be achieved by solving an optimization problem whose objective function is the sum of squared residuals, as given in Eq. (12):

$$\min_{\beta} \sum_{i=1}^{n} e_{i}^{2} = \min_{\beta} \sum_{i=1}^{n} \left(Y_{i} - \hat{Y}_{i}\right)^{2} \quad (12)$$

Then, the estimated value of the regression coefficient \(\beta\) that minimizes the sum of squared residuals can be obtained by taking the derivative of the objective function and setting the derivative to zero. The estimated values of the parameters can be obtained by solving this system of equations. The final estimated regression coefficient can be expressed as:

$$\hat{\beta} = \left(X^{T} X\right)^{-1} X^{T} Y \quad (13)$$

In Eq. (13), X represents the independent variable matrix, Y represents the dependent variable vector, \((X^{T}X)^{-1}\) is the inverse of the matrix \(X^{T}X\), and \(\hat{\beta}\) is the parameter estimation vector.

Specifically, solving for the estimated value of the regression coefficient \(\beta\) requires matrix operations and statistical analysis. The collected project data are substituted into the model, and the residuals are calculated. Then, the steps of the OLS method are applied to obtain parameter estimates. These parameter estimates are used to establish an MLR model to predict RA results and further analyze the influence of different factors.
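The closed-form estimate of Eq. (13) can be sketched in a few lines of numpy on synthetic data. The coefficients below are illustrative stand-ins, and `np.linalg.solve` is used instead of an explicit matrix inverse for numerical stability:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Design matrix: intercept column plus X1..X5 (scale, budget, experience,
# cycle, difficulty); the data and true coefficients are synthetic.
X = np.column_stack([np.ones(n), rng.normal(size=(n, 5))])
beta_true = np.array([0.20, 0.31, 0.68, 0.51, -0.15, 0.10])
Y = X @ beta_true + rng.normal(scale=0.1, size=n)   # Eq. (10) plus noise

# beta_hat = (X^T X)^{-1} X^T Y, Eq. (13); solving the normal equations
# avoids forming the inverse explicitly.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
residuals = Y - X @ beta_hat                        # e_i, Eq. (11)
print("estimates:", np.round(beta_hat, 3))
print("sum of squared residuals:", (residuals ** 2).sum())  # Eq. (12)
```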

The degree of influence of different factors on the RA results can be determined by analyzing the value of the regression coefficient β. A positive \(\upbeta\) value indicates that the factor has a positive impact on the RA results, while a negative \(\upbeta\) value indicates that the factor has a negative impact on the RA results. Additionally, hypothesis testing can determine whether each factor is significant in the RA results.

The TANB model proposed in this paper extends the traditional naive Bayes model by incorporating conditional dependencies between attributes to enhance the representation of feature interactions. While the traditional naive Bayes model assumes feature independence, real-world scenarios often involve interdependencies among features. To address this, the TANB model is introduced. The TANB model introduces a tree structure atop the naive Bayes model to more accurately model feature relationships, overcoming the limitation of assuming feature independence. Specifically, the TANB model constructs a maximum weight spanning tree to uncover conditional dependencies between features, thereby enabling the model to better capture feature interactions.

Assessment indicators

To comprehensively assess the efficacy of the proposed TANB model in the RA for SRPs, a self-constructed dataset serves as the data source for this experimental evaluation, as outlined in Table 1. The dataset is segregated into training (80%) and test (20%) sets. The assessment indicators cover the accuracy, precision, recall rate, F1 value, and Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) of the model. The following are the definitions of each assessment indicator, expressed in terms of True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN). Accuracy is the proportion of correctly predicted samples among all samples, (TP + TN)/(TP + TN + FP + FN). Precision is the proportion of predicted positive samples that are actually positive, TP/(TP + FP). The recall rate is the proportion of actual positive samples that are correctly predicted positive, TP/(TP + FN). The F1 value is the harmonic mean of precision and recall, 2 × Precision × Recall/(Precision + Recall), considering both the precision and the comprehensiveness of the model. The area under the ROC curve measures the classification performance of the model, and a larger AUC value indicates better model performance. The ROC curve depicts the relationship between the True Positive Rate and the False Positive Rate under different thresholds, and the AUC value can be obtained by accumulating the areas of the small rectangles under the ROC curve. The confusion matrix is used to display the prediction performance of the model in different categories, tabulating TP, TN, FP, and FN.
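These indicators are readily computed with scikit-learn's metrics module; the following sketch uses toy labels and scores for the binary high-risk-versus-not case (all values are illustrative):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

# Toy ground-truth labels and model scores for high-risk (1) vs. not (0).
y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.8, 0.4, 0.3, 0.1, 0.7, 0.6])
y_pred  = (y_score >= 0.5).astype(int)      # classification threshold 0.5

print("accuracy: ", accuracy_score(y_true, y_pred))   # (TP+TN)/all
print("precision:", precision_score(y_true, y_pred))  # TP/(TP+FP)
print("recall:   ", recall_score(y_true, y_pred))     # TP/(TP+FN)
print("f1:       ", f1_score(y_true, y_pred))         # harmonic mean
print("auc:      ", roc_auc_score(y_true, y_score))   # area under ROC
print(confusion_matrix(y_true, y_pred))               # [[TN, FP], [FN, TP]]
```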

By calculating the above assessment indicators, the performance of TANB in RA for SRPs can be comprehensively assessed, giving a fuller picture of the model's advantages, disadvantages, and applicability.

Results and discussion

Accuracy analysis of the Naive Bayesian algorithm

On the dataset of this paper, Fig. 3 reveals the performance of the TANB algorithm under different assessment indicators.

Figure 3. Performance assessment of the TANB algorithm on different projects.

From Fig.  3 , the TANB algorithm performs well in various projects, ranging from 0.87 to 0.911 in accuracy. This means that the overall accuracy of the model in predicting project risks is quite high. The precision also maintains a high level in various projects, ranging from 0.881 to 0.923, indicating that the model performs well in classifying high-risk categories. The recall rate ranges from 0.872 to 0.908, indicating that the model can effectively capture high-risk samples. Meanwhile, the AUC values in each project are relatively high, ranging from 0.905 to 0.931, which once again emphasizes the effectiveness of the model in risk prediction. From multiple assessment indicators, such as accuracy, precision, recall, F1 value, and AUC, the TANB algorithm has shown good risk prediction performance in representative projects. The performance assessment results of the TANB algorithm under different feature dimensions are plotted in Figs.  4 , 5 , 6 and 7 .

Figure 4. Prediction accuracy of the TANB algorithm on different budget investments.

Figure 5. Prediction accuracy of the TANB algorithm on different team experiences.

Figure 6. Prediction accuracy of the TANB algorithm at different risk levels.

Figure 7. Prediction accuracy of the TANB algorithm on different project scales.

From Figs.  4 , 5 , 6 and 7 , as the level of budget investment increases, the accuracy of most projects also shows an increasing trend. Especially in cases of high budget investment, the accuracy of the project is generally high. This may mean that a higher budget investment helps to reduce project risks, thereby improving the prediction accuracy of the TANB algorithm. It can be observed that team experience also affects the accuracy of the model. Projects with high team experience exhibit higher accuracy in TANB algorithms. This may indicate that experienced teams can better cope with project risks to improve the performance of the model. When budget investment and team experience are low, accuracy is relatively low. This may imply that budget investment and team experience can complement each other to affect the model performance.

There are certain differences in the accuracy of projects under different risk levels. Generally speaking, the accuracy of high-risk and medium-risk projects is relatively high, while the accuracy of low-risk projects is relatively low. This may be because high-risk and medium-risk projects require more accurate predictions, resulting in higher accuracy. Similarly, project scale also affects the performance of the model. Large-scale and medium-scale projects exhibit high accuracy in TANB algorithms, while small-scale projects have relatively low accuracy. This may be because the risks of large-scale and medium-scale projects are easier to identify and predict to promote the performance of the model. In high-risk and large-scale projects, accuracy is relatively high. This may indicate that the impact of project scale is more significant in specific risk scenarios.

Figure  8 further compares the performance of the TANB algorithm proposed here with other similar algorithms.

Figure 8. Performance comparison of different algorithms in RA of SRPs.

As depicted in Fig.  8 , the TANB algorithm attains an accuracy and precision of 0.912 and 0.920, respectively, surpassing other algorithms. It excels in recall rate and F1 value, registering 0.905 and 0.915, respectively, outperforming alternative algorithms. These findings underscore the proficiency of the TANB algorithm in comprehensively identifying high-risk projects while sustaining high classification accuracy. Moreover, the algorithm achieves an AUC of 0.930, indicative of its exceptional predictive prowess in sample classification. Thus, the TANB algorithm exhibits notable potential for application, particularly in scenarios demanding the recognition and comprehensiveness requisite for high-risk project identification. The evaluation results of the TANB model in predicting project risk levels are presented in Table 2 :

Table 2 demonstrates that the TANB model surpasses the traditional Naive Bayes model across multiple evaluation metrics, including accuracy, precision, and recall. This signifies that, by accounting for feature interdependence, the TANB model can more precisely forecast project risk levels. Furthermore, leveraging the model’s predictive outcomes, project managers can devise tailored risk mitigation strategies corresponding to various risk scenarios. For example, in high-risk projects, more assertive measures can be implemented to address risks, while in low-risk projects, risks can be managed more cautiously. This targeted risk management approach contributes to enhancing project success rates, thereby ensuring the seamless advancement of SRPs.

The exceptional performance of the TANB model in specific scenarios derives from its distinctive characteristics and capabilities. Firstly, compared to traditional Naive Bayes models, the TANB model can better capture the dependencies between attributes. In project RA, project features often exhibit complex interactions. The TANB model introduces first-order dependencies between attributes, allowing features to influence each other, thereby more accurately reflecting real-world situations and improving risk prediction precision. Secondly, the TANB model demonstrates strong adaptability and generalization ability in handling multidimensional data. SRPs typically involve data from multiple dimensions, such as project scale, budget investment, and team experience. The TANB model effectively processes these multidimensional data, extracts key information, and achieves accurate RA for projects. Furthermore, the paper explores the potential of using hybrid models or ensemble learning methods to further enhance model performance. By combining other machine learning algorithms, such as random forests and support vector regressors with sigmoid kernel, through ensemble learning, the shortcomings of individual models in specific scenarios can be overcome, thus improving the accuracy and robustness of RA. For example, in the study, we compared the performance of the TANB model with other algorithms in RA, as shown in Table 3 .

Table 3 illustrates that the TANB model surpasses other models in terms of accuracy, precision, recall, F1 value, and AUC value, further confirming its superiority and practicality in RA. Therefore, the TANB model holds significant application potential in SRPs, offering effective decision support for project managers to better evaluate and manage project risks, thereby enhancing the likelihood of project success.

Analysis of the degree of influence of different factors

Table 4 analyzes the degree of influence and interaction of different factors.

In Table 4 , the regression analysis results reveal that budget investment and team experience exert a significantly positive impact on RA outcomes. This suggests that increasing budget allocation and assembling a team with extensive experience can enhance project RA outcomes. Specifically, the regression coefficient for budget investment is 0.68, and for team experience, it is 0.51, both demonstrating significant positive effects (P < 0.05). The P-values are all significantly less than 0.05, indicating a significant impact. The impact of project scale is relatively small, at 0.31, and its P-value is also much less than 0.05. The degree of interaction influence is as follows. The impact of interaction terms is also significant, especially the interaction between budget investment and team experience and the interaction between budget investment and project scale. The P value of the interaction between budget investment and project scale is 0.002, and the P value of the interaction between team experience and project scale is 0.003. The P value of the interaction among budget investment, team experience, and project scale is 0.005. So, there are complex relationships and interactions among different factors, and budget investment and team experience significantly affect the RA results. However, the budget investment and project scale slightly affect the RA results. Project managers should comprehensively consider the interactive effects of different factors when making decisions to more accurately assess the risks of SRPs.

The interaction between team experience and budget investment

The results of the interaction between team experience and budget investment are demonstrated in Table 5 .

From Table 5 , the degree of interaction impact can be obtained. Budget investment and team experience, along with the interaction between project scale and technical difficulty, are critical factors in risk mitigation. Particularly in scenarios characterized by large project scales and high technical difficulties, adequate budget allocation and a skilled team can substantially reduce project risks. As depicted in Table 5 , under conditions of high team experience and sufficient budget investment, the average RA outcome is 0.895 with a standard deviation of 0.012, significantly lower than assessment outcomes under other conditions. This highlights the synergistic effects of budget investment and team experience in effectively mitigating risks in complex project scenarios. The interaction between team experience and budget investment has a significant impact on RA results. Under high team experience, the impact of different budget investment levels on RA results is not significant, but under medium and low team experience, the impact of different budget investment levels on RA results is significantly different. The joint impact of team experience and budget investment is as follows. Under high team experience, the impact of budget investment is relatively small, possibly because high-level team experience can compensate for the risks brought by insufficient budget to some extent. Under medium and low team experience, the impact of budget investment is more significant, possibly because the lack of team experience makes budget investment play a more important role in RA. Therefore, team experience and budget investment interact in RA of SRPs. They need to be comprehensively considered in project decision-making. High team experience can compensate for the risks brought by insufficient budget to some extent, but in the case of low team experience, the impact of budget investment on RA is more significant. An exhaustive consideration of these factors and their interplay is imperative for effectively assessing the risks inherent in SRPs. Merely focusing on budget allocation or team expertise may not yield a thorough risk evaluation. Project managers must scrutinize the project’s scale, technical complexity, and team proficiency, integrating these aspects with budget allocation and team experience. This holistic approach fosters a more precise RA and facilitates the development of tailored risk management strategies, thereby augmenting the project’s likelihood of success. In conclusion, acknowledging the synergy between budget allocation and team expertise, in conjunction with other pertinent factors, is pivotal in the RA of SRPs. Project managers should adopt a comprehensive outlook to ensure sound decision-making and successful project execution.

Risk mitigation strategies

To enhance the discourse on project risk management in this paper, a dedicated section on risk mitigation strategies has been included. Leveraging the insights gleaned from the predictive model regarding identified risk factors and their corresponding risk levels, targeted risk mitigation measures are proposed.

Primarily, given the significant influence of budget investment and team experience on project RA outcomes, project managers are advised to prioritize these factors and devise pertinent risk management strategies.

For risks stemming from budget constraints, the adoption of flexible budget allocation strategies is advocated. This may involve optimizing project expenditures, establishing financial reserves, or seeking additional funding avenues.

In addressing risks attributed to inadequate team experience, measures such as enhanced training initiatives, engagement of seasoned project advisors, or collaboration with experienced teams can be employed to mitigate the shortfall in expertise.

Furthermore, recognizing the impact of project scale, duration, and technical complexity on RA outcomes, project managers are advised to holistically consider these factors during project planning. This entails adjusting project scale as necessary, establishing realistic project timelines, and conducting thorough assessments of technical challenges prior to project commencement.

These risk mitigation strategies aim to equip project managers with a comprehensive toolkit for effectively identifying, assessing, and mitigating risks inherent in SRPs.

This paper delves into the efficacy of the TANB algorithm in project risk prediction. The findings indicate that the algorithm demonstrates commendable performance across diverse projects, boasting high precision, recall rates, and AUC values, thereby outperforming analogous algorithms. This aligns with the perspectives espoused by Asadullah et al. 37 . Particular emphasis was placed on assessing the impact of variables such as budget investment levels, team experience, and project size on algorithmic performance. Notably, heightened budget investment and extensive team experience positively influenced the results, with project size exerting a comparatively minor impact. Regression analysis elucidates the magnitude and interplay of these factors, underscoring the predominant influence of budget investment and team experience on RA outcomes, whereas project size assumes a relatively marginal role. This underscores the imperative for decision-makers in projects to meticulously consider the interrelationships between these factors for a more precise assessment of project risks, echoing the sentiments expressed by Testorelli et al. 38 .

In sum, this paper furnishes a holistic comprehension of the Naive Bayes algorithm’s application in project risk prediction, offering robust guidance for practical project management. The paper’s tangible applications are chiefly concentrated in the realm of RA and management for SRPs. Such insights empower managers in SRPs to navigate risks with scientific acumen, thereby enhancing project success rates and performance. The paper advocates several strategic measures for SRPs management: prioritizing resource adjustments and team training to elevate the professional skill set of team members in coping with the impact of team experience on risks; implementing project scale management strategies to mitigate potential risks by detailed project stage division and stringent project planning; addressing technical difficulty as a pivotal risk factor through assessment and solution development strategies; incorporating project cycle adjustment and flexibility management to accommodate fluctuations and mitigate associated risks; and ensuring the integration of data quality management strategies to bolster data reliability and enhance model accuracy. These targeted risk responses aim to improve the likelihood of project success and ensure the seamless realization of project objectives.

Achievements

In this paper, the application of the Naive Bayesian algorithm in RA of SRPs is deeply explored, and the influence of various factors on RA results, together with their interrelationships, is comprehensively investigated. The research results demonstrate the good accuracy and applicability of the Naive Bayesian algorithm in RA of science and technology projects. Through probability estimation, the risk level of a project can be estimated more accurately, which provides a new decision support tool for the project manager. It is found that budget investment and team experience are the most significant factors affecting the RA results, with regression coefficients of 0.68 and 0.51, respectively. However, the influence of project scale on the RA results is relatively small, with a regression coefficient of 0.31. Especially in the case of low team experience, budget investment has a more significant impact on the RA results. However, it should also be admitted that there are some limitations in the paper. First, the case data used are limited and the sample size is relatively small, which may affect the generalization ability of the research results. Second, the factors considered may not be comprehensive; other factors that may affect RA, such as market changes and policies and regulations, are not considered.

The paper makes several key contributions. Firstly, it applies the Naive Bayes algorithm to assess the risks associated with SRPs, proposing the TANB and validating its effectiveness empirically. The introduction of the TANB model broadens the application scope of the Naive Bayes algorithm in scientific research risk management, offering novel methodologies for project RA. Secondly, the study delves into the impact of various factors on RA for SRPs through MLR analysis, highlighting the significance of budget investment and team experience. The results underscore the positive influence of budget investment and team experience on RA outcomes, offering valuable insights for project decision-making. Additionally, the paper examines the interaction between team experience and budget investment, revealing a nuanced relationship between the two in RA. This finding underscores the importance of comprehensively considering factors such as team experience and budget investment in project decision-making to achieve more accurate RA. In summary, the paper provides crucial theoretical foundations and empirical analyses for SRPs risk management by investigating RA and its influencing factors in depth. The research findings offer valuable guidance for project decision-making and risk management, bolstering efforts to enhance the success rate and efficiency of SRPs.

This paper distinguishes itself from existing research by conducting an in-depth analysis of the intricate interactions among various factors, offering more nuanced and specific RA outcomes. The primary objective extends beyond problem exploration, aiming to broaden the scope of scientific evaluation and research practice through the application of statistical language. This research goal endows the paper with considerable significance in the realm of science and technology project management. In comparison to traditional methods, this paper scrutinizes project risk with greater granularity, furnishing project managers with more actionable suggestions. The empirical analysis validates the effectiveness of the proposed method, introducing a fresh perspective for decision-making in science and technology projects. Future research endeavors will involve expanding the sample size and accumulating a more extensive dataset of SRPs to enhance the stability and generalizability of results. Furthermore, additional factors such as market demand and technological changes will be incorporated to comprehensively analyze elements influencing the risks of SRPs. Through these endeavors, the aim is to provide more precise and comprehensive decision support to the field of science and technology project management, propelling both research and practice in this domain to new heights.

Limitations and prospects

This paper, while employing advanced methodologies like TANB models, acknowledges inherent limitations that warrant consideration. Firstly, like any model, TANB has its constraints, and predictions in specific scenarios may be subject to these limitations. Subsequent research endeavors should explore alternative advanced machine learning and statistical models to enhance the precision and applicability of RA. Secondly, the focus of this paper predominantly centers on the RA for SRPs. Given the unique characteristics and risk factors prevalent in projects across diverse fields and industries, the generalizability of the paper results may be limited. Future research can broaden the scope of applicability by validating the model across various fields and industries. The robustness and generalizability of the model can be further ascertained through the incorporation of extensive real project data in subsequent research. Furthermore, future studies can delve into additional data preprocessing and feature engineering methods to optimize model performance. In practical applications, the integration of research outcomes into SRPs management systems could provide more intuitive and practical support for project decision-making. These avenues represent valuable directions for refining and expanding the contributions of this research in subsequent studies.

Data availability

All data generated or analysed during this study are included in this published article [and its Supplementary Information files].



Author information

Authors and Affiliations

Institute of Policy Studies, Lingnan University, Tuen Mun, 999077, Hong Kong, China

Xuying Dong & Wanlin Qiu


Contributions

Xuying Dong and Wanlin Qiu jointly developed the research questions and the Naive Bayes-based risk assessment methodology at the outset of the project. Both authors were responsible for data collection and preparation, ensuring the quality and accuracy of the data used in the research. Together they developed the TANB model, verified its effectiveness and performance, and applied it in the empirical study. During the experimental and data-analysis phase, their collaboration was key to validating the model and accurately assessing the risks of scientific research projects. Both authors drafted the paper, including the descriptions of methods, experiments, and results, and participated in the review and revision process to ensure the accuracy and completeness of the findings.

Corresponding author

Correspondence to Wanlin Qiu.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Dong, X., Qiu, W. A case study on the relationship between risk assessment of scientific research projects and related factors under the Naive Bayesian algorithm. Sci Rep 14 , 8244 (2024). https://doi.org/10.1038/s41598-024-58341-y


Received: 30 October 2023

Accepted: 27 March 2024

Published: 08 April 2024

DOI: https://doi.org/10.1038/s41598-024-58341-y


Keywords

  • Naive Bayesian algorithm
  • Scientific research projects
  • Risk assessment
  • Factor analysis
  • Probability estimation
  • Decision support
  • Data-driven decision-making


Enterprise Risk Management Case Studies: Heroes and Zeros

By Andy Marker | April 7, 2021


We’ve compiled more than 20 case studies of enterprise risk management programs that illustrate how companies can prevent significant losses yet take risks with more confidence.   

Included on this page, you’ll find case studies and examples by industry, case studies of major risk scenarios (and company responses), and examples of ERM successes and failures.

Enterprise Risk Management Examples and Case Studies

With enterprise risk management (ERM) , companies assess potential risks that could derail strategic objectives and implement measures to minimize or avoid those risks. You can analyze examples (or case studies) of enterprise risk management to better understand the concept and how to properly execute it.

The collection of examples and case studies on this page illustrates common risk management scenarios by industry, principle, and degree of success. For a basic overview of enterprise risk management, including major types of risks, how to develop policies, and how to identify key risk indicators (KRIs), read “Enterprise Risk Management 101: Programs, Frameworks, and Advice from Experts.”

Enterprise Risk Management Framework Examples

An enterprise risk management framework is a system by which you assess and mitigate potential risks. The framework varies by industry, but most include roles and responsibilities, a methodology for risk identification, a risk appetite statement, risk prioritization, mitigation strategies, and monitoring and reporting.

To learn more about enterprise risk management and find examples of different frameworks, read our “Ultimate Guide to Enterprise Risk Management.”

Enterprise Risk Management Examples and Case Studies by Industry

Though every firm faces unique risks, those in the same industry often share similar risks. By understanding industry-wide common risks, you can create and implement response plans that offer your firm a competitive advantage.

Enterprise Risk Management Example in Banking

Toronto-headquartered TD Bank organizes its risk management around two pillars: a risk management framework and risk appetite statement. The enterprise risk framework defines the risks the bank faces and lays out risk management practices to identify, assess, and control risk. The risk appetite statement outlines the bank’s willingness to take on risk to achieve its growth objectives. Both pillars are overseen by the risk committee of the company’s board of directors.  

Risk management frameworks were an important part of the International Organization for Standardization’s 31000 standard when it was first published in 2009, and the standard has been updated since then. It provides universal guidelines for risk management programs.

Risk management frameworks also resulted from the efforts of the Committee of Sponsoring Organizations of the Treadway Commission (COSO). The group was formed to fight corporate fraud and included risk management as a dimension. 

Once TD completes the ERM framework, the bank moves on to the risk appetite statement.

The bank, which built a large U.S. presence through major acquisitions, determined that it will only take on risks that meet the following three criteria:

  • The risk fits the company’s strategy, and TD can understand and manage those risks. 
  • The risk does not render the bank vulnerable to significant loss from a single risk.
  • The risk does not expose the company to potential harm to its brand and reputation. 

Some of the major risks the bank faces include strategic risk, credit risk, market risk, liquidity risk, operational risk, insurance risk, capital adequacy risk, regulator risk, and reputation risk. Managers detail these categories in a risk inventory. 

The risk framework and appetite statement, which are tracked on a dashboard against metrics such as capital adequacy and credit risk, are reviewed annually. 

TD uses a three lines of defense (3LOD) strategy, an approach widely favored by ERM experts, to guard against risk. The three lines are as follows:

  • A business unit and corporate policies that create controls, as well as manage and monitor risk
  • Standards and governance that provide oversight and review of risks and compliance with the risk appetite and framework 
  • Internal audits that provide independent checks and verification that risk-management procedures are effective

Enterprise Risk Management Example in Pharmaceuticals

Drug companies’ risks include threats around product quality and safety, regulatory action, and consumer trust. To avoid these risks, ERM experts emphasize the importance of making sure that strategic goals do not conflict. 

For Britain’s GlaxoSmithKline, such a conflict led to a breakdown in risk management, among other issues. In the early 2000s, the company was striving to increase sales and profitability while also ensuring safe and effective medicines. One risk the company faced was a failure to meet current good manufacturing practices (CGMP) at its plant in Cidra, Puerto Rico. 

CGMP includes implementing oversight and controls of manufacturing, as well as managing the risk and confirming the safety of raw materials and finished drug products. Noncompliance with CGMP can result in escalating consequences, ranging from warnings to recalls to criminal prosecution. 

GSK’s unit pleaded guilty and paid $750 million in 2010 to resolve U.S. charges related to drugs made at the Cidra plant, which the company later closed. A fired GSK quality manager alerted regulators and filed a whistleblower lawsuit in 2004. In announcing the consent decree, the U.S. Department of Justice said the plant had a history of bacterial contamination and multiple drugs created there in the early 2000s violated safety standards.

According to the whistleblower, GSK’s ERM process failed in several respects to act on signs of non-compliance with CGMP. The company received warning letters from the U.S. Food and Drug Administration in 2001 about the plant’s practices, but did not resolve the issues. 

Additionally, the company didn’t act on the quality manager’s compliance report, which advised GSK to close the plant for two weeks to fix the problems and notify the FDA. According to court filings, plant staff merely skimmed rejected products and sold them on the black market. They also scraped by hand the inside of an antibiotic tank to get more product and, in so doing, introduced bacteria into the product.

Enterprise Risk Management Example in Consumer Packaged Goods

Mars Inc., an international candy and food company, developed an ERM process. The company piloted and deployed the initiative through workshops with geographic, product, and functional teams from 2003 to 2012. 

Driven by a desire to frame risk as an opportunity and to work within the company’s decentralized structure, Mars created a process that asked participants to identify potential risks and vote on which had the highest probability. The teams listed risk mitigation steps, then ranked and color-coded them according to probability of success. 

Larry Warner, a Mars risk officer at the time, illustrated this process in a case study. An initiative to increase direct-to-consumer shipments by 12 percent was colored green, indicating a 75 percent or greater probability of achievement. The initiative to bring a new plant online by the end of Q3 was coded red, meaning less than a 50 percent probability of success.

The company’s results were hurt by a surprise at an operating unit, stemming from a risk that had been coded red in a unit workshop. Executives had agreed that some red-level risk was to be expected, but they decided that whenever a unit encountered a red issue, it must be communicated upward as soon as it was identified. This became a rule.

This process led to the creation of an ERM dashboard that listed initiatives in priority order, with the profile of each risk faced in the quarter, the risk profile trend, and a comment column for a year-end view. 

According to Warner, the key factors of success for ERM at Mars are as follows:

  • The initiative focused on achieving operational and strategic objectives rather than compliance, which refers to adhering to established rules and regulations.
  • The program evolved, often based on requests from business units, and incorporated continuous improvement. 
  • The ERM team did not overpromise. It set realistic objectives.
  • The ERM team periodically surveyed business units, management teams, and board advisers.

Enterprise Risk Management Example in Retail

Walmart is the world’s biggest retailer. As such, the company understands that its risk makeup is complex, given the geographic spread of its operations and its large number of stores, vast supply chain, and high profile as an employer and buyer of goods. 

In the 1990s, the company sought a simplified strategy for assessing risk and created an enterprise risk management plan with five steps founded on these four questions:

  • What are the risks?
  • What are we going to do about them?
  • How will we know if we are raising or decreasing risk?
  • How will we show shareholder value?

The process follows these five steps:

  • Risk Identification: Senior Walmart leaders meet in workshops to identify risks, which are then plotted on a graph of probability vs. impact. Doing so helps to prioritize the biggest risks. The executives then look at seven risk categories (both internal and external): legal/regulatory, political, business environment, strategic, operational, financial, and integrity. Many ERM pros use risk registers to evaluate and determine the priority of risks. You can download templates that help correlate risk probability and potential impact in “Free Risk Register Templates.”
  • Risk Mitigation: Teams that include operational staff in the relevant area meet. They use existing inventory procedures to address the risks and determine if the procedures are effective.
  • Action Planning: A project team identifies and implements next steps over the several months to follow.
  • Performance Metrics: The group develops metrics to measure the impact of the changes. They also look at trends of actual performance compared to goal over time.
  • Return on Investment and Shareholder Value: In this step, the group assesses the changes’ impact on sales and expenses to determine if the moves improved shareholder value and ROI.

To develop your own risk management planning, you can download a customizable template in “Risk Management Plan Templates.”
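The article does not spell out how Walmart scores risks on its probability-vs-impact graph, but a common way to reduce such a graph to a priority list is to rank by probability times impact. Here is a minimal sketch with hypothetical register entries:

```python
# Hypothetical risk register entries: (name, probability 0-1, impact on a 1-5 scale).
risks = [
    ("supply chain disruption", 0.4, 5),
    ("regulatory fine",         0.2, 4),
    ("data breach",             0.1, 5),
    ("store staffing shortage", 0.7, 2),
]

# Rank by expected severity (probability x impact), the usual way a
# probability-vs-impact graph is collapsed into a priority list.
for name, prob, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name:25s} score = {prob * impact:.2f}")
```

Running this puts the supply chain disruption first (score 2.00) and the data breach last (0.50), which is the same ordering a workshop would read off the upper-right quadrant of the graph.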

Enterprise Risk Management Example in Agriculture

United Grain Growers (UGG), a Canadian grain distributor that now is part of Glencore Ltd., was hailed as an ERM innovator and became the subject of business school case studies for its enterprise risk management program. This initiative addressed the risks associated with weather for its business. Crop volume drove UGG’s revenue and profits. 

In the late 1990s, UGG identified its major unaddressed risks. Using almost a century of data, risk analysts found that extreme weather events occurred 10 times as frequently as previously believed. The company worked with its insurance broker and the Swiss Re Group on a solution that added grain-volume risk (resulting from weather fluctuations) to its other insured risks, such as property and liability, in an integrated program. 

The result was insurance that protected grain-handling earnings, which comprised half of UGG’s gross profits. The greater financial stability significantly enhanced the firm’s ability to achieve its strategic objectives. 

Since then, the number and types of instruments to manage weather-related risks has multiplied rapidly. For example, over-the-counter derivatives, such as futures and options, began trading in 1997. The Chicago Mercantile Exchange now offers weather futures contracts on 12 U.S. and international cities. 

Weather derivatives are linked to climate factors such as rainfall or temperature, and they hedge different kinds of risks than do insurance. These risks are much more common (e.g., a cooler-than-normal summer) than the earthquakes and floods that insurance typically covers. And the holders of derivatives do not have to incur any damage to collect on them.
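As an illustration of how such instruments settle, a heating-degree-day (HDD) contract pays on a temperature index rather than on demonstrated damage. The sketch below uses a hypothetical strike, tick size, and cap; real contracts also specify measurement stations, periods, and settlement terms.

```python
def heating_degree_days(daily_avg_temps_f, base=65.0):
    """Cumulative HDDs: sum of max(0, base - daily average temperature)."""
    return sum(max(0.0, base - t) for t in daily_avg_temps_f)

def hdd_put_payout(cumulative_hdd, strike, tick, cap=None):
    """Pays when the season is milder than expected (fewer HDDs than the strike)."""
    payout = tick * max(0.0, strike - cumulative_hdd)
    return min(payout, cap) if cap is not None else payout

# Hypothetical contract: strike of 2,200 HDDs, $5,000 per degree day, $1M cap.
season = [30, 28, 40, 55, 60] * 18          # illustrative daily averages (deg F) for 90 days
hdd = heating_degree_days(season)           # 2,016 HDDs -> milder than the strike
print(hdd_put_payout(hdd, strike=2_200, tick=5_000, cap=1_000_000))   # 920000.0
```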

These weather-linked instruments have found a wider audience than anticipated, including retailers that worry about freak storms decimating Christmas sales, amusement park operators fearing rainy summers will keep crowds away, and energy companies needing to hedge demand for heating and cooling.

This area of ERM continues to evolve because weather and crop insurance are not enough to address all the risks that agriculture faces. Arbol, Inc. estimates that more than $1 trillion of agricultural risk is uninsured. As such, it is launching a blockchain-based platform that offers contracts (customized by location and risk parameters) with payouts based on weather data. These contracts can cover risks associated with niche crops and small growing areas.

Enterprise Risk Management Example in Insurance

Switzerland’s Zurich Insurance Group understands that risk is inherent for insurers and seeks to practice disciplined risk-taking, within a predetermined risk tolerance. 

The global insurer’s enterprise risk management framework aims to protect capital, liquidity, earnings, and reputation. Governance serves as the basis for risk management, and the framework lays out responsibilities for taking, managing, monitoring, and reporting risks. 

The company uses a proprietary process called Total Risk Profiling (TRP) to monitor internal and external risks to its strategy and financial plan. TRP assesses risk on the basis of severity and probability, and helps define and implement mitigating moves. 

Zurich’s risk appetite sets parameters for its tolerance within the goal of maintaining enough capital to achieve an AA rating from rating agencies. For this, the company uses its own Zurich economic capital model, referred to as Z-ECM. The model quantifies risk tolerance with a metric that assesses risk profile vs. risk tolerance. 

To maintain the AA rating, the company aims to hold capital between 100 and 120 percent of capital at risk. Above 140 percent is considered overcapitalized (therefore at risk of throttling growth), and under 90 percent is below risk tolerance (meaning the risk is too high). On either side of 100 to 120 percent (90 to 100 percent and 120 to 140 percent), the insurer considers taking mitigating action. 
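The band logic described above can be expressed as a simple lookup. This sketch uses the thresholds from the text; how Zurich treats the exact boundary values is not stated, so the comparisons here are a guess.

```python
def z_ecm_band(capital_ratio_pct: float) -> str:
    """Map a capital-at-risk ratio to the Z-ECM bands described in the text."""
    if capital_ratio_pct > 140:
        return "overcapitalized - risk of throttling growth"
    if capital_ratio_pct >= 120:
        return "above target range - consider mitigating action"
    if capital_ratio_pct >= 100:
        return "target range (AA objective)"
    if capital_ratio_pct >= 90:
        return "below target range - consider mitigating action"
    return "below risk tolerance - risk too high"

for ratio in (150, 130, 110, 95, 85):
    print(ratio, "->", z_ecm_band(ratio))
```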

Zurich’s assessment of risk and the nature of those risks play a major role in determining how much capital regulators require the business to hold. A popular tool to assess risk is the risk matrix, and you can find a variety of templates in “Free, Customizable Risk Matrix Templates.”

In 2020, Zurich found that its biggest exposures were market risk, such as falling asset valuations and interest-rate risk; insurance risk, such as big payouts for covered customer losses, which it hedges through diversification and reinsurance; credit risk in assets it holds and receivables; and operational risks, such as internal process failures and external fraud.

Enterprise Risk Management Example in Technology

Financial software maker Intuit has strengthened its enterprise risk management through evolution, according to a case study by former Chief Risk Officer Janet Nasburg. 

The program is founded on the following five core principles:

  • Use a common risk framework across the enterprise.
  • Assess risks on an ongoing basis.
  • Focus on the most important risks.
  • Clearly define accountability for risk management.
  • Commit to continuous improvement of performance measurement and monitoring. 

ERM programs grow according to a maturity model, and as capability rises, the shareholder value from risk management becomes more visible and important. 

The maturity phases include the following:

  • Ad hoc risk management addresses a specific problem when it arises.
  • Targeted or initial risk management approaches risks with multiple understandings of what constitutes risk and management occurs in silos. 
  • Integrated or repeatable risk management puts in place an organization-wide framework for risk assessment and response. 
  • Intelligent or managed risk management coordinates risk management across the business, using common tools. 
  • Risk leadership incorporates risk management into strategic decision-making. 

Intuit emphasizes using key risk indicators (KRIs) to understand risks, along with key performance indicators (KPIs) to gauge the effectiveness of risk management. 

Early in its ERM journey, Intuit measured performance on risk management process participation and risk assessment impact. For participation, the targeted rate was 80 percent of executive management and business-line leaders. This helped benchmark risk awareness and current risk management, at a time when ERM at the company was not mature.

Intuit also conducted an annual risk assessment at corporate and business-line levels to plot risks, so that the most likely and most impactful risks appeared in the upper-right quadrant of the graph. Doing so focused attention on these risks and helped business leaders understand each risk’s impact on performance toward strategic objectives.

In the company’s second phase of ERM, Intuit turned its attention to building risk management capacity and sought to ensure that risk management activities addressed the most important risks. The company evaluated performance using color-coded status symbols (red, yellow, green) to indicate risk trend and progress on risk mitigation measures.

In its third phase, Intuit moved to actively monitoring the most important risks and ensuring that leaders modified their strategies to manage risks and take advantage of opportunities. An executive dashboard uses KRIs, KPIs, an overall risk rating, and red-yellow-green coding. The board of directors regularly reviews this dashboard.

Over this evolution, the company has moved from narrow, tactical risk management to holistic, strategic, and long-term ERM.

Enterprise Risk Management Case Studies by Principle

ERM veterans agree that in addition to KPIs and KRIs, other principles are equally important to follow. Below, you’ll find examples of enterprise risk management programs by principles.

ERM Principle #1: Make Sure Your Program Aligns with Your Values

Raytheon Case Study
U.S. defense contractor Raytheon states that its highest priority is delivering on its commitment to provide ethical business practices and abide by anti-corruption laws.

Raytheon backs up this statement through its ERM program. Among other measures, the company performs an annual risk assessment for each function, including the anti-corruption group under the Chief Ethics and Compliance Officer. In addition, Raytheon asks 70 of its sites to perform an anti-corruption self-assessment each year to identify gaps and risks. From there, a compliance team tracks improvement actions. 

Every quarter, the company surveys 600 staff members who may face higher anti-corruption risks, such as the potential for bribes. The survey asks them to report any potential issues in the past quarter.

Also on a quarterly basis, the finance and internal controls teams review payments with higher risk profiles, such as donations and gratuities, to confirm accuracy and compliance. Oversight and compliance teams add other checks, and they update a risk-based audit plan continuously.

ERM Principle #2: Embrace Diversity to Reduce Risk

State Street Global Advisors Case Study
In 2016, the asset management firm State Street Global Advisors introduced measures to increase gender diversity in its leadership as a way of reducing portfolio risk, among other goals.

The company relied on research that showed that companies with more women senior managers had a better return on equity, reduced volatility, and fewer governance problems such as corruption and fraud. 

Among the initiatives was a campaign to influence companies where State Street had invested, in order to increase female membership on their boards. State Street also developed an investment product that tracks the performance of companies with the highest level of senior female leadership relative to peers in their sector. 

In 2020, the company announced some of the results of its effort. Among the 1,384 companies targeted by the firm, 681 added at least one female director.

ERM Principle #3: Do Not Overlook Resource Risks

Infosys Case Study
India-based technology consulting company Infosys, which employs more than 240,000 people, has long recognized the risk of water shortages to its operations.

India’s rapidly growing population and development has increased the risk of water scarcity. A 2020 report by the World Wide Fund for Nature said 30 cities in India faced the risk of severe water scarcity over the next three decades. 

Infosys has dozens of facilities in India and considers water to be a significant short-term risk. At its campuses, the company uses the water for cooking, drinking, cleaning, restrooms, landscaping, and cooling. Water shortages could halt Infosys operations and prevent it from completing customer projects and reaching its performance objectives. 

In an enterprise risk assessment example, Infosys’ ERM team conducts corporate water-risk assessments while sustainability teams produce detailed water-risk assessments for individual locations, according to a report by the World Business Council for Sustainable Development.

The company uses the COSO ERM framework to respond to the risks and decide whether to accept, avoid, reduce, or share these risks. The company uses root-cause analysis (which focuses on identifying underlying causes rather than symptoms) and the site assessments to plan steps to reduce risks. 

Infosys has implemented various water conservation measures, such as water-efficient fixtures and water recycling, rainwater collection and use, recharging aquifers, underground reservoirs to hold five days of water supply at locations, and smart-meter usage monitoring. Infosys’ ERM team tracks metrics for per-capita water consumption, along with rainfall data, availability and cost of water by tanker trucks, and water usage from external suppliers. 

In the 2020 fiscal year, the company reported a nearly 64 percent drop in per-capita water consumption by its workforce from the 2008 fiscal year. 

The business advantages of this risk management include the ability to open locations where water shortages may preclude competitors and the ability to maintain operations during water scarcity, protecting profitability.

ERM Principle #4: Fight Silos for Stronger Enterprise Risk Management

U.S. Government Case Study
The terrorist attacks of September 11, 2001, revealed that the U.S. government’s then-current approach to managing intelligence was not adequate to address the threats — and, by extension, neither was the government’s risk management procedure. Since the Cold War, sensitive information had been managed on a “need to know” basis that resulted in data silos.

In the case of 9/11, this meant that different parts of the government knew some relevant intelligence that could have helped prevent the attacks. But no one had the opportunity to put the information together and see the whole picture. A congressional commission determined there were 10 lost operational opportunities to derail the plot. Silos existed between law enforcement and intelligence, as well as between and within agencies. 

After the attacks, the government moved toward greater information sharing and collaboration. Based on a task force’s recommendations, data moved from a centralized network to a distributed model, and social networking tools now allow colleagues throughout the government to connect. Staff began working across agency lines more often.

Enterprise Risk Management Examples by Scenario

While some scenarios are too unlikely to receive high-priority status, low-probability risks are still worth running through the ERM process. Robust risk management creates a culture and response capacity that better positions a company to deal with a crisis.

In the following enterprise risk examples, you will find scenarios and details of how organizations manage the risks they face.

Scenario: ERM and the Global Pandemic
While most businesses do not have the resources to do in-depth ERM planning for the rare occurrence of a global pandemic, companies with a risk-aware culture will be at an advantage if a pandemic does hit.

These businesses already have processes in place to escalate trouble signs for immediate attention and an ERM team or leader monitoring the threat environment. A strong ERM function gives clear and effective guidance that helps the company respond.

A report by Vodafone found that companies identified as “future ready” fared better in the COVID-19 pandemic. The attributes of future-ready businesses have a lot in common with those of companies that excel at ERM. These include viewing change as an opportunity; having detailed business strategies that are documented, funded, and measured; working to understand the forces that shape their environments; having roadmaps in place for technological transformation; and being able to react more quickly than competitors. 

Only about 20 percent of companies in the Vodafone study met the definition of “future ready.” But 54 percent of these firms had a fully developed and tested business continuity plan, compared to 30 percent of all businesses. And 82 percent felt their continuity plans worked well during the COVID-19 crisis. Nearly 50 percent of all businesses reported decreased profits, while 30 percent of future-ready organizations saw profits rise. 

Scenario: ERM and the Economic Crisis
The 2008 economic crisis in the United States resulted from the domino effect of rising interest rates, a collapse in housing prices, and a dramatic increase in foreclosures among mortgage borrowers with poor creditworthiness. This led to bank failures, a credit crunch, and layoffs, and the U.S. government had to rescue banks and other financial institutions to stabilize the financial system.

Some commentators said these events revealed the shortcomings of ERM because it did not prevent the banks’ mistakes or collapse. But Sim Segal, an ERM consultant and director of Columbia University’s ERM master’s degree program, analyzed how banks performed on 10 key ERM criteria. 

Segal says a risk-management program that incorporates all 10 criteria has these characteristics: 

  • Risk management has an enterprise-wide scope.
  • The program includes all risk categories: financial, operational, and strategic. 
  • The focus is on the most important risks, not all possible risks. 
  • Risk management is integrated across risk types.
  • Aggregated metrics show risk exposure and appetite across the enterprise.
  • Risk management incorporates decision-making, not just reporting.
  • The effort balances risk and return management.
  • There is a process for disclosure of risk.
  • The program measures risk in terms of potential impact on company value.
  • The focus of risk management is on the primary stakeholder, such as shareholders, rather than regulators or rating agencies.

In his book Corporate Value of Enterprise Risk Management, Segal concluded that most banks did not actually use ERM practices, which contributed to the financial crisis. He scored banks as failing on nine of the 10 criteria, only giving them a passing grade for focusing on the most important risks.

Scenario: ERM and Technology Risk
The story of retailer Target’s failed expansion to Canada, where it shut down 133 loss-making stores in 2015, has been well documented. But one dimension that analysts have sometimes overlooked was Target’s handling of technology risk.

A case study by Canadian Business magazine traced some of the biggest issues to software and data-quality problems that dramatically undermined the Canadian launch. 

As with other forms of ERM, technology risk management requires companies to ask what could go wrong, what the consequences would be, how they might prevent the risks, and how they should deal with the consequences. 

But with its technology plan for Canada, Target did not heed risk warning signs. 

In the United States, Target had custom systems for ordering products from vendors, processing items at warehouses, and distributing merchandise to stores quickly. But that software would need customization to work with the Canadian dollar, metric system, and French-language characters. 

Target decided to go with new ERP software on an aggressive two-year timeline. As Target began ordering products for the Canadian stores in 2012, problems arose. Some items did not fit into shipping containers or on store shelves, and information needed for customs agents to clear imported items was not correct in Target's system. 

Target found that its supply chain software data was full of errors. Product dimensions were in inches, not centimeters; height and width measurements were mixed up. An internal investigation showed that only about 30 percent of the data was accurate. 

In an attempt to fix these errors, Target merchandisers spent a week double-checking with vendors up to 80 data points for each of the retailer’s 75,000 products. They discovered that the dummy data entered into the software during setup had not been altered. To make any corrections, employees had to send the new information to an office in India where staff would enter it into the system. 

As the launch approached, the technology errors left the company vulnerable to stockouts, few people understood how the system worked, and the point-of-sale checkout system did not function correctly. Soon after stores opened in 2013, consumers began complaining about empty shelves. Meanwhile, Target Canada distribution centers overflowed due to excess ordering based on poor data fed into forecasting software. 

The rushed launch compounded problems because it did not allow the company enough time to find solutions or alternative technology. While the retailer fixed some issues by the end of 2014, it was too late. Target Canada filed for bankruptcy protection in early 2015. 

Scenario: ERM and Cybersecurity
System hacks and data theft are major worries for companies. But as a relatively new field, cyber-risk management faces unique hurdles.

For example, risk managers and information security officers have difficulty quantifying the likelihood and business impact of a cybersecurity attack. The rise of cloud-based software exposes companies to third-party risks that make these projections even more difficult to calculate. 

As the field evolves, risk managers say it’s important for IT security officers to look beyond technical issues, such as the need to patch a vulnerability, and instead look more broadly at business impacts to make a cost-benefit analysis of risk mitigation. Frameworks such as the Risk Management Framework for Information Systems and Organizations by the National Institute of Standards and Technology can help.

Health insurer Aetna considers cybersecurity threats as a part of operational risk within its ERM framework and calculates a daily risk score, adjusted with changes in the cyberthreat landscape. 

Aetna studies threats from external actors by working through information sharing and analysis centers for the financial services and health industries. Aetna staff reverse-engineers malware to determine controls. The company says this type of activity helps ensure the resiliency of its business processes and greatly improves its ability to help protect member information.

For internal threats, Aetna uses models that compare current user behavior to past behavior and identify anomalies. (The company says it was the first organization to do this at scale across the enterprise.) Aetna gives staff permissions to networks and data based on what they need to perform their job. This segmentation restricts access to raw data and strengthens governance. 

Another risk initiative scans outgoing employee emails for code patterns, such as credit card or Social Security numbers. The system flags the email, and a security officer assesses it before the email is released.
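A minimal sketch of this kind of outgoing-email check appears below. The regexes are illustrative stand-ins only; production data-loss-prevention systems use validated detectors (for example, Luhn checksums for card numbers) and far more robust patterns.

```python
import re

# Illustrative patterns only; not production-grade detectors.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_outgoing_email(body: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in an email body."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(body)]

hits = flag_outgoing_email("Please charge 4111 1111 1111 1111 and file under 123-45-6789.")
if hits:
    print("Hold for security review:", hits)   # ['credit_card', 'us_ssn']
```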

Examples of Poor Enterprise Risk Management

Case studies of failed enterprise risk management often highlight mistakes that managers could and should have spotted — and corrected — before a full-blown crisis erupted. The focus of these examples is often on determining why that did not happen. 

ERM Case Study: General Motors

In 2014, General Motors recalled the first of what would become 29 million cars due to faulty ignition switches and paid compensation for 124 related deaths. GM knew of the problem for at least 10 years but did not act, the automaker later acknowledged. The company entered a deferred prosecution agreement and paid a $900 million penalty. 

Pointing to the length of time the company failed to disclose the safety problem, ERM specialists say it shows the problem did not reside with a single department. “Rather, it reflects a failure to properly manage risk,” wrote Steve Minsky, a writer on ERM and CEO of an ERM software company, in Risk Management magazine. 

“ERM is designed to keep all parties across the organization, from the front lines to the board to regulators, apprised of these kinds of problems as they become evident. Unfortunately, GM failed to implement such a program, ultimately leading to a tragic and costly scandal,” Minsky said.

Also in the auto sector, an enterprise risk management case study of Toyota looked at its problems with unintended acceleration of vehicles from 2002 to 2009. Several studies, including a case study by Carnegie Mellon University Professor Phil Koopman, blamed poor software design and company culture. A whistleblower later revealed a coverup by Toyota. The company paid more than $2.5 billion in fines and settlements.

ERM Case Study: Lululemon

In 2013, following customer complaints that its black yoga pants were too sheer, the athletic apparel maker recalled 17 percent of its inventory at a cost of $67 million. The company had previously identified risks related to fabric supply and quality. The CEO said the issue was inadequate testing. 

Analysts raised concerns about the company’s controls, including oversight of factories and product quality. A case study by Stanford University professors noted that Lululemon’s episode illustrated a common disconnect between identifying risks and being prepared to manage them when they materialize. Lululemon’s reporting and analysis of risks was also inadequate, especially as related to social media. In addition, the case study highlighted the need for a system to escalate risk-related issues to the board. 

ERM Case Study: Kodak 

Once an iconic brand, the photo film company failed for decades to act on the threat that digital photography posed to its business and eventually filed for bankruptcy in 2012. The company’s own research in 1981 found that digital photos could ultimately replace Kodak’s film technology and estimated it had 10 years to prepare. 

Unfortunately, Kodak did not prepare and stayed locked into the film paradigm. The board reinforced this course when in 1989 it chose as CEO a candidate who came from the film business over an executive interested in digital technology. 

Had the company acknowledged the risks and employed ERM strategies, it might have pursued a variety of strategies to remain successful. The company’s rival, Fuji Film, took the money it made from film and invested in new initiatives, some of which paid off. Kodak, on the other hand, kept investing in the old core business.

Case Studies of Successful Enterprise Risk Management

Successful enterprise risk management usually requires strong performance in multiple dimensions, and is therefore more likely to occur in organizations where ERM has matured. The following examples of enterprise risk management can be considered success stories. 

ERM Case Study: Statoil 

A major global oil producer, Statoil of Norway stands out for the way it practices ERM by looking at both downside risk and upside potential. Taking risks is vital in a business that depends on finding new oil reserves. 

According to a case study, the company developed its own framework founded on two basic goals: creating value and avoiding accidents.

The company aims to understand risks thoroughly, and unlike many ERM programs, Statoil maps risks on both the downside and upside. It graphs risk on probability vs. impact on pre-tax earnings, and it examines each risk from both positive and negative perspectives. 

For example, the case study cites a risk that the company assessed as having a 5 percent probability of a somewhat better-than-expected outcome but a 10 percent probability of a significant loss relative to forecast. In this case, the downside risk was greater than the upside potential.

ERM Case Study: Lego 

The Danish toy maker’s ERM evolved over the following four phases, according to a case study by one of the chief architects of its program:

  • Traditional management of financial, operational, and other risks. Strategic risk management joined the ERM program in 2006. 
  • The company added Monte Carlo simulations in 2008 to model financial performance volatility so that budgeting and financial processes could incorporate risk management. The technique is used in budget simulations, to assess risk in its credit portfolio, and to consolidate risk exposure. 
  • Active risk and opportunity planning is part of making a business case for new projects before final decisions.
  • The company prepares for uncertainty so that long-term strategies remain relevant and resilient under different scenarios. 

As part of its scenario modeling, Lego developed its PAPA (park, adapt, prepare, act) model. 

  • Park: The company parks risks that occur slowly and have a low probability of happening, meaning it does not forget nor actively deal with them.
  • Adapt: This response is for risks that evolve slowly and are certain or highly probable to occur. For example, a risk in this category is the changing nature of play and the evolution of buying power in different parts of the world. In this phase, the company adjusts, monitors the trend, and follows developments.
  • Prepare: This category includes risks that have a low probability of occurring — but when they do, they emerge rapidly. These risks go into the ERM risk database with contingency plans, early warning indicators, and mitigation measures in place.
  • Act: These are high-probability, fast-moving risks that must be acted upon to maintain strategy. For example, developments around connectivity, mobile devices, and online activity are in this category because of the rapid pace of change and the influence on the way children play. 

Lego views risk management as a way to better equip itself to take risks than its competitors. In the case study, the writer likens this approach to the need for the fastest race cars to have the best brakes and steering to achieve top speeds.
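The PAPA model maps cleanly to a two-way classification on probability and speed of emergence. The sketch below encodes the four responses described above; the 0.5 probability threshold is illustrative, since Lego’s actual cutoffs are not given.

```python
def papa_response(probability: float, speed: str) -> str:
    """Classify a risk under the PAPA model by probability of occurring
    and how quickly it emerges ('slow' or 'fast'). Threshold is illustrative."""
    likely = probability >= 0.5
    if speed == "slow":
        return "Adapt" if likely else "Park"      # slow-moving risks
    return "Act" if likely else "Prepare"         # fast-moving risks

# e.g. the changing nature of play: highly probable but slow-moving -> Adapt
print(papa_response(0.9, "slow"))   # Adapt
print(papa_response(0.1, "fast"))   # Prepare
```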

ERM Case Study: University of California 

The University of California, one of the biggest U.S. public university systems, introduced a new view of risk to its workforce when it implemented enterprise risk management in 2005. Previously, the function was merely seen as a compliance requirement.

ERM became a way to support the university’s mission of education and research, drawing on collaboration of the system’s employees across departments. “Our philosophy is, ‘Everyone is a risk manager,’” Erike Young, deputy director of ERM told Treasury and Risk magazine. “Anyone who’s in a management position technically manages some type of risk.”

The university faces a diverse set of risks, including cybersecurity, hospital liability, reduced government financial support, and earthquakes.  

The ERM department had to overhaul systems to create a unified view of risk because its information and processes were not linked. Software enabled both an organizational picture of risk and highly detailed drilldowns on individual risks. Risk managers also developed tools for risk assessment, risk ranking, and risk modeling. 

Better risk management has provided more than $100 million in annual cost savings and nearly $500 million in cost avoidance, according to UC officials. 

UC drives ERM with risk management departments at each of its 10 locations and leverages university subject matter experts to form multidisciplinary workgroups that develop process improvements.

APQC, a standards quality organization, recognized UC as a top global ERM practice organization, and the university system has won other awards. The university says in 2010 it was the first nonfinancial organization to win credit-rating agency recognition of its ERM program.

Examples of How Technology Is Transforming Enterprise Risk Management

Business intelligence software has propelled major progress in enterprise risk management because the technology enables risk managers to bring their information together, analyze it, and forecast how risk scenarios would impact their business.

ERM organizations are using computing and data-handling advancements such as blockchain for new innovations in strengthening risk management. Following are case studies of a few examples.

ERM Case Study: Bank of New York Mellon 

In 2021, the bank joined with Google Cloud to use machine learning and artificial intelligence to predict and reduce the risk that transactions in the $22 trillion U.S. Treasury market will fail to settle. Settlement failure means a buyer and seller do not exchange cash and securities by the close of business on the scheduled date. 

The party that fails to settle is assessed a daily financial penalty, and a high level of settlement failures can indicate market liquidity problems and rising risk. BNY says that, on average, about 2 percent of transactions fail to settle.

The bank trained models with millions of trades to consider every factor that could result in settlement failure. The service uses market-wide intraday trading metrics, trading velocity, scarcity indicators, volume, the number of trades settled per hour, seasonality, issuance patterns, and other signals. 

The bank said it predicts about 40 percent of settlement failures with 90 percent accuracy. But it also cautioned against overconfidence in the technology as the model continues to improve. 

AI-driven forecasting reduces risk for BNY clients in the Treasury market and saves costs. For example, a predictive view of settlement risks helps bond dealers more accurately manage their liquidity buffers, avoid penalties, optimize their funding sources, and offset the risks of failed settlements. In the long run, such forecasting tools could improve the health of the financial market. 
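BNY’s models and features are proprietary, but the shape of the approach (score each pending trade with a learned probability of settlement failure) can be sketched with fabricated data and an off-the-shelf classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fabricated stand-ins for the signals mentioned above:
# columns = [trading_velocity, scarcity_indicator, volume_zscore]
X = rng.normal(size=(5_000, 3))

# Synthetic ground truth: scarcer, faster-moving issues fail more often
# (~2% base rate, roughly matching the failure rate cited above).
logit = -4.2 + 0.8 * X[:, 1] + 0.5 * X[:, 0]
y = rng.random(5_000) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
fail_prob = model.predict_proba(X[:5])[:, 1]   # per-trade failure probabilities
print(np.round(fail_prob, 3))
```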

ERM Case Study: PwC

Consulting company PwC has leveraged a vast information storehouse known as a data lake to help its customers manage risk from suppliers.

A data lake stores both structured and unstructured information, meaning data in highly organized, standardized formats as well as unstandardized data. This means that everything from raw audio to credit card numbers can live in a data lake.

Using techniques pioneered in national security, PwC built a risk data lake that integrates information from client companies, public databases, user devices, and industry sources. Algorithms find patterns that can signify unidentified risks.

One of PwC’s first uses of this data lake was a program to help companies uncover risks from their vendors and suppliers. Companies can violate laws, harm their reputations, suffer fraud, and risk their proprietary information by doing business with the wrong vendor. 

Today’s complex global supply chains mean companies may be several degrees removed from the source of this risk, which makes it hard to spot and mitigate. For example, a product made with outlawed child labor could be traded through several intermediaries before it reaches a retailer. 

PwC’s service helps companies recognize risk beyond their primary vendors and continue to monitor that risk over time as more information enters the data lake.

ERM Case Study: Financial Services

As analytics have become a pillar of forecasting and risk management for banks and other financial institutions, a new risk has emerged: model risk. This refers to the risk that machine-learning models will lead users to an unreliable understanding of risk or have unintended consequences.

For example, a 6 percent drop in the value of the British pound over the course of a few minutes in 2016 stemmed from currency trading algorithms that spiraled into a negative loop. A Twitter-reading program began automated selling of the pound after comments by a French official, and other selling algorithms kicked in once the currency dropped below a certain level.

U.S. banking regulators are so concerned about model risk that the Federal Reserve set up a model validation council in 2012 to assess the models that banks use in running risk simulations for capital adequacy requirements. Regulators in Europe and elsewhere also require model validation.

A form of managing risk from a risk-management tool, model validation is an effort to reduce risk from machine learning. The technology-driven rise in modeling capacity has caused such models to proliferate, and banks can use hundreds of models to assess different risks. 

Model risk management can reduce rising costs for modeling by an estimated 20 to 30 percent by building a validation workflow, prioritizing models that are most important to business decisions, and implementing automation for testing and other tasks, according to McKinsey.
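The prioritization step can be pictured with a toy model inventory. The fields, weights, and model names below are invented for illustration; they are not McKinsey's methodology or any regulator's scheme.

```python
# Hypothetical sketch: ranking a bank's model inventory so the models
# most important to business decisions are validated first.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    business_impact: int         # 1 (low) .. 5 (critical), assigned by owners
    decisions_per_day: int       # how heavily the model is relied upon
    months_since_validation: int

def priority(m: Model) -> float:
    # Invented weighting: heavier impact, heavier use, and staler
    # validation all push a model toward the front of the queue.
    return (2.0 * m.business_impact
            + m.decisions_per_day / 1000.0
            + m.months_since_validation / 6.0)

inventory = [
    Model("credit-var", 5, 1200, 18),
    Model("churn-score", 2, 300, 3),
    Model("fraud-flag", 4, 5000, 9),
]
for m in sorted(inventory, key=priority, reverse=True):
    print(f"{m.name}: priority {priority(m):.1f}")
```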


How to Do a Risk Assessment: A Case Study

John Pellowe


Christian Leadership Reflections

An exploration of Christian ministry leadership led by CCCC's CEO John Pellowe

There’s no shortage of consultants and authors to tell boards and senior leaders that risk assessment is something that should be done. Everyone knows that. But in the chronically short-staffed world of the charitable sector, who has time to do it well? It’s too easy to cross your fingers and hope disaster won’t happen to you!

If that’s you crossing your fingers, the good news is that risk assessment isn’t as complicated as it sounds, so don’t be intimidated by it. It doesn’t have to take a lot of time, and you can easily prioritize the risks and attack them a few at a time. I recently did a risk assessment for CCCC and the process of creating it was quite manageable while also being very thorough.

I’ll share my experience of creating a risk assessment so you can see how easy it is to do.

Step 1: Identify Risks

The first step is obvious – identify the risks you face. The trick is how you identify those risks. On your own, you might get locked into one way of thinking about risk, such as people suing you, so you become fixated on legal risk. But what about technological risks or funding risks or any other kind of risk?

I found that a helpful way to identify the full range of risks is to address risk from three perspectives: risks to your mission, risks to your organization's health, and risks arising from your environment.

  • Two of the mission-related risks we identified at CCCC were 1) if we gave wrong information that a member relied upon to their detriment; and 2) if a Certified member had a public scandal.
  • We listed several risks to organization health for CCCC. Among them were 1) a disaster that would shut down our operations at least temporarily, and 2) a major loss from an innovation that did not work.
  • We identified a risk related to the sociopolitical environment.

I began the risk assessment by reviewing CCCC from these three perspectives on my own. I scanned our theory of change, our strategy map, and our programs to identify potential risks. I then reviewed everything we had that related to organizational health, which included our Vision 2020 document (written to proactively address organizational health over the next five years),  financial trends, a consultant’s report on a member survey, and a review of our operations by an expert in Canadian associations. I also thought about our experience over the past few years and conversations I’ve had with people. Finally, I went over everything we know about our environments and did some Internet research to see what else was being said that might affect us.

With all of this information, I then answered questions such as the following:

  • What assumptions have I made about current or future conditions? How valid are the assumptions?
  • What are my nightmare scenarios?
  • What do I avoid thinking about or just hope never happens?
  • What have I heard that went wrong with other organizations like ours?
  • What am I confident will never happen to us? Hubris is the downfall of many!
  • What is becoming more scarce or difficult for us?

At this point, I created a draft list of about ten major risks and distributed it to my leadership team for discussion. At that meeting we added three additional risks. Since the board had asked for a report from staff for them to review and discuss at the next board meeting, we did not involve them at this point.


Step 2: Probability/Impact Assessment

Once you have the risks identified, you need to assess how significant they are in order to prioritize how you deal with them. Risks are rated on two factors:

  • How likely they are to happen (that is, their Probability)
  • How much of an effect they could have on your ministry (their anticipated Impact)

Each of these two factors can be rated High, Medium, or Low. Here's how I define those categories:

For Probability:

  • High: The risk either occurs regularly (such as hurricanes in Florida) or something specific is brewing and becoming more significant over time, such that it could affect your ministry in the next few years.
  • Medium: The risk happens from time to time each year, and someone will suffer from it (such as a fire or a burglary). You may have an elevated risk of suffering the problem or just a general risk, the same as everyone else. There may also be a general trend that is not a particular problem at present but could affect you over the longer term.
  • Low: It's possible that it could happen, but it rarely does. The risk is largely hypothetical.

For Impact:

  • High: If the risk happened, it would be a critical life or death situation for the ministry. At the least, if you survive, it would change the future of the ministry; at its worst, the ministry may not be able to recover from the damage and closure would be the only option.
  • Medium: The risk would create a desperate situation requiring possibly radical solutions, but there would be a reasonable chance of recovering from the effects of the risk without long-term damage.
  • Low: The risk would cause an unwelcome interruption of normal activity, but the damage could be overcome with fairly routine responses. There would be no question of what to do; it would just be a matter of doing it.

I discussed my assessments of the risks with staff and then listed them in the agreed-upon priority order in six Probability/Impact combinations:

  • High/High – 2 risks
  • High/Medium – 1 risk
  • Medium/High – 2 risks
  • Medium/Medium – 3 risks
  • Low/High – 3 risks
  • Low/Medium – 2 risks

I felt that the combinations High/Low, Medium/Low, and Low/Low weren’t significant enough to include in the assessment. The point of prioritizing is to help you be a good steward as you allocate time and money to address the significant risks. With only thirteen risks, CCCC can address them all, but we know which ones need attention most urgently.
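At this scale the prioritization can even be done in a few lines of code. The sketch below encodes the High/Medium/Low scale from this post; the example risks are drawn from Step 1, but the ratings attached to them are hypothetical, since the post does not publish its individual ratings.

```python
# Sketch of the Probability/Impact prioritization described above.
LEVELS = {"High": 3, "Medium": 2, "Low": 1}

# (risk, probability, impact); ratings here are illustrative only.
risks = [
    ("Wrong information relied on by a member", "Medium", "High"),
    ("Public scandal at a Certified member", "Low", "High"),
    ("Disaster shuts down operations", "Low", "Medium"),
    ("Major loss from a failed innovation", "Medium", "Medium"),
]

def score(risk):
    _, prob, impact = risk
    return LEVELS[prob] * LEVELS[impact]  # a simple product works for ranking

for name, prob, impact in sorted(risks, key=score, reverse=True):
    print(f"{prob}/{impact}: {name}")
```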

Step 3: Manage Risk

After you have assessed the risks your ministry faces (steps 1 and 2), you arrive at the point where you can start managing  the risks. The options for managing boil down to three strategies:

  • Prevent : The risk might be avoided by changing how you do things. It may mean purchasing additional equipment or redesigning a program. In most cases, though, you probably won’t actually be able to prevent the risk from ever happening. More likely you will only be able to mitigate the risk.
  • Mitigate : Mitigate means to make less severe, serious, or painful. There are two ways to mitigate risk: 1) find ways to make it less likely to happen; and 2) lessen the impact of the risk if it happens. Finding ways to mitigate risk and then implementing the plan will take up most of the time you spend on risk assessment and management. This is where you need to think creatively about possible strategies and action steps. You will also document the mitigating steps you have already taken.
  • Transfer  or Eliminate : If you can’t prevent the risk from happening or mitigate the likelihood or impact of the risk, you are left with either transferring the risk to someone else (such as by purchasing insurance) or getting rid of whatever is causing the risk so that the risk is no longer applicable. For example, a church with a rock climbing wall might purchase insurance to cover the risk or it might simply take the wall down so that the risk no longer exists.

Step 4: Final Assessment

Armed with all this information, it's time to prepare a risk report for final review by management and then the board. I've included a download in this post to help you write the report. It is a template document with an executive summary followed by a detailed report, both partially filled out so you can see how the template is used.


After preparing your report, review it and consider whether or not the mitigating steps and recommendations are sufficient. Do you really want to eliminate some aspect of your ministry to avoid risk? Do you believe that whatever action has been recommended is satisfactory and in keeping with the ministry's mission and values? Are there any other ways to achieve the same goal or fulfil the same purpose without attracting risk?

Finally, after all the risk assessment and risk management work has been done, the ministry is left with two choices:

  • Accept whatever risk is left and get on with the ministry’s work
  • Reject the remaining risk and eliminate it by getting rid of the source of the risk

Step 5: Ongoing Risk Management

On a regular basis, in keeping with the type of risk and its threat, the risk assessment and risk management plan should be reviewed to see if it is still valid. Have circumstances changed? Are the plans working? Review the plan and adjust as necessary.

Key Thought: You have to deal with risk to be a good steward, and it is not hard to do.



National Academies Press: OpenBook

Issues in Risk Assessment (1993)

Chapter: Analysis of Case Studies

The principles of hazard identification in the case studies are as important in the presentation of hazard data as they are for health risk assessment.

Discussion of other questions suggested that the scope and definition of ecological risk assessment might be broader than the scope and definition of human health risk assessment in the Red Book. For example, risk management considerations (management and political pressures, social costs, economic considerations, and regulatory outcomes) were ingredients in all case studies and related discussions. Much attention was paid to the influence of management on the scope and design of assessment. Such considerations are absent from discussions of health risk assessment. Some participants also felt that generation of new data should be treated as an aspect of risk assessment, rather than restricting risk assessment to evaluation of data that are already in hand.

Discussion leaders questioned the role of valuation in hazard identification, but this issue was not discussed in detail. In view of repeated references to the question of end point selection as a valuation decision, additional examination on this point is needed.

The case studies illustrated the importance of a systematic presentation and evaluation of data used to identify hazard. Discussion leaders noted that presentation of hazard data was highly variable in the case studies and suggested that some of the hazard identification principles that guide health hazard evaluation might be useful, including emphasis on a complete and balanced picture of relevant hazard information. Specific criteria and questions that are critical to identifying ecological risk are needed to develop an operational definition of complete and balanced.

Analysis of Case Studies

Examination of the case studies revealed a variety of approaches to ecological hazard identification.

For the tributyltin study, hazard identification was based initially on field studies. Retrospective epidemiological studies included a monitoring program (both biological and chemical) and laboratory investigation of cause-effect relationships.

In pesticide risk assessments, as exemplified by the agricultural chemicals case study, neither laboratory nor field studies are required to establish a hazard. Instead, there is a regulatory presumption of hazard.

The scientific basis, inference assumptions, regulatory uses, and research needs in risk assessment are considered in this two-part volume.

The first part, Use of Maximum Tolerated Dose in Animal Bioassays for Carcinogenicity, focuses on whether the maximum tolerated dose should continue to be used in carcinogenesis bioassays. The committee considers several options for modifying current bioassay procedures.

The second part, Two-Stage Models of Carcinogenesis, stems from efforts to identify improved means of cancer risk assessment that have resulted in the development of a mathematical dose-response model based on a paradigm for the biologic phenomena thought to be associated with carcinogenesis.


Case Studies | Machine Safety Specialists


What are “Unbiased Risk Assessments”?

Unbiased Risk Assessments are guided by safety experts who have your best interests in mind. Product companies, integrators, and solution providers may steer you toward expensive, overly complex technical solutions. Machine Safety Specialists provides unbiased Risk Assessments. See the examples below.

Biased  risk assessments can happen when a safety products company, integrator, or solution provider participates in the risk assessment. The participant has a conflict of interest and may steer you towards overly expensive or complex solutions that they want to sell you.  Some safety product companies will do anything to get involved in the risk assessment, knowing they will “make up for it” by selling you overly expensive solutions.  Safety product companies have sales targets and you could be one of them.


Machine Safety Specialists are experts in OSHA, ANSI, NFPA, RIA and ISO/EN safety standards. We can solve your machine safety compliance issues, provide  unbiased  Risk Assessments, or help you develop your corporate Machine Safety and Risk Assessment program.

Case Study: Machine Safety Verification and Validation

A multi-national food processing company had a problem.  A recent amputation at a U.S. food processing plant generated negative publicity, earned another OSHA citation, and caused significant financial losses due to lost production.  Another amputation, if it occurred, would likely result in more lost production, an OSHA crackdown, and, if posted on social media, irreparable damage to the company’s brand.

After multiple injuries and OSHA citations, the company contacted MSS for help. First, the company needed to know if the existing machine safeguarding systems provided "effective alternative protective measures" as required by OSHA. MSS was contracted through the company's legal counsel to audit three (3) plants with various types of machines and deliver detailed Machine Safety compliance reports for each machine to the client under Attorney Client Privilege. A summary from our safeguarding audit report for one plant was as follows:

[Figure: safeguarding audit summary for the plant, identifying 19 high-risk and 8 medium-risk machines]

For the 19 high-risk and 8 medium-risk (poorly guarded) machines, action was required: applying risk reduction measures through the hierarchy of controls. MSS provided a Machine Safeguarding specification for the machines and worked with our client to select qualified local fabricators and integrators, who performed the work on an aggressive schedule. MSS provided specifications and consulting services, and our client contracted the fabrication and integration contractors directly, under the guidance of MSS.
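The hierarchy of controls mentioned here has a simple logic: prefer the most effective class of measure that is feasible, from elimination down to PPE. The sketch below captures that selection rule; the feasibility flags are hypothetical, and a real determination follows standards such as ANSI B11.0 and RIA R15.06 rather than a lookup.

```python
# Hypothetical sketch: picking the highest-ranked feasible measure
# under the hierarchy of controls (most effective first).
HIERARCHY = [
    "elimination",
    "substitution",
    "engineering controls",
    "awareness means",
    "administrative controls",
    "PPE",
]

def select_control(feasible: set[str]) -> str:
    for control in HIERARCHY:
        if control in feasible:
            return control
    raise ValueError("no feasible risk reduction measure identified")

# Example: guarding (an engineering control) beats procedures alone.
print(select_control({"administrative controls", "engineering controls"}))
```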

During the design phase of the Machine Safeguarding implementation, MSS provided safety  verification services  and detailed design reviews.  Due to the stringent legal requirements and the need for global compliance, the safety design verification included  SISTEMA analysis  of the functional safety systems.  MSS provided a detailed compliance report with written compliance statements covering OSHA compliance, hazardous energy controls (Lockout/Tagout, LOTO, OSHA 1910.147) and effective alternative protective methods, per OSHA’s minor servicing exception.  Due to the complexity of the machines and global safety requirements, MSS verified and validated the machine to numerous U.S. and international safety standards, including ANSI Z244.1, ANSI B11.19, ANSI B11.26, ISO 14120, ISO 13854, ISO 13855, ISO 13857 and ISO 13849.

Then, after installation and before placing the machines into production, MSS was contracted to perform safety validation services as required by the standards.  During the validation phase of the project, MSS traveled to site to inspect all machine safeguarding and validate the functional safety systems.  This safety validation included all aspects of the safety system, including barrier guards, interlocked barrier guards, light curtains, area scanners, the safety controllers and safety software, safety servo systems, variable frequency safety drives (safety VFDs), and pneumatic (air) systems.

After the machine safeguarding design verification, installation, and safety system validation, MSS was pleased to present the final compliance results in the executive summary of the report (data not reproduced here).

By involving Machine Safety Specialists (MSS) early in the project, we can ensure your project complies with OSHA, ANSI/RIA, NFPA and ISO safety standards, and by helping you implement the project, we keep your safety project on track. Our validation testing and detailed test reports provide peace of mind, and evidence of due diligence if OSHA pays you a visit. Contact MSS for all of your Machine Safety Training, Safeguarding Verification, and on-site functional safety validation needs.

Case Study:  Collaborative Robot System

The OEM of a new Collaborative Robot system, built to be duplicated across plants worldwide, faced pressing questions:

  • Is this Collaborative Robot system safe?
  • How can we validate the safety of the Collaborative Robot system before duplicating it?
  • If we ship these globally, will we comply with global safety standards?
  • What if the Collaborative Robot hurts someone?
  • What about OSHA?

The OEM called Machine Safety Specialists (MSS) to solve these concerning problems.   Prepared to help, our TÜV certified Machine Safety engineers discussed the Collaborative Robot system, entered an NDA, and requested system drawings and technical information.  On-site, we inspected the Collaborative Robot, took measurements, gathered observations and findings, validated safety functions, and spoke with various plant personnel (maintenance, production, EHS, engineering, etc.).  As part of our investigation, we prepared a gap analysis of the machine relative to RIA TR R15.606, ISO 10218-2, OSHA, ANSI, and ISO standards.  The final report included Observations, Risk Assessments, and specific corrective actions needed to achieve US and global safety compliance.   Examples of our findings and corrective actions include:

  • Identification of the correct safeguarding modes (according to RIA TR R15.606-2016 and ISO/TS 15066-2016).
  • Observation that Area Scanners (laser scanners) provided by the machine builder were not required, given the Cobot’s modes of operation. Recommended removal of the area scanners, greatly simplifying the system.
  • Observation that the safety settings for maximum force, given the surface area of the tooling, provided pressure that exceeds US and global safety requirements. Recommended a minimum surface area for the tooling and provided calculations to the client’s engineers.
  • Observation that the safety settings for maximum speed were blank (not set) and provided necessary safety formulas and calculations to the client’s engineers.
  • Recommended clear delineation of the collaborative workspace with yellow/black marking tape around the perimeter.

With corrective actions complete, we re-inspected the machine and confirmed all safety settings.  MSS provided a Declaration of Conformance to all applicable US and global safety standards.  The customer then duplicated the machines and successfully installed the systems at 12 plants globally, knowing the machines were safe and that global compliance was achieved.   Another success story by MSS…


Case Study:  Robot Manufacturing


The manufacturer hired a robotics integrator and a brief engineering study determined that speed and force requirements required a high-performance Industrial Robot (not a  Cobot ).  The client issued a PO to the integrator, attached a manufacturing specification, and generically required the system to meet “OSHA Standards”.  Within 3 months, the robot integrator had the prototype system working beautifully in their shop and was requesting final acceptance of the system.   This is when the second problem hit –  the US manufacturer experienced a serious robot-related injury .

In the process of handling the injury and related legal matters, the manufacturer learned that generic “OSHA Standards” were not sufficient for robotic systems.  To prevent fines and damages in excess of $250,000, our client needed to make their existing industrial robots safe, while also correcting any new systems in development.   The manufacturer then turned to Machine Safety Specialists (MSS) for help.

Prepared to help, our TÜV certified and experienced robot safety engineers discussed the Industrial Robot application with the client.  MSS entered an NDA and a formal agreement with the client and the client’s attorney.  On-site, MSS inspected the Industrial Robot system, took measurements, gathered observations and findings, tested (validated) safety functions, and met with the client’s robotics engineer to complete a compliance checklist.   As part of our investigation, we prepared a Risk Assessment in compliance with ANSI/RIA standards, an RIA compliance matrix, and performed a gap analysis of the industrial robot systems relative to ANSI/RIA standards.  The final report included a formal Risk Assessment, a compliance matrix, our observations, and specific corrective actions needed to achieve safety compliance.

Examples of our findings and corrective actions included:

  • A formal Risk Assessment was required in compliance with ANSI/RIA standards (this was completed by MSS and the client as part of the scope of work).
  • Critical interlock circuitry needed upgrading to Category 3, PL d, as defined by ISO 13849. (MSS provided specific mark-ups to the electrical drawings and worked with the integrator to ensure proper implementation.)
  • The light curtain reset button was required to be relocated. (MSS provided specific placement guidance.)
  • The safeguarding reset button was required to be accompanied by specific administrative controls. (MSS worked with the integrator to implement these into the HMI system and documentation.)
  • The robot required safety soft limits to be properly configured and tested (Fanuc: DCS, ABB: SafeMove2).
  • Specific content needed to be added to the “Information for Use” (operation and maintenance manuals).

With corrective actions complete, MSS re-inspected the machine, verified safety wiring, validated the safety functions and provided a Declaration of Conformance for the robot system. The customer then accepted the system, commissioned, and placed it into production.  The project was then deemed a huge success by senior management.   The industrial robot system now produces high-quality assemblies 24/7, the project team feels great about safety compliance, and the attorneys are now seeking other opportunities.   Another success story by MSS…

Case Study: Manufacturing Company

Another question:

Q: Which safety product company can you trust to perform a risk assessment with your best interest in mind?

A: None of them. Companies selling safety products have a hidden agenda: sell the most products and charge insane dollars for installation! Machine Safety Specialists are safety engineers and consultants who have your best interest in mind. We will conduct an unbiased Risk Assessment and recommend the most sensible, lowest-cost, compliant safeguards on the market, with no hidden sales agenda!

Case Study: Machine Safeguarding Example

One photo, two points of view….

Safety product company recommendation:

“Wow – This Customer needs $50K of functional safety equipment on each machine. Add light curtains, safety system, software, etc….”.   Problem solved for $50,000.

MSS Recommendation:

“Bolt down the existing guard, add end cap, remove sharp edges and secure the air line. Add a warning sign with documented training….”.  Problem solved for $50.   Once again, this really happened  – don’t let it happen to you !

Case Study: Risk Reduction

“Machine Safety Specialists’ comprehensive approach to risk reduction ensured the most complete, sensible, and least expensive solution for compliance.” – Safety Manager

[Figure: risk reduction comparison] MSS uses all methods of risk reduction (elimination, signs, and training, in addition to guards and protective devices), which is the least expensive and most comprehensive approach. Guarding company methods of risk reduction (guards and protective devices alone) are very expensive, time consuming, and do not mitigate all of the risk.

Case Study - Why Perform a Risk Assessment?

Another frequently asked question is: “Why do I need a Risk Assessment?” To answer this, first see the case study “Applicable U.S. Machine Safety Codes and Standards” below, then read on.

Why perform a Risk Assessment? A written workplace hazard assessment is required by law. In section 1910.132(d)(2), OSHA requires a workplace hazard analysis to be performed. The proposed Risk Assessment fulfils this requirement with respect to the machine(s).

1910.132(d)(2): “The employer shall verify that the required workplace hazard assessment has been performed through a written certification that identifies the workplace evaluated; the person certifying that the evaluation has been performed; the date(s) of the hazard assessment; and, which identifies the document as a certification of hazard assessment.”
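The rule dictates content, not format: the certification must identify the workplace, the certifier, the date(s), and itself as a certification. As a sketch, those four items map naturally onto a small record type; the field names and layout below are mine, not OSHA's.

```python
# Hypothetical sketch: a written certification record carrying the four
# items 1910.132(d)(2) requires. Field names are illustrative only.
from dataclasses import dataclass
from datetime import date

@dataclass
class HazardAssessmentCertification:
    workplace: str                 # the workplace evaluated
    certified_by: str              # person certifying the evaluation
    assessment_dates: list[date]   # date(s) of the hazard assessment

    def render(self) -> str:
        days = ", ".join(d.isoformat() for d in self.assessment_dates)
        return ("CERTIFICATION OF HAZARD ASSESSMENT\n"
                f"Workplace evaluated: {self.workplace}\n"
                f"Certified by: {self.certified_by}\n"
                f"Assessment date(s): {days}")

cert = HazardAssessmentCertification(
    "Press Line 3, Example Plant", "J. Doe, EHS Manager", [date(2024, 5, 1)])
print(cert.render())
```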

A Risk Assessment (RA) is required by the following US standards:

  • ANSI Z244.1
  • ANSI B11.19
  • ANSI B155.1
  • ANSI / RIA R15.06

Please note the following excerpt from an actual OSHA citation:

“The machines which are not covered by specific OSHA standards are required under the Occupational Safety and Health Act (OSHA Act) and Section 29 CFR 1910.303(b)(1) to be free of recognized hazards which may cause death or serious injuries.”

In addition, the risk assessment forms the basis of design for the machine safeguarding system. The risk assessment is a process by which the team assesses risks, evaluates risk reduction methods, and accepts the resulting solution. This risk reduction is key in determining the residual risks to which personnel are exposed. Without a risk assessment in place, you are in violation of US safety standards, and you may be liable for injuries from the unassessed machines.


Case Study:  Applicable U.S. Machine Safety Codes and Standards

We are often asked: “What must I do for minimum OSHA compliance at our plant? Do I have to follow ANSI standards? Why?” The following information explains our answer. Please note this excerpt from an actual OSHA citation:

“These machines must be designed and maintained to meet or exceed the requirements of the applicable industry consensus standards. In such situations, OSHA may apply standards published by the American National Standards Institute (ANSI), such as standards contained in ANSI/NFPA 79, Electrical Standard for Industrial Machinery, to cover hazards that are not covered by specific OSHA standards.”

  U.S. regulations and standards used in our assessments include:

  • OSHA 29 CFR 1910, Subpart O
  • Plus, others as applicable….

Please note the following key concepts in the U.S. Safety Standards:

  • Control Reliability as defined in ANSI B11.19 and RIA 15.06
  • Risk assessment methods in ANSI B11.0, RIA 15.06, and ANSI/ASSE Z244.1
  • E-Stop function and circuits as defined in NFPA 79 and ANSI B11.19
  • OSHA general safety regulations as defined in OSHA 29 CFR 1910 Subpart O – Section 212
  • Power transmission, pinch and nip points as defined in OSHA 29 CFR 1910 Subpart O -Section 219
  • Electrical Safety as defined in NFPA 79 and ANSI B11.19.

Note: OSHA is now citing for failure to meet ANSI B11.19 and NFPA 79.



A case study exploring field-level risk assessments as a leading safety indicator

Emily Haas

Health and safety indicators help mine sites predict the likelihood of an event, advance initiatives to control risks, and track progress. Although useful to encourage individuals within the mining companies to work together to identify such indicators, executing risk assessments comes with challenges. Specifically, varying or inaccurate perceptions of risk, in addition to trust and buy-in of a risk management system, contribute to inconsistent levels of participation in risk programs. This paper focuses on one trona mine's experience in the development and implementation of a field-level risk assessment program to help its organization understand and manage risk to an acceptable level. Through a transformational process of ongoing leadership development, support and communication, Solvay Green River fostered a culture grounded in risk assessment, safety interactions and hazard correction. The application of consistent risk assessment tools was critical to create a participatory workforce that not only talks about safety but actively identifies factors that contribute to hazards and potential incidents. In this paper, reflecting on the mine's previous process of risk-assessment implementation provides examples of likely barriers that sites may encounter when trying to document and manage risks, as well as a variety of mini case examples that showcase how the organization worked through these barriers to facilitate the identification of leading indicators to ultimately reduce incidents.




Advancing human health risk assessment

Anna Lanzoni 1, Anna F Castoldi 1, George E N Kass 1, Andrea Terron 1, Guilhem de Seze 1, Anna Bal‐Price 2, Frédéric Y Bois 3, K Barry Delclos 4, Daniel R Doerge 4, Ellen Fritsche 5, Thorhallur Halldorsson 6, Marike Kolossa‐Gehring 7, Susanne Hougaard Bennekou 8, Frits Koning 9, Alfonso Lampen 10, Marcel Leist 11, Ellen Mantus 12, Christophe Rousselle 13, Michael Siegrist 14, Pablo Steinberg 15, Angelika Tritscher 16, Bob Van de Water 17, Paolo Vineis 18, Nigel Walker 19, Heather Wallace 20, Maurice Whelan 2, Maged Younes 21

1 European Food Safety Authority, IT; 2 European Commission, Joint Research Centre, Ispra, IT; 3 French National Institute for Industrial Environment and Risks, FR; 4 National Center for Toxicological Research, US Food and Drug Administration, USA; 5 Leibniz Research Institute for Environmental Medicine, DE; 6 University of Iceland, IS; 7 German Environment Agency, DE; 8 National Food Institute, Technical University of Denmark, DK; 9 Leiden University Medical Centre, NL; 10 German Federal Institute for Risk Assessment, Berlin, DE; 11 University of Konstanz, DE; 12 The National Academies of Sciences, Engineering, and Medicine, USA; 13 French Agency for Food, Occupational and Environmental Health, FR; 14 ETH Zurich, CH; 15 Max‐Rubner Institute, DE; 16 World Health Organisation, Geneva, CH; 17 Drug Discovery and Safety, Leiden Academic Centre for Drug Research, Leiden University, NL; 18 Imperial College London, UK; 19 National Toxicology Program/National Institute of Environmental Health Sciences, USA; 20 Institute of Medical Sciences, University of Aberdeen, Scotland, UK; 21 Formerly World Health Organisation, Geneva, CH

The current/traditional human health risk assessment paradigm is challenged by recent scientific and technical advances, and ethical demands. The current approach is considered too resource intensive, is not always reliable, can raise issues of reproducibility, is mostly animal based and does not necessarily provide an understanding of the underlying mechanisms of toxicity. From an ethical and scientific viewpoint, a paradigm shift is required to deliver testing strategies that enable reliable, animal‐free hazard and risk assessments, which are based on a mechanistic understanding of chemical toxicity and make use of exposure science and epidemiological data. This shift will require a new philosophy, new data, multidisciplinary expertise and more flexible regulations. Re‐engineering of available data is also deemed necessary as data should be accessible, readable, interpretable and usable. Dedicated training to build the capacity in terms of expertise is necessary, together with practical resources allocated to education. The dialogue between risk assessors, risk managers, academia and stakeholders should be promoted further to understand scientific and societal needs. Genuine interest in taking risk assessment forward should drive the change and should be supported by flexible funding. This publication builds upon presentations made and discussions held during the break‐out session ‘Advancing risk assessment science – Human health’ at EFSA's third Scientific Conference ‘Science, Food and Society’ (Parma, Italy, 18–21 September 2018).

1. Introduction

The current human health risk assessment is substantially hazard driven and based on world‐wide recognised protocols largely relying on animal studies. Translatability of results from these studies to humans and considerations on the replacement, reduction and refinement of animal studies are a matter of debate. In parallel, biological and toxicological sciences today can benefit from paramount scientific and technological advances. In relation to hazard assessment, new tools are emerging that enable a better understanding of the mechanisms leading to adverse effects, more accurate predictions of biological responses and so help to establish causality, offering the benefit of being, in most cases, non‐animal test models. Exposure science is also rapidly developing and epidemiological research is facing a transition from empirical observations alone to a molecular epidemiology paradigm incorporating exposure and pathogenesis. All these developments are promising and support a shift from the current risk assessment paradigm to a more holistic approach, improving assessment of human health risk while reducing animal testing.

A driver for this shift is the National Academies of Sciences, Engineering and Medicine (NASEM), which, since 2007, has published reports providing a vision and a strategy for the toxicology of the 21st century based on new tools and approaches, developing exposure science and proposing the integration of these tools to further advance risk assessments. These reports served as the basis for various initiatives in the United States and worldwide, accelerating data generation and, in general, the effort for such a paradigm shift. The need for a new conceptual framework in human health risk assessment, supported by new concepts and tools in hazard assessment and exposure, was also elaborated in the European Union (EU) in a SCHER/SCENIHR/SCCS report in 2013.

However, it is recognised that the implementation of new approaches and tools and the use of new data generated face challenges impacting the paradigm shift, as well as acceptance by regulators. Transparent and understandable communication of the strengths and weaknesses of new approaches is pivotal to their application and acceptance by the scientific community, regulators and laypeople.

This publication is intended to present an overview of the general concepts at the basis of a possible shift from traditional to holistic risk assessment and to apply the available relevant scientific methodologies and technological advances that may entail such a shift, together with challenges and needs, including:

  • opportunities and challenges related to the application of new tools and technologies for the identification and characterisation of adverse effects;
  • the role of human epidemiology in the identification and characterisation of health effects induced by chemicals;
  • the integration of human biomarkers in exposure assessment.

While focusing on chemicals, many of the considerations discussed would be applicable to other stressors.

Concrete case studies utilising new tools, new approaches for exposure assessment and their integration are also presented.

This publication builds upon presentations made and discussions held during the break‐out session ‘Advancing Risk Assessment Science – Human Health’ at EFSA's third Scientific Conference ‘Science, Food and Society’ (Parma, Italy, 18–21 September 2018). Additional discussions relevant to the topic are presented in this issue by Cavalli et al. (2019), Hartung (2019) and Hougaard Bennekou (2019).

1.1. Advancing human health risk assessment – concepts

1.1.1. The NASEM reports envisioning the future of risk assessment: opportunities and challenges

In 2007, the NASEM released the report ‘Toxicity testing in the 21st century: a vision and a strategy’ (NRC, 2007), which capitalised on the advances in biology and related fields and increases in computational power to envision a future in which toxicity testing primarily relies on high‐throughput in vitro assays and computational tools to assess potential adverse effects from chemical exposure. A vision for exposure science was articulated several years later in the National Academies report ‘Exposure science in the 21st century: a vision and a strategy’ (NRC, 2012), which expanded the breadth and depth of exposure science given advances in, for example, monitoring technologies, analytical techniques and computational tools. Since the release of those reports, various agencies and organisations have started to collaborate within and outside the United States to advance the visions. Generation of diverse data streams from government, industry and academic laboratories has accelerated. Although scientists and others expect that implementation of the visions will take decades to be fully achieved, the recent National Academies report, ‘Using 21st century science to improve risk‐related evaluations’ (National Academies of Sciences, Engineering, and Medicine, 2017), examines how the data being generated today can be used in risk assessment applications. Four areas (priority setting, chemical assessment, site‐specific assessment and assessment of new chemicals) have been identified that could benefit from incorporating the 21st century science, and case studies have been described. Although there are many technical issues still to be resolved, the report identified one particular challenge that looms large. Technology has evolved far faster than our ability to analyse, interpret and integrate the diverse, complex and large data streams for risk assessment. The path forward to address the challenges entails a research agenda that develops, explores and documents case studies capturing various scenarios of data availability for risk assessment applications. Multidisciplinary collaboration will also be critical. Ultimately, application and acceptance of the new approaches will depend on communicating the strengths and weaknesses in a transparent and understandable way.

1.2. Holistic human health risk assessment: challenges to a fit‐for‐purpose approach

There is consensus that the 21st century paradigm shift in human health risk assessment will be based on the understanding of mechanisms of toxicity rather than on the identification of apical endpoints of toxicity. This mechanistic shift has great potential for improving human health risk assessment and tailoring it to different problem formulations. The transition to a mechanism‐based risk assessment of chemicals would require a holistic approach in which several aspects should be considered: the identification and use of adverse outcome pathway (AOP) as a framework that integrates new approach methods (NAMs) supporting mechanistic understanding and predictive screening; electronic data availability; rigorous analysis of uncertainties; definition of protection goals; and flexible and where necessary tailored data requirements, and harmonised approaches. A key question is whether current EU regulations are flexible enough to take full advantage of this potential. In this context, several aspects should be considered:

  • Are the standard requirements for risk assessment of human health fit for purpose?
  • Is there the need for more flexibility in requirements and in the accompanying guidance documents?
  • Are existing data used at their best?
  • Can mutual recognition of risk assessment outputs be achieved?

This discussion is further developed in this issue by Hougaard Bennekou (2019).

1.3. New approach methods in toxicology for mechanism‐based hazard assessment

Many animal‐based test methods have never been formally validated; their predictivity for complex endpoints, such as cancer or developmental toxicity, is sometimes poor (60–70% range) and may for specific cases (e.g. murine liver tumours) be questionable altogether. Moreover, interspecies extrapolations are a large challenge. Animal‐based testing, e.g. for developmental neurotoxicity (DNT), is also extremely demanding in terms of resources. Finally, this approach has a low throughput and yields little or no information on the mechanism of toxicity (Hartung and Leist, 2008; Leist and Hartung, 2013; Daneshian et al., 2015; Meigs et al., 2018).

An alternative approach to animal testing, as suggested by the NASEM (Leist et al., 2008) and large groups of scientists worldwide (Basketter et al., 2012; Ramirez et al., 2013; Leist et al., 2014, 2017; van Vliet et al., 2014; Gordon et al., 2015; Rovida et al., 2015; Marx et al., 2016), is the use of combinations of in vitro (e.g. cell cultures, organoids, zebrafish embryos) and in silico (e.g. QSAR, PBTK) methods. To distinguish these novel approaches for identification and quantification of chemical hazard from traditional animal experiments, they have been called NAMs. Notably, in some fields, the term NAM is used with a more extended scope to also include new types of animal studies, or to comprise the use of pre‐existing animal data for known compounds in a read‐across procedure to predict safety/hazard of new compounds. However, the term is used in this paper to imply only animal‐free new approaches as pursued by the H2020 research project EU‐ToxRisk (Daneshian et al., 2016). To use NAMs in a regulatory context, a process and criteria to set out their readiness are essential. The readiness evaluation extends the scope of classical method validation by allowing different (fit‐for‐purpose) readiness levels for various applications. The field of DNT testing can serve to exemplify the application of readiness criteria. The number of chemicals not yet tested for DNT and the cost of testing each one make in vivo DNT testing overwhelming. So, there is a need for inexpensive, high‐throughput NAM approaches, to obtain initial information on potential hazards, and to allow prioritisation for further testing (Bal‐Price et al., 2015, 2018; Aschner et al., 2017; Fritsche et al., 2017, 2018). Based on readiness criteria, a (semi-)quantitative analysis of the test readiness of 17 NAMs with respect to various uses (e.g. prioritisation/screening, risk assessment) was assembled. The scoring results suggest that several assays are currently at high readiness levels. Therefore, DNT NAMs may be assembled into an integrated approach to testing and assessment (IATA). Furthermore, NAM development may be guided by knowledge of signalling pathways necessary for normal brain development, DNT pathophysiology and relevant AOPs. There is, however, an educational need on all sides to understand strengths and weaknesses of the new approaches (Kadereit et al., 2012; van Thriel et al., 2012; Smirnova et al., 2014; Schmidt et al., 2016; Hartung et al., 2019).
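A (semi-)quantitative readiness scoring of this kind reduces, at its core, to weighted criteria per assay. The sketch below shows the shape of such an exercise; the criteria, weights, assay names and scores are invented for illustration and are not the published DNT readiness scheme.

```python
# Hypothetical sketch of a (semi-)quantitative readiness score for NAM
# assays. Criteria, weights, and scores are invented for illustration.
CRITERIA_WEIGHTS = {
    "reproducibility": 0.3,
    "throughput": 0.2,
    "documentation": 0.2,
    "biological_relevance": 0.3,
}

assays = {
    "neurite-outgrowth assay": {"reproducibility": 4, "throughput": 5,
                                "documentation": 3, "biological_relevance": 4},
    "synaptogenesis assay":    {"reproducibility": 3, "throughput": 2,
                                "documentation": 4, "biological_relevance": 5},
}

def readiness(scores: dict[str, int]) -> float:
    # Weighted average on a 1-5 scale; higher = closer to a given use
    # (screening vs. risk assessment would weight criteria differently).
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

for name, scores in sorted(assays.items(), key=lambda kv: -readiness(kv[1])):
    print(f"{name}: readiness {readiness(scores):.2f}")
```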

1.4. Use of epidemiological studies for chemical risk assessment: strengths and limitations

Chemical risk assessment should ideally be based on studies carried out under controlled experimental conditions. However, for humans, such experiments cannot be performed for potentially harmful substances. Although human observational studies can be used as an alternative, the time needed to generate reliable results for new substances and methodological limitations have traditionally hampered their use. For these reasons, procedures and the regulatory framework for chemical risk assessment have largely been driven by reliance on experimental studies in animals. The limitations of that approach are uncertainties related to extrapolating findings from animals to humans and the use of doses that are usually far higher than those observed in humans. Although these limitations can be partly reduced by the use of safety factors, it is increasingly acknowledged that precision in risk assessment can be improved by incorporating findings from human observational studies. Recent advancements in analytical chemistry in terms of rapid method development, reduction in sample volume and cost, as well as improved access to computerised health data, have made human observational studies of sufficient quality more frequently available to risk assessors. Apart from general limitations in terms of bias (including confounding), the use of human studies compared with animal studies is more complicated due to the occurrence of other co‐exposures and the fact that unexposed individuals usually do not exist. The quality of the exposure assessment is crucial. The high variability in terms of susceptibility to chemical exposures and interaction with other lifestyle factors means that results from different human studies can be conflicting. The current framework for chemical risk assessment is generally not compatible with these complexities and compromises are needed. A better understanding of the strengths and weaknesses of traditional toxicological studies and human observational studies is the key to further advancing the methodology for chemical risk assessment.

1.5. Risk communication and how simple heuristics influence laypeople's risk perception

Among the aspects to be considered in facing this change in the risk assessment paradigm, risk communication with laypeople is of utmost relevance. The qualitative characteristics of a hazard, rather than the relevant quantitative information, strongly influence laypeople's risk perception. Recent research (Siegrist and Sütterlin, 2017; Scott et al., 2018) has focused on the role of simple heuristics in people's hazard evaluations, for example ‘natural is good and synthetic is bad’. The results of such studies indicate that not only the negative consequences of a hazard, but also whether the hazard is human‐made (e.g. glyphosate) or naturally occurring (e.g. Campylobacter) has a significant influence on people's perceptions. Indeed, negative outcomes are perceived to be more severe if they are anthropogenic than if they stem from nature. Perceiving gene technology to be unnatural also seems to be a significant reason why the risks as well as the benefits associated with this technology are perceived differently when compared with the risks and benefits associated with conventional breeding technology. Another heuristic that people may apply when evaluating the healthiness of foods is that the absence of certain substances implies the product is healthier. Therefore, products with ‘free from’ labels (e.g. free from palm oil, free from genetically modified (GM) organisms) are perceived to be healthier than products without such labels. Biased decisions that result from people's reliance on simple heuristics have been observed in different contexts, and they may result in non‐optimal decisions being made. However, providing information to laypeople seems to have only a limited impact on how hazards are perceived. This poses a challenge for risk communication intended to change laypeople's perceptions so that they will fall more in line with the best available scientific evidence.

2. Advancing human health risk assessment – new tools, new approaches in exposure assessment and examples of their integration

2.1. New tools

2.1.1. NAMs for new chemical safety testing strategies: the EU‐ToxRisk project

The large‐scale EU‐ToxRisk project (http://www.eu-toxrisk.eu/) is an integrated European ‘flagship’ programme with the vision to establish a paradigm shift in toxicity testing and risk assessment for the 21st century by implementing mechanism‐based integrated testing strategies using non‐animal NAMs. To accomplish this, the EU‐ToxRisk project has united all relevant scientific disciplines covering in silico QSAR modelling, cellular toxicology, bioinformatics and physiologically based pharmacokinetic (PBPK) modelling. The project tests the integration of the different NAMs in various case studies, ultimately establishing: (i) pragmatic read‐across procedures incorporating mechanistic and toxicokinetic (TK) knowledge; and (ii) ab initio hazard and risk assessment strategies for chemicals with little background information. The case studies are focused on repeated dose systemic toxicity (RDT) targeting either liver, kidney, lung or nervous system toxicity, as well as developmental/reproduction toxicity. The integration of the various NAMs in defined case studies allows the assessment of the overall applicability domain of these NAMs in chemical hazard and, ultimately, risk assessment. Case studies are centred around AOPs and include, for example, the application of NAMs for the assessment of: (i) microvesicular liver steatosis induced by valproic acid analogues; (ii) the prediction of teratogenic effects of valproic acid analogues; and (iii) the AOP related to inhibition of complex I of the mitochondrial respiratory chain in nigrostriatal neurons leading to parkinsonian motor deficits. Importantly, the activities in the case studies are supported and guided by cosmetics, (agro)chemical and pharma industry stakeholders, as well as by various European regulatory authorities. The final goal is to deliver testing strategies to enable reliable, animal‐free hazard and risk assessment of chemicals based on a mechanistic understanding of chemical toxicity.

2.1.2. Assessment of chemical mixture‐induced developmental neurotoxicity using human in vitro model

Chemicals that are known to trigger specific DNT effects belong to different chemical classes including industrial chemicals, persistent organic pollutants (POPs), metals and pesticides. They belong to multiple regulatory silos related to food and food quality, such as pesticides, food contact materials and food additives, including flavourings, colourings and preservatives. These examples illustrate that common, similar or related toxic effects triggered by various chemicals may be differently regulated and that the combined effects of these chemicals across different regulatory domains are not currently considered. At the same time, it is well documented in the existing literature that ‘mixture effects’ can be greater than the effects triggered by the most potent single chemical in a mixture, and the mixture effects may be additive, or in some cases even synergistic. Therefore, implementation of mixture risk assessment (MRA) for DNT evaluation is strongly advocated, as infants and children are indisputably co‐exposed to more than one chemical at a time. Indeed, breast milk has been found to contain chemicals regulated as pesticides along with those regulated as cosmetics (including UV filters, parabens and phthalates), together with POPs including polychlorinated biphenyls (PCBs), confirming that simultaneous co‐exposure to multiple chemicals occurs in babies and during pregnancy (Schlumpf et al., 2010; de Cock et al., 2014). A challenge in the evaluation of DNT effects induced by chemicals is that the neurodevelopmental outcome depends not only on the kind of exposure (dose, duration) but also on the developmental stage of the brain at the time of exposure.

Therefore, in this study, it was proposed to use a mixed culture of neuronal and glial cells derived from human induced pluripotent stem cells, as this in vitro model makes it possible to evaluate a chemical's impact on key neurodevelopmental processes (including cell proliferation, migration and morphological/functional neuronal and glial differentiation) mimicking critical stages of human brain development. Moreover, the applied in vitro assays were anchored to selected neurodevelopmental processes that overlapped with common key events identified in AOPs relevant to impairment of learning and memory in children; this is the most frequent adverse outcome identified in the existing DNT AOPs (Bal‐Price and Meek, 2017). The effects of the selected compounds (administered as single chemicals or in mixtures) were assessed on human neural precursor cells undergoing differentiation to determine synergistic, antagonistic or additive effects on brain‐derived neurotrophic factor (BDNF) level, neurite outgrowth and synaptogenesis after short‐term (72 h) or long‐term (14 days in vitro) exposure. The obtained data suggest that low, non‐cytotoxic concentrations of single chemicals, below their lowest‐observed‐adverse‐effect concentrations (LOAECs), become neurotoxic in a mixture, especially for chemicals working through a similar mode of action (MOA) and after 14 days of exposure.
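
To make the interaction classification concrete, the sketch below compares an observed mixture effect against the Bliss independence reference model, one common way (among others, such as concentration addition) of defining the expected ‘additive’ effect. The effect values and tolerance band are illustrative assumptions, not data from the study described above.

```python
# Sketch: classifying a binary-mixture effect against the Bliss independence
# reference model. Effects are fractional responses in [0, 1]; all numbers
# are illustrative, not data from the study described above.

def bliss_expected(e_a: float, e_b: float) -> float:
    """Expected combined effect if the two chemicals act independently."""
    return e_a + e_b - e_a * e_b

def classify(observed: float, e_a: float, e_b: float, tol: float = 0.05) -> str:
    expected = bliss_expected(e_a, e_b)
    if observed > expected + tol:
        return "synergistic"
    if observed < expected - tol:
        return "antagonistic"
    return "additive (consistent with Bliss independence)"

# Two chemicals below their LOAECs (small individual effects) ...
effect_a, effect_b = 0.05, 0.08
# ... but a clearly larger effect observed for the mixture:
print(classify(observed=0.30, e_a=effect_a, e_b=effect_b))  # synergistic
```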

2.1.3. The use of toxicogenomics in chemical risk assessment

‘Omics methods addressing the whole genome (genomics), the transcriptome (transcriptomics), the proteome (proteomics) and the metabolome (metabolomics) have become established as substantial tools in toxicological research over the past decade. Currently, the AOP concept, which links molecular events to adverse effects, is the way forward for regulatory toxicology, as already proposed by the Organisation for Economic Co‐operation and Development (OECD), WHO and others. In risk assessment, however, ‘omics methods play a significant role in hazard identification, but not in risk characterisation, owing to the lack of relevant quantitative data on dose–response relationships. One goal for the future is therefore the integration of ‘omics data into the risk characterisation of chemicals.

Based on an example from food toxicology, it was shown how ‘omics data could be used in risk assessment and which way forward for their future role in risk assessment is possible. By using in silico methods (QSAR), mutagenic and carcinogenic chemicals were identified among more than 800 heat‐induced processing contaminants and a priority list of compounds was established. Using this approach, 3‐monochloropropanediol (3‐MCPD) was identified. A detailed analysis of the proteome and transcriptome of 3‐MCPD‐treated rats identified molecular targets for 3‐MCPD, e.g. related to glucose utilisation and oxidative stress. The antioxidant protein DJ‐1 was strongly deregulated at the protein level in kidney, liver and testis, giving new insights into the MOA of this relevant food contaminant. These new results were recently taken up by EFSA in the course of the risk assessment of 3‐MCPD, showing that ‘omics data have found their way into risk assessment in the MOA section. So far, there has been only limited use of ‘omics techniques in standard toxicity tests performed to identify adverse effects, because of existing limitations. However, by implementation of relevant MOA data into the AOP concept, it will be possible to link MOA effects observed by ‘omics methods to adversity. Furthermore, it will also be possible to develop appropriate in vitro test systems to predict adverse outcomes with significant evidence. Then, it may be possible to modulate the margin of exposure concept by comparing relevant human in vitro ‘omics data together with biological endpoints (AOP data) with human endogenic endpoints (metabolomics data, biomarkers of exposure). In summary, new techniques such as in silico methods (e.g. QSAR, PBPK modelling) as well as ‘omics data together with endogenic biomarkers will fundamentally improve risk assessment in the future.
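
The screening-and-prioritisation step can be pictured as a simple ranking over in silico predictions, as in the sketch below. The compound entries (other than 3‐MCPD itself), the probability fields and the scoring rule are hypothetical stand-ins for the output of validated (Q)SAR tools.

```python
# Sketch: building a priority list of heat-induced process contaminants from
# in silico (Q)SAR predictions. Names (other than 3-MCPD), fields and the
# scoring rule are hypothetical; real workflows use validated QSAR software.

contaminants = [
    {"name": "3-MCPD",     "p_mutagenic": 0.71, "p_carcinogenic": 0.80},
    {"name": "compound_B", "p_mutagenic": 0.12, "p_carcinogenic": 0.35},
    {"name": "compound_C", "p_mutagenic": 0.55, "p_carcinogenic": 0.60},
]

def priority(entry: dict) -> float:
    # Simple rule: rank by the larger of the two predicted probabilities.
    return max(entry["p_mutagenic"], entry["p_carcinogenic"])

for c in sorted(contaminants, key=priority, reverse=True):
    print(f"{c['name']:12s} priority score: {priority(c):.2f}")
```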

2.1.4. Integrating pharmacokinetics and pharmacodynamics in AOPs for next generation risk assessments

Quantitative analysis and modelling of data is one of the most important aspects of risk analysis and assessment. Relevant modelling activities are often divided into pharmacokinetics (PK; related to exposure assessment) and pharmacodynamics (PD; related to dose–response) in a rather simplistic way. We tend toward a fusion of the two disciplines into systems toxicology, at the point where they meet. In any case, modelling has always been important for low‐dose extrapolation, exposure route adjustments or assessing the impact of interindividual variability. Yet, new challenges are emerging: quantitative in vitro to in vivo extrapolation, high‐throughput and high‐content data integration, and integration within the AOP framework. In response to the need for in vitro data integration and extrapolation, pharmacokinetic modelling has definitely taken a physiological (PBPK) turn in the last 10 years. Models of the distribution of chemicals in the animal and human body have dramatically improved, but new models are now being developed to address the complexity of the new in vitro systems. The zebrafish model is usable for both human and ecological risk assessments. A whole series of pharmacokinetic models of in vitro systems is also being developed in ongoing projects such as EU‐ToxRisk. In parallel, the methods for fast simulations and calibration of complex models with experimental data have also been considerably improved over the last decade. AOP models are also being actively developed. Given their potential number and complexity, the best mathematical tools to use are not yet precisely known. For extrapolation purposes, systems toxicology models would probably be favoured, being fundamentally mechanistic, like PBPK models. Yet, they can be extremely complex and data hungry. Statistical models (such as linked non‐linear regression relationships or Bayesian networks) might be simpler to develop, but they may have more restricted applications. In between, there is a whole range of pharmacodynamic models, such as the effect compartment model, often used in pharmacology, but much less so in toxicology. Research is very active in those areas, and it is likely that, for quite a while, the various approaches will co‐exist. As an example, PK/PD modelling of the effects of random mixtures of aromatase inhibitors on the dynamics of women's menstrual cycles was presented. Using high‐speed computer code, random exposures to millions of potential mixtures of 86 aromatase inhibitors, present in both the US EPA ToxCast and ExpoCast databases, were simulated. A PK model of intake and disposition of the chemicals was used to predict their internal concentration as a function of time (up to 2 years). In vitro concentration–inhibition relationships for aromatase were collected from ToxCast and corrected for cytotoxicity. The resulting total aromatase inhibition was input to a mathematical model of the hormonal hypothalamus–pituitary–ovarian control of ovulation in women. At aromatase inhibitor concentrations leading to over 10% inhibition of oestradiol synthesis, noticeable (eventually reversible) effects on ovulation were predicted. Exposures to single chemicals never led to such effects. However, a few per cent of the combined exposure scenarios were predicted to have potential impacts on ovulation, and hence fertility. These results demonstrate the possibility of predicting large‐scale mixture effects for endocrine disrupters with a predictive toxicology approach, suitable for high‐throughput ranking and risk assessment.
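
The chain of steps described for the aromatase-inhibitor case (random co-exposures, internal concentrations, per-chemical inhibition, combined effect, flagging) can be sketched in a few lines of vectorised code, as below. The exposure and potency distributions, the steady-state one-compartment PK step and the independent-action combination rule are simplifying assumptions for illustration; they are not the published model or the ToxCast/ExpoCast data.

```python
import numpy as np

# Sketch of a high-throughput mixture screen for aromatase inhibitors:
# random co-exposures -> steady-state internal concentrations (toy
# one-compartment PK) -> per-chemical Hill inhibition -> combined
# inhibition -> flagging. All parameter values are illustrative.

rng = np.random.default_rng(0)
n_chem, n_scenarios = 86, 10_000

intake = rng.lognormal(-6.0, 2.0, size=(n_scenarios, n_chem))  # mg/kg per day
clearance = rng.lognormal(0.0, 0.5, size=n_chem)               # L/kg per day
ac50 = rng.lognormal(0.0, 1.0, size=n_chem)                    # toy conc. units

css = intake / clearance                 # steady-state internal concentration
inhib = css / (css + ac50)               # Hill inhibition, slope 1
# Independent-action combination of per-chemical inhibitions:
total_inhibition = 1.0 - np.prod(1.0 - inhib, axis=1)

flagged = np.mean(total_inhibition > 0.10)
print(f"{flagged:.2%} of scenarios exceed 10% combined aromatase inhibition")
```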

2.1.5. Modelling tools to assess risks related to cadmium exposure for workers and consumers

This was a case study to provide a practical example of application of modelling tools to risk assessment. Specifically, the French Agency Anses (Agence nationale de sécurité sanitaire de l'alimentation, de l'environnement et du travail) was asked to revise the dietary toxicological reference value (TRV) for cadmium and to propose cadmium maximum levels in fertilising materials and culture media to control soil pollution and in turn the contamination of plants for food use.

For non‐smokers, food is the main source of cadmium exposure, owing to cadmium's high environmental persistence and high rate of soil‐to‐plant transfer. Anses identified bone effects as the key effects and used the 2011 and 2012 epidemiological studies by Engström and colleagues (Engström et al., 2011, 2012) as the key studies for setting a TRV. The no‐observed‐adverse‐effect level (NOAEL) for this effect for a population over 60 years of age corresponded to an internal dose of 0.5 μg cadmium/g urinary creatinine. Based on a PBPK model (Kjellström and Nordberg, 1978; Ruiz et al., 2010) that included data on the variation of creatinine excretion as a function of both age and body weight, and that related urinary cadmium concentrations to oral cadmium intake, a tolerable daily intake (TDI) of 0.35 μg cadmium/kg body weight (bw) per day could be derived. The PBPK model also made it possible to estimate the urinary cadmium excretion limit (the cadmium health‐based guidance value (HBGV) in μg/g creatinine as a function of age) not to be exceeded at any age to prevent exceedance of the internal TRV at adulthood, i.e. 0.5 μg/g creatinine at 50 years of age (Béchaux et al., 2014). Depending on the input of cadmium into soils (according to different scenarios of soil fertilisation), a predictive model was drawn up for estimating the trend of cadmium contamination in plants for human consumption over the following 99 years.
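
The logic of the derivation, translating the internal limit of 0.5 μg/g creatinine at age 50 into an external daily intake, can be illustrated with a drastically simplified one-compartment stand-in for the Kjellström–Nordberg PBPK model. Every parameter value below is a toy assumption; only the general shape of the calculation (an internal limit inverted, via a linear model, to an external intake) mirrors the actual approach.

```python
import numpy as np

# Toy one-compartment stand-in for the Kjellström-Nordberg PBPK model.
# All parameter values are illustrative assumptions, not Anses values.

def urinary_cd_at_50(intake_ug_per_kg_day: float) -> float:
    """Urinary Cd (ug/g creatinine) at age 50 for a constant daily intake."""
    f_abs = 0.05                       # oral absorption fraction (assumed)
    k = np.log(2) / (15 * 365)         # elimination rate; 15-y half-life (assumed)
    f_urine = 0.5                      # share of elimination via urine (assumed)
    bw, creatinine_g_day = 60.0, 1.2   # body weight, creatinine output (assumed)
    t = 50 * 365                       # days of exposure
    burden = f_abs * intake_ug_per_kg_day * bw / k * (1 - np.exp(-k * t))  # ug
    return f_urine * k * burden / creatinine_g_day

# The model is linear in intake, so a reference run can be rescaled to find
# the intake that hits exactly 0.5 ug/g creatinine at age 50:
toy_tdi = 1.0 * 0.5 / urinary_cd_at_50(1.0)
print(f"toy TDI: {toy_tdi:.2f} ug Cd/kg bw per day")
```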

The construction of this predictive soil‐to‐food model was carried out in two stages:

  • Firstly, the transfer of cadmium from its input via fertilisers on agricultural soils to plant produce (potato and wheat grain) was modelled. This part of the model was built on the basis of a ‘mass‐balance’ approach, taking into account: (i) all the routes of cadmium entry into the agricultural soil (fertilising materials, atmospheric deposition, irrigation water); (ii) routes of cadmium release from the soil (food crops, leaching); (iii) variabilities; and also (iv) French specificities along this transfer. This first phase of the model made it possible to study the cadmium contamination of agricultural soils and crops as well as the cadmium leached, as a function of cadmium inputs via fertiliser materials and their agricultural practice in the next 99 years;
  • In a second step, the transfer of cadmium through food from the plant produce to the consumers was modelled to estimate the impact on consumer exposure. Simulations of various fertilisation scenarios were run with an updated Anses model to predict changes in Cd concentration in wheat grain and potatoes. The obtained variations of the cadmium concentration in plants made it possible to estimate the impact on consumer cadmium exposure.

The prepared model, based on cadmium fluxes, thus provides predictive support for estimating cadmium levels in plants and in the related final food products. The model outputs allow derivation of adult and child consumers' average and 95th percentile chronic exposure as a function of the projection time of the modelling (10, 20, 60 or 99 years), in correlation with the trend of cadmium contamination in crops (wheat grain and potato) linked to the fertilisation scenarios. It is also feasible to estimate the possible percentage of exceedance of the TRV. These mathematical models (from field to fork) are useful tools to support the risk assessment and decision‐making processes. Based on such simulations, acceptable levels of cadmium pollution in fertilisers, soils and, ultimately, food items may be determined.
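
A stripped-down version of the first, mass-balance stage of such a model is sketched below: annual inputs to the plough layer minus first-order removal by crop offtake and leaching, iterated over 99 years. All numbers are toy values chosen only to show the bookkeeping; the Anses model additionally handles variability and French-specific parameters.

```python
# Toy mass-balance projection of cadmium in agricultural topsoil over 99 years.
# All values are illustrative, not the Anses model parameters.

soil_cd = 0.30               # mg Cd/kg soil, initial concentration (assumed)
soil_mass = 3.9e6            # kg soil per hectare in the plough layer (assumed)

inputs_g_ha_yr = {"fertiliser": 1.8, "deposition": 0.35, "irrigation": 0.05}
removal_fraction = 0.004     # crop offtake + leaching per year (assumed)

trajectory = [soil_cd]
for year in range(99):
    added = sum(inputs_g_ha_yr.values()) * 1000 / soil_mass   # mg/kg this year
    soil_cd += added - soil_cd * removal_fraction
    trajectory.append(soil_cd)

print(f"soil Cd: {trajectory[0]:.2f} -> {trajectory[-1]:.3f} mg/kg after 99 years")
# A soil-to-plant transfer factor would then map soil Cd to wheat/potato
# concentrations, and consumption data would map those to consumer exposure.
```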

2.1.6. Predictive tools in the risk assessment of new proteins in genetically modified organisms (GMOs): the case of coeliac disease

Coeliac disease (CD) is a disease of the small intestine characterised by flattening of the intestinal surface, resulting in a variety of clinical symptoms including malabsorption, failure to thrive, diarrhoea and stomach ache. The disease is caused by an uncontrolled intestinal CD4 T‐cell response to gluten proteins in wheat (Triticum ssp.) and to the gluten‐like hordeins and secalins in barley (Hordeum vulgare) and rye (Secale cereale). Oat (Avena sativa) is generally considered safe for patients, although exceptions have been reported. The only available treatment is a life‐long gluten‐free diet, including the exclusion of all food products that contain wheat, barley and rye or gluten and gluten‐like proteins from these grains. CD has a strong genetic component, as it is associated with particular immune response genes, called HLA in man (Koning et al., 2015). Most CD patients express certain HLA‐DQ molecules. HLA‐DQ molecules are dimers of an alpha‐ (DQA1) and a beta‐ (DQB1) chain. As for all HLA molecules, HLA‐DQ molecules bind short peptides and present these to T cells of the immune system. The large majority of CD patients express HLA‐DQ2.5, while the remainder are usually HLA‐DQ8 positive. In patients, but not in healthy individuals, pro‐inflammatory gluten‐specific CD4+ T cells are present in the lamina propria of the affected duodenum. Importantly, these CD4+ T cells recognise gluten peptides only when presented by the disease‐associated HLA‐DQ molecules. In essence, in patients with CD the immune system makes a mistake: the harmless gluten proteins in the food are recognised as if derived from a pathogen, leading to a pro‐inflammatory response as long as gluten is consumed. Elimination of gluten from the diet constitutes an effective treatment, as the T‐cell stimulatory gluten peptides are no longer present. Unfortunately, once a gluten‐specific T‐cell response has developed, this results in immunological memory. Therefore, every subsequent exposure to gluten will reactivate the gluten‐reactive T cells and result in inflammation. A life‐long gluten‐free diet is therefore required. T‐cell epitopes derived from the α‐, γ‐ and ω‐gliadins as well as from the HMW and LMW glutenins have been reported. In addition, T‐cell epitopes in both hordeins and secalins have been identified that are highly homologous or even identical to those found in wheat. A detailed knowledge of these known disease‐causative sequences in gluten allows the design of a specific strategy to identify potentially harmful sequences in other proteins. This strategy is presented in the EFSA guidance on allergenicity assessment of GM plants (EFSA GMO Panel, 2017).
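
In practice, the core of such a strategy is a sequence scan of the novel protein against the catalogue of known disease-causative epitopes, as in the sketch below using exact 9-mer matching. The two α-gliadin cores shown are published epitope cores, but the truncated epitope list, the toy protein sequence and the identity threshold are illustrative; the authoritative procedure is the one described in the EFSA guidance (EFSA GMO Panel, 2017).

```python
# Sketch: scanning a novel protein sequence for known coeliac-disease-relevant
# T-cell epitope cores. The epitope list is truncated and illustrative; see
# EFSA GMO Panel (2017) for the authoritative strategy.

KNOWN_EPITOPE_CORES = {
    "PFPQPQLPY": "DQ2.5-glia-alpha1a",
    "PQPQLPYPQ": "DQ2.5-glia-alpha1b",
}

def scan_protein(sequence: str, min_identity: float = 1.0):
    """Return (position, epitope name) for each 9-mer window matching a core."""
    hits = []
    for i in range(len(sequence) - 8):
        window = sequence[i:i + 9]
        for core, name in KNOWN_EPITOPE_CORES.items():
            identity = sum(a == b for a, b in zip(window, core)) / 9
            if identity >= min_identity:
                hits.append((i, name))
    return hits

novel_protein = "MKTAYPFPQPQLPYAGLV"   # toy sequence containing one known core
print(scan_protein(novel_protein))     # [(5, 'DQ2.5-glia-alpha1a')]
```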

2.2. New approaches in exposure assessment

2.2.1. Human biomonitoring

The European Human Biomonitoring Initiative (HBM4EU, https://www.hbm4eu.eu/) follows an innovative approach to generate the knowledge that policy makers need to improve policy in environment and health. The overarching goal of HBM4EU is to generate new knowledge to inform the safe management of chemicals, and so protect human health in Europe (Ganzleben et al., 2017). Human biomonitoring (HBM) data supply information on the aggregate exposure from all sources and by all pathways, serving as the basis to assess the risks from human exposure to chemicals. The research programme is based on the policy needs and priority chemicals identified after consultation with European and national policy makers. It builds upon existing knowledge from national and EU monitoring and research programmes. HBM4EU consists of more than 110 partner organisations from 28 countries (27 European countries plus Israel) and is organised around 16 work packages led by key players of national HBM studies and research programmes. Major fields of activity are science‐policy transfer, HBM studies and research to elucidate the impact of exposure on health. Data management under HBM4EU collects existing HBM data, currently fragmented across Europe. Exposure data valid for the whole of Europe, the identification of vulnerable or highly exposed subpopulations and the analysis of spatial and temporal exposure trends are major goals of HBM4EU. HBM is also considered to be key for addressing exposure to mixtures, as it reveals the extent and quality of multiple chemical exposures. These data also demonstrate the need to develop concepts for health risk assessment beyond traditional single‐substance evaluation methods. Intensive communication with policy makers from the planning stage onwards will ensure that HBM4EU results are used in the further development and design of new chemicals policies, as well as in the evaluation of existing measures.

Among the recommendations for a better inclusion of HBM in risk assessment there are:

  • raising awareness of the capabilities of HBM at EU and national level;
  • developing harmonised guidance for the use of HBM data;
  • setting HBM‐based HBGVs.

2.3. Holistic assessment of exposures to environmental and endogenous oestrogens by internal dosimetrics

Exposure to oestrogenic compounds through the diet and environment is an ongoing public health focus. This focus is based on the hypothesis that some endocrine‐active compounds bind to oestrogen receptors to a sufficient degree to affect genomic signalling and so adversely impact normal endocrine function in animals and humans, which, over time, leads to a number of diseases. For example, extensive research and risk assessment activity has centred on the potential for adverse effects from exposure to the food contact‐associated oestrogenic chemical, bisphenol A (BPA), especially during the perinatal period. A large body of pharmacokinetic evidence from rodents, non‐human primates and humans, which includes exposures during early neonatal and adult life stages, has been incorporated into PBPK models for BPA (Yang et al., 2015). Circulating concentrations of BPA in individuals with average and high consumption of canned foods are consistently in the low picomolar range. The modelled outputs for internal dosimetry from rodent and human models can also provide chemical‐specific factors for use in computing HBGVs from toxicological studies in rodents. In addition, the plausibility of oestrogen receptor‐mediated effects from BPA, based on measurements in serum and/or urine of BPA, dietary oestrogens [genistein (GEN), daidzein (DDZ)] and endogenous hormones [oestrone (E1), oestradiol (E2), oestriol (E3) and the fetal liver‐derived oestetrol (E4)], was evaluated using mathematical calculations of fractional receptor occupancy (FRO) and relative responses (RR) for activation of oestrogen receptors (ERα and ERβ) in the presence of serum binding proteins (sex hormone binding globulin (SHBG) and albumin) in a cohort of pregnant women (Pande et al., 2019). These comparisons were made to critically evaluate the hypothesis that serum BPA must contribute sufficient added activity to shift total oestrogenicity by a meaningful increment over normal intraindividual daily variability to be considered important. The median FRO for BPA was five orders of magnitude lower than that of E1, E2 or E3 and three orders of magnitude lower than that of E4, GEN or DDZ. Similarly, based on the RR values, E3 was the most potent serum oestrogen during pregnancy (median RR values of 0.746 and 0.794 for the ERα and ERβ receptors, respectively). The median RR values for E2 were 0.243 for ERα and 0.167 for ERβ. The RR values for the remaining oestrogens were consistently less than 0.01, and were even lower for the dietary oestrogens, GEN and DDZ, and for BPA. Moreover, these minor interactions of BPA were dwarfed by the intraday and interindividual variability in the activity from endogenous oestrogens present in the pregnant women. Similarly, the receptor binding levels of endogenous oestrogens in normally cycling non‐pregnant women suggest that BPA interactions would also be negligible. A consistent body of evidence comprising: (i) classical pharmacokinetic and PBPK modelling approaches that indicate minimal internal exposures from realistic doses; and (ii) the implausibility of observable oestrogenic actions in ordinary pregnant women, reaffirms the conclusions of most regulatory bodies world‐wide that exposure to BPA resulting from approved food contact uses is safe (US Food and Drug Administration, 2014; EFSA, 2015).
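
The FRO comparison rests on the standard competitive-binding relation FRO_i = (C_i/Kd_i) / (1 + sum_j C_j/Kd_j). The sketch below applies it to a handful of ligands competing for one receptor; the free-concentration and Kd values are placeholders, and the published analysis (Pande et al., 2019) additionally modelled binding to SHBG and albumin.

```python
# Sketch: fractional receptor occupancy (FRO) for ligands competing for one
# oestrogen receptor: FRO_i = (C_i/Kd_i) / (1 + sum_j C_j/Kd_j).
# Free concentrations and Kd values are placeholders, not the study's data.

free_conc_nM = {"E2": 0.5, "E3": 40.0, "GEN": 2.0,   "BPA": 0.01}
kd_nM        = {"E2": 0.2, "E3": 5.0,  "GEN": 300.0, "BPA": 1000.0}

ratios = {lig: free_conc_nM[lig] / kd_nM[lig] for lig in free_conc_nM}
denom = 1.0 + sum(ratios.values())

for lig, r in sorted(ratios.items(), key=lambda kv: -kv[1]):
    print(f"{lig:4s} FRO = {r / denom:.2e}")
# With these placeholder numbers, BPA occupies ~1e-6 of receptors, orders of
# magnitude below the endogenous oestrogens - the qualitative point above.
```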

2.4. The exposome in practice

The identification of hazardous environmental pollutants is complex, particularly in relation to chronic, non‐communicable diseases. The main contributors to this complexity are the diversity of hazards that may exist, the typically low levels of environmental contaminants/pollutants, long latency periods and largely unknown modes of action. The unravelling of environmental causes of disease is also limited by the technical difficulties in defining, and accurately measuring, exposures and by considerable spatial, temporal and intraindividual variation. The complex and partially unknown interaction with underlying genetic and other factors that modulate susceptibility and response to environmental exposures further complicates the process of delineating and understanding environmental hazards. To address such difficulties, the concept of the ‘exposome’ was proposed, initially by Wild ( 2005 ), with more recent detailed development in relation to its application to population‐based studies (Wild, 2012 ). The original concept was expanded by others, particularly Rappaport and Smith ( 2010 ), who functionalised the exposome in terms of chemicals detectable in biospecimens. The exposome concept refers to the totality of exposures from a variety of sources including, but not limited to, chemical agents, biological agents, radiation and psychosocial component from conception onward, over a complete lifetime, and offers a conceptual leap in studying the role of the environment in human disease (Rappaport and Smith, 2010 ; Wild, 2012 ; Vineis et al., 2017 ).

There are two broad interpretations of the exposome concept, and they are complementary. One, called ‘top‐down’, is mainly interested in identifying new causes of disease by an agnostic approach based on ‘omic technologies, similar to that applied in genetics with the genome‐wide association study (GWAS) design. This approach is sometimes called an exposome‐wide association study (EWAS), and utilises tools such as metabolomics or adductomics to generate new hypotheses on disease aetiology. The second general approach is called ‘bottom‐up’ and starts with a set of exposures or environmental compartments to determine the pathways or networks by which such exposures lead to disease, i.e. which pathways/networks are perturbed. We have used the latter approach in the EU‐funded EXPOsOMICS project (Vineis et al., 2017), which was focused on air pollution and water contamination. The experience of air pollution is particularly instructive. While in the 1970s and early 1980s air pollution was considered a relatively marginal exposure in terms of attributable risks, the most recent estimate is that it accounts for 7.6% of global deaths and 4.2% of global disability‐adjusted life years (DALYs) world‐wide (Vineis and Fecht, 2018). The change in appreciation of the role of air pollution has been mainly due to the refinement of exposure assessment methods and the new generations of longitudinal studies. Mechanistic evidence via ‘omic technologies is now rapidly increasing, lending credibility to previous epidemiological (‘black box’) associations (Vineis, 2018; Vineis and Fecht, 2018). In the EXPOsOMICS project, a few priorities for research were selected, with relevant practical implications for policy making and stakeholders: can our knowledge of the health effects of these two important exposures, air pollution and water contaminants, be consolidated, reinforcing causal assessment? Can variation in exposures be detected in a finer way than with the standard tools of epidemiology? Can the effects of low and very low levels of exposure be detected using ‘omic biomarkers? How can ‘omic measurements be exploited to study pollutant mixtures? Can improved exposure assessment be used to calibrate estimates of risk and burden of disease? As a result, methodologies for the validation of a set of five ‘omics measured in the same subjects (more than 2,000 individuals in total) were developed, together with statistical tools allowing the analysis of very complex data sets. Compared with air pollution, much less is known about other environmental contaminants, some of which are widespread and pervasive, suggesting the need for the same rigorous methods as those applied to air pollution.

2.5. Integrating new tools and new approaches in exposure assessment – examples

2.5.1. Integrated safety assessment of genetically modified food/feed: the experience from the EU projects GRACE and G‐TwYST

The application of the classical tools developed for the risk assessment of chemicals to the evaluation of risks derived from GM plants has been very controversially discussed, particularly as regards animal studies on whole food and feed. The EFSA GMO Panel indicated the possibility of using a 90‐day study on whole food/feed based on the OECD Test Guideline 408 (OECD TG 408, 1998) in cases where a specific hypothesis was identified in the course of the preliminary analysis of the GMO (e.g. comparative assessment of the compositional and agronomic‐phenotypic characteristics of the GM crop) (EFSA GMO Panel, 2011). EFSA's Scientific Committee developed principles and guidance for the establishment of protocols for 90‐day whole food/feed studies in rodents, adapting the existing OECD Test Guideline 408 to the particular test item ‘food/feed’ (EFSA Scientific Committee, 2011). Regulation (EU) No 503/2013 2 on applications for EU market authorisation of GM food and feed in accordance with Regulation (EC) 1829/2003 3 made the 90‐day rodent feeding study on the whole GM food/feed mandatory for single transformation events, even in the absence of hypotheses. In a later explanatory statement (EFSA, 2014), EFSA provided further instructions on how to apply the general principles described in the EFSA Scientific Committee Guidance for the study design and analysis of such 90‐day studies for GMO risk assessment and described two possible scenarios (scenario 1: a specific hypothesis is available, i.e. the preceding analyses have identified a potential risk(s); scenario 2: no specific hypothesis is available, i.e. no potential risk has been identified). Upon request from the European Commission, EFSA also prepared a scientific report to support the future establishment of protocols for chronic toxicity and/or carcinogenicity studies in rodents with whole food/feed (EFSA, 2013). Two EU‐funded projects, GRACE (GMO Risk Assessment and Communication of Evidence) and G‐TwYST (Genetically modified plants Two Year Safety Testing), performed animal feeding trials and applied alternative in vitro methods with two different GM maize varieties to determine how suitable these approaches are and what useful scientific information they provide for the health risk assessment of GM food and feed. Subchronic and chronic toxicity as well as carcinogenicity testing in rats was conducted based on the OECD Test Guidelines for the testing of chemicals and on the above‐mentioned EFSA documents. Under the GRACE project, 90‐day feeding trials as well as a 1‐year feeding trial with two GM MON810 maize varieties and several different near‐isogenic varieties were performed (Zeljenková et al., 2014, 2016; Schmidt et al., 2017). Moreover, ‘omics as well as in vitro (cell culture) approaches were applied to evaluate their possible added value in the overall risk assessment of GM crops (van Dijk et al., 2014; Sharbati et al., 2017). Based on the EFSA explanatory statement, the OECD Test Guideline 453 as well as the scientific report by EFSA on the applicability of the OECD Test Guideline 453 to whole food/feed testing (EFSA, 2013), and taking into account possible concerns raised by a publication on the long‐term toxicity of the GM maize NK603 (Séralini et al., 2012), the G‐TwYST consortium performed two 90‐day feeding trials as well as a combined 2‐year chronic toxicity/carcinogenicity study in rats with the GM maize NK603. The main findings of the different experimental approaches in the GRACE and G‐TwYST projects were presented.
In these projects, it was concluded that rodent feeding trials do not provide added value to the risk assessment of GM crops in cases where relevant changes and/or specific hazards have not been identified in preceding analyses. Based on the experience gained in these two EU‐funded research projects, it was highlighted that relevant aspects of the study design include the choice of the rodent strain, the incorporation rate of the GM crop to be tested and consideration of cage effects. A new development in the statistical analysis of the data obtained in these rodent feeding trials was the use of equivalence testing, in addition to difference testing, to support the interpretation of differences between animals given the GM crop and controls. An ‘omics technique (metabolomics) was used to further characterise the composition of the GM crop; this technique was considered promising, but needs further discussion as regards its integration into the risk assessment process. In vitro techniques to investigate effects of the GM crop on the intestine and on the immune system did not show changes in rats given the GM crops as compared with concurrent controls. However, in the absence of a positive control, it is difficult to fully establish their relevance in this context. In the above‐mentioned projects, attention was given to communication, transparency, engagement of stakeholders and Responsible Research and Innovation (RRI) principles.
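
Equivalence testing reverses the usual null hypothesis: instead of asking whether the GM and control groups differ, it asks whether any difference can be shown to lie within a pre-specified margin. A minimal two-one-sided-tests (TOST) sketch is given below; the simulated data and the +/-10 unit margin are illustrative and do not reproduce the projects' pre-specified statistical plans.

```python
import numpy as np
from scipy import stats

# Sketch of equivalence testing (two one-sided tests, TOST) for a feeding-trial
# endpoint. Data and the equivalence margin are illustrative only.

def tost_ind(x, y, low, high):
    """TOST for the difference in means of two independent samples."""
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                 / (nx + ny - 2))                 # pooled standard deviation
    se = sp * np.sqrt(1 / nx + 1 / ny)
    df = nx + ny - 2
    p_lower = 1 - stats.t.cdf((diff - low) / se, df)   # H0: diff <= low
    p_upper = stats.t.cdf((diff - high) / se, df)      # H0: diff >= high
    return diff, max(p_lower, p_upper)                 # equivalence if p < alpha

rng = np.random.default_rng(1)
control = rng.normal(100.0, 8.0, 16)   # e.g. organ weight, control diet
gm_diet = rng.normal(101.5, 8.0, 16)   # e.g. organ weight, GM diet

diff, p = tost_ind(gm_diet, control, low=-10.0, high=10.0)
print(f"mean difference {diff:+.2f}; TOST p = {p:.4f} (equivalent if p < 0.05)")
```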

2.5.2. CLARITY‐BPA Project: Core NCTR/NTP study on BPA and lessons learnt on integrating regulatory and academic investigations in hazard assessments by the US National Toxicology Program

One of the challenges faced in the regulatory setting is the integration of data from investigative academic studies with those from Good Laboratory Practice (GLP) studies conducted according to test guidelines for submission to regulatory agencies for making risk assessment decisions. The way in which these different types of studies are conducted and reported is a challenge when trying to integrate different data streams. These issues are often related to exposure levels, study design and conduct, chemical purity, statistical power, reporting of study details, selective reporting of data, risk of bias, directness of end‐point measures to a health outcome, and data reporting transparency. One such case is that of BPA. BPA is a chemical produced in large quantities for use primarily in the production of polycarbonate plastics and epoxy resins, which are used as lacquers to coat metal products such as food cans, bottle tops and water supply pipes. Human exposure to BPA is widespread, with 93% of Americans 6 years and older having detectable levels of BPA in their urine. The health impact of low‐level exposure to BPA is a topic of considerable debate world‐wide. On the one hand, a few GLP‐ and guideline‐compliant studies support BPA safety at current exposure levels; on the other hand, hundreds of smaller‐scale research studies have indicated possible low‐dose effects of this substance. Two subsequent presentations described the experience with the research programme developed by the US National Institute of Environmental Health Sciences (NIEHS), the National Toxicology Program (NTP) and the Food and Drug Administration (FDA): the so‐called Consortium Linking Academic and Regulatory Insights on BPA Toxicity (CLARITY‐BPA; Schug et al., 2013). The CLARITY‐BPA research programme was initiated with the aim of filling the gap between guideline‐compliant research and hypothesis‐based research projects on the toxicity of BPA (Birnbaum et al., 2013). This project investigated a broader range of potential health effects from exposure to BPA, especially in the low‐dose range, that could inform regulatory decision making.

The CLARITY‐BPA research programme has two components: (i) A ‘core’ modified guideline‐compliant chronic study conducted at FDA's National Center for Toxicological Research (NCTR) according to FDA GLP regulations (2‐year perinatal only or chronic BPA exposure, including perinatal); and (ii) CLARITY‐BPA grantee studies of various health endpoints, conducted by NIEHS–funded researchers at 14 academic institutions using tissues and animals born to the same pregnant rats and exposed under identical conditions as the core GLP study (Heindel et al., 2015 ).

In the core study, the toxicity of BPA administered by oral gavage from gestation day 6 until labour and then directly to pups by daily gavage from post‐natal day 1 was examined in Sprague–Dawley rats. Study materials were monitored for background BPA levels throughout. A wide range of BPA doses was used ranging from as close as feasible to estimated human exposure levels to a reasonable margin of exposure (2.5, 25, 250, 2,500, and 25,000 μg/kg bw per day). Because many of the reported effects of BPA are associated with oestrogen‐signalling pathways, two doses (0.05 and 0.5 μg/kg bw per day) of ethinyl oestradiol (EE2) were also included to monitor the response of the model to an oestrogen. In addition to animals dosed daily throughout the study (continuous‐dose arm), a stop‐dose study arm was included for the BPA doses only, with animals dosed until post‐natal day 21 and then held without further treatment until termination, to assess any effects that were due to early exposure. In both study arms, animals were terminated at 1 year (interim) and 2 years (terminal). Statistical comparisons were conducted within sex, study arm, and sacrifice time and BPA and EE2 groups were analysed separately. Data collected included survival, body weights, litter parameters, age at vaginal opening, vaginal cytology, including an assessment of the onset of aberrant cycles, clinical chemistry (interim sacrifice only), sperm parameters (interim sacrifice only), organ weights (interim sacrifice only) and histopathology (both interim and terminal sacrifices). The grantee studies assessed a range of molecular, structural and functional endpoints that are not typically assessed in guideline‐compliant studies (Heindel et al., 2015 ). As the safety assessment of BPA is outside the scope of this paper, for the results of the core and grantee studies the reader should consult the NTP website ( https://manticore.niehs.nih.gov/cebssearch/program/CLARITY-BPA ).

The key strengths of this consortium approach included: (i) the identical BPA exposure conditions used for both components of the consortium, which were provided at the same facility (NCTR); (ii) blinding of the core study samples received by the academic grantees, thereby minimising the potential risk of bias; and (iii) the development of an a priori list of endpoints to be collected per study and the requirement that all data be deposited in a private workspace in the NTP's database before decoding. This allowed for confidential data acquisition and blinded deposition of data, and also ensured that end‐point data acquisition was free of bias before subsequent public access to the data. There were, however, limitations to this approach. Academic investigators were limited to using a specific shared design and model that may not have been optimal for the specific endpoints proposed. Sample acquisition was centralised and coordinated, so highly specialised sample preparation or animal handling procedures required additional coordination, training and resources. Thirdly, the scheme for peer review and selection of grantee proposals followed traditional National Institutes of Health (NIH) peer review procedures, such that guidance was more general in nature and submitted proposals addressed hypothesis‐generated research questions rather than being specifically aligned with regulatory needs. Looking forward, a key lesson learnt from the CLARITY‐BPA programme is that future initiatives need a less resource‐intensive approach, with a much more targeted and integrated problem formulation and consortium development phase, and more direct communication between what the regulatory scientists need to make decisions and what the academic scientists can provide. This would result in closer alignment between the identified regulatory data gaps and the design of the studies and would maximise the utility of such collaborative programmes, while decreasing their costs.

2.5.3. Setting a health‐based guidance value using epidemiological studies

In chemical risk assessment, the use of the benchmark dose (BMD) for deriving an HBGV is increasingly being preferred over the use of a single point estimate, such as the traditional NOAEL. In line with this development, EFSA's Scientific Committee (EFSA Scientific Committee, 2017) recommends that Scientific Panels apply the BMD approach when setting HBGVs. The use of the BMD for human observational studies has been partly hampered by how findings from epidemiological studies are conventionally reported, highlighting the need for more dialogue between risk assessors and epidemiologists. More importantly, there is currently no consensus on how to derive a BMD for human studies, as the BMD methodology has mostly been developed for use in controlled studies in experimental animals. Although the same principles generally apply for human studies, existing guidance may not always be directly applicable. For example, in the updated EFSA guidance on BMD, model averaging based on a default set of pre‐selected models is recommended. However, the recommended models do not include linear or other polynomial models. This may be logical for animal studies, in which there are well‐defined unexposed controls (zero exposure) and the doses used often cover >100‐fold differences in exposure (making a linear response over the full exposure range highly unlikely). For human studies, the observed exposure range is, in contrast, usually much narrower and the dose–response is often approximately linear. In addition, in observational settings ‘unexposed individuals’ (zero exposure) usually do not exist, making the reference point highly dependent on the study population and how it is selected. Furthermore, the high variability observed in human studies and the use of biomarker concentrations to assess exposure create several additional challenges when deriving HBGVs, and, given varying sample sizes, the use of lower‐bound benchmark doses (BMDLs) for human data needs careful consideration. In conclusion, there are no major obstacles to using human data to derive HBGVs. BMD analyses can easily be performed, but existing conventions may not be directly applicable, and more work is needed (as has been carried out for animal data).
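
For a near-linear human dose-response of the kind discussed above, the BMD machinery reduces to a few lines: fit the model, define a benchmark response, and invert. The sketch below uses a straight-line fit, a 5% change from the modelled background as the BMR, and a one-sided bound on the slope for the BMDL; the data and all of these choices are illustrative and are not EFSA-recommended defaults (which involve model averaging over a suite of models).

```python
import numpy as np
from scipy import stats

# Sketch: BMD/BMDL from a near-linear human dose-response. The data, the
# straight-line model and the 5% BMR are illustrative choices only.

exposure = np.array([0.1, 0.3, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])  # biomarker units
response = np.array([2.1, 2.2, 2.3, 2.5, 2.6, 2.9, 3.2, 3.6])  # endpoint units

fit = stats.linregress(exposure, response)
background = fit.intercept            # modelled response at zero exposure
bmr = 0.05 * background               # benchmark response: 5% change

bmd = bmr / fit.slope
# A one-sided 95% upper bound on the slope yields a lower bound on the BMD:
t95 = stats.t.ppf(0.95, df=len(exposure) - 2)
bmdl = bmr / (fit.slope + t95 * fit.stderr)

print(f"BMD = {bmd:.2f}, BMDL = {bmdl:.2f} (same units as exposure)")
```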

3. Conclusions and recommendations

A shift from the current risk assessment paradigm, largely relying on animal studies, to a more holistic approach is possible and desirable. Such a shift would be based on NAMs offering a better understanding of the mechanisms leading to adverse effects and more accurate predictions of biological responses, helping to establish causality and obviating the use of animal studies. This would be complemented by novel approaches and tools in the exposure and epidemiological sciences. Ultimately, this holistic approach would improve human health assessment while reducing animal testing. Envisaged by NASEM since 2007 and proposed in the EU in a SCHER/SCENIHR/SCCS report in 2013, such a change in the human health risk assessment paradigm can already benefit from the availability of scientifically and technically robust tools and comprehensive data sets. Overall, this shift would help to optimise the way risk assessment is performed, both by accelerating the pace of risk assessment and by embracing experimental models of greater human relevance, while addressing societal concerns about animal experimentation.

To support the paradigm shift to a holistic approach, efforts are needed, in the first instance, to promote the use of mechanism‐based test systems exploring molecular initiating events and early key events, complementing or replacing experimental animal tests designed to show adverse effects. This is considered to be pivotal to support the understanding of biological pathways and would ultimately enable more accurate predictions of biological responses to single or multiple stressors and help to establish the causality of effects. However, human‐relevant in vitro assays, alternative models (e.g. zebrafish) and predictive modelling should be further developed and validated.

Efforts should be made to integrate NAMs with existing in vivo data matrices into existing AOPs, but also to support the development of new ones. This would allow risk assessors to use AOP‐informed approaches, to make sensible use of all available data, and to enhance confidence in the mechanistic understanding underlying a ‘regulatory’ adverse outcome. This could be achieved by fostering the causal link between the molecular and cellular effects of substances and their effects at the level of organs, organisms and populations.

The advances in the new exposure and epidemiology sciences, as well as the availability of human biomonitoring data sets, should be considered, and strategies should be developed for their incorporation into a holistic exposure assessment. The use of already available data in current risk assessment faces challenges. A rethinking of how to conduct and use epidemiological studies/data is also needed. These should be designed to be incorporated into the AOP framework and used to consolidate human adverse outcomes in the testing paradigm.

Also relevant is the development of methodologies and guidance to refine the prediction of blood and tissue concentrations from exposure through TK or PBTK modelling. Conversely, in vitro‐derived potency information needs to be converted into dosimetry information that, in turn, can be translated into corresponding external doses using in vitro–in vivo extrapolation (IVIVE) tools.
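
In its simplest 'reverse dosimetry' form, IVIVE divides the in vitro active concentration by the steady-state plasma concentration produced by a unit oral dose. The sketch below uses a toy one-compartment relation for that unit-dose Css; the clearance, molecular weight and AC50 values are assumptions, and real applications would use population TK models in the style of the httk approach.

```python
# Sketch of reverse dosimetry (IVIVE): oral equivalent dose (OED) =
# in vitro active concentration / Css per unit oral dose. The one-compartment
# Css relation and all parameter values are illustrative assumptions.

def css_per_unit_dose(clearance_l_h_kg: float, f_abs: float = 1.0,
                      mw_g_mol: float = 300.0) -> float:
    """Steady-state plasma conc. (uM) per 1 mg/kg bw per day oral dose."""
    dose_umol_kg_h = 1.0 / mw_g_mol * 1000 / 24   # 1 mg/kg/day -> umol/kg/h
    return dose_umol_kg_h * f_abs / clearance_l_h_kg   # umol/L = uM

ac50_uM = 3.0    # in vitro activity concentration (assumed)
cl = 0.8         # total clearance, L/h per kg bw (assumed)

oed = ac50_uM / css_per_unit_dose(cl)
print(f"oral equivalent dose ~ {oed:.1f} mg/kg bw per day")
```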

Data gathering, organisation, curation and use are perceived as priorities in this context. The development of databases and software for high‐throughput screening (HTS) and high‐content image analysis (HCA) is needed to facilitate international harmonisation and to promote the use of NAMs and of the large amount of data produced by these technologies.

Strategies are required for re‐engineering already available data matrices and making them accessible, readable, interpretable, usable and integrable with the new data streams. To facilitate and standardise a transparent risk assessment, electronic submission of toxicological, exposure and epidemiological raw data should be promoted by creating a digital network of relevant information.

The successful implementation of NAMs and other new developments in future risk assessment will necessitate cooperation between academia, risk assessors and international bodies such as the OECD, as well as the identification of these areas as high research priorities deserving long‐term and flexible funding at the European level, supported by an effective dialogue between funding bodies, academia and risk assessors.

Overall, the recommendations from this symposium can be summarised as follows:

  • bridging of the gap between traditional risk assessment methodologies, the assessment of non‐standard endpoints and new approach methodologies to enable a shift in the risk assessment paradigm over the coming years;
  • promoting the regulatory acceptance of reliable and predictive NAMs;
  • provision of political support and research programmes that are fit for purpose and provide long‐term and flexible research funding, to make robust science available and accelerate the pace of risk assessment;
  • promoting education and dedicated training programmes that engage risk assessment bodies, academia and relevant stakeholders to build the necessary capacity in terms of multidisciplinary expertise.


Suggested citation: Lanzoni A, Castoldi AF, Kass GEN, Terron A, De Seze G, Bal‐Price A, Bois FY, Delclos KB, Doerge DR, Fritsche E, Halldorsson T, Kolossa‐Gehring M, Hougaard Bennekou S, Koning F, Lampen A, Leist M, Mantus E, Rousselle C, Siegrist M, Steinberg P, Tritscher A, Van de Water B, Vineis P, Walker N, Wallace H, Whelan M and Younes M, 2019. Advancing human health risk assessment. EFSA Journal 2019;17(S1):e170712, 21 pp. 10.2903/j.efsa.2019.e170712

Acknowledgements: The European Food Safety Authority (EFSA) and the authors wish to thank the participants of the break‐out session ‘Advancing risk assessment science – Human health’ at EFSA's third Scientific Conference ‘Science, Food and Society’ (Parma, Italy, 18–21 September 2018) for their active and valuable contributions to the discussion. We also thank Hans Verhagen for carefully proofreading the manuscript.

Disclaimer: The views or positions expressed in this article do not necessarily represent in legal terms the official position of the European Food Safety Authority (EFSA) or reflect the views of the US Food and Drug Administration. EFSA assumes no responsibility or liability for any errors or inaccuracies that may appear. This article does not disclose any confidential information or data. Mention of proprietary products is solely for the purpose of providing specific information and does not constitute an endorsement or a recommendation by EFSA for their use.

Approved: 9 May 2019

1 SCHER (Scientific Committee on Health and Environmental Risks), SCENIHR (Scientific Committee on Emerging and Newly Identified Health Risks) and SCCS (Scientific Committee on Consumer Safety), 2013. Addressing the New Challenges for Risk Assessment. European Commission, ISSN 2315‐0106, ISBN 978‐92‐79‐31206‐9. https://doi.org/10.2772/37863

2 Commission Regulation (EU) No 503/2013 of 3 April 2013 on applications for authorisation of genetically modified food and feed in accordance with Regulation (EC) No 1829/2003 of the European Parliament and of the Council and amending Commission Regulations (EC) No 641/2004 and (EC) No 1981/2006. OJ L157, 8.6.2013, pp. 1–48.

3 Regulation (EC) No 1829/2003 of the European Parliament and of the Council of 22 September 2003 on genetically modified food and feed. OJ L 268, 18.10.2003, pp. 1–23.



Risk management


Bringing the Environment Down to Earth

  • Forest L. Reinhardt
  • From the July–August 1999 Issue

Confronting a Bad Deal

  • Blythe McGarvie
  • June 16, 2015


See Your Company Through the Eyes of a Hacker

  • Nathaniel C Fick
  • March 24, 2015


Ethics and AI: 3 Conversations Companies Need to Have

  • Reid Blackman
  • Beena Ammanath
  • March 21, 2022

Strategic Analysis for More Profitable Acquisitions

  • Alfred Rappaport
  • From the July 1979 Issue


Learning from the Future

  • J. Peter Scoblic
  • From the July–August 2020 Issue

Generative AI-nxiety

  • August 14, 2023


Is Anyone Really Responsible for Your Company’s Data Security?

  • Joel Brenner
  • June 19, 2013

Treat Employees like Adults

  • Frank Furedi
  • From the May 2005 Issue

Four Steps to Fixing Your Bad Data

  • Thomas C. Redman
  • August 10, 2011

How to Face Your Company’s Mortality

  • Ron Ashkenas
  • February 22, 2010


"What Is the Next Normal Going to Look Like?"

  • Adi Ignatius


The Stretch Goal Paradox

  • Sim B. Sitkin
  • C. Chet Miller
  • Kelly E. See
  • From the January–February 2017 Issue

What’s Your Company’s Water Footprint?

  • August 05, 2009

Pitfalls in Evaluating Risky Projects

  • James E. Hodder
  • Henry E. Riggs
  • From the January 1985 Issue

A New Approach to Innovation Investment

  • Rita Gunther McGrath
  • March 25, 2008


The Real Story of the Fake Story of One of Europe's Most Charismatic CEOs

  • Ludovic François
  • Dominique Rouziès
  • July 18, 2018

Decision to Trust

  • Robert F. Hurley
  • From the September 2006 Issue

Innovating for Cash

  • James P. Andrew
  • Harold L. Sirkin
  • From the September 2003 Issue

Simple Ethics Rules for Better Risk Management

  • Dante Disparte
  • November 08, 2016


Slighting Urgency: A Cross-Cultural Reexamination of the Crash of Avianca Flight 052

  • Christine Pearson
  • August 01, 2019

Sy Friedland and JF&CS

  • Jesseca Timmons
  • April 01, 2017

Kosmos Energy and Ghana A

  • Andrew C. Inkpen
  • Michael Moffett
  • June 15, 2012

Columbia's Final Mission (Abridged) (B)

  • Amy C. Edmondson
  • Kerry Herman
  • May 15, 2012

Creating an Asian Benchmark for Crude Oil

  • Robert Webb
  • May 09, 2021

Emergence of Default Swap Index Products

  • Darrell Duffie
  • February 12, 2004

Hormel Foods

  • David E. Bell
  • Natalie Kindred
  • November 20, 2019

Guidant Corp.: Shaping Culture Through Systems

  • Robert Simons
  • Antonio Davila
  • April 01, 1998

Accounting for Political Risk at AES (B)

  • Gerardo Perez Cavazos
  • Suraj Srinivasan
  • August 29, 2017

ATH Technologies: Making the Numbers

  • Jennifer Packard
  • May 18, 2017

Nextel Peru: Emerging Market Cost of Capital

  • Luis M. Viceira
  • Joel L. Heilprin
  • December 11, 2015

V-Cola: Confidential Instructions for Cly Entman Client Services Director, Chikara Advertising

  • Ian I. Larkin
  • Hallam Movius
  • March 27, 2012


Finance Reading: Risk and Return 1: Stock Returns and Diversification

  • Timothy A. Luehrman
  • April 11, 2017

What Went Wrong with Boeing's 737 Max?

  • William W. George
  • Amram Migdal
  • June 28, 2020

The Rise and Fall of Lehman Brothers

  • Stuart C. Gilson
  • Kristin Mugford
  • Sarah L. Abbott
  • January 18, 2017

Baupost Group: Finding a Margin of Safety in London Real Estate

  • Adi Sunderam
  • Shawn O'Brien
  • Franklin Muanankese
  • May 30, 2018

Boardroom Battle Behind Bars: Gome Electrical Appliances Holdings -- A Corporate Governance Drama

  • William C. Kirby
  • Tracy Yuen Manty
  • August 05, 2011

People Management (Abridged)

  • Boris Groysberg
  • November 21, 2013

CITIC Pacific: Good Governance or Smoke and Mirrors?

  • Steven John DeKrey
  • David Ian Thomas
  • February 09, 2021


A Leader's Guide to Cybersecurity: Why Boards Need to Lead--and How to Do It

  • Thomas J. Parenty
  • Jack J. Domet
  • December 03, 2019


Societe Generale: The Kerviel Affair (A) and (B), Teaching Note

  • Francois Brochet
  • April 22, 2010


How Do You Win the Capital Allocation Game?

  • John A. Boquist
  • Todd T. Milbourn
  • Anjan V. Thakor
  • December 01, 1998

Leadership and Strategic Risk Management: An SFO Approach

  • Jack Klinck
  • November 13, 2009


Introduction

  • David L. Olson
  • October 06, 2016


Harvard Partners

IT Assessment Risk Mitigation

In only two weeks of working with a university, Harvard Partners developed a set of recommendations that minimize data center risk by protecting the current infrastructure and architecting for improved cooling.

Project Background

A major renovation of the campus center (housing the data center) was beginning. As a part of the renovation, an elevator was being added to the building, and the elevator shaft needed to go through the university’s data center. The CIO wanted to understand the risk to the school’s computing infrastructure.

The Strategy

We met with members of the IT department to understand the current data center layout and inventory. By meeting with the facilities team (including the architect and construction vendor), we identified areas of data center risk and made recommendations for avoiding it. Our recommendations were scaled appropriately to the size and criticality of the university's data center.

We also provided detailed documentation of the data center layout, including equipment placement and cable paths. During our assessment, we identified opportunities for data center expansion, improved cooling, and new cable layouts.

Proven Results

  • Steps were taken to reduce dust and vibration during construction, and devices were installed to reduce electromagnetic interference (EMI) during the elevator's operation.
  • Plans were made to rotate the racks 90 degrees, allowing for hot aisle/cold aisle operation.
  • Cabinet doors were changed to grilles to improve airflow and cooling.
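At its core, an assessment like this catalogues risks and ranks them by likelihood and impact so that mitigation effort can be scaled to the criticality of the asset. The Python snippet below is a minimal sketch of that kind of scoring, not Harvard Partners' actual methodology; every risk item, scale value, and threshold in it is an illustrative assumption.

```python
# Minimal likelihood x impact risk register, illustrating how construction
# threats to a data center might be ranked. All items and scores below are
# hypothetical examples, not the assessment's actual data.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (catastrophic)

    @property
    def score(self) -> int:
        # Classic 5x5 matrix: overall score is likelihood times impact.
        return self.likelihood * self.impact

def rating(score: int) -> str:
    # Illustrative thresholds for bucketing scores into ratings.
    if score >= 15:
        return "high"
    if score >= 8:
        return "moderate"
    return "low"

register = [
    Risk("Construction dust drawn into server intakes", 4, 4),
    Risk("Vibration from demolition near rack rows", 3, 4),
    Risk("EMI from elevator motor during operation", 3, 3),
    Risk("Accidental cable cut during shaft excavation", 2, 5),
]

# Rank the register so the highest-scoring risks get mitigation first.
for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{r.name:<48} L={r.likelihood} I={r.impact} "
          f"score={r.score:2d} ({rating(r.score)})")
```

Sorting the register highest-score first is one simple way to decide which items, such as dust ingestion here, warrant physical protection and which justify only cheaper point mitigations.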

More Successful Projects

  • Business Resiliency Assessment and Planning
  • Business Continuance Recovery Planning
  • IT Assessment - Full Evaluation

Uncover Opportunities for IT Excellence

Terra Gaines, Senior Account Manager for Harvard Partners, has been in the staffing industry for 17 years, supporting multiple industry verticals and market segments including IT, cybersecurity, semiconductors, tech integrators, finance, and medical. Her personal and professional passions have always been people-centric, and she is extremely proud of providing white-glove service to each client and manager that she serves.

Jill Gearhart, Director Client Services, has over 20 years of Account Management experience in technology service areas across IT Consulting & Staffing, Cloud, Datacenter, Networking & Communications. Jill’s focus is in Client Engagement, proposing and ensuring the successful delivery of services from the Harvard Partners Portfolio tailored to attain each Client’s desired business outcomes, including the Staffing of essential resources.

Prior to joining Harvard Partners in 2014, she held a high-level Account Management position at a global technology company now known as Lumen (formerly CenturyLink), where she was appointed to multiple Excellence Advisory boards in several Enterprise product areas, domestic and abroad, over the span of 11 years. Notably, after the Qwest-CenturyLink Merger in 2010, and the acquisition of Savvis thereafter, she was instrumental in the integration between organizations in the effort to build a seamless customer experience. Through continual engagement with Enterprise client organizations throughout her tenure, she has had the privilege of collaborating on solutions and individual resources needed to answer numerous business objectives, whether expanding into new markets or advancing operational efficiency and resiliency.

Education: Bachelor’s of Science, Business Administration, Bryant University, Cum Laude

Chris Callaghan is the Director of Architecture Services and is responsible for overseeing the architectural services arm of Harvard Partners. This includes everything from architecture approach strategy, to candidate selection and vetting, to engagement leadership. Chris has years of technology architecture consulting experience ranging from boutique architectural services companies to larger, established consulting companies. He has played multiple roles, from individual contribution to client and consultant management.

Prior to joining Harvard Partners, he was the Engagement Lead and Consultant Manager at Systems Flow, Inc. where he was responsible for client engagement management, consultant management, architectural services, SOW negotiation/creation/signing, training, etc. Prior to that, he worked as an Enterprise Solutions Architect for a large reinsurance firm under Fairfax Holdings.

Gary Gardner is the Managing Director of Harvard Partners and an Information Technology executive with over 30 years of global Investment Management experience. He has a broad range of knowledge of Investment Management systems including investment research, portfolio management, trading, compliance, back office, CRM, and client reporting. Gary has expertise with technical infrastructure, operational risk, business continuity, SOX compliance, SSAE16 certification, vendor management, and cloud services.

Prior to joining Harvard Partners, he was the Chief Technology Officer at Batterymarch Financial Management, Inc. and GMO LLC where he was responsible for IT leadership and technical strategy for high computational and data-intensive quantitative asset management environments. Gary also held senior technology positions at Santander Global Advisors and Baring Asset Management.

Education: Gary studied Management Information Systems at Northeastern University.

Steve Walsh is a Managing Partner at Harvard Partners. Steve has been a career business leader for companies such as Hewlett Packard, EMC, Centerstone Manhattan Software, ClearEdge Partners, and Alliance Consulting.

Prior to joining Harvard Partners Steve was the worldwide leader for the Storage Consulting practice at Hewlett Packard. In this role, Steve was responsible for more than 500 employees encompassing sales, pursuit, portfolio, and delivery. Under Steve’s stewardship Storage Consulting built offerings to help clients assess and design complex storage infrastructures and develop state-of-the-art backup, recovery, and business continuance strategies. Steve grew the Storage Consulting Practice at HP by over 200% and introduced 20 new value-added offerings.

In addition to Hewlett Packard, Steve has worked for companies both large and small. At ClearEdge Partners, Steve advised C-level Fortune 500 executives on their IT purchasing and supply chain strategies, saving his clients millions over his tenure. Steve has also been a business leader at Alliance Consulting, where he built a practice to more than 200 consultants and 10 strategic offerings. Steve started his career at EMC Corporation, where he worked from 1986 to 1998.

Education: Boston College School of Management, Computer Science

Matt Ferm is a founder and Managing Partner of Harvard Partners. Matt’s focus is on IT Assessments, IT Governance, and Program Management. Prior to Harvard Partners, Matt spent 17 years with Wellington Management Company, LLP. As an Associate Partner and Director of Enterprise Technologies, Matt was responsible for managing the global physical computing infrastructure of this financial services firm. This included data centers, servers, voice and data networks, desktops, laptops, audio/video hardware, messaging (email, IM, etc.), security administration, disaster recovery, production control, monitoring, market data services, storage systems, and capacity planning.

During his career at Wellington, Matt managed the Operational Resilience, Resource Management, Systems Engineering, IT Client Services, and IT Strategic Development groups, chaired the firm’s Year 2000 efforts, and was a member of the firm’s IS Priorities Committee, Project Review Committee (Chair), Systems Architecture Committee (Chair), Year 2000 Committee (Chair), Operational Resilience Committee, Incident Review Committee, and Web Oversight Committee.

Prior to joining Wellington Management in 1992, Matt served as Director of Financial Services Markets for Apollo Computer, Hewlett-Packard, and Oki Electric where he managed the marketing of Unix workstations to the Financial Services industry. In 1985, Matt was Manager, New Business Development for Gregg Corporation (now IDD/Dow Jones/SunGard), a small investment database software company. Matt got his start in 1981 on Wall Street, working in the Custody Department of Bankers Trust and the MIS department of E.F. Hutton. Matt received his BA in Economics from Queens College, the City University of New York in 1982, and is a member of the Society for Information Management.

Education: Queens College, City University of New York – BA in Economics

Jason Young is a Senior Technical Recruiter at Harvard Partners and has more than 13 years of experience in recruiting and talent acquisition. Jason’s focus is on leading recruiting efforts and ensuring expectations are met or exceeded between our client’s needs and our candidate’s experience to deliver. Throughout his career, he’s filled immediate needs with high-level IT and business professionals. He also developed sourcing strategies and built strong relationships with IT specialists, leaders, and executives in a variety of industries.

Prior to joining Harvard Partners in 2018, Jason had a successful career with Advantage Technical Resourcing (formerly TAC Worldwide Companies). He began his career in IT staffing with Advantage as a sourcing recruiter, finding top-tier candidates for the senior recruiters. He quickly advanced to be the sole recruiter of a national high-volume staffing program. His accomplishments with this program led to him being an MSA recruiter for a large global enterprise client, which he provided with a wide range of talent for more than five years.

Education: Bachelor’s of Arts, Psychology, Framingham State University

Lisa Brody is the Talent Operations Manager at Harvard Partners and her focus is on managing the recruiting practice. Lisa has over 30 years of experience in recruiting and talent acquisition. She has successfully brought top-tier Information Technology and Business Professionals to our clients, with a purpose, to fill immediate needs as well as, create an ongoing strategy to find IT specialists, leaders, and executives in a variety of industries.

Prior to joining Harvard Partners in 2016, Lisa enjoyed an accomplished career with Advantage Technical Resourcing (formerly TAC Worldwide Companies) from the rise of the organization, serving in several specialized recruiting and talent management roles. She was a lead MSA recruiter for large global enterprise clients for over a decade, providing a wide range of talent. Throughout her advancement, she consistently cultivated a strong reputation among candidates and clients for competency, professionalism, and results.

Education: Massachusetts Bay Community College, Wellesley, MA Associate of Science, Retail Management


Environmental risk analysis of a Ramsar site: a case study of east Kolkata wetlands with PSR framework

  • Published: 06 April 2024
  • Volume 196, article number 432 (2024)


  • Subhra Halder   ORCID: orcid.org/0000-0001-8552-3174 1 ,
  • Subhasish Das   ORCID: orcid.org/0000-0002-6512-7454 1 &
  • Suddhasil Bose   ORCID: orcid.org/0000-0003-4836-7779 1  


The East Kolkata Wetlands (EKWT), designated as a Ramsar Site for its crucial role in sewage water purification, agriculture and pisciculture, faces escalating environmental threats due to rapid urbanisation. Employing the pressure-state-response (PSR) framework and Environmental Risk Assessment (ERA), this study spans three decades to elucidate the evolving dynamics of EKWT. Using Landsat TM and OLI images from 1991, 2001, 2011 and 2021, the research identifies key parameters within the PSR framework. Principal component analysis generates environmental risk maps, revealing a 46% increase in urbanisation, leading to reduced vegetation cover and altered land surface conditions. The spatial analysis, utilizing Getis–Ord Gi* statistics, pinpoints risk hotspots and coldspots in the EKWT region. Correlation analysis underscores a robust relationship between urbanisation, climatic response and environmental risk. Decadal ERA exposes a noteworthy surge in high-risk areas, indicating a deteriorating trend. Quantitative assessments pinpoint environmental risk hotspots, emphasizing the imperative for targeted conservation measures. The study establishes a direct correlation between environmental risk and air quality, underscoring the broader implications of EKWT’s degradation. While acknowledging the East Kolkata administration’s efforts, the research recognises its limitations and advocates a holistic, multidisciplinary approach for future investigations. Recommendations encompass the establishment of effective institutions, real-time monitoring, public engagement and robust anti-pollution measures. In offering quantitative insights, this study provides an evidence-based foundation for conservation strategies and sustainable management practices essential to safeguard the East Kolkata Wetlands.
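The hotspot and coldspot mapping mentioned above rests on the Getis–Ord Gi* statistic, which compares the weighted sum of values in each cell's neighbourhood against the global mean and variance. The Python sketch below computes Gi* z-scores over a toy risk raster with binary rook-adjacency weights; the grid values and the neighbourhood definition are illustrative assumptions, not the study's data or its exact weighting scheme.

```python
# Getis-Ord Gi* hotspot statistic on a toy "environmental risk" raster.
# Values and the rook-adjacency neighbourhood are illustrative only.
import numpy as np

def getis_ord_gi_star(x: np.ndarray) -> np.ndarray:
    """Return a z-score per cell: large positive = hotspot, large negative = coldspot."""
    rows, cols = x.shape
    n = x.size
    xbar, s = x.mean(), x.std()  # global mean and (population) standard deviation
    z = np.zeros_like(x, dtype=float)
    for i in range(rows):
        for j in range(cols):
            # Rook neighbourhood that includes the cell itself (the "star" in Gi*).
            nbrs = [(i, j), (i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            vals = [x[a, b] for a, b in nbrs if 0 <= a < rows and 0 <= b < cols]
            w = len(vals)  # binary weights: each neighbour contributes 1
            num = sum(vals) - xbar * w
            den = s * np.sqrt((n * w - w ** 2) / (n - 1))
            z[i, j] = num / den
    return z

rng = np.random.default_rng(42)
risk = rng.random((6, 6))
risk[4:, 4:] += 1.5  # plant an artificial high-risk cluster in one corner
print(np.round(getis_ord_gi_star(risk), 2))  # the planted corner yields large positive z
```

Cells whose z-score exceeds roughly +1.96 are significant hotspots at the 5% level (and below -1.96, coldspots), which is how risk maps of this kind are typically classified.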


Data availability

Data are available from the corresponding author on reasonable request.

Abbreviations

Bay of Bengal; brightness of temperature; digital number; East Kolkata Wetland; Environment Risk Assessment; enhanced vegetation index; Fuzzy Rule-based Assessment Model; Google Earth Engine; high environmental risk hotspots; low environmental risk coldspots; land surface emissivity; land surface moisture; land surface temperature; land use and land cover; moderate environmental risk hotspots; normalised difference built-up and bare soil index; near infrared; operational land imager; principal component analysis; pressure-state-response; random forest; remote sensing; support vector machine; shortwave infrared; thematic mapper; top of atmosphere; urban heat island; very low environmental risk coldspots


No funding was obtained for this study.

Author information

Authors and Affiliations

School of Water Resources Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India

Subhra Halder, Subhasish Das & Suddhasil Bose


Contributions

SH, SD and SB conceptualised the study, wrote the manuscript and reviewed it.

Corresponding author

Correspondence to Subhra Halder .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Halder, S., Das, S. & Bose, S. Environmental risk analysis of a Ramsar site: a case study of east Kolkata wetlands with PSR framework. Environ Monit Assess 196, 432 (2024). https://doi.org/10.1007/s10661-024-12585-3

Received: 24 October 2023

Accepted: 23 March 2024

Published: 06 April 2024

DOI: https://doi.org/10.1007/s10661-024-12585-3


  • East Kolkata Wetlands (EKWT)
  • Ramsar site
  • Pressure-state-response (PSR) framework
  • Environmental risk assessment
  • Sustainable development

Cohort Studies Versus Case-Control Studies on Night-Shift Work and Cancer Risk: The Importance of Exposure Assessment


Kyriaki Papantoniou, Johnni Hansen, Cohort Studies Versus Case-Control Studies on Night-Shift Work and Cancer Risk: The Importance of Exposure Assessment, American Journal of Epidemiology, Volume 193, Issue 4, April 2024, Pages 577–579, https://doi.org/10.1093/aje/kwad227


It is a general assumption that the prospective cohort study design is the gold standard approach and is superior to the case-control study design in epidemiology. However, there may be exceptions if the exposure is complex and requires collection of detailed information on many different aspects. Night-shift work, which impairs circadian rhythms, is an example of such a complex occupational exposure and may increase the risks of breast, prostate, and colorectal cancer. So far, for logistical reasons, investigators in cohort studies have assessed shift work rather crudely, lacking information on full occupational history and relevant shift-work metrics, and have presented mostly null findings. On the other hand, most cancer case-control studies have assessed the lifetime occupational histories of participants, including collection of detailed night-shift work metrics (e.g., type, duration, intensity), and tend to show positive associations. In this commentary, we debate why cohort studies with weak exposure assessment and other limitations might not necessarily be the preferred or less biased approach in assessing the carcinogenicity of night-shift work. Furthermore, we propose that risk-of-bias assessment and comparison of associations between studies with low versus high risks of bias be considered in future synthesis of the evidence.


Heavy metal contamination and environmental risk assessment: a case study of surface water in the Bahr Mouse stream, East Nile Delta, Egypt

Affiliations

  • 1 Geology Department, Faculty of Sciences, Zagazig University, Zagazig City, 44519, Egypt. [email protected].
  • 2 Geology Department, Faculty of Sciences, Zagazig University, Zagazig City, 44519, Egypt.
  • 3 Environmental Affairs, Sharkiya Governorate, Zagazig City, 44511, Egypt.
  • 4 Central Administrations for Environmental Inspection at the Ministry of Environment, Cairo City, 11728, Egypt.
  • 5 Department of Geology, Bayero University Kano, Kano State, 700241, Nigeria.
  • 6 Key Laboratory of Metallogenic Prediction of Nonferrous Metals and Geological Environment Monitoring, Ministry of Education, School of Geosciences and Info-Physics, Central South University, Changsha, 410083, China. [email protected].
  • 7 Department of Geology, Faculty of Science, Suez Canal University, Ismailia City, 41522, Egypt. [email protected].
  • PMID: 38575685
  • PMCID: PMC10995087
  • DOI: 10.1007/s10661-024-12541-1

Water, as an indispensable constituent of life, is the primary source of sustenance for all living things on Earth. Contamination of surface water with heavy metals poses a significant global health risk to humans, animals, and plants. Sharkiya Governorate, situated in the East Nile Delta region of Egypt, is particularly susceptible to surface water pollution from various industrial, agricultural, and urban activities. The Bahr Mouse stream, crucial for providing potable water and supporting irrigation in Sharkiya Governorate, serves a population of approximately 7.7 million inhabitants. Unfortunately, this vital water source is exposed to many illegal encroachments that may cause pollution and degrade water quality. In a comprehensive study conducted over two consecutive seasons (2019-2020), a total of 38 surface water samples were taken to assess heavy metal concentrations in surface water destined for human consumption and other applications, supported by indices and statistics. The assessment used flame atomic absorption spectrophotometry to determine the concentrations of key heavy metals, including iron (Fe), manganese (Mn), cadmium (Cd), copper (Cu), lead (Pb), zinc (Zn), nickel (Ni), cobalt (Co), and chromium (Cr). The calculated mean Water Quality Index (WQI) was 39.1 in the winter season and 28.05 in the summer season. These values suggest that the surface water maintains good quality and is suitable for drinking purposes. Furthermore, the analysis indicated that heavy metal concentrations in the study area were below the limits recommended by the World Health Organization and fell within the safe thresholds prescribed by Egyptian legislation. Despite localized instances of illegal activities in certain areas, such as unauthorized discharges, the findings affirm that the Bahr Mouse stream is free of heavy metal pollution. This underscores the importance of continued vigilance and regulatory enforcement to preserve the integrity of this vital water resource.
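The abstract does not spell out which WQI formulation was used. As a rough illustration only, the sketch below implements the common weighted arithmetic WQI; the guideline limits and sample concentrations are hypothetical placeholders, not the study's data:

```python
# Illustrative weighted arithmetic WQI; values below are hypothetical
# placeholders, not the study's measurements or its exact formulation.

# WHO-style guideline limits in mg/L (illustrative only)
standards = {"Fe": 0.3, "Mn": 0.1, "Cd": 0.003, "Pb": 0.01, "Zn": 3.0}

def wqi(concentrations: dict, limits: dict) -> float:
    """Weighted arithmetic WQI with weights inversely proportional to limits."""
    k = 1.0 / sum(1.0 / s for s in limits.values())   # proportionality constant
    weights = {m: k / s for m, s in limits.items()}    # w_i = k / S_i
    # Quality rating q_i = 100 * C_i / S_i (ideal concentration taken as 0)
    q = {m: 100.0 * concentrations[m] / limits[m] for m in limits}
    return sum(weights[m] * q[m] for m in limits) / sum(weights.values())

sample = {"Fe": 0.12, "Mn": 0.03, "Cd": 0.0004, "Pb": 0.002, "Zn": 0.5}
print(f"WQI = {wqi(sample, standards):.1f}")  # below 50 is commonly rated 'good'
```

Because the weights are inversely proportional to the guideline limits, the most toxic metals (those with the lowest permissible limits, such as Cd and Pb) dominate the index, which matches the usual intent of this formulation.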

Keywords: Bahr Mouse stream; Environmental risk assessment; Heavy metals; Sharkiya Governorate; Surface water pollution; Water Quality Index (WQI).

© 2024. The Author(s).

  • Cadmium / analysis
  • Environmental Monitoring
  • Metals, Heavy* / analysis
  • Risk Assessment
  • Water Pollutants, Chemical* / analysis
  • Water Quality
  • Metals, Heavy
  • Water Pollutants, Chemical

Improving consistency in estimating future health burdens from environmental risk factors: Case study for ambient air pollution

  • Malley, Christopher S.
  • Anenberg, Susan C.
  • Shindell, Drew T.

Future changes in exposure to risk factors should affect mortality rates and population size. However, studies commonly use mortality rates and population projections developed exogenously to the health impact assessment model used to quantify future health burdens, so the projections are invariant to projected exposure levels. This undermines the robustness of many future health burden estimates for environmental risk factors. This work describes an alternative methodology that more consistently represents the interaction between risk factor exposure, population, and mortality rates, using ambient particulate air pollution (PM2.5) as a case study. A demographic model is described that estimates future population based on projected births, mortality, and migration. Mortality rates are disaggregated between the fraction due to PM2.5 exposure and other factors for a historical year, and projected independently. Accounting for feedbacks between future risk factor exposure and population and mortality rates can greatly affect estimated future attributable health burdens. The demographic model estimates much larger PM2.5-attributable health burdens with constant 2019 PM2.5 (∼10.8 million deaths in 2050) than a model using exogenous population and mortality rate projections (∼7.3 million), largely due to differences in mortality rate projection methods. Demographic-model-projected PM2.5-attributable mortality can accumulate substantially over time. For example, ∼71 million more people are estimated to be alive in 2050 when WHO guidelines (5 μg m⁻³) are achieved compared with constant 2019 PM2.5 concentrations. Accounting for feedbacks is more important in applications with relatively high future PM2.5 concentrations and relatively large changes in non-PM2.5 mortality rates.
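To make the feedback the authors describe concrete, here is a deliberately toy sketch: PM2.5-attributable deaths, derived from a population attributable fraction, are fed back into the next year's population instead of following an exogenous population path. All rates, relative risks, and populations below are invented for illustration and are not the paper's model:

```python
# Toy feedback projection: attributable deaths alter the population path.
# All numbers are illustrative assumptions, not the paper's inputs.

def attributable_fraction(rr: float) -> float:
    """Population attributable fraction, assuming the whole population is exposed."""
    return (rr - 1.0) / rr

def project(pop0: float, base_rate: float, births: float, rr: float, years: int):
    """Project population with mortality split into PM2.5 and other causes."""
    af = attributable_fraction(rr)
    pop, attrib_total = pop0, 0.0
    for _ in range(years):
        deaths = pop * base_rate          # all-cause deaths this year
        attrib = deaths * af              # share attributable to PM2.5
        attrib_total += attrib
        pop = pop - deaths + births       # feedback: deaths reduce next year's pop
    return pop, attrib_total

# Constant high exposure (rr = 1.15) vs. guideline-level exposure (rr = 1.02)
for label, rr in [("constant 2019 PM2.5", 1.15), ("WHO guideline", 1.02)]:
    pop, attrib = project(pop0=1e9, base_rate=0.008, births=9e6, rr=rr, years=30)
    print(f"{label}: pop 2050 = {pop/1e6:.0f} M, attributable deaths = {attrib/1e6:.1f} M")
```

Even in this toy setting, the lower-exposure scenario ends with a larger surviving population, mirroring the paper's point that attributable mortality compounds through the population path when feedbacks are represented.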

Keywords: Health impact assessment; Risk factor attribution; Air pollution; Particulate matter


Original Research Article

The landslide traces inventory in the transition zone between the Qinghai-Tibet Plateau and the Loess Plateau: a case study of Jianzha County, China


  • 1 National Institute of Natural Hazards, Ministry of Emergency Management of China, Beijing, China
  • 2 Key Laboratory of Compound and Chained Natural Hazards Dynamics, Ministry of Emergency Management of China, Beijing, China
  • 3 Key Laboratory of Shale Gas and Geoengineering, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, China
  • 4 Institute of Geology, China Earthquake Administration, Beijing, China

The upper reaches of the Yellow River in China, influenced by erosion of the Yellow River and tectonic activity, are prone to landslides. It is therefore necessary to investigate existing landslide traces. Based on visual interpretation of high-resolution satellite images and terrain data, supplemented and validated by existing landslide records, this paper presents the most complete and detailed landslide traces inventory of Jianzha County, Huangnan Tibetan Autonomous Prefecture, Qinghai Province, to date. The results indicate that within the study area of 1714 km² there are at least 713 landslide traces, ranging in scale from 3,556 m² to 11.13 km², with a total area of 134.46 km². The total landslide area excluding overlap is 126.30 km². The overall landslide point density and area density in the study area are 0.42 km⁻² and 7.37%, respectively. The maximum point density and maximum area density of landslide traces reach 5.69 km⁻² and 98.0%, respectively. The landslides are primarily distributed in the relatively low-elevation northeastern part of Jianzha County and are characterized mainly by large-scale loess landslides, with 14 landslides exceeding 1×10⁶ m². This inventory not only supplements landslide trace data in the transition zone between the Qinghai-Tibet Plateau and the Loess Plateau, but also provides an important basis for subsequent landslide risk zoning, response to climate change, and landscape evolution studies. It also holds significant reference value for compiling landslide inventories in similar geological environments.

1 Introduction

Worldwide, mass movements such as landslides are prevalent geological hazards that cause heavy casualties (Petley, 2012; Froude and Petley, 2018). China ranks among the regions with the highest frequency of landslide hazards globally (Kirschbaum et al., 2015; Xu and Xu, 2021). According to statistics from 2004 to 2016, China experienced 463 fatal landslides not induced by earthquakes, resulting in 4,718 deaths and economic losses exceeding 900 million dollars (Zhang and Huang, 2018). The prevention and control of landslide hazards is therefore crucial for people's lives. As a key step in hazard prevention and mitigation, underpinning the analysis of regional landslide distribution patterns, hazard assessment, and risk assessment, the construction of a regional landslide inventory is fundamental and essential. A complete and accurate inventory ensures the objectivity and precision of subsequent work (Xu, 2015; Piacentini et al., 2018).

In the construction of China's landslide traces inventories, many scholars have carried out substantial work and made notable progress (Chen et al., 2016; Qiu et al., 2019; Zhao et al., 2019; Zhang et al., 2020). In northwestern China, Huang et al. (2022) compiled a landslide traces inventory for Hualong County, Qinghai Province, consisting of 3,517 landslides identified through visual interpretation of high-resolution optical images, and conducted an in-depth study of the spatial distribution patterns of landslides based on it. In central China, Li et al. (2022a) primarily used visual interpretation, supplemented by existing literature and hazard records, to improve and supplement the landslide traces inventory for Baoji City, Shaanxi Province. That inventory contains a total of 3,422 landslides and provides foundational data for subsequent exploration of the distribution characteristics of large-scale landslides in the region. In the western Qinghai-Tibet Plateau, Cui et al. (2023) employed the Google Earth platform and visual interpretation to identify landslide traces in the Western Himalayan Syntaxis, establishing an inventory containing 7,947 landslides that supports subsequent landslide hazard assessments. Wu et al. (2016) collected landslide data from aerial photographs at a scale of 1:50,000, drawing on existing data and field surveys, and mapped 328 landslides in Gangu County, Gansu Province, providing a crucial foundation for subsequent research. Lan et al. (2004) combined aerial photographs, previous landslide investigation data, and on-site verification to compile a landslide inventory for the Xiaojiang River Basin comprising 574 landslide records, and conducted spatial analysis and prediction of landslides based on it. The landslide data sets constructed in these studies, supported by various methods, demonstrate the accuracy and completeness needed to facilitate subsequent landslide research. Nevertheless, accurate and complete landslide trace data are still lacking for much of China.

In studies covering Jianzha County, many scholars have carried out regional landslide identification or conducted research on landslide failure patterns, InSAR deformation analysis, geomorphic effects, and other aspects based on landslide data (Ma et al., 2008; Guo et al., 2020a; Wang et al., 2022; Tu et al., 2023). Yin et al. (2014) primarily used visual interpretation to identify 508 landslides from Sigou Gorge to Lagan Gorge in the upper reaches of the Yellow River, many of which are distributed in Jianzha County. Tu et al. (2023) conducted landslide detection in the upper reaches of the Yellow River based on InSAR technology and carried out detailed deformation analysis of the Lijia Gorge landslide group in Jianzha County. Du et al. (2023) combined InSAR deformation monitoring and optical images to identify 597 landslides in the upper reaches of the Yellow River, mainly distributed in Jianzha County and its surrounding areas. Wang et al. (2022) conducted deformation analysis of the Simencun landslide in Jianzha County to explore the relationship between pre- and post-failure patterns. Although many studies based on landslide data have been carried out in Jianzha County, the landslide inventory maps produced do not cover the entire county, or the landslide data are not sufficiently complete and detailed. Therefore, by combining visual interpretation of high-resolution optical images with comparison against the existing literature, this study compiled a landslide traces inventory for Jianzha County, Qinghai Province, and performed a spatial analysis of it. Finally, the completeness and importance of the inventory are discussed.

2 Study area

Jianzha County has a total area of approximately 1714 km² and is located in the transitional zone from the upper reaches of the Yellow River on the northeastern edge of the Qinghai-Tibet Plateau to the Loess Plateau (Figure 1) (Ma et al., 2008). The landscape evolution of this region has long been influenced by the northeastward compression of the Qinghai-Tibet Plateau, resulting in basin-and-mountain topography (Guo et al., 2020a; Peng et al., 2020). The overall terrain is high in the southwest and low in the northeast. The northeastern part is the Qunke-Jianzha Basin, characterized by relatively low elevations and crossed by the main trunk of the Yellow River, with the Guide Basin and the Xunhua Basin on either side. The Yellow River and its tributaries exert strong erosion and incision along the basin margins, with cutting depths exceeding 500 m. This has formed numerous erosion and accumulation terraces as well as steep and rugged slopes, providing favorable conditions for landslide occurrence (Craddock et al., 2010; Guo et al., 2020a; Du et al., 2023).


Figure 1. Location of the study area. Surface wave magnitude (Ms) is a measure of the strength of an earthquake, calculated from surface waves; the larger the value, the stronger the earthquake. Active fault data from Deng (2007).

The study area exhibits undulating and rugged topography with well-developed valleys and gullies. Active faults are well developed around the area: the Lajishanbeiyuan Fault (LJSBYF) and the Lajishannanyuan Fault (LJSNYF) lie to the north, while the NWW-SEE-trending Daotanghe-Linxia Fault (DTH-LXF) and the NNW-SSE-trending Riyueshan Fault (RYSF) pass through the study area. Tectonic activity and climate change contribute to frequent geological hazards (Yin et al., 2014). The large, extra-large, and giant landslides in the region are typical and representative in China (Guo et al., 2020b; Yin et al., 2021). Some studies suggest that the tectonic uplift of the Qinghai-Tibet Plateau, as an internal dynamic factor, has driven episodic incision of the Yellow River's main and tributary channels, the underlying cause of the formation of the giant landslides (Li et al., 2011). As shown in Figure 1, several historical earthquakes with Ms greater than 5.0 have occurred around Jianzha County. The occurrence of landslides may be related to seismic activity or may result from landscape evolution, such as river erosion and high groundwater levels (Guo et al., 2016; Guo et al., 2018).

3 Data and methods

With the advancement of remote sensing technology and improved transportation accessibility, the main methods currently used to compile regional landslide inventories are field investigation, human-computer interactive visual interpretation of satellite images, and automatic identification. Table 1 summarizes the advantages and disadvantages of these three landslide identification methods. Detailed field investigation ensures high accuracy for landslide investigations in small areas (Huangfu et al., 2021). For large regional landslide investigations, however, the feasibility of extensive field investigation decreases, primarily because of the substantial cost and time required (Peng et al., 2016) and the difficulty of accessing rugged landslide sites. Automatic identification technology has a significant advantage in quickly obtaining regional landslide data, but its accuracy is often limited (Fayne et al., 2019; Zhang et al., 2020; Piralilou et al., 2021; Vecchiotti et al., 2021; Milledge et al., 2022). Combining the strengths of both approaches, human-computer interactive visual interpretation of satellite images has gradually become an important method for constructing landslide inventories (Xu et al., 2015; Shao et al., 2020; Li et al., 2021; Cui et al., 2022a). This approach requires interpreters to have relevant professional background knowledge. Compared with detailed field survey, it sacrifices a small amount of accuracy but greatly improves the efficiency of inventory construction (Xu et al., 2014b; Cui et al., 2022b; Cui et al., 2022c).


Table 1 . Advantages and disadvantages of three landslide identification methods.

This study primarily employed high-resolution optical images overlaid on terrain data for human-computer interactive visual interpretation, combined with existing landslide records in the literature for validation and supplementation, to construct a landslide traces inventory for Jianzha County. The Google Earth Pro platform integrates a vast amount of high-resolution optical satellite imagery and allows three-dimensional, multi-angle display of the landscape by overlaying terrain data (Crosby et al., 2012; Rabby and Li, 2019), providing extremely convenient conditions for landslide identification. For Jianzha County, the image quality is exceptionally high, with 100% satellite image coverage and 0% cloud coverage. We therefore performed the iterative, foundational work of landslide interpretation on Google Earth Pro to construct the inventory. First, the shape and boundary of a landslide can be determined from differences in texture, tone, shadow, and vegetation development between the landslide and its surroundings on the satellite images, combined with terrain differences and multi-angle observation. Second, existing literature on landslides in the region was consulted to check and supplement the inventory, ensuring its completeness and objectivity. Because different landslides have different topographic and geomorphic characteristics, no uniform standard applies to the interpretation of all landslide traces. Some common features used in landslide interpretation are: 1) an obvious armchair-shaped back wall and paired gullies originating from a common source; 2) depression in the source area and prominent topography in the accumulation area, accompanied by a distinct landslide boundary; 3) obvious displacement between the landslide body and the surrounding environment, accompanied by cracks or differences in elevation; 4) a brighter tone in the source area, and a tongue-shaped accumulation with transverse fissures; 5) an irregular stepped appearance in the accumulation body, with the terraces possibly transformed into residential areas or farmland.

4 Results and analysis

4.1 Landslide traces inventory

The landslide inventory serves as a crucial foundation for regional landslide risk assessment and prevention. Many scholars have conducted regional or individual landslide studies in Jianzha County (Yin et al., 2014; Guo et al., 2020b; Du et al., 2023; Tu et al., 2023). Although the study areas of these works cover or partially cover Jianzha County, most have not established a complete landslide traces inventory that fully encompasses the county. Table 2 presents selected existing landslide records in Jianzha County. After being supplemented and validated against these records, the inventory constructed in this study contains a total of 713 landslide traces (Figure 2). The total area of these landslides is 134.46 km²; excluding overlap, the total landslide area is 126.30 km², accounting for 7.37% of the study area. The average landslide area is approximately 0.19 km², with a minimum of 3,556 m² and a maximum of 11.13 km². Landslides mainly occur on the slopes of the relatively low-elevation ridges in the northeastern part of Jianzha County. They are widely distributed in towns such as Kanbula, Jiajia, Cuozhou, Maketang, and Angla, with a predominance of large-scale landslides. In the southwest, where the elevation is relatively high, landslides are sparsely distributed.
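As a quick arithmetic check, the headline density figures follow directly from the inventory totals quoted above; a minimal sketch:

```python
# Back-of-the-envelope check of the density figures quoted in the text.
n_landslides = 713
study_area_km2 = 1714.0
landslide_area_km2 = 126.30          # total landslide area with overlap removed

point_density = n_landslides / study_area_km2        # ~0.42 per km^2
area_density = landslide_area_km2 / study_area_km2   # ~7.37%
print(f"point density = {point_density:.2f} per km2, area density = {area_density:.2%}")
```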


Table 2 . Selected recorded landslides in Jianzha County.


Figure 2 . Spatial distribution map of landslide traces.

4.2 Typical landslide display

To illustrate the landslides more intuitively, several typical examples within the study area are shown in Figure 3. The predominant landslide type is the loess landslide. Landslide boundaries are easily identified from discontinuities in texture and shape between the deposits and the surrounding environment, and material movement along the slope is evident. The displacement between the landslide deposits and the boundary visually demonstrates the direction and form of movement. Over time, traces of human activity become visible on the deposits; after reconstruction, roads and buildings of various sizes are distributed across them. These typical examples clearly capture landslide morphology and material movement traces, which is of great value for studying regional landslide failure mechanisms.


Figure 3 . Display of seven typical landslide traces in the study area.

4.3 Landslide density statistics

To quantitatively analyze the spatial distribution of landslides, point density and area density are used to characterize their distribution and aggregation. After kernel density calculation with the search radius set to 2 km, the results are shown in Figure 4. High point density areas are concentrated primarily in the northeastern part of the study area (Figure 4A), with a maximum density of 5.69 km⁻², indicating that landslides in these areas are numerically dominant. The maximum landslide area density is 98.0% (Figure 4B). High area density areas differ in distribution from high point density areas; for instance, area density is more pronounced relative to point density near Kanbula Town, indicating that landslides there tend to be larger in scale.
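For readers who want to reproduce a surface like Figure 4A, the sketch below computes a kernel point-density grid with a 2 km search radius using the quartic kernel common in GIS packages. The landslide coordinates and grid extent are random placeholders, not the actual inventory:

```python
import numpy as np

# Kernel point-density sketch with a 2 km search radius (quartic kernel).
# Coordinates are random placeholders standing in for the 713 inventory points.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 40_000, size=(713, 2))     # points in a 40 km x 40 km box (m)

radius = 2_000.0                                # 2 km search radius
xs = np.arange(0, 40_000, 500.0)                # 500 m output grid
xx, yy = np.meshgrid(xs, xs)
density = np.zeros_like(xx)

for px, py in pts:
    d2 = (xx - px) ** 2 + (yy - py) ** 2
    mask = d2 <= radius ** 2
    # Quartic kernel: 3/(pi*r^2) * (1 - d^2/r^2)^2 within the search radius
    density[mask] += 3.0 / (np.pi * radius**2) * (1.0 - d2[mask] / radius**2) ** 2

print(f"max point density: {density.max() * 1e6:.2f} per km^2")  # m^-2 -> km^-2
```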


Figure 4 . Landslide density map. (A) point density map; (B) area density map.

5 Discussion

5.1 Landslide scale and completeness analysis

To explore the scale of landslides in Jianzha County, the cumulative landslide number was plotted against landslide area in a double logarithmic coordinate system (Figure 5), where N(A) represents the number of landslides exceeding a given area A. The majority of landslides are smaller than 1×10⁶ m²; only 14 exceed 1×10⁶ m². The fit for all landslides is lg N(A) = −0.728 lg A + 5.957, with R² = 0.882. For landslides larger than 1×10⁵ m², the fit is lg N(A) = −1.210 lg A + 8.552, with R² = 0.99, indicating that these data are relatively complete. For landslides smaller than 1×10⁴ m², the curve flattens, possibly because image change characteristics are less distinct in the exposed loess areas, making such landslides difficult to identify.
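The cumulative number-area fit is a straightforward least-squares regression in log-log space. In the sketch below the landslide areas are synthetic, Pareto-like stand-ins for the inventory, so the fitted coefficients only roughly echo the values reported above:

```python
import numpy as np

# Cumulative number-area fit in the spirit of Figure 5:
# lg N(A) = a * lg A + b over landslides larger than 1e5 m^2.
# Areas are synthetic stand-ins, drawn via inverse-CDF Pareto sampling.
rng = np.random.default_rng(1)
areas = 3_556.0 * (1.0 - rng.random(713)) ** (-1.0 / 1.2)   # m^2, tail exponent 1.2

areas_sorted = np.sort(areas)[::-1]             # descending
N = np.arange(1, len(areas_sorted) + 1)         # N(A): count of landslides >= A

mask = areas_sorted > 1e5                       # restrict to the complete range
a, b = np.polyfit(np.log10(areas_sorted[mask]), np.log10(N[mask]), deg=1)
print(f"lg N(A) = {a:.3f} lg A + {b:.3f}")      # paper reports -1.210 and 8.552
```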


Figure 5 . Curve depicting correlation between the cumulative landslide number and the landslide area.

As shown in Figure 5, landslides with an area greater than 1×10⁵ m² are fitted as lg N(A) = −1.210 lg A + 8.552. In previous studies, this kind of relation has often been used in the statistics of coseismic landslide inventories to evaluate completeness. For example, in the nearly complete coseismic landslide inventory established by Xu et al. (2014a) after the Wenchuan earthquake, landslides within a certain area range follow lg N(A) = −2.0745 lg A + 13, and the landslides exhibit a rollover trend. Similarly, the coseismic landslide inventories for the Minxian-Zhangxian earthquake (Xu et al., 2014b) and the Maerkang earthquake (Chen et al., 2023) show a similar trend, with slopes and intercepts of −1.341 and 6.02, and −1.1052 and 5.7839, respectively. Although landslide scales vary with environmental conditions, both the landslide traces inventory of this study and the coseismic inventories show a similar trend. In particular, among inventories with similar landslide scales, Li et al. (2022b) constructed a landslide traces inventory containing 3,757 landslides around the Baihetan Hydropower Station reservoir in China, for which the relationship between the cumulative number and area of landslides larger than 1×10⁵ m² is lg N(A) = −1.275 lg A + 6.26. This is very close to the result of this study, which further supports the completeness of our inventory. Comparing the completeness of landslide inventories across categories suggests that inventories of the same category have more reference value than those of different categories.

5.2 Objective assessment of methods

A complete and detailed landslide inventory is of great significance for regional landslide research and risk management. The human-computer interactive visual interpretation method, one of the primary approaches for establishing regional landslide inventories, possesses advantages that field investigation and automatic identification techniques cannot replace (Guzzetti et al., 2012; Tian et al., 2019; Xu et al., 2020). Although this study relied primarily on this method to construct a relatively objective landslide traces inventory for Jianzha County, some limitations remain. For small landslides, the resolution limits of satellite images, the masking of topographic and geomorphic features, and subjective factors on the part of interpreters inevitably mean that landslides with unclear identification characteristics are missed. Compared with detailed field investigation, visual interpretation requires far less cost and time; compared with automatic identification, it is superior in accuracy and is currently a widely used method for identifying regional landslides (Cui et al., 2021; Li et al., 2022a; Sun et al., 2024). The method sacrifices some accuracy relative to field investigation but greatly improves efficiency. Balancing the efficiency of automatic identification with the accuracy of field investigation remains an exploratory and challenging task.

5.3 The importance of the landslide inventory

Landslide susceptibility refers to the probability of slope failure in a specific geological environment without considering triggering factors (Akgun, 2012; Nikoobakht et al., 2022). As a fundamental component, a landslide inventory plays an indispensable role in susceptibility assessment, providing essential information about landslides, including their number, scale, and location. Based on a landslide inventory, one can select a single assessment method and specific influencing factors for susceptibility assessment (Huangfu et al., 2021; Nanehkaran et al., 2021; Cemiloglu et al., 2023), or compare several different methods to find the optimal results (Azarafza et al., 2021; Nanehkaran et al., 2022; Mao et al., 2024). With the development of landslide assessment, machine learning has demonstrated outstanding performance among the available methods and is gradually becoming the preferred approach (Nanehkaran et al., 2023). Building on susceptibility assessment, triggering factors are added to evaluate landslide hazard, and indicators of the elements at risk are added for vulnerability assessment; risk assessment is then performed by overlaying hazard and vulnerability. Whichever assessment method and influencing factors are chosen, susceptibility, hazard, vulnerability, and risk assessments all rest on landslide data. A landslide inventory can not only validate results obtained through predictive modeling but also provide an important reference for exploring the factors involved in the occurrence of new landslides. Many studies have updated initially incomplete inventories, effectively enhancing subsequent landslide research and assessment; examples include the Hokkaido earthquake (Kasai and Yamada, 2019; Cui et al., 2021), the Wenchuan earthquake (Dai et al., 2011; Xu et al., 2014a), and the Jiuzhaigou earthquake (Tian et al., 2019; Sun et al., 2024). Compiling a complete and detailed landslide inventory is thus not only of great value in itself but also supports subsequent research on landslide failure mechanisms, landscape evolution, and especially landslide susceptibility assessment.
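As a pointer to how such an inventory feeds susceptibility assessment, the sketch below trains a logistic regression, one of the methods cited above, on a synthetic feature matrix. In real work the features (slope, lithology, distance to rivers or faults, and so on) would be sampled from rasters at landslide and non-landslide locations; here they are invented stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Inventory-based susceptibility sketch: landslide cells (label 1) vs. random
# non-landslide cells (label 0). Features are synthetic placeholders for
# raster-derived factors such as slope angle and distance to rivers.
rng = np.random.default_rng(2)
n = 713
X_slide = rng.normal(loc=[30.0, 200.0], scale=[8.0, 150.0], size=(n, 2))
X_stable = rng.normal(loc=[15.0, 800.0], scale=[8.0, 300.0], size=(n, 2))
X = np.vstack([X_slide, X_stable])
y = np.r_[np.ones(n), np.zeros(n)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Susceptibility = predicted probability of the landslide class per cell
susceptibility = model.predict_proba(X_te)[:, 1]
print(f"AUC = {roc_auc_score(y_te, susceptibility):.2f}")
```

Mapped over a whole study area, the per-cell probabilities would be classified into susceptibility grades; the inventory supplies both the positive training samples and the held-out data used to validate the model.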

6 Conclusion

This study established a landslide traces inventory for Jianzha County, Qinghai Province, China, and conducted a statistical analysis of landslide number, area, and density. A total of 713 landslides, mainly loess landslides, were identified. Their total area is 134.46 km², with individual areas ranging from 3,556 m² to 11.13 km². The landslides are primarily concentrated in the low-elevation northeastern part of the study area. In scale and completeness, the inventory is comparable to other loess landslide inventories, and it is more complete and detailed than previous landslide trace records for Jianzha County. It is the most complete and detailed landslide traces inventory of Jianzha County to date and is of great significance for landslide research. In the future, research on loess landslide development characteristics, failure mechanisms, susceptibility assessment, and risk zoning can be conducted on the basis of this inventory.

Data availability statement

The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.

Author contributions

TL: Investigation, Visualization, Writing–original draft. CX: Conceptualization, Resources, Writing–review and editing. LL: Investigation, Writing–review and editing. JX: Investigation, Writing–review and editing.

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work was supported by the National Institute of Natural Hazards, Ministry of Emergency Management of China (2023-JBKY-57) and the National Natural Science Foundation of China (42077259).

Acknowledgments

Deep thanks are extended to the reviewers for their beneficial review and valuable comments.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers, at the time of submission. This had no impact on the peer review process and the final decision.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Akgun, A. (2012). A comparison of landslide susceptibility maps produced by logistic regression, multi-criteria decision, and likelihood ratio methods: a case study at İzmir, Turkey. Landslides 9 (1), 93–106. doi:10.1007/s10346-011-0283-7


Azarafza, M., Azarafza, M., Akgün, H., Atkinson, P. M., and Derakhshani, R. (2021). Deep learning-based landslide susceptibility mapping. Sci. Rep. 11 (1), 24112. doi:10.1038/s41598-021-03585-1


Cemiloglu, A., Zhu, L., Mohammednour, A. B., Azarafza, M., and Nanehkaran, Y. A. (2023). Landslide susceptibility assessment for Maragheh County, Iran, using the logistic regression algorithm. Land 12 (7), 1397. doi:10.3390/land12071397

Chen, W., Chai, H., Zhao, Z., Wang, Q., and Hong, H. (2016). Landslide susceptibility mapping based on GIS and support vector machine models for the Qianyang county, China. Environ. Earth Sci. 75, 474. doi:10.1007/s12665-015-5093-0

Chen, Z., Huang, Y., He, X., Shao, X., Li, L., Xu, C., et al. (2023). Landslides triggered by the 10 June 2022 Maerkang earthquake swarm, Sichuan, China: spatial distribution and tectonic significance. Landslides 20, 2155–2169. doi:10.1007/s10346-023-02080-0

Craddock, W. H., Kirby, E., Harkins, N. W., Zhang, H., Shi, X., and Liu, J. (2010). Rapid fluvial incision along the Yellow River during headward basin integration. Nat. Geosci. 3 (3), 209–213. doi:10.1038/ngeo777

Crosby, C. J., Whitmeyer, S. J., De Paor, D. G., Bailey, J., and Ornduff, T. (2012). Lidar and Google Earth: simplifying access to high-resolution topography data. Spec. Pap. Geol. Soc. Am. 492, 37–47. doi:10.1130/2012.2492(03)

Cui, Y., Bao, P., Xu, C., Ma, S., Zheng, J., and Fu, G. (2021). Landslides triggered by the 6 September 2018 Mw6.6 Hokkaido, Japan: an updated inventory and retrospective hazard assessment. Earth Sci. Inf. 14, 247–258. doi:10.1007/s12145-020-00544-8

Cui, Y., Hu, J., Xu, C., Miao, H., and Zheng, J. (2022b). Landslides triggered by the 1970 Ms7.7 Tonghai earthquake in Yunnan, China: an inventory, distribution characteristics, and tectonic significance. J. Mt. Sci. 19 (6), 1633–1649. doi:10.1007/s11629-022-7321-x

Cui, Y., Hu, J., Zheng, J., Fu, G., and Xu, C. (2022a). Susceptibility assessment of landslides caused by snowmelt in a typical loess area in the Yining County, Xinjiang, China. Q. J. Eng. Geol. Hydrogeology 55 (1), qjegh2021–2024. doi:10.1144/qjegh2021-024

Cui, Y., Jin, J., Huang, Q., Yuan, K., and Xu, C. (2022c). A data-driven model for spatial shallow landslide probability of occurrence due to a typhoon in Ningguo City, Anhui Province, China. Forests 13 (5), 732. doi:10.3390/f13050732

Cui, Y., Yang, W., Xu, C., and Wu, S. (2023). Distribution of ancient landslides and landslide hazard assessment in the Western Himalayan Syntaxis area. Front. Earth Sci. 11, 1135018. doi:10.3389/feart.2023.1135018

Dai, F., Xu, C., Yao, X., Xu, L., Tu, X., and Gong, Q. (2011). Spatial distribution of landslides triggered by the 2008 Ms8.0 Wenchuan earthquake, China. J. Asian Earth Sci. 40 (4), 883–895. doi:10.1016/j.jseaes.2010.04.010

Deng, Q. (2007). Map of active tectonics in China . Beijing: Seismological Press . (In Chinese)

Du, J., Li, Z., Song, C., Zhu, W., Ji, Y., Zhang, C., et al. (2023). InSAR-based active landslide detection and characterization along the upper reaches of the Yellow River. IEEE J. Sel. Top. Appl. Earth Observations Remote Sens. 16, 3819–3830. doi:10.1109/jstars.2023.3263003

Fayne, J. V., Ahamed, A., Roberts-Pierel, J., Rumsey, A. C., and Kirschbaum, D. (2019). Automated satellite-based landslide identification product for Nepal. Earth Interact. 23 (3), 1–21. doi:10.1175/ei-d-17-0022.1

Froude, M. J., and Petley, D. N. (2018). Global fatal landslide occurrence from 2004 to 2016. Nat. Hazards Earth Syst. Sci. 18 (8), 2161–2181. doi:10.5194/nhess-18-2161-2018

Guo, X., Forman, S. L., Marin, L., and Li, X. (2018). Assessing tectonic and climatic controls for late quaternary fluvial terraces in Guide, Jianzha, and Xunhua basins along the Yellow River on the northeastern Tibetan plateau. Quat. Sci. Rev. 195, 109–121. doi:10.1016/j.quascirev.2018.07.005

Guo, X., Sun, Z., Lai, Z., Lu, Y., and Li, X. (2016). Optical dating of landslide-dammed lake deposits in the upper Yellow River, Qinghai-Tibetan Plateau, China. Quat. Int. 392, 233–238. doi:10.1016/j.quaint.2015.06.021

Guo, X., Wei, J., Lu, Y., Song, Z., and Liu, H. (2020a). Geomorphic effects of a dammed pleistocene lake formed by landslides along the upper Yellow River. Water 12 (5), 1350. doi:10.3390/w12051350

Guo, X., Wei, J., Song, Z., Lai, Z., and Yu, L. (2020b). Optically stimulated luminescence chronology and geomorphic imprint of Xiazangtan landslide upon the upper Yellow River valley on the northeastern Tibetan Plateau. Geol. J. 55 (7), 5498–5507. doi:10.1002/gj.3754

Guzzetti, F., Mondini, A. C., Cardinali, M., Fiorucci, F., Santangelo, M., and Chang, K.-T. (2012). Landslide inventory maps: new tools for an old problem. Earth-Science Rev. 112 (1-2), 42–66. doi:10.1016/j.earscirev.2012.02.001

Huang, Y., Xu, C., Li, L., He, X., Cheng, J., Xu, X., et al. (2022). Inventory and spatial distribution of ancient landslides in Hualong county, China. Land 12 (1), 136. doi:10.3390/land12010136

Huangfu, W., Wu, W., Zhou, X., Lin, Z., Zhang, G., Chen, R., et al. (2021). Landslide geo-hazard risk mapping using logistic regression modeling in Guixi, Jiangxi, China. Sustainability 13 (9), 4830. doi:10.3390/su13094830

Kasai, M., and Yamada, T. (2019). Topographic effects on frequency-size distribution of landslides triggered by the Hokkaido Eastern Iburi Earthquake in 2018. Earth, Planets Space 71 (1), 89–12. doi:10.1186/s40623-019-1069-8

Kirschbaum, D., Stanley, T., and Zhou, Y. (2015). Spatial and temporal analysis of a global landslide catalog. Geomorphology 249, 4–15. doi:10.1016/j.geomorph.2015.03.016

Lan, H. X., Zhou, C. H., Wang, L. J., Zhang, H. Y., and Li, R. H. (2004). Landslide hazard spatial analysis and prediction using GIS in the Xiaojiang watershed, Yunnan, China. Eng. Geol. 76 (1), 109–128. doi:10.1016/j.enggeo.2004.06.009

Li, L., Xu, C., Xu, X., Zhang, Z., and Cheng, J. (2021). Inventory and distribution characteristics of large-scale landslides in Baoji city, Shaanxi province, China. ISPRS Int. J. Geo-Information 11 (1), 10. doi:10.3390/ijgi11010010

Li, L., Xu, C., Yang, Z., Zhang, Z., and Lv, M. (2022a). An inventory of large-scale landslides in Baoji city, Shaanxi province, China. Data 7 (8), 114. doi:10.3390/data7080114

Li, L., Xu, C., Yao, X., Shao, B., Ouyang, J., Zhang, Z., et al. (2022b). Large-scale landslides around the reservoir area of Baihetan hydropower station in Southwest China: analysis of the spatial distribution. Nat. Hazards Res. 2 (3), 218–229. doi:10.1016/j.nhres.2022.07.002

Li, X., Guo, X., and Li, W. (2011). Mechanism of giant landslides from Longyangxia valley to Liujiaxia valley along upper Yellow River. J. Eng. Geol. 19 (4), 516–529 [in Chinese, with English summary].


Ma, X., Wang, L., Lv, B., and Ju, S. (2008). An investigation of geological hazards based on IRS-P 6 remote sensing data, Jianzha county, Qinghai province. Northwest. Geol. 41 (2), 93–100 [in Chinese, with English summary].

Mao, Y., Li, Y., Teng, F., Sabonchi, A. K. S., Azarafza, M., and Zhang, M. (2024). Utilizing hybrid machine learning and soft computing techniques for landslide susceptibility mapping in a Drainage Basin. Water 16 (3), 380. doi:10.3390/w16030380

Milledge, D. G., Bellugi, D. G., Watt, J., and Densmore, A. L. (2022). Automated determination of landslide locations after large trigger events: advantages and disadvantages compared to manual mapping. Nat. Hazards Earth Syst. Sci. 22 (2), 481–508. doi:10.5194/nhess-22-481-2022

Nanehkaran, Y. A., Chen, B., Cemiloglu, A., Chen, J., Anwar, S., Azarafza, M., et al. (2023). Riverside landslide susceptibility overview: leveraging artificial neural networks and machine learning in accordance with the United Nations (UN) sustainable development goals. Water 15 (15), 2707. doi:10.3390/w15152707

Nanehkaran, Y. A., Licai, Z., Chen, J., Azarafza, M., and Yimin, M. (2022). Application of artificial neural networks and geographic information system to provide hazard susceptibility maps for rockfall failures. Environ. Earth Sci. 81 (19), 475. doi:10.1007/s12665-022-10603-6

Nanehkaran, Y. A., Mao, Y., Azarafza, M., Kockar, M. K., and Zhu, H.-H. (2021). Fuzzy-based multiple decision method for landslide susceptibility and hazard assessment: a case study of Tabriz, Iran. Geomechanics Eng. 24 (5), 407–418. doi:10.12989/gae.2021.24.5.407

Nikoobakht, S., Azarafza, M., Akgün, H., and Derakhshani, R. (2022). Landslide susceptibility assessment by using convolutional neural network. Appl. Sci. 12 (12), 5992. doi:10.3390/app12125992

Peng, D., Xu, Q., Qi, X., Fan, X., Dong, X., Li, S., et al. (2016). Study on early recognition of loess landslides based on field investigation. Int. J. Georesources Environment-IJGE Former. Int'l J Geohazards Environ. 2 (2), 35–52. doi:10.15273/ijge.2016.02.006

Peng, J., Lan, H., Qian, H., Wang, W., Li, R., Li, Z., et al. (2020). Scientific research framework of livable Yellow River. J. Eng. Geol. 28 (2), 189–201 [in Chinese, with English summary].

Petley, D. (2012). Global patterns of loss of life from landslides. Geology 40 (10), 927–930. doi:10.1130/g33217.1

Piacentini, D., Troiani, F., Daniele, G., and Pizziolo, M. (2018). Historical geospatial database for landslide analysis: the catalogue of landslide occurrences in the Emilia-Romagna region (CLOCkER). Landslides 15 (4), 811–822. doi:10.1007/s10346-018-0962-8

Piralilou, S. T., Shahabi, H., and Pazur, R. (2021). Automatic landslide detection using bi-temporal sentinel 2 imagery. GI_Forum 9, 39–45. doi:10.1553/giscience2021_01_s39

Qiu, H., Cui, Y., Yang, D., Pei, Y., Hu, S., Ma, S., et al. (2019). Spatiotemporal distribution of nonseismic landslides during the last 22 years in Shaanxi province, China. ISPRS Int. J. Geo-Information 8 (11), 505. doi:10.3390/ijgi8110505

Rabby, Y. W., and Li, Y. (2019). An integrated approach to map landslides in Chittagong Hilly Areas, Bangladesh, using Google Earth and field mapping. Landslides 16 (3), 633–645. doi:10.1007/s10346-018-1107-9

Shao, X., Ma, S., Xu, C., Shen, L., and Lu, Y. (2020). Inventory, distribution and geometric characteristics of landslides in Baoshan city, Yunnan province, China. Sustainability 12 (6), 2433. doi:10.3390/su12062433

Sun, J., Shao, X., Feng, L., Xu, C., Huang, Y., and Yang, W. (2024). An essential update on the inventory of landslides triggered by the Jiuzhaigou Mw6. 5 earthquake in China on 8 August 2017, with their spatial distribution analyses. Heliyon 10 (2), e24787. doi:10.1016/j.heliyon.2024.e24787

Tian, Y., Xu, C., Ma, S., Xu, X., Wang, S., and Zhang, H. (2019). Inventory and spatial distribution of landslides triggered by the 8th August 2017 Mw 6.5 Jiuzhaigou earthquake, China. J. Earth Sci. 30 (1), 206–217. doi:10.1007/s12583-018-0869-2

Tu, K., Ye, S., Zou, J., Hua, C., and Guo, J. (2023). InSAR displacement with high-resolution optical remote sensing for the early detection and deformation analysis of active landslides in the upper Yellow River. Water 15 (4), 769. doi:10.3390/w15040769

Vecchiotti, F., Tilch, N., and Kociu, A. (2021). The use of TERRA-ASTER satellite for landslide detection. Geosciences 11 (6), 258. doi:10.3390/geosciences11060258

Wang, L., Qiu, H., Zhou, W., Zhu, Y., Liu, Z., Ma, S., et al. (2022). The post-failure spatiotemporal deformation of certain translational landslides may follow the pre-failure pattern. Remote Sens. 14 (10), 2333. doi:10.3390/rs14102333

Wu, Y., Li, W., Liu, P., Bai, H., Wang, Q., He, J., et al. (2016). Application of analytic hierarchy process model for landslide susceptibility mapping in the Gangu County, Gansu Province, China. Environ. Earth Sci. 75, 422. doi:10.1007/s12665-015-5194-9

Xu, C. (2015). Preparation of earthquake-triggered landslide inventory maps using remote sensing and GIS technologies: principles and case studies. Geosci. Front. 6 (6), 825–836. doi:10.1016/j.gsf.2014.03.004

Xu, C., Xu, X., and Shyu, J. B. H. (2015). Database and spatial distribution of landslides triggered by the Lushan, China Mw6.6 earthquake of 20 April 2013. Geomorphology 248, 77–92. doi:10.1016/j.geomorph.2015.07.002

Xu, C., Xu, X., Shyu, J. B. H., Zheng, W., and Min, W. (2014b). Landslides triggered by the 22 July 2013 Minxian–Zhangxian, China, Mw5.9 earthquake: inventory compiling and spatial distribution analysis. J. Asian Earth Sci. 92, 125–142. doi:10.1016/j.jseaes.2014.06.014

Xu, C., Xu, X., Yao, X., and Dai, F. (2014a). Three (nearly) complete inventories of landslides triggered by the May 12, 2008 Wenchuan Mw7.9 earthquake of China and their spatial distribution statistical analysis. Landslides 11, 441–461. doi:10.1007/s10346-013-0404-6

Xu, X., and Xu, C. (2021). Natural Hazards Research: an eternal subject of human survival and development. Nat. Hazards Res. 1 (1), 1–3. doi:10.1016/j.nhres.2020.12.003

Xu, Y., Allen, M. B., Zhang, W., Li, W., and He, H. (2020). Landslide characteristics in the Loess Plateau, northern China. Geomorphology 359, 107150. doi:10.1016/j.geomorph.2020.107150

Yin, Z., Qin, X., and Zhao, W. (2014). "Characteristics of landslides from Sigou Gorge to Lagan Gorge in the upper reaches of the Yellow River," in Landslide Science for a Safer Geoenvironment, Vol. 1: The International Programme on Landslides (IPL) (Springer).

Yin, Z., Qin, X., Zhao, W., and Wei, G. (2013a). Characteristics of landslides in upper reaches of Yellow River with multiple data of remote sensing. J. Eng. Geol. 21 (5), 779–787 [in Chinese, with English summary].

Yin, Z., Wei, G., Qi, X., and Zhou, C. (2013b). Spatial and temporal characteristics of landslides and there response to climatic change from Sigou to Lagan gorges in upper reaches of Yellow River. J. Eng. Geol. 21 (1), 129–137 [in Chinese, with English summary].

Yin, Z., Wei, G., Qin, X., Li, W., and Zhao, W. (2021). Research progress on landslides and dammed lakes in the upper reaches of the Yellow River, northeastern Tibetan Plateau. Earth Sci. Front. 28 (2), 46–57 [in Chinese, with English summary].

Zhang, F., and Huang, X. (2018). Trend and spatiotemporal distribution of fatal landslides triggered by non-seismic effects in China. Landslides 15 (8), 1663–1674. doi:10.1007/s10346-018-1007-z

Zhang, P., Xu, C., Ma, S., Shao, X., Tian, Y., and Wen, B. (2020a). Automatic extraction of seismic landslides in large areas with complex environments based on deep learning: an example of the 2018 Iburi earthquake, Japan. Remote Sens. 12 (23), 3992. doi:10.3390/rs12233992

Zhang, Z., Wang, T., and Wu, S. (2020b). Distribution and features of landslides in the tianshui basin, northwest China. J. Mt. Sci. 17 (3), 686–708. doi:10.1007/s11629-019-5595-4

Zhao, B., Wang, Y., Chen, M., Luo, Y., Liang, R., and Li, J. (2019). Typical characteristics of large-scale landslides in the transition belt between the Qinghai-Tibet Plateau and the Loess Plateau. Arabian J. Geosciences 12, 470. doi:10.1007/s12517-019-4612-9

Keywords: landslide traces inventory, upper reaches of the yellow river, loess landslides, Jianzha County, visual interpretation

Citation: Li T, Xu C, Li L and Xu J (2024) The landslide traces inventory in the transition zone between the Qinghai-Tibet Plateau and the Loess Plateau: a case study of Jianzha County, China. Front. Earth Sci. 12:1370992. doi: 10.3389/feart.2024.1370992

Received: 15 January 2024; Accepted: 19 March 2024; Published: 05 April 2024.


Copyright © 2024 Li, Xu, Li and Xu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Chong Xu, [email protected]

This article is part of the Research Topic

Prevention, Mitigation, and Relief of Compound and Chained Natural Hazards

IMAGES

  1. risk analysis

    case study of risk assessment

  2. a case study of the qualitative risk assessment methods

    case study of risk assessment

  3. hazard identification and risk assessment case study

    case study of risk assessment

  4. Risk Assessment Policy Examples

    case study of risk assessment

  5. examples of good risk management case study

    case study of risk assessment

  6. hazard identification and risk assessment case study

    case study of risk assessment

VIDEO

  1. Risk Assessment In Details ||ExplainedRiskAssessment In Urdu || @HSEstudyguide204

  2. Practical CyberSecurity Solutions

  3. Role of the Health and Safety Policy in Decision-Making NEBOSH IGC I Element 2:

  4. Risk, Return and Portfolio Management // Day5 //

  5. Risk Management Part XVI

  6. Risk Assessment

COMMENTS

  1. A case study on the relationship between risk assessment of ...

    This paper delves into the nuanced dynamics influencing the outcomes of risk assessment (RA) in scientific research projects (SRPs), employing the Naive Bayes algorithm. The methodology involves ...

  2. A case study exploring field-level risk assessments as a leading safety

    Risk assessment practices to reveal leading indicators. Risk assessment is a process used to gather knowledge and information around a specific health threat or safety hazard (Smith and Harrison, 2005).Based on the probability of a negative incident, risk assessment also includes determining whether or not the level of risk is acceptable (Lindhe et al., 2010; International Electrotechnical ...

  3. Enterprise Risk Management Examples l Smartsheet

    The following examples of enterprise risk management can be considered success stories. ERM Case Study: Statoil. A major global oil producer, Statoil of Norway stands out for the way it practices ERM by looking at both downside risk and upside potential.

  4. PDF Case Study

    Activities Performed. Configuration review: Performed configuration reviews (console and checklist based) on operating system and databases for Hyperion, OnAir, ERP, SAP, and PeopleSoft to identify data leakage related risks. Third Party Risk Management: Assisted in improving third party risk management and security management practices (For ...

  5. How to Do a Risk Assessment: A Case Study

    Accept whatever risk is left and get on with the ministry's work; Reject the remaining risk and eliminate it by getting rid of the source of the risk; Step 5: Ongoing Risk Management. On a regular basis, in keeping with the type of risk and its threat, the risk assessment and risk management plan should be reviewed to see if it is still valid.

  6. Risk Assessment and Analysis Methods: Qualitative and Quantitative

    A risk assessment determines the likelihood, consequences and tolerances of possible incidents. "Risk assessment is an inherent part of a broader risk management strategy to introduce control measures to eliminate or reduce any potential risk- related consequences." 1 The main purpose of risk assessment is to avoid negative consequences related to risk or to evaluate possible opportunities.

  7. Risk Management Articles, Research, & Case Studies

    This study of financial risk-taking among politicians shows risk preferences to be an important antecedent of misconduct. Risk preferences as measured by portfolio choices between risky and safe investments were found to strongly predict political scandals. ... In the new case study "Honeywell and the Great Recession," Sandra Sucher and ...

  8. Analysis of Case Studies

    Chapter: Analysis of Case Studies. Suggested Citation: "Analysis of Case Studies." National Research Council. 1993. Issues in Risk Assessment. Washington, DC: The National Academies Press. doi: 10.17226/2078. identification in the case studies are as important in the presentation of hazard data as they are for health risk assessment.

  9. Fire Safety Risk Assessment of Workplace Facilities: A Case Study

    The study also presents a case study to assess the provision and maintenance of fire safety requirements, utilizing the developed risk assessment tool. This paper is of significant value to design professionals, real estate developers and owners, and facilities managers, through raising awareness about the causes of fire, consequences of fire ...

  10. PDF Risk Management—the Revealing Hand

    we draw lessons from seven case studies about the multiple and contingent ways that a corporate risk function can foster highly interactive and intrusive dialogues tosurface and prioritize risks, help to allocate resources to mitigate them, and bring clarity to the value trade-offs and moral dilemmas that lurk in those decisions.

  11. PDF Case Study 1: Risk Assessment and Lifecycle Management Learning

    Risk assessment should be carried out initially and be repeated throughout development in order to assess in how far the identified risks have become controllable. The time point of the risk assessment should be clearly stated. A summary of all material quality attributes and process parameters.

  12. Case Study: How FAIR Risk Quantification Enables Information ...

    Improving risk assessment processes; Risk Scenario Analysis The FAIR team performs the deep analysis of risk scenarios using an open-source tool adapted for Swisscom's use. Based on the analysis, it provides quantitative estimates for discussion with risk, IT and business analysts (figure 3).

  13. PDF Risk assessment case study

    TECHNEAU project, six risk assessment case studies were carried out at different drinking water systems during 2007-2008. The aim of the case studies was to apply and evaluate the applicability of different methods for risk analysis (i.e. hazard identification and risk estimation) and to some extent risk evaluation of drinking water supplies.

  14. Risk assessment and risk management: Review of recent ...

    The risk field has two main tasks, (I) to use risk assessments and risk management to study and treat the risk of specific activities (for example the operation of an offshore installation or an investment), and (II) to perform generic risk research and development, related to concepts, theories, frameworks, approaches, principles, methods and ...

  15. Managing Risks and Risk Assessment in Ergonomics—A Case Study

    The RULA (Rapid Upper Limb Assessment) method was designed in 1993 in the UK as an ergonomic tool for rapid assessment of workplace risks, the main purpose of which is to determine the risk of loading on the upper limbs and neck. This method was designed as a rapid screening-based tool. In this method, the biomechanical and postural loading of ...

  16. PDF Information Security Risks Assessment: A Case Study

    Risk management process is the systematic application of management policies, procedures and practices to the activities of communicating, consulting, establishing the context and identifying, analysing, evaluating, treating, monitoring and reviewing risk. Risk assessment is the overall process of risk identification, risk analysis and risk ...

  17. Case Studies

    Case Study: Manufacturing Company. Background: A safety products company was contracted to perform a risk assessment. Result: The most expensive products and solutions were recommended by the product company. The client purchased and installed the materials, resulting in an improper application of a safety device.

  18. (PDF) A case study exploring field-level risk assessments as a leading

    The daily operations in the mining industry are still a significant source of risk with regard to occupational safety and health (OS & H). Various research studies and statistical data worldwide show that the number of serious injuries and fatalities still remains high despite substantial efforts the industry has put in recent years in decreasing those numbers.

  19. Supporting Occupational Health and Safety Risk Assessment Skills: A

    Workplaces may not have sufficient expertise in risk assessment. The aim of this study was to identify the needed OHS risk assessment skills, current support in the workplaces and the ways to improve risk assessment skills. This study was conducted with the Delphi survey for OHS experts (n = 13) and with interviews (n = 41) in the case ...

  20. Advancing human health risk assessment

    The integration of the various NAMs (new approach methodologies) in defined case studies allows the assessment of the overall applicability domain of these NAMs in chemical hazard and, ultimately, risk assessment. Case studies are centred around AOPs (adverse outcome pathways) and include, for example, the application of NAMs for the assessment of microvesicular liver steatosis induced by valproic acid.

  21. Risk management

    Frank Furedi argues that people aren't getting dumber, but they're often treated that way: politicians, educators, and the media assume the public is uncomfortable with nuance and grateful for prescribed ...

  22. A Case Study of Introducing Security Risk Assessment in ...

    One category comprised studies with a proper security risk assessment provided in the pre-study document that had passed through the SRA (security risk assessment) forum, with the results of the analysis performed by the requirements engineers either approved directly or after expert involvement through an SRA workshop.

  23. Case Study

    IT assessment risk mitigation: in only two weeks of working with a university, Harvard Partners produced a set of recommendations for minimizing data center risk.

  24. Environmental risk analysis of a Ramsar site: a case study of East Kolkata Wetlands

    The East Kolkata Wetlands (EKWT), designated as a Ramsar site for their crucial role in sewage water purification, agriculture, and pisciculture, face escalating environmental threats due to rapid urbanisation. Employing the pressure-state-response (PSR) framework and environmental risk assessment (ERA), the study spans three decades of Landsat TM satellite imagery to elucidate the evolving dynamics of EKWT.

  25. Failure risk assessment by multi-state dynamic Bayesian network based

    Through this analysis, a dynamic Bayesian network (DBN) can be used to complete the failure risk assessment of the event and provide references for reducing the associated risk. As an integral part of the oil and gas transportation process, gathering pipelines are prone to failure due to the transport medium and operational environment.
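
    The core of such a model is rolling a multi-state belief forward in time and reading off a failure probability at each step. A minimal sketch of that two-slice update, with the corrosion states, transition matrix, and failure probabilities all invented for illustration:

    ```python
    # States of pipeline corrosion: 0 = minor, 1 = moderate, 2 = severe.
    # One-step (yearly) transition probabilities, rows sum to 1.
    T = [
        [0.90, 0.09, 0.01],
        [0.00, 0.85, 0.15],
        [0.00, 0.00, 1.00],
    ]
    # Conditional probability of failure in a year, given corrosion state.
    p_fail = [0.001, 0.02, 0.20]

    def propagate(belief, years):
        """Roll the corrosion belief forward and report the yearly
        failure probability, the two-slice update at the heart of a DBN."""
        for t in range(1, years + 1):
            belief = [
                sum(belief[i] * T[i][j] for i in range(3))
                for j in range(3)
            ]
            p = sum(belief[j] * p_fail[j] for j in range(3))
            print(f"year {t}: P(failure) = {p:.4f}")

    propagate([1.0, 0.0, 0.0], years=5)  # start in the 'minor' state
    ```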

  26. Cohort Studies Versus Case-Control Studies on Night-Shift Work and Cancer Risk: The Importance of Exposure Assessment

    Cohort studies of night-shift work and cancer risk have presented mostly null findings. Most cancer case-control studies, on the other hand, have assessed the lifetime occupational histories of participants, which makes exposure assessment the key methodological difference between the two designs.

  27. Heavy metal contamination and environmental risk assessment: a case study of surface water in the Bahr Mouse stream, East Nile Delta, Egypt

    Environ Monit Assess 196(5), 429 (2024). https://doi.org/10.1007/s10661-024-12541-1

  28. Improving consistency in estimating future health burdens from environmental risks

    Future changes in exposure to risk factors should affect mortality rates and population. However, studies commonly use mortality rates and population projections developed exogenously to the health impact assessment model used to quantify future health burdens attributable to environmental risks, so the estimated burdens are invariant to projected exposure levels. This undermines the robustness of many such assessments.
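
    A worked sketch of the dependency the authors highlight: when baseline mortality and population are fixed exogenously, the attributable burden responds to projected exposure only through the population attributable fraction (PAF). All numbers are illustrative, not from the study.

    ```python
    def attributable_deaths(baseline_deaths, exposed_fraction, relative_risk):
        """Attributable burden via Levin's formula:
        PAF = p(RR - 1) / (p(RR - 1) + 1)."""
        excess = exposed_fraction * (relative_risk - 1.0)
        paf = excess / (excess + 1.0)
        return baseline_deaths * paf

    # Illustrative only: 10,000 baseline deaths, 30% exposed, RR = 1.15.
    print(round(attributable_deaths(10_000, 0.30, 1.15)))  # ~431
    ```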

  29. Landslide risk assessment (Frontiers)

    Risk assessment is then performed by overlaying hazard and vulnerability. However, regardless of which assessment method is chosen and which landslide influencing factors are considered, the susceptibility, hazard, vulnerability, and risk assessments all need to be based on landslide data.
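
    A minimal sketch of the overlay step: risk computed cell by cell as hazard times vulnerability on a toy grid. Real studies would use co-registered raster layers of the same extent; the values here are invented.

    ```python
    # Normalised hazard and vulnerability layers over a 3x3 study area.
    hazard = [
        [0.1, 0.4, 0.8],
        [0.2, 0.5, 0.9],
        [0.0, 0.3, 0.7],
    ]
    vulnerability = [
        [0.9, 0.5, 0.2],
        [0.6, 0.6, 0.4],
        [0.1, 0.8, 0.5],
    ]

    # Overlay: element-wise product yields the risk surface.
    risk = [
        [h * v for h, v in zip(h_row, v_row)]
        for h_row, v_row in zip(hazard, vulnerability)
    ]
    for row in risk:
        print(["{:.2f}".format(cell) for cell in row])
    ```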