Credit Risk Management: Recently Published Documents


Effect of Credit Risk Management on Financial Performance Of Nepalese Commercial Banks

The main purpose of this study is to investigate the effect of credit risk on the financial performance of commercial banks in Nepal. Panel data of seventeen commercial banks with 85 observations for the period 2015 to 2020 were analyzed. The regression model revealed that the non-performing loan ratio (NPLR) has a negative and statistically significant impact on financial performance (ROA). The capital adequacy ratio (CAR) and bank size (BS) have negative but statistically insignificant impacts on financial performance (ROA). The credit-to-deposit ratio (CDR) has a positive but insignificant relationship with financial performance (ROA), and the study concluded that the management quality ratio (MQR) has a positive and significant relationship with the financial performance (ROA) of commercial banks in Nepal. The study recommends that Nepalese commercial banks practice scientific credit risk management, improve their efficacy in credit analysis and loan management to secure their assets as much as possible, and minimize the high incidence of non-performing loans and their negative effects on financial performance.

Evaluation of Effects of Credit-Risk on Return on Assets of Commercial Banks in Nepal

Introduction: The major cause of bank distress in Nepal is poor credit management, which results in a decline in the credit standing of the banks. The study adopts judgmental sampling techniques. Objective: To analyze and evaluate the impact of the credit-risk ratio on the return on assets of commercial banks in Nepal. Research design: Descriptive and exploratory research designs were used. Methods and materials: Review of various articles and collection of secondary data through the websites of Nepal Rastra Bank. Results and conclusion: An inverse relationship is found between the credit risk ratio and the return on assets ratio. The findings provide sufficient evidence that credit risk management indicators significantly impact commercial bank performance in Nepal. Article type: Research Paper

Application of Machine Learning Techniques for Credit Risk Management: A Survey

Portfolio credit risk management applications: exploring financial market credit scoring, risk management and prediction using deep learning and bionic algorithms.

The purpose is to effectively manage the financial market, comprehensively assess personal credit, and reduce the risk to financial enterprises. Given the systemic risk problem caused by the lack of credit scoring in the existing financial market, a credit scoring model based on deep learning networks is put forward. The proposed model uses an RNN (Recurrent Neural Network) and a BRNN (Bidirectional Recurrent Neural Network) to avoid the limitations of shallow models. Afterward, to optimize path analysis, bionic optimization algorithms are introduced and an integrated deep learning model is proposed. Finally, a financial credit risk management system using the integrated deep learning model is proposed. The probability of default or overdue customers is predicted through verification on three real credit data sets, thus realizing credit risk management for credit customers.

Analysis of Credit Risk, Intellectual Capital and Financial Performance of Listed Deposit Money Banks in Nigeria

This paper examined the effects of credit risk, intellectual capital, and credit risk moderated by intellectual capital on the financial performance of fifteen listed deposit money banks (DMBs) in Nigeria from 2007 to 2016. Data were sourced from the annual reports of the banks and the Nigerian National Bureau of Statistics and analysed using the Generalised Method of Moments (GMM). The study finds that credit risk, proxied by the loan loss ratio, negatively affects the financial performance of the sampled banks, while capital employed efficiency, loan loss provision moderated by intellectual capital, capital adequacy ratio, and income diversification have a positive relationship with the banks’ financial performance. Thus, the study recommends that banks strengthen their credit risk management culture to ensure prompt repayment of loans. Banks should operate within the required capital adequacy ratio to serve as a buffer against the loan loss provisions prescribed by the Central Bank of Nigeria. A strong credit risk management culture should be embedded within the intellectual capital structure of banks, in which persons at all levels appreciate and understand the banks’ risk management policies and strategies and incorporate them into decision-making and business processes.

Methodical approaches for reducing the credit risk

Banking is inevitably associated with risks. No matter what efforts a bank makes to minimize risks, they will always exist; the only question is to what extent. Lending operations are among the most profitable types of banking, but they carry a high level of risk. The instability of the economic situation in the country and the imperfection of the legal framework in this area necessitate a detailed study of the problem of minimizing credit risks. The choice of credit risk management methods in a bank is therefore quite relevant today. Credit risk management is the most important task of any bank, and choosing the right method of credit risk management will increase the stability, reliability and competitiveness of the banking system, which will positively affect the overall economic condition of the country. Credit risk is the oldest risk in the system of banking risks and occupies a prominent place. It is necessary to work out an effective system for using the risk-minimization tools recognized by the world banking community, given the possibility of transferring risk from the bank to investors. The starting point in developing the bank's newest risk management tools should be the creation of a regulatory framework that governs this process. It is necessary to improve the existing methodological framework and develop a new methodological framework for the bank's credit risk management, concentrating the advantages of existing assessment methods, to create a single method for assessing a borrower's creditworthiness, and to define an algorithm for banks to form credit procedures. It is also necessary to adopt the experience of foreign banks in credit risk management: the experience of banks in developed countries is based on a detailed study of all credit procedures and multifactor analysis of the creditworthiness of potential borrowers.

Enterprise Credit Risk Management Using Multicriteria Decision-Making

The purpose of this study is to reduce the rate of multicriteria decision-making (MCDM) errors in credit risk management and to weaken the influence of enterprise managers' differing attitudes on the final decision when facing credit risk. First, several solutions suitable for present enterprise credit risk management are proposed, drawing on research on enterprise risk management worldwide. Moreover, the criteria and matrix are established according to the general practice of the expert method. A decision-making method for enterprise credit risk management, with trapezoidal fuzzy numbers as the credit risk management criteria, is proposed based on prospect theory; the weights are then calculated using the G1 and G2 weight calculation methods and the method of maximizing deviation; finally, the prospect values of the alternatives calculated by each method are used to rank and compare the proposed solutions. Considering managers' differing degrees of risk tolerance in the face of credit risk management, the rankings of enterprise credit risk management solutions based on the three weight calculation methods are compared. The results show that as long as the quantified risk attitude of the enterprise credit risk manager falls within a certain range, the final ranking of credit risk management schemes is consistent. This exploration provides a new research direction for enterprise credit risk management and offers a useful reference.

The Impacts of New Financial Instrument Accounting Standards on Chinese Commercial Banks

On March 31, 2017, the Ministry of Finance revised and issued three new financial instrument accounting standards, including Accounting Standards for Enterprises No. 22: Recognition and Measurement of Financial Instruments. Chinese banks listed as A and H shares have implemented the new standards since January 1, 2018. From January 1, 2021, the scope of implementation covers all non-listed commercial banks. The new financial instrument standards introduce great changes in the classification and impairment treatment of financial assets, which is bound to have a profound impact on Chinese commercial banks. This article analyzes the impacts of the new standards on Chinese commercial banks in terms of financial asset classification and measurement, impairment, credit risk management, profit, and earnings management. Finally, the paper puts forward several suggestions and measures on system and model construction, credit policy, post-loan risk management, and talent training, in order to help banks smoothly transition to the new standards.

Data Driven Design to Credit Risk Management Using Digital Footprint Intelligence


Explainable Machine Learning in Credit Risk Management

  • Open access
  • Published: 25 September 2020
  • Volume 57 , pages 203–216, ( 2021 )


  • Niklas Bussmann,
  • Paolo Giudici (ORCID: orcid.org/0000-0002-4198-0127),
  • Dimitri Marinelli &
  • Jochen Papenbrock


The paper proposes an explainable Artificial Intelligence model that can be used in credit risk management and, in particular, in measuring the risks that arise when credit is borrowed employing peer to peer lending platforms. The model applies correlation networks to Shapley values so that Artificial Intelligence predictions are grouped according to the similarity in the underlying explanations. The empirical analysis of 15,000 small and medium companies asking for credit reveals that both risky and not risky borrowers can be grouped according to a set of similar financial characteristics, which can be employed to explain their credit score and, therefore, to predict their future behaviour.


1 Introduction

Black box Artificial Intelligence (AI) is not suitable in regulated financial services. To overcome this problem, Explainable AI models, which provide details or reasons to make the functioning of AI clear or easy to understand, are necessary.

To develop such models, we first need to understand what “Explainable” means. Recently, some important institutional definitions have been provided. For example, Bracke et al. ( 2019 ) state that “Explanations can answer different kinds of questions about a model's operation depending on the stakeholder they are addressed to”, and Croxson et al. ( 2019 ) note that “‘interpretability’ will be the focus, generally taken to mean that an interested stakeholder can comprehend the main drivers of a model-driven decision”.

Explainability means that an interested stakeholder can comprehend the main drivers of a model-driven decision; FSB ( 2017 ) suggests that “lack of interpretability and auditability of AI and Machine Learning (ML) methods could become a macro-level risk”; Croxson et al. ( 2019 ) establishes that “in some cases, the law itself may dictate a degree of explainability.”

The European GDPR EU ( 2016 ) regulation states that “the existence of automated decision-making should carry meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.” Under the GDPR regulation, the data subject is therefore, under certain circumstances, entitled to receive meaningful information about the logic of automated decision-making.

Finally, the European Commission High-Level Expert Group on AI presented the Ethics Guidelines for Trustworthy Artificial Intelligence in April 2019. These guidelines put forward a set of seven key requirements that AI systems should meet in order to be deemed trustworthy. Among them, three relate to the concept of “eXplainable Artificial Intelligence (XAI)” and are the following.

Human agency and oversight: decisions must be informed, and there must be a human-in-the-loop oversight.

Transparency: AI systems and their decisions should be explained in a manner adapted to the concerned stakeholder. Humans need to be aware that they are interacting with an AI system.

Accountability: AI systems should develop mechanisms for responsibility and accountability, auditability, assessment of algorithms, data and design processes.

Following the need to explain AI models, stated by legislators and regulators of different countries, many established and startup companies have started to embrace Explainable AI models. In addition, more and more people are searching for information about what “Explainable Artificial Intelligence” means.

In this respect, Fig.  1 represents the evolution of Google searches for explainable AI related terms.

From a mathematical viewpoint, it is well known that “simple” statistical learning models, such as linear and logistic regression models, provide a high interpretability but, possibly, a limited predictive accuracy. On the other hand, “complex” machine learning models, such as neural networks and tree models, provide a high predictive accuracy at the expense of a limited interpretability.

To solve this trade-off, we propose to boost machine learning models, that are highly accurate, with a novel methodology, that can explain their predictive output. Our proposed methodology acts in the post processing phase of the analysis, rather than in the preprocessing part. It is agnostic (technologically neutral) as it is applied to the predictive output, regardless of which model generated it: a linear regression, a classification tree or a neural network model.

The machine learning procedure proposed in the paper processes the outcomes of any other arbitrary machine learning model. It provides more insight, control and transparency to a trained, potentially black box machine learning model. It utilises a model-agnostic method aiming at identifying the decision-making criteria of an AI system in the form of variable importance (individual input variable contributions).

A key concept of our model is the Shapley value decomposition of a model, a pay-off concept from cooperative game theory. To the best of our knowledge this is the only explainable AI approach rooted in an economic foundation. It offers a breakdown of variable contributions so that every data point (e.g. a credit or loan customer in a portfolio) is not only represented by input features (the input of the machine learning model) but also by variable contributions to the prediction of the trained machine learning model.

More precisely, our proposed methodology is based on the combination of network analysis with Shapley values [see Lundberg and Lee ( 2017 ), Joseph ( 2019 ), and references therein]. Shapley values were originally introduced by Shapley ( 1953 ) as a solution concept in cooperative game theory. They correspond to the average of the marginal contributions of the players associated with all their possible orders. The advantage of Shapley values, over alternative XAI models, is that they can be exploited to measure the contribution of each explanatory variable for each point prediction of a machine learning model, regardless of the underlying model itself [see, e.g. Lundberg and Lee ( 2017 )]. In other words, Shapley based XAI models combine generality of application (they are model agnostic) with the personalisation of their results (they can explain any single point prediction).
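As an illustration of the game-theoretic definition above, the following sketch computes Shapley values for a hypothetical three-player cooperative game by averaging each player's marginal contribution over all join orders. The game and its values are made-up toy numbers, not data from the paper:

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    orders = list(permutations(players))
    phi = {p: 0.0 for p in players}
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    return {p: total / len(orders) for p, total in phi.items()}

# Hypothetical toy game: the grand coalition is worth 100; player "a"
# is productive alone, while "b" and "c" only add value in combination.
v = {frozenset(): 0, frozenset("a"): 60, frozenset("b"): 0, frozenset("c"): 0,
     frozenset("ab"): 80, frozenset("ac"): 80, frozenset("bc"): 20,
     frozenset("abc"): 100}
sv = shapley_values(["a", "b", "c"], lambda s: v[s])
# The attributions sum to v(abc) = 100 (the efficiency property).
```

In the XAI setting, the "players" are the explanatory variables and the "value" of a coalition is the model's prediction when only those variables are included, which is exactly how the contribution of each variable to a single point prediction is measured.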

Our original contribution is to combine Shapley values with correlation network models, thereby improving the interpretation of the predictive output of a machine learning model. To exemplify our proposal, we consider one area of the financial industry in which Artificial Intelligence methods are increasingly being applied: credit risk management [see for instance the review by Giudici ( 2018 )].

Correlation networks, also known as similarity networks, have been introduced by Mantegna and Stanley ( 1999 ) to show how time series of asset prices can be clustered in groups on the basis of their correlation matrix. Correlation patterns between companies can similarly be extracted from cross-sectional features, based on balance sheet data, and they can be used in credit risk modelling. To account for such similarities we can rely on centrality measures, following Giudici et al. ( 2019 ) , who have shown that the inclusion of centrality measures in credit scoring models does improve their predictive utility. Here we propose a different use of similarity networks. Instead of applying network models in a pre-processing phase, as in Giudici et al. ( 2019 ) , who extract from them additional features to be included in a statistical learning model, we use them in a post-processing phase, to interpret the predictive output from a highly performing machine learning model. In this way we achieve both predictive accuracy and explainability.

We apply our proposed method to predict the credit risk of a large sample of small and medium enterprises. The obtained empirical evidence shows that, while improving the predictive accuracy with respect to a standard logistic regression model, we also improve the interpretability (explainability) of the results.

The rest of the paper is organized as follows: Sect.  2 introduces the proposed methodology. Section  3 shows the results of the analysis in the credit risk context. Section  4 concludes and presents possible future research developments.

2 Methodology

2.1 Statistical Learning of Credit Risk

Credit risk models are usually employed to estimate the expected financial loss that a credit institution (such as a bank or a peer-to-peer lender) suffers if a borrower fails to pay back a loan. The most important component of a credit risk model is the probability of default, which is usually estimated statistically by employing credit scoring models.

Borrowers could be individuals, companies, or other credit institutions. Here we focus, without loss of generality, on small and medium enterprises, whose financial data are publicly available in the form of yearly balance sheets.

For each company, n , define a response variable \(Y_{n}\) to indicate whether it has defaulted on its loans or not, i.e. \(Y_{n}=1\) if company defaults, \(Y_{n}=0\) otherwise. And let \(X_{n}\) indicate a vector of explanatory variables. Credit scoring models assume that the response variable \(Y_{n}\) may be affected (“caused”) by the explanatory variables \(X_{n}\) .

The most commonly employed credit scoring model is the logistic regression model. It assumes that

$$\ln \frac{p_{n}}{1-p_{n}} = \alpha + \sum _{j=1}^{J} \beta _{j} x_{n,j},$$

where \(p_{n}\) is the probability of default for company n ; \({\mathbf {x}}_{n}=(x_{n,1},\ldots ,x_{n,J})\) is a J -dimensional vector containing the values that the J explanatory variables assume for company n ; the parameter \(\alpha \) represents an intercept; \(\beta _{j}\) is the j th regression coefficient.

Once the parameters \(\alpha \) and \(\beta _{j}\) are estimated from the available data, the probability of default can be obtained by inverting the logistic regression model:

$$p_{n} = \frac{1}{1+e^{-(\alpha + \sum _{j=1}^{J} \beta _{j} x_{n,j})}}.$$
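The inverted logistic model can be sketched in a few lines of Python. The intercept and coefficients below are hypothetical illustrative values, not estimates from the paper's data:

```python
import math

def default_probability(x, alpha, beta):
    """p = 1 / (1 + exp(-(alpha + sum_j beta_j * x_j))), the inverted logit."""
    z = alpha + sum(b_j * x_j for b_j, x_j in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for two balance-sheet ratios
# (say, leverage and profitability); purely for illustration.
alpha, beta = -2.0, [1.5, -0.8]
p = default_probability([0.6, 0.2], alpha, beta)  # leverage 0.6, profitability 0.2
```

Because the logit is linear in the explanatory variables, each coefficient's sign directly tells the analyst whether a ratio increases or decreases the estimated default probability, which is the interpretability advantage discussed below.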

2.2 Machine Learning of Credit Risk

Alternatively, credit risk can be measured with Machine Learning (ML) models, which are able to extract non-linear relations among the financial information contained in the balance sheets. In a standard data science life cycle, models are chosen to optimise predictive accuracy. In highly regulated sectors, like finance or medicine, models should be chosen by balancing accuracy with explainability (Murdoch et al. 2019 ). We improve on this choice by selecting models based on their predictive accuracy and employing, a posteriori, an algorithm that achieves explainability. This does not limit the choice of the best performing models.

To exemplify our approach we consider, without loss of generality, the Extreme Gradient Boost model, one of the most popular and fast machine learning algorithms [see e.g. Chen and Guestrin ( 2016 )].

Extreme Gradient Boosting (XGBoost) is a supervised model based on the combination of tree models with Gradient Boosting. Gradient Boosting is an optimisation technique able to support different learning tasks, such as classification, ranking and prediction. A tree model is a supervised classification model that searches for the partition of the explanatory variables that best classify a response (supervisor) variable. Extreme Gradient Boosting improves tree models strengthening their classification performance, as shown by Chen and Guestrin ( 2016 ). The same authors also show that XGBoost is faster than tree model algorithms.

In practice, a tree classification algorithm is applied successively to “training” samples of the data set. In each iteration, a sample of observations is drawn from the available data, using sampling weights which change over time, giving more weight to the observations with the worst fit. Once a sequence of trees is fitted and classifications are made, a weighted majority vote is taken. For a more detailed description of the algorithm see, for instance, Friedman et al. ( 2000 ).
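The idea of successively fitting trees to the observations the current model handles worst can be illustrated with a minimal pure-Python gradient-boosting loop over decision stumps. This is a didactic sketch of the boosting principle only, not the actual XGBoost implementation (which adds second-order gradient updates, regularisation and many engineering optimisations); the one-feature data set at the bottom is made up:

```python
import math

def stump_fit(X, residuals):
    """Find the split (feature j, threshold t) minimising the squared error
    of the residuals, returning (j, t, left_value, right_value)."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            left = [r for row, r in zip(X, residuals) if row[j] <= t]
            right = [r for row, r in zip(X, residuals) if row[j] > t]
            if not left or not right:
                continue
            lv, rv = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((r - lv) ** 2 for r in left)
                   + sum((r - rv) ** 2 for r in right))
            if best is None or sse < best[0]:
                best = (sse, j, t, lv, rv)
    return best[1:]

def boost_fit(X, y, n_rounds=20, lr=0.5):
    """Each stump is fitted to the current residuals y - p (the negative
    gradient of the logistic loss) and added to the score with step lr."""
    F = [0.0] * len(y)                      # additive log-odds scores
    stumps = []
    for _ in range(n_rounds):
        p = [1 / (1 + math.exp(-f)) for f in F]
        residuals = [yi - pi for yi, pi in zip(y, p)]
        j, t, lv, rv = stump_fit(X, residuals)
        stumps.append((j, t, lv, rv))
        F = [f + lr * (lv if row[j] <= t else rv) for f, row in zip(F, X)]
    return stumps

def boost_predict(stumps, x, lr=0.5):
    f = sum(lr * (lv if x[j] <= t else rv) for j, t, lv, rv in stumps)
    return 1 / (1 + math.exp(-f))

# Made-up one-feature data: companies with a high ratio default.
X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]
y = [0, 0, 0, 1, 1, 1]
stumps = boost_fit(X, y)
```

Each round the ensemble concentrates on whatever the previous rounds got wrong, which is the mechanism that lets boosted trees capture the non-linear relations mentioned above.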

2.3 Learning Model Comparison

Once a default probability estimation model is chosen, it should be measured in terms of predictive accuracy and compared with other models, so as to select the best one. The most common approach to measuring the predictive accuracy of credit scoring models is to randomly split the available data into two parts, a “train” and a “test” set; build the model using data in the train set; and compare the predictions the model obtains on the test set, \(\hat{Y_n}\) , with the actual values of \(Y_n\) .

To obtain \(\hat{Y_n}\) the estimated default probability is rounded into a “default” or “non default”, depending on whether a threshold is passed or not. For a given threshold T , one can then count the frequency of the four possible outputs, namely: False Positives (FP): companies predicted to default, that do not; True Positives (TP): companies predicted to default, which do; False Negatives (FN): companies predicted not to default, which do; True Negatives (TN): companies predicted not to default, which do not.

The misclassification rate of a model can be computed as

$$\frac{FP+FN}{TP+TN+FP+FN},$$

and it characterizes the proportion of wrong predictions among the total number of cases.

The misclassification rate depends on the chosen threshold and it is not, therefore, a generally agreed measure of predictive accuracy. A common practice is to use the Receiver Operating Characteristic (ROC) curve, which plots the true positive rate (TPR) on the Y axis against the false positive rate (FPR) on the X axis, for a range of threshold values (usually percentile values). FPR and TPR are calculated as follows:

$$FPR = \frac{FP}{FP+TN}, \qquad TPR = \frac{TP}{TP+FN}.$$

The ideal ROC curve passes through the top-left corner (FPR = 0, TPR = 1), a situation which cannot realistically be achieved; the best model will be the one closest to it. The ROC curve is usually summarised with the Area Under the ROC curve value (AUROC), a number between 0 and 1. The higher the AUROC, the better the model.
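The quantities of this subsection can be computed directly. The sketch below uses the rank-based (Mann-Whitney) formulation of the AUROC, which equals the area under the ROC curve; the labels and scores in the usage example are made up:

```python
def confusion(y_true, p_hat, threshold):
    """Count TP, FP, FN, TN at the given default-probability threshold."""
    tp = sum(1 for y, p in zip(y_true, p_hat) if p > threshold and y == 1)
    fp = sum(1 for y, p in zip(y_true, p_hat) if p > threshold and y == 0)
    fn = sum(1 for y, p in zip(y_true, p_hat) if p <= threshold and y == 1)
    tn = sum(1 for y, p in zip(y_true, p_hat) if p <= threshold and y == 0)
    return tp, fp, fn, tn

def misclassification_rate(tp, fp, fn, tn):
    return (fp + fn) / (tp + fp + fn + tn)

def rates(tp, fp, fn, tn):
    """Return (FPR, TPR) = (FP / (FP + TN), TP / (TP + FN))."""
    return fp / (fp + tn), tp / (tp + fn)

def auroc(y_true, p_hat):
    """Probability that a random defaulted company is scored above a random
    non-defaulted one (ties count one half); equals the area under the ROC."""
    pos = [p for y, p in zip(y_true, p_hat) if y == 1]
    neg = [p for y, p in zip(y_true, p_hat) if y == 0]
    wins = sum(1.0 if pp > pn else 0.5 if pp == pn else 0.0
               for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))

# Made-up labels and predicted default probabilities for four companies.
y_true, p_hat = [1, 1, 0, 0], [0.9, 0.6, 0.4, 0.1]
tp, fp, fn, tn = confusion(y_true, p_hat, 0.5)
```

Because the AUROC aggregates over all thresholds, it avoids the threshold dependence that makes the misclassification rate unreliable as a standalone accuracy measure.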

2.4 Explaining Model Predictions

We now explain how to exploit the information contained in the explanatory variables to localise and cluster the position of each individual (company) in the sample. This information, coupled with the predicted default probabilities, allows a very insightful explanation of the determinants of each individual’s creditworthiness. In our specific context, information on the explanatory variables is derived from the financial statements of borrowing companies, collected in a vector \({\mathbf {x}}_{n}\) , representing the financial composition of the balance sheet of institution n .

We propose to calculate the Shapley value associated with each company. In this way we provide an agnostic tool that can interpret in a technologically neutral way the output from a highly accurate machine learning model. As suggested in Joseph ( 2019 ), the Shapley values of a model can be used as a tool to transfer predictive inferences into a linear space, opening a wide possibility of applying to them a variety of multivariate statistical methods.

We develop our Shapley approach using the SHAP computational framework of Lundberg and Lee ( 2017 ), which allows one to estimate Shapley values by expressing predictions as linear combinations of binary variables that describe whether each single variable is included in the model or not.

More formally, the explanation model \(g(z')\) for the prediction f ( x ) is constructed by an additive feature attribution method, which decomposes the prediction into a linear function of the binary variables \(z' \in \{0,1\}^M\) and the quantities \(\phi _i \in {\mathbb {R}}\) :

$$g(z') = \phi _0 + \sum _{i=1}^{M} \phi _i z'_i.$$

In other terms, \(g(z')\approx f(h_x (z'))\) is a local approximation of the predictions, where the mapping \(h_x \) converts the simplified binary variables \(z'\) back into the original input space, with \(h_x (x')=x\) , and M is the number of selected input variables.

Indeed, Lundberg and Lee ( 2017 ) prove that the only additive feature attribution method that satisfies the properties of local accuracy , missingness and consistency is obtained by attributing to each feature \(x'_i\) an effect \(\phi _i\) , called the Shapley value, defined as

$$\phi _i = \sum _{z' \subseteq x'} \frac{|z'|!\,(M-|z'|-1)!}{M!} \left[ f_x(z') - f_x(z' {\setminus } i) \right],$$

where f is the trained model, x the vector of inputs (features), and \(x'\) the vector of the M selected input features. The quantity \(f_x(z') - f_x(z' {\setminus } i) \) is the contribution of variable i and expresses, for each single prediction, the deviation of Shapley values from their mean.

In other words, a Shapley value is a unique quantity with which to construct an explanatory model that locally and linearly approximates the original model for a specific input x ( local accuracy ), with the properties that, whenever a feature is locally zero, its Shapley value is zero ( missingness ), and that if the contribution of a feature is higher in a second model, so is its Shapley value ( consistency ).
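For a small number of features M, the Shapley decomposition above can be evaluated exactly by brute force over all coalitions. The sketch below substitutes a hypothetical baseline vector for features outside the coalition, which is one simple way of defining \(f_x(z')\); the SHAP framework itself uses conditional expectations and far more efficient algorithms, and the linear model at the bottom is an illustrative assumption:

```python
import math
from itertools import combinations

def shap_exact(f, x, baseline):
    """Exact Shapley attributions for the prediction f(x). Features outside
    the coalition S are set to a baseline value; each coalition is weighted
    by |S|! (M - |S| - 1)! / M!, as in the Shapley value formula."""
    M = len(x)
    phi = [0.0] * M
    for i in range(M):
        others = [j for j in range(M) if j != i]
        for k in range(M):
            for S in combinations(others, k):
                w = math.factorial(k) * math.factorial(M - k - 1) / math.factorial(M)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(M)]
                without = [x[j] if j in S else baseline[j] for j in range(M)]
                phi[i] += w * (f(with_i) - f(without))
    return phi

# For a linear model, the Shapley value of feature i reduces to
# w_i * (x_i - baseline_i); the weights below are made up.
linear = lambda v: 2 * v[0] + 3 * v[1]
phi = shap_exact(linear, [1.0, 2.0], [0.0, 0.0])
```

The attributions satisfy local accuracy by construction: they sum, together with the baseline prediction, to the prediction being explained.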

Once Shapley values are calculated, we propose to employ similarity networks, defining a metric that provides the relative distance between companies by applying the Euclidean distance between each pair \(({\mathbf {x}}_{i},{\mathbf {x}}_{j})\) of company predicted vectors, as in Giudici et al. ( 2019 ).

We then derive the Minimal Spanning Tree (MST) representation of the companies, employing the correlation network method suggested by Mantegna and Stanley ( 1999 ). The MST is a tree without cycles of a complex network, that joins pairs of vertices with the minimum total “distance”.

The choice is motivated by the consideration that, to represent all pairwise correlations between N companies in a graph, we need \(N*(N-1)/2\) edges, a number that quickly grows, making the corresponding graph not understandable. The Minimal Spanning Tree simplifies the graph into a tree of \(N-1\) edges, which takes \(N-1\) steps to be completed. At each step, it joins the two companies that are closest, in terms of the Euclidean distance between the corresponding explanatory variables.

In our Shapley value context, the similarity of variable contributions is expressed as a symmetric matrix of dimension n × n , where n is the number of data points in the (train) data set. Each entry of the matrix measures how similar or distant a pair of data points is in terms of variable contributions. The MST representation associates to each point its closest neighbour. To generate the MST we have used the EMST Dual-Tree Boruvka algorithm, as implemented in the R package “emstreeR”.

The same matrix can also be used, in a second step, for a further merging of the nodes, through cluster analysis. This extra step can reveal segmentations of data points with very similar variable contributions, corresponding to similar credit scoring decision making.
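A minimal way to derive an MST from a pairwise distance matrix (such as the Euclidean distances between companies' Shapley vectors) is Prim's algorithm: at each step, attach the closest outside node to the growing tree. The paper uses the EMST Dual-Tree Boruvka algorithm via the R package “emstreeR”, so this pure-Python sketch and its 4-company distance matrix are illustrative only:

```python
def mst_edges(dist):
    """Prim's algorithm: repeatedly attach the outside node closest to the
    tree, yielding the N - 1 edges of the Minimal Spanning Tree."""
    n = len(dist)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        i, j = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        edges.append((i, j))
        in_tree.add(j)
    return edges

# Hypothetical 4-company matrix of Euclidean distances between Shapley vectors.
dist = [[0, 1, 4, 6],
        [1, 0, 2, 5],
        [4, 2, 0, 3],
        [6, 5, 3, 0]]
edges = mst_edges(dist)
```

The resulting tree keeps only the N - 1 shortest connections, which is exactly the simplification that makes the company network readable compared with the full set of N(N-1)/2 pairwise distances.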

3 Application

3.1 Data

We apply our proposed model to data supplied by a European External Credit Assessment Institution (ECAI) that specializes in credit scoring for P2P platforms focused on SME commercial lending. The data are described by Giudici et al. ( 2019 ), to which we refer for further details. In summary, the analysis relies on a dataset composed of official financial information (balance-sheet variables) on 15,045 SMEs, mostly based in Southern Europe, for the year 2015. The status (0 = active, 1 = defaulted) of each company one year later (2016) is also provided. The proportion of defaulted companies within this dataset is 10.9%.

Using this data, Giudici et al. ( 2019 ) have constructed logistic regression scoring models that aim at estimating the probability of default of each company, using the available financial data from the balance sheets and, in addition, network centrality measures that are obtained from similarity networks.

Here we aim to improve the predictive performance of the model and, for this purpose, we run an XGBoost tree algorithm [see e.g. Chen and Guestrin ( 2016 )]. To explain the results of the model, which is typically highly predictive, we employ similarity network models in a post-processing step. In particular, we employ the cluster dendrogram representation that corresponds to the application of the Minimum Spanning Tree algorithm.

3.2 Results

We first split the data in a training set (80%) and a test set (20%), using random sampling without replacement.

We then estimate the XGBoost model on the training set, apply the obtained model to the test set and compare it with the best logistic regression model. The ROC curves of the two models are contained in Fig.  1 below.

Figure 1: Receiver Operating Characteristic (ROC) curves for the logistic credit risk model (blue) and for the XGBoost model (red)

From Fig.  1 note that the XGBoost model clearly improves predictive accuracy. Indeed, the comparison of the Area Under the ROC curve (AUROC) for the two models indicates an increase from 0.81 (best logistic regression model) to 0.93 (best XGBoost model).

We then calculate the Shapley value explanations of the companies in the test set, using the values of their explanatory variables. In particular, we use the TreeSHAP method (Lundberg et al. 2020 ) [see e.g. Murdoch et al. ( 2019 ); Molnar ( 2019 )] in combination with XGBoost. The Minimal Spanning Tree (a single linkage cluster) is used to simplify and interpret the structure present among Shapley values. We can also “colour” the MST graph in terms of the associated response variable values: default, not default.

Figures  2 and  3 present the MST representation. While in Fig.  2 company nodes are colored according to the cluster to which they belong, in Fig.  3 they are colored according to their status: not defaulted (grey); defaulted (red).

Figure 2: Minimum Spanning Tree representation of the borrowing companies. Companies are colored according to the cluster they belong to.

In Fig.  2 , nodes are colored according to the cluster in which they are classified. The figure shows that clusters are quite scattered along the correlation network.

To construct the colored communities in Fig. 2, we used the algorithm implemented in the R package "igraph", which directly optimizes a modularity score. The algorithm is very efficient and scales easily to very large networks (Clauset et al. 2004).
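
The same greedy modularity optimization of Clauset et al. (2004) is available in Python via networkx, which can stand in for igraph's R implementation. A toy graph of two triangles joined by a bridge illustrates what the algorithm recovers.

```python
# Community detection by greedy modularity maximization (Clauset-Newman-Moore),
# the algorithm cited in the text, via networkx rather than R's igraph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# toy graph: two dense triangles joined by a single bridge edge
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 2),   # triangle A
                  (3, 4), (3, 5), (4, 5),   # triangle B
                  (2, 3)])                  # bridge
communities = [sorted(c) for c in greedy_modularity_communities(G)]
print(communities)  # two communities: {0, 1, 2} and {3, 4, 5}
```

In the paper this step runs on the Shapley-value MST rather than a toy graph, and the resulting communities give the node colors of Fig. 2.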

In Fig. 3, nodes are colored in a simpler, binary way: according to whether the corresponding company has defaulted or not.

Figure 3: Minimum Spanning Tree representation of the borrowing companies. Clustering has been performed using the standardized Euclidean distance between institutions. Companies are colored according to their default status: red = defaulted; grey = not defaulted.

From Fig. 3 note that defaulted nodes appear grouped together in the MST representation, particularly along the bottom-left branch. In general, defaulted institutions occupy precise portions of the network, usually towards the leaves of the tree, and form clusters. This suggests that those companies form communities characterised by similar predictor-variable importances. It also suggests that non-defaulted companies close to defaulted ones have a high risk of defaulting as well, since the importances of their predictor variables are very similar to those of the defaulted companies.

To illustrate the explainability of our results, in Fig. 4 we provide the interpretation of the estimated credit scores of four companies: two that actually defaulted and two that did not.

Figure 4: Contribution of each explanatory variable to the Shapley decomposition of four predicted default probabilities, for two defaulted and two non-defaulted companies. The redder the color, the higher the negative importance; the bluer the color, the higher the positive importance.

Figure 4 clearly shows the advantage of our explainable model: it indicates which variables contribute most to the prediction of default, not only in general, as statistical and machine learning models typically do, but differently and specifically for each company in the test set. Indeed, Fig. 4 shows how the explanations differ ("personalised") for each of the four considered companies.

The most important variables for the two non-defaulted companies (left boxes) are: profit before taxes plus interest paid, and earnings before income tax and depreciation (EBITDA), which are common to both; trade receivables, for company 1; and total assets, for company 2.

Economically, high profitability decreases the probability of default for both companies, while a large stock of outstanding, not-yet-paid invoices, or a large stock of assets, also helps reduce that probability.

On the other hand, Fig. 4 shows that the most important variables for the two defaulted companies (right boxes) are: total assets, for both companies; shareholders' funds plus non-current liabilities, for company 3; and profit before taxes plus interest paid, for company 4.

In other words, lower total assets coupled, in one case, with limited shareholders' funds and, in the other, with low profitability, increase the probability of default of these two companies.

The above results are consistent with previous analyses of the same data: Giudici et al. (2019) select, as the most important variables in several models, the return on equity, related to both EBITDA and profit before taxes plus interest paid; the leverage, related to total assets and shareholders' funds; and the solvency ratio, related to trade payables.

We remark that Fig. 4 contains a "local" explanation of the predictive power of the explanatory variables, which is the most important contribution of Shapley value theory. If we average Shapley values across all observations, we obtain an "overall" or "global" explanation, similar to what is already available in the statistical and machine learning literature. Figure 5 below provides the global explanation in our context: the ten most important explanatory variables over the whole sample.
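
The averaging step that turns local explanations into a global one can be sketched directly. The Shapley matrix and feature names below are synthetic placeholders, not the paper's data.

```python
# Global importance from local Shapley explanations: average the absolute
# per-observation contributions for each variable, then rank. Synthetic data;
# the column scales are chosen so the intended ranking emerges.
import numpy as np

rng = np.random.default_rng(1)
features = ["leverage", "ebitda", "trade_receivables", "total_assets"]
# 100 observations x 4 predictors of Shapley contributions (toy)
shap_values = rng.normal(scale=[3.0, 1.5, 0.8, 0.3], size=(100, 4))

global_importance = np.abs(shap_values).mean(axis=0)
ranking = [features[i] for i in np.argsort(global_importance)[::-1]]
print(ranking)
```

In Fig. 5 this mean-absolute-contribution ranking is what places leverage first, followed by the operational-efficiency and solvency variables.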

Figure 5: Mean contribution of each explanatory variable to the Shapley decomposition. The redder the color, the higher the negative importance; the bluer the color, the higher the positive importance.

From Fig. 5 note that total assets over total liabilities (the leverage) is the most important variable, followed by EBITDA and profit before taxes plus interest paid, which measure operational efficiency, and by trade receivables, related to solvency, in line with the previous comments.

4 Conclusions and Future Research

The need to leverage the high predictive accuracy brought by sophisticated machine learning models, while making them interpretable, has motivated us to introduce an agnostic, post-processing methodology based on correlation network models. The model can explain, from a substantive viewpoint, any single prediction in terms of the Shapley value contribution of each explanatory variable.

For the implementation of our model, we have used TreeSHAP, a consistent and accurate method available in open-source packages. TreeSHAP is a fast algorithm that computes SHapley Additive exPlanations for trees in polynomial rather than exponential time. For the XGBoost part of our model we have used NVIDIA GPUs to considerably speed up the computations. In this way, the TreeSHAP method can quickly extract the information from the XGBoost model.

Our research has important implications for policy makers and regulators in their attempts to protect the consumers of artificial intelligence services. While artificial intelligence effectively improves the convenience and accessibility of financial services, it also triggers new risks. Our research suggests that network-based explainable AI models can effectively advance the understanding of the determinants of financial risk and, specifically, of credit risk. The same models can be applied to forecast the probability of default, which is critical for risk monitoring and prevention.

Future research should extend the proposed methodology to other datasets and, in particular, to imbalanced ones, for which defaults are rare, even more so than in the analysed data. The presence of rare events may inflate the predictive accuracy of such events [as shown in Bracke et al. (2019)]. Indeed, Thomas and Crook (1997) suggest dealing with this problem via oversampling, and it would be interesting to see what this implies in the proposed correlation-network Shapley value context.
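
The oversampling remedy suggested by Thomas and Crook (1997) can be sketched in a few lines of plain numpy: replicate minority-class (default) observations with replacement until the training set is balanced. The data and names are illustrative only.

```python
# Minimal oversampling sketch for an imbalanced default dataset: resample the
# minority class with replacement up to the size of the majority class.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = np.array([1] * 5 + [0] * 95)  # 5 defaults out of 100 observations

minority = np.where(y == 1)[0]
majority = np.where(y == 0)[0]
# draw extra minority indices, with replacement, to match the majority count
extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
idx = np.concatenate([majority, minority, extra])

X_bal, y_bal = X[idx], y[idx]
print(y_bal.mean())  # 0.5: the two classes are now balanced
```

Any model fitted on `X_bal, y_bal` then sees defaults as often as non-defaults; more refined schemes (e.g. synthetic oversampling) follow the same idea.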

Bracke, P., Datta, A., Jung, C., & Sen, S. (2019). Machine learning explainability in finance: an application to default risk analysis. Bank of England staff working paper no. 816.

Chen, T., & Guestrin, C. (2016). Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 785–794). ACM.

Clauset, A., Newman, M. E., & Moore, C. (2004). Finding community structure in very large networks. Physical Review E, 70(6), 066111.

Croxson, K., Bracke, P., & Jung, C. (2019). Explaining why the computer says ‘no’. FCA-Insight.

EU. (2016). Regulation (EU) 2016/679—general data protection regulation (GDPR). Official Journal of the European Union .

Friedman, J., Hastie, T., & Tibshirani, R. (2000). Additive logistic regression: A statistical view of boosting (with discussion and a rejoinder by the authors). Annals of Statistics , 28 (2), 337–407.

FSB. (2017). Artificial intelligence and machine learning in financial services—market developments and financial stability implication . Technical report, Financial Stability Board.

Giudici, P. (2018). Financial data science. Statistics and Probability Letters , 136 , 160–164.

Giudici, P., Hadji-Misheva, B., & Spelta, A. (2019). Network based credit risk models. Quality Engineering , 32 (2), 1–13.

Joseph, A. (2019). Shapley regressions: a framework for statistical inference on machine learning models . Research report 784, Bank of England.

Lundberg, S., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in neural information processing systems 30 (pp. 4765–4774). Curran Associates, Inc.

Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., & Nair, B., et al. (2020). From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence, 2(1).

Mantegna, R. N., & Stanley, H. E. (1999). Introduction to econophysics: Correlations and complexity in finance . Cambridge: Cambridge University Press.

Molnar, C. (2019). Interpretable machine learning: A guide for making black box models explainable .

Murdoch, W. J., Singh, C., Kumbier, K., Abbasi-Asl, R., & Yu, B. (2019). Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences , 116 (44), 22071–22080.

Shapley, L. (1953). A value for n-person games. Contributions to the Theory of Games , 28 (2), 307–317.

Thomas, L., & Crook, J. (1997). Credit scoring and its applications. SIAM Monographs .

Acknowledgements

This research has received funding from the European Union’s Horizon 2020 research and innovation program “FIN-TECH: A Financial supervision and Technology compliance training programme” under the Grant Agreement No 825215 (Topic: ICT-35-2018, Type of action: CSA), and from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No.750961. Firamis acknowledges the NVIDIA Inception DACH program for the computational GPU resources. In addition, the Authors thank ModeFinance, a European ECAI, for the data; the partners of the FIN-TECH European project, for useful comments and discussions. The authors also thank the Guest Editor, and two anonymous referees, for the useful comments and suggestions.

Open access funding provided by Università degli Studi di Pavia within the CRUI-CARE Agreement.

Author information

Authors and affiliations.

University of Pavia, Pavia, Italy

Niklas Bussmann & Paolo Giudici

FinNet-Project, Frankfurt, Germany

Dimitri Marinelli

FIRAMIS, Frankfurt, Germany

Jochen Papenbrock

Corresponding author

Correspondence to Paolo Giudici .

Ethics declarations

Conflicts of interest.

Niklas Bussmann, Dimitri Marinelli and Jochen Papenbrock have been, or are, employed by the company FIRAMIS. The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The paper is the result of a close collaboration between all four authors. However, JP is the main reference for use case identification, method and process ideation and conception as well as fast and controllable implementation, whereas PG is the main reference for statistical modelling, literature benchmarking and paper writing.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Bussmann, N., Giudici, P., Marinelli, D. et al. Explainable Machine Learning in Credit Risk Management. Comput Econ 57 , 203–216 (2021). https://doi.org/10.1007/s10614-020-10042-0

Accepted : 17 August 2020

Published : 25 September 2020

Issue Date : January 2021

DOI : https://doi.org/10.1007/s10614-020-10042-0


  • Credit risk management
  • Explainable AI
  • Financial technologies
  • Similarity networks
  • Open access
  • Published: 05 December 2019

Impact of risk management strategies on the credit risk faced by commercial banks of Balochistan

  • Zia Ur Rehman,
  • Noor Muhammad,
  • Bilal Sarwar &
  • Muhammad Asif Raz

Financial Innovation, volume 5, Article number: 44 (2019)


This study aims to identify risk management strategies undertaken by the commercial banks of Balochistan, Pakistan, to mitigate or eliminate credit risk. The findings of the study are significant, as commercial banks will understand the effectiveness of various risk management strategies and may apply them to minimize credit risk. This explanatory study analyses the opinions of the employees of selected commercial banks about which strategies are useful for mitigating credit risk. Quantitative data were collected from 250 employees of commercial banks and analysed using multiple regression. The results identified four areas of impact on credit risk management (CRM): corporate governance exerts the greatest impact, followed by diversification, which plays a significant role, then hedging and, finally, the bank's capital adequacy ratio. This study highlights these four risk management strategies, which are critical for commercial banks seeking to resolve their credit risk.

Introduction

Credit risk causes economic downturn as banks fail due to default risk from clients, which has had a negative impact on the economic development of many nations around the world (Reinhart & Rogoff, 2008). By definition, credit risk describes the risk of default by a borrower who fails to repay the money borrowed. The term hedging signals the protection of a business's investments by limiting its level of risk, for example by purchasing an insurance policy. Diversification is the allocation of financial resources across a variety of different investments and has also long been understood to minimize such risk. The capital adequacy ratio is a measure of the capital a bank maintains to absorb its outlying risks. Intense competition among banks to attract customers has triggered several innovations in banking services (Aruwa & Musa, 2014). Regulators also require banks to improve internal governance practices in order to ensure transparency and ethical standards and to keep customers satisfied with their products and services. Ambiguity in banks' terms and conditions makes it difficult for customers to select financial products appropriate to their needs, whereas clear terms and conditions allow customers to be more satisfied with the bank's performance (Ho & Yusoff, 2009). Customers expect financial institutions to have strong policies that can safeguard their interests and protect them. Therefore, poor understanding of effective credit risk management and of acceptable risk management strategies by bank managers poses a threat to commercial banks' advancement and to customers' interests.

One critical success factor for financial institutions lies in their realization of the importance of credit risk and in devising solid strategies, such as hedging, diversification and managing their capital adequacy ratio, to avoid shortcomings that could lead to operational catastrophe. The credit risks faced by banks have a fundamental impact on performance because defaults by even a few large customers can cause huge problems. The objective of the credit risk management (CRM) process is to maximize the cost-adjusted rate of return of a particular bank by keeping its exposure to credit risk acceptable to its shareholders. Banks have to manage the credit risk associated with the overall portfolio as well as external risks due to macroeconomic factors in the economy, and must also weigh credit risk against other risks. Another specific case of credit risk concerns the settlement of banking transactions: unless both parties settle their payments in a timely manner, the bank suffers an opportunity loss. Corporate governance may also have a large effect on the risk management strategies a bank uses to reduce credit risk. Research suggests that it is imperative that banks engage in prior planning in order to avoid future problems (Andrews, 1980).

The majority of commercial banks provide several services that can help them mitigate or manage risk. For example, hedging has been used to reduce the level of risk involved in transactions by setting specific conditions that allow different parties to exchange goods or services at a flexible date and time (Harrison & Pliska, 1981). The significance of effective risk management strategies has been highlighted over time by many researchers and practitioners to assist banks and other financial institutions. CRM became an obvious necessity for commercial banks, especially after the 2008 global financial crisis, in which it was primarily subprime mortgages that caused a liquidity crisis (Al-Tamimi, 2008). According to Al-Tamimi (2008), ensuring the efficient practice of risk management need not be expensive, but implementation should be timely in order to ensure smooth banking operations.

A financial institution, like a constituent part of any other major economic sector, aims to meet incurred expenses, increase the return on invested capital and maximize the wealth of its shareholders. In pursuit of these objectives, the financial system has to offer financial institutions such as banks effective risk management strategies against credit risk (Hakim & Neaime, 2005).

Problem statement

In 2008, the credit crisis began with the mass issuing of sub-prime mortgages to individuals in the United States; the ensuing defaults caused outwardly rippling problems for financial institutions all across the world. Sub-prime mortgages and other loosely restricted loans can generate remarkable losses, including corporate failure and bankruptcy, for financial institutions (Brown & Moles, 2014). These credit decisions play a pivotal role in firms' profitability. The decision to over-extend credit to high-risk customers may increase short-term profitability for individual banks, though in aggregate this lending behavior became a major challenge to the risk management structures of the economy as a whole. Managing risk is therefore the most important element of a bank's operations. This phenomenon applies equally to banks across the globe, including banks in Pakistan.

Due to the unstable and volatile nature of the political and financial environment in Pakistan, banks are exposed to many types of risk, including risks to foreign exchange rates, liquidity, operations, credit and interest rates. Pakistan's financial institutions are generally risk-averse, especially towards car financing and mortgage loans, where the chances of huge losses are higher (Shafiq & Nasr, 2010). Balochistan is the least developed part of Pakistan, despite having its largest geographical area. Opportunities for small businesses are limited, and the majority of businesses are run informally, with poor documentation. Most commercial banks face problems such as loan document verification and loan processing. The adoption of proper risk management strategies can therefore help understand and mitigate the credit risk faced by the commercial banks of Balochistan.

Research objective

This study aims to identify the different risk management strategies that can influence the management of credit risk by commercial banks. We expect to determine whether these strategies contribute both to the reduction of credit risk and to efficient performance in fulfilling customer needs.

Significance of the study

This study aims to provide guidance for the commercial banks of Balochistan in adopting long-term, performance-improving risk management strategies (Campbell, 2007). The model for the study shows the impact of risk management strategies, including hedging, diversification, the capital adequacy ratio and corporate governance. The research also examines the impact of each risk management strategy individually in order to understand its importance. To the best of the authors' knowledge, there is no study of credit risk management in Balochistan using the described parameters. The findings of this study are intended to contribute positively to society by demonstrating that the banks of Balochistan can develop effective strategies to improve their CRM. Additionally, policy makers can identify and formulate appropriate policies to govern bank behavior in order to minimize risk.

Literature review

Credit risk is considered the chance of loss that occurs when a loan or any other line of credit extended to a particular debtor is not repaid (Campbell, 2007). Since 2008, financial experts around the world have researched and analyzed the primary factors underpinning the credit crisis to identify problematic behavior and effective solutions that can help financial institutions avoid catastrophe in the future. The Basel Committee on Banking Supervision (1999) long ago identified credit risk as a potential threat to the banking sector and developed banking regulations that must be maintained by banks around the world. Owojori, Akintoye, and Adidu (2011) stated that there are legislative inadequacies in the financial system, especially the banking system, as well as a lack of uniform credit information sharing amongst banks. This underscores the fact that banks need to emphasize better risk management strategies, which may protect them in the long run.

Abiola and Olausi (2014) emphasized the establishment of a separate credit unit at banks, staffed with professional credit/loan officers and field officers. This is important as they perform a variety of functions, from project appraisal through credit disbursement and loan monitoring to loan collection. Therefore, a comprehensive human resource policy covering their selection, training, placement, job evaluation, discipline and remuneration needs to be in place to avoid inefficiencies in loan management and credit defaults.

Ho and Yusoff (2009) focused on Malaysian financial institutions and their management of credit risk. The study involved a sample of 15 foreign and domestic financial institutions, from which data were collected through questionnaires. The findings demonstrated that the diversification of loan services improves risk, though it requires the training and commitment of employees to ensure that the financial institution meets the requirements for best-practice lending.

Brown and Wang (2002) studied the challenges faced by Australian financial institutions due to credit risk over the period January 1986 to August 1993. The Australian financial institutions were not able to provide a wide variety of alternatives to their clients, which led to higher risks, as there was a lack of diversification in their services. The research suggested that corporate governance practices allow firms to adopt appropriate rules, policies and procedures to ensure that the rights of all stakeholders are fulfilled. Hedging is used by financial institutions to minimize the risk associated with transactions conducted with bank customers, as it allows the bank to reduce risk by making flexible offers that let customers make their decisions effectively (Dupire, 1992).

The work of Karoui and Huang (1997) indicates that a super-hedging strategy can be implemented to cover surplus downside market risk, as it possesses a duality of both the super-hedging and open-hedging approaches. The prices of options can increase with the volatility of the underlying asset prices: if the price of the underlying financial instrument fluctuates, the price of the options contract may also be affected, as buyers and sellers derive their profit from the price of the financial security (Hobson, 1998).

Several factors are associated with the pricing of securities, as these factors support the financial decisions that investors must make. The loans that a bank provides to borrowers depend strongly on the conditions of the market. Decision-making for the mitigation and management of credit risk is very important for banks (Li, Kou, & Peng, 2016). A highly volatile security market will influence the prices and interest rates of the securities being exchanged in it, and financial markets are affected by the macroeconomic variables that influence those prices. Hedging allows firms and their managers to adopt policies that maximize the value of the company, since clients have a wide array of alternatives that allow them to make decisions effectively. Derivatives such as options, futures, forwards and swaps increase firms' financial stability by giving customers sufficient information to improve their decision making in different circumstances, which enables managers to adopt practices that benefit their organizations. Hedging allows businesses to support a higher debt load due to its flexible nature and ability to minimize risk, which increases the value of the company, as it can meet the needs of more customers with a comparatively lower level of risk (Graham & Rogers, 2002). Similarly, Levitt (2004) explained that hedging enables firms to extend their activities because the risk inherent in providing funds is reduced in such transactions, allowing more flexibility to all involved parties.
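
The risk-reducing mechanics of a forward hedge can be shown with a few lines of arithmetic. This is a generic textbook illustration, not an example from the study; the prices are made up.

```python
# Arithmetic sketch of why a forward contract hedges price risk: a bank
# holding an asset and shorting a forward on it locks in the forward price,
# whatever the future spot price turns out to be. Figures are illustrative.
forward_price = 100.0                          # price agreed today
for spot in (80.0, 100.0, 120.0):              # possible prices at maturity
    asset_value = spot                         # unhedged: fully exposed
    short_forward_payoff = forward_price - spot
    hedged_value = asset_value + short_forward_payoff
    assert hedged_value == forward_price       # locked in regardless of spot
print("hedged value:", forward_price)
```

The unhedged position swings with the spot price, while the hedged position is constant: that constancy is what lets hedged firms carry a higher debt load at the same risk level.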

Banks maintain a particular level of reserved cash for managing day-to-day operations, decided on the basis of the capital adequacy ratio. This enables the bank to maintain a cash balance sufficient to meet the needs of its customers. Managers can use the bank's available cash flow to meet short-term cash requirements, based on the concept of the capital adequacy ratio, which gives certainty to the funds that banks must maintain in order to address unforeseen circumstances. The concept of selective hedging has been used by firms to make investments based on the part of their portfolio that poses the most threat, rather than on the entire portfolio of financial instruments (Stulz, 1996). The emphasis is on utilizing hedging at the right time for the specific customer that a company believes should enter into a contract with flexible terms and conditions. It is a viable option for banks to use hedging to avoid dissatisfying customers who do not meet the firm's loan eligibility criteria. Zhang, Kou and Peng (2019) proposed a consensus model that considers the cost and degree of consensus in the group decision-making process. For a given degree of consensus, a generalized soft-cost consensus model was developed by defining a generalized aggregation operator and a consensus level function. The cost is reviewed from the perspectives of the individual experts and the moderator, and the economic significance of the two soft consensus cost models is assessed. The usability of the model in a real-world context is checked by applying it to a loan consensus scenario based on online data from a lending platform. Group decision making is critical for reconciling different opinions into a synchronized strategy for minimizing the bank's risks with the help of hedging (Zhang, Kou, & Peng, 2019).

Kou, Chao, Peng, Alsaadi and Herrera-Viedma (2019) identified financial systemic risk as a major issue in financial systems and economics. Researchers employ machine learning methods to respond to systemic risks with the help of financial market data; these methods are used to understand the outbreak and contagion of systemic risk and to improve the current regulation of the financial market and industry. The paper surveys research and methodologies for measuring financial systemic risk using big data analysis, sentiment analysis and network analysis. Machine learning methods are used along with systematic financial risk management to control the overall risks faced by banks in relation to hedging their financial instruments (Kou, Chao, Peng, Alsaadi, & Herrera-Viedma, 2019).

The provision of financial assistance to customers who require funds for business activity can prove profitable for the bank (Datta, Rajagopalan, & Rasheed, 1991). If the principal and interest of a loan are repaid in a timely manner, banks can ensure the smooth flow of their operations, and economic activity in society improves, as the standard of living of people also rises with such financial assistance from commercial banks (Keats, 1990). As a bank enters into such contracts with several customers, the level of risk it incurs increases, and management likewise becomes more complex with a more diverse group of customers (Kargi, 2011). Non-performing loans (NPLs) represent credit that a bank believes is causing a loss; they include loan defaults and are typically categorized by their expectation of recovery as "standard," "doubtful" or "lost" (Kolapo, Ayeni, & Oke, 2012). The lost category, reflecting the bank's inability to recover particular products, restricts a bank from reaching its targets and thus causes it to fail in attaining the profitability objectives that have been set. A large amount of high-risk debt is often difficult for banks to manage unless managers have adopted appropriate strategies for mitigating the risk while enhancing their financial performance. The existence of NPLs prompted the world's central banks to enter into the 1988 Basel Accord, known as Basel I (later superseded in 2004 by Basel II), which required banks to maintain a particular amount of capital in order to meet their operational needs (Van Greuning & Brajovic Bratanovic, 2009). This on-hand capital requirement, also called the capital adequacy ratio, is beneficial as it allows banks to more easily absorb sudden financial losses (Keats, 1990).
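
The capital adequacy ratio mentioned above is, in essence, eligible capital divided by risk-weighted assets. The following sketch makes the arithmetic concrete; the risk weights and amounts are toy figures, not actual Basel weights for any jurisdiction.

```python
# Toy capital adequacy ratio (CAR) calculation:
#   CAR = eligible capital / risk-weighted assets (RWA)
# All figures (in millions) and risk weights are illustrative only.
capital = 120.0  # tier 1 + tier 2 capital

# (exposure, risk weight) pairs for a toy loan book
exposures = [(500.0, 1.00),   # corporate loans
             (300.0, 0.50),   # residential mortgages
             (200.0, 0.00)]   # government bonds

rwa = sum(amount * weight for amount, weight in exposures)
car = capital / rwa
print(f"RWA = {rwa}, CAR = {car:.1%}")
```

A regulator would then compare `car` against a minimum threshold (8% under Basel I); the riskier the loan book, the larger the RWA denominator and the more capital the bank must hold.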

Kithinji (2010) provides specific evidence that the management of credit risk does not influence the profitability of banks in Kenya. In contrast, Kargi's (2011) study of Nigerian banks from 2004 to 2008 revealed a healthy relationship between appropriate CRM and bank performance. Poudel (2012) emphasized the significant role played by CRM in the improvement of the financial performance of banks in Nepal between 2001 and 2011: the strict requirement to hold higher capital, around 14.3% of the cash balance, as a reserve was found to have resulted in better bank performance by producing more profit.

Heffernan (1996) stated that CRM is crucial to sound bank financial performance. A bank's inability to recoup its outstanding loans reduces its ability to engage in other profitable transactions: a loss of principal as well as interest (including its time value) also means lost opportunities to expand and pursue other profitable operations (Berríos, 2013).

Banks that avoid risk management face several challenges, including to their own survival in the current highly competitive financial environment. To compete successfully with other commercial financial institutions, banks rely on a diversified range of products and financial services to improve portfolio performance and attract more customers. Diversified services allow customers to select the financial assistance most appropriate to their individual needs. Along with diversification, banks need to manage the credit risk involved where funds are lent for various customer needs, such as car loans, house loans, and starting a new business or expanding an ongoing one (Kou, Ergu, Lin, & Chen, 2016). Effective behavior-monitoring models are also important to ensure that bank employees minimize operational risk by giving customers full information about the financial instruments and about the restrictions the bank imposes to protect its interests. Chao, Kou, Peng and Alsaadi (2019) studied a new form of money laundering that is trade-based, operating under the signboard of international trade. It appears alongside capital movements and is closely tied to turmoil in the overall financial market, and it is difficult to prevent because it carries a plausible trade characterization. Their paper develops monitoring methods that accurately recognize and supervise trade-based money laundering using multi-class, knowledge-driven classification algorithms linked to micro- and macro-prudential regulation; based on an empirical study from China, the application is reviewed and its effectiveness assessed as a way to improve the efficiency of financial-market management (Chao, Kou, Peng, & Alsaadi, 2019).

Selecting the most eligible customers for a loan is also essential to managing credit risk: a bank can screen a list of customers to identify those with a higher probability of repaying within the specified time, according to the terms and conditions of the contract. Hentschel and Kothari (1995) emphasized that the use of derivatives significantly affects a financial institution's leverage, and a vast majority of companies surveyed were using derivatives to reduce their risk (Kou, Peng, & Wang, 2014). Dolde (1993) highlighted that because banks are vulnerable to various risks, they have undertaken specific precautionary measures such as training their employees, developing better credit policies, and reviewing the credit ratings of customers applying for loans.

Diversification is adopted by corporations to increase shareholder returns and minimize risk. Decision-making criteria are improved by using classifiers whose algorithms help resolve such problems (Kou, Lu, Peng, & Shi, 2012). Rumelt (1974) revealed that only around 14% of firms on the Fortune 500 list were operating as single-business organizations in 1974, whereas 86% operated in diversified product markets. This shows a considerable inclination of the business sector toward diversification rather than a single trade. Much recent research has focused on the activities of companies, and most of it finds a rise in the prevalence of diversified firms (Datta et al., 1991).

Research hypotheses

The first hypothesis assesses the role of hedging in reducing a bank's credit risk. It is based on a model presented by Felix (2008), in which the risk management strategies of hedging, the capital adequacy ratio and diversification may be used to explain the credit risk a bank faces. Thus, our first hypothesis is as follows:

H1: Hedging will minimize the credit risk faced by the commercial banks of Balochistan.

The second risk management strategy is diversification, which requires banks to provide a wide range of financial services with flexible terms and to extend credit to a wide range of customers instead of a few, in order to reduce risk (Fredrick, 2013). Banks can apply the concept of diversification by creating a wide customer pool for loans, instead of providing large loans to a few customers, which inherently increases risk (Hobson, 1998). Therefore,

H2: Diversification will minimize the credit risk of the commercial banks of Balochistan.

The third hypothesis concerns the management strategy that requires banks to maintain a particular amount of capital (Ho & Yusoff, 2009). The capital adequacy ratio is critical for placing banks in a better position to manage unexpected risks; because the capital maintained in a bank has consequences for its overall credit risk, it may be hypothesized that:

H3: The capital adequacy ratio will minimize the credit risk of the commercial banks of Balochistan.

The fourth hypothesis considers the role played by corporate governance in minimizing credit risk. Corporate governance assumes that the organization or corporation should adopt all practices that ensure accountability to the stakeholders (Shafiq & Nasr, 2010 ). Therefore,

H4: Corporate governance will minimize the credit risk of the commercial banks of Balochistan.

Methodology

This study adopts an explanatory research design aimed at collecting authentic, credible and unbiased data. The data were collected from employees of commercial banks located in the province of Balochistan, Pakistan, and all ethical considerations were observed during the research process. The questionnaire developed for the collection of information was prepared to incorporate all potential factors: diversification, hedging, the capital adequacy ratio, corporate governance and credit risk. The purpose of the research was clearly explained in the questionnaire as it was shared with the respondents.

The participants were informed about the research objective and assured that the information they provided would be kept confidential. This step was designed to remove bias and ensure that the participants could share their views without reservation, a process that is important for authentic results and reliable information (Levitt, 2004).

The sample for this study comprised 250 employees of commercial banks in Balochistan. Several large-scale commercial banks operate in Pakistan, with branches working across the entire country. The banks approached for this study included Habib Bank Limited, Standard Chartered Bank, United Bank Limited, Summit Bank, Faisal Bank, Askari Bank and Bank Al-Habib.

The questionnaire was adapted from a global survey previously conducted by the World Bank, which analyzed the work that has been done on managing credit risk in several countries in different parts of the world. Our questionnaire used the framework of this valuable research tool, with changes specific to the localized context of Balochistan.

The information collected from the participants was analyzed to identify the trends and credit risk management practices of the commercial banks operating in Balochistan. The theoretical framework of the study follows.

[Figure a: Theoretical framework of the study]

The paper examines the relationships between credit risk and the risk management strategies of diversification, hedging, the capital adequacy ratio and corporate governance.

Results & findings

The questionnaire's reliability was checked with Cronbach's alpha (Table 1), which shows the internal consistency of the instrument: across the 31 questions asked, alpha was 0.80, indicating that the data are reliable. This is essential because it shows that the results and findings of the study are dependable and can be generalized to the population (Hungerford, 2005).
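As an illustration of how a reliability figure of this kind is computed, the following is a minimal sketch of Cronbach's alpha on hypothetical Likert-scale responses (the paper's item-level data are not reproduced here, and the sample uses three items rather than 31):

```python
# Minimal sketch of Cronbach's alpha: the ratio of item variances to the
# variance of the summed scale, rescaled by the number of items.
from statistics import variance

def cronbach_alpha(items):
    """items: one list of respondent scores per questionnaire item."""
    k = len(items)
    respondents = range(len(items[0]))
    # Each respondent's total score across all items:
    totals = [sum(item[r] for item in items) for r in respondents]
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

# Three hypothetical Likert-scale items answered by five respondents:
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(items), 2))  # 0.86
```

Values around 0.7 or above are conventionally read as acceptable internal consistency, which is why the reported 0.80 supports generalizing the findings.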

The correlation table shows the relationships between the variables in the study. The dependent variable, credit risk, was reviewed against the independent variables: corporate governance, hedging, diversification and the capital adequacy ratio. Correlation matters for the later analysis in two ways: there should be some relationship between the dependent and independent variables, and the correlations among the independent variables should not be so high as to create a multicollinearity problem.

Table 2 shows the results of the correlation test between the independent variables and the dependent variable. Before running the regression analysis, its basic assumptions were checked. Data normality was checked through skewness and kurtosis; for all variables these values were within ±2. Linearity was checked through correlation analysis, and all variables were shown to have a significant relationship with each other. Homogeneity of variance was checked through a scatter plot, which showed that the variance across all variables was the same. No autocorrelation was found, as the Durbin-Watson statistic was 2, indicating no correlation among residuals (Antonakis, Bendahan, Jacquart, & Lalive, 2014). The variance inflation factor was VIF < 5 for each predictor, indicating no problematic relationship among the four independent variables. Regression was then used to determine the influence of each variable on credit risk; the results can be seen in Table 3.
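Two of these diagnostics, the Durbin-Watson statistic and the variance inflation factor, can be sketched directly. The residuals and the auxiliary R² below are hypothetical, not the study's data:

```python
# Hedged sketch of two regression diagnostics described above,
# computed on invented inputs.

def durbin_watson(residuals):
    """Sum of squared successive differences over the sum of squared
    residuals; values near 2 suggest no autocorrelation."""
    num = sum((residuals[i] - residuals[i - 1]) ** 2
              for i in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

def vif(r_squared):
    """Variance inflation factor from the R^2 of regressing one predictor
    on the others; VIF < 5 is the threshold used in the study."""
    return 1.0 / (1.0 - r_squared)

residuals = [0.5, -0.4, 0.3, -0.2, 0.1, -0.3]  # hypothetical residual series
print(round(durbin_watson(residuals), 2))       # 2.81
print(round(vif(0.30), 2))                      # 1.43, well under the 5 cutoff
```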

Credit risk can be influenced by many factors, but the four independent variables (hedging, diversification, the capital adequacy ratio and corporate governance) together explain around 36% of the variation in the credit risk faced by the commercial banks. The adjusted R² was also examined, as it is a better measure for a focused analysis of a bank's performance.

Table 4 shows the assessment of the overall model's goodness of fit; the model is highly significant at p < 0.05. The analysis of variance across the small samples of the data indicates that the overall information is consistent.

The standardized coefficients in Table 5 show the rate of change in the commercial banks' credit risk attributable to each variable. This is critical information: the variable with the higher coefficient has more influence on the level of credit risk, so commercial banks should emphasize it more in pursuit of better performance. The regression analysis highlights that all four independent variables have an impact on credit risk.

The results reveal that corporate governance had the greatest impact on credit risk (standardized beta of 0.288). In other words, this CRM strategy appears to be the most beneficial for commercial banks to undertake. Next is diversification (0.263), followed by hedging (0.250) and, finally, the capital adequacy ratio (0.040). The results are significant in showing that these variables have an impact on credit risk. The constant was calculated at 1.765, and the error term in the equation is 0.237.
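Under the assumption that Table 5 implies a simple linear model, the reported coefficients can be combined as follows. This is an illustrative sketch only: the paper reports a constant alongside standardized betas, the error term is omitted, and the predictor z-scores below are invented:

```python
# Sketch of the fitted model implied by Table 5, using the reported
# standardized coefficients. Inputs must be z-scores; values are hypothetical.

BETAS = {"hedging": 0.250, "diversification": 0.263,
         "capital_adequacy": 0.040, "corporate_governance": 0.288}
CONSTANT = 1.765  # reported constant; error term (0.237) omitted here

def predicted_credit_risk(z):
    """Linear combination of standardized predictors."""
    return CONSTANT + sum(BETAS[k] * z[k] for k in BETAS)

# A bank one standard deviation above the mean on every strategy:
print(round(predicted_credit_risk({k: 1.0 for k in BETAS}), 3))  # 2.606
```

The ordering of the betas, not the predicted value itself, is what drives the paper's recommendation to prioritize corporate governance.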

Recommendations

The banks in Balochistan would benefit from adopting sound strategies to improve control over credit risk. CRM strategies such as diversification, hedging, corporate governance and the capital adequacy ratio have all been cited in extant research as crucial to success in this regard; indeed, many problems arising from credit risk can be resolved by implementing some combination of these strategies. The findings can likewise help the government of Balochistan ensure that commercial banks take appropriate risk management measures to keep them from failures such as bankruptcy (Greuning & Bratanovic, 2009). Society depends on the smooth operation of the banking sector, so individual (and aggregate) bank performance contributes to the development and improved welfare of the economy. Banks should therefore employ effective inspection to check and safeguard their resources, and give employees effective training and refresher courses in risk asset management, risk control and credit utilization to ensure proper usage and performance.

Several banks have failed in the past because they were unable to control their credit risk. Based on this study, banks are recommended first to diversify their products and services, which is critical because it allows them to serve customers with many offerings. After diversification, the findings indicate that an emphasis on corporate governance policies is most important. Hedging and the capital adequacy ratio are also important strategies that banks can examine and optimize: hedging is useful because entering into flexible contracts helps reduce risk, and attention to the capital adequacy ratio will allow the banks of Balochistan to strike a proper balance between the capital they maintain and the needs of their investors. Further research on the topic is recommended so that effective strategies for managing other bank risks can be identified. The success and further progress of these banks depend on the smooth implementation of risk management strategies and activities, which have been shown to have a very significant positive impact on the ability of the banks of Balochistan to control credit risk.

Availability of data and materials

The data for the research paper are available upon request.

Basel is a city in Switzerland where the Basel Committee on Banking Supervision (BCBS), comprising 45 members from 28 jurisdictions (central banks and banking supervisory authorities), carries out its responsibility for banking regulation.

Hedges are flexible contracts that allow customers to agree today to buy a particular product at a future date at a price fixed now. They allow customers and banks to manage a transaction by locking in contracts at the desired price.
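As a toy illustration of the locking-in just described (all prices are invented):

```python
# A client who hedges pays the agreed forward price no matter where the
# spot market ends up; an unhedged client pays whatever the spot price is.

FORWARD_PRICE = 100.0  # price locked in the contract (hypothetical)

def cost_with_hedge(spot_at_maturity):
    return FORWARD_PRICE  # fixed regardless of the later spot price

def cost_without_hedge(spot_at_maturity):
    return spot_at_maturity

# If the spot price rises to 120 by maturity:
print(cost_with_hedge(120.0), cost_without_hedge(120.0))  # 100.0 120.0
```

The hedged party gives up the chance of benefiting from a price fall in exchange for certainty, which is the risk-reduction property the paper relies on.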

A super-hedging strategy allows users to hedge their positions with a self-financing trading plan: a low price is paid for a portfolio that ensures its worth will be equal to or higher than a target value at a future date.

Acknowledgements

We are grateful to all the reviewers who have shared their valuable comments and suggestions for the research paper. The Editorial Board of Financial Innovation has been extremely kind in their editorial efforts.

There was no funding required for the completion of the research paper.

Author information

Authors and affiliations

Balochistan University of Information Technology Engineering & Management Sciences, Quetta, Pakistan

Zia Ur Rehman, Noor Muhammad, Bilal Sarwar & Muhammad Asif Raz


Contributions

NM is the corresponding author and proposed the idea for the paper; he also reviewed the theoretical framework and empirical analysis. ZR wrote the manuscript and collected the data. BS reviewed the methodology and the literature. MAR gave conceptual advice and edited the paper. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Noor Muhammad .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article.

Rehman, Z.U., Muhammad, N., Sarwar, B. et al. Impact of risk management strategies on the credit risk faced by commercial banks of Balochistan. Financ Innov 5 , 44 (2019). https://doi.org/10.1186/s40854-019-0159-8


Received : 06 February 2019

Accepted : 08 November 2019

Published : 05 December 2019


  • Credit risk
  • Risk management strategies
  • Financial risk


Effects of Income on Infant Health: Evidence from the Expanded Child Tax Credit and Pandemic Stimulus Checks

During the COVID-19 pandemic, the federal government issued stimulus checks and expanded the child tax credit. These pandemic payments varied by marital status and the number of children in the household and were substantial, with some families receiving several thousand dollars. We exploit this plausibly exogenous variation in income to obtain estimates of the effect of income on infant health. We measure the total amount of pandemic payments received during pregnancy, or the year before birth, and examine how this additional income affects birthweight, the incidence of low birth weight, gestational age and fetal growth. Data are from birth certificates, and analyses are conducted separately by maternal marital status and education (less than high school, or high school) to isolate only the variation in pandemic payments due to differences in the number of children (parity). Estimates indicate that these pandemic cash payments had no statistically significant, clinically meaningful, or economically meaningful effects on infant health. Overall, the findings suggest that income transfers during pregnancy will have little effect on socioeconomic disparities in infant health.

The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research.

Robert Kaestner has nothing to disclose.



Non-Credit Certificate Program in Clinical Trials Management and Regulatory Compliance

Accelerate your career in the field of clinical research with hands-on training in every step of the clinical trials process.


At a Glance

The online Clinical Trials Management and Regulatory Compliance certificate program, designed and delivered by experts in clinical research, gives you the skills and knowledge you need to jumpstart your career in the growing clinical trials field. 

Become a Better Clinical Researcher 

In six core courses built around real-world clinical trials, you will master the procedure cycle and administration of the entire clinical trials process, learn to navigate stakeholder interests, and implement up-to-the-minute regulatory compliance practices and ethical standards. You will finish the program with the ability to initiate clinical research studies, apply monitoring methods, and write exemplary documents and reports.

Designed For

Designed for early or mid-career professionals who want to work in regulatory compliance, medical writing, site management, or data analysis in the pharmaceutical industry, at a clinical research organization, or with an academic institution.


You Value Your Career, We Value Your Time.

Staying up-to-date on your career skills doesn’t have to take a lot of time. Our Clinical Trials certificate program is competitively priced and takes as little as nine months.

Learn with clinical research experts

Scientists and pharmaceutical industry executives, consultants and project managers, our instructors know every angle of the clinical trials process. In live classes informed by their considerable professional experience, Clinical Trials certificate instructors give feedback, support, and expert insight into the field.

The UChicago edge

Our professional courses take innovative learning approaches that uphold the University of Chicago’s distinct brand of academic excellence while driving career advancement.

  • Synchronous class sessions engage students with instructors and peers. 
  • Content-specific and networking webinars foster extracurricular training and allow students to make valuable professional connections. 
  • Professional development services include resume review, access to exclusive job listings, and more.
  • Program administrators support students throughout the certificate and beyond, from individual advising sessions to alumni services.

Career benefits

The global clinical trials market is projected to grow to $84.43 billion by 2030. The field is thriving. Key drivers such as the globalization of clinical trials and new, personalized treatments continue to fuel market growth, while demand for skilled professionals widens the job market: the need for clinical trials professionals will continue to outpace that for similar roles. To learn how our Clinical Trials Management certificate can boost your career, please visit our career benefits page.

The estimated total pay for a Clinical Research Associate is $71,868 per year, according to Glassdoor .

Triple your medical writing skills

The University of Chicago Professional Education offers certificate programs in Clinical Trials, Medical Writing and Editing , and now Regulatory Writing . Our programs feature a blended learning model comprising live synchronous sessions, real-world case studies, and writing exercises that work to elevate your medical writing skills. These part-time programs are tailored to develop your skill set so you can apply it to your career immediately.

Explore how you can become an expert medical writer in three leading areas:

  • Clinical Trials Management and Regulatory Compliance :   Learn to use real-world clinical trials to reinforce your foundational knowledge and boost your career in clinical research.
  • Medical Writing and Editing : This program will provide the foundation for mastering the fundamentals and best practices of medical writing, editing, and communication.
  • Regulatory Writing : Building on the strengths of our Medical Writing and Editing program, Regulatory Writing courses will provide students with high-demand, professionally valuable skills to write submissions to the FDA and other regulatory bodies.


Clinical trials management offers a real spectrum of career paths. Students who complete the certificate have options spanning the pharmaceutical industry, clinical research organizations, and academic institutions, where they can consider careers in medical writing, site management, regulatory, and more.

Offered by The University of Chicago's Professional Education

Ready to Take Your Next Step?



Desalination system could produce freshwater that is cheaper than tap water


[Image: A desalination prototype, a clear rectangular box with water, tubes and a square spring, set up in the lab]


Engineers at MIT and in China are aiming to turn seawater into drinking water with a completely passive device that is inspired by the ocean, and powered by the sun.

In a paper appearing today in the journal Joule, the team outlines the design for a new solar desalination system that takes in saltwater and heats it with natural sunlight.

The configuration of the device allows water to circulate in swirling eddies, in a manner similar to the much larger “thermohaline” circulation of the ocean. This circulation, combined with the sun’s heat, drives water to evaporate, leaving salt behind. The resulting water vapor can then be condensed and collected as pure, drinkable water. In the meantime, the leftover salt continues to circulate through and out of the device, rather than accumulating and clogging the system.

The new system has a higher water-production rate and a higher salt-rejection rate than all other passive solar desalination concepts currently being tested.

The researchers estimate that if the system is scaled up to the size of a small suitcase, it could produce about 4 to 6 liters of drinking water per hour and last several years before requiring replacement parts. At this scale and performance, the system could produce drinking water at a rate and price that is cheaper than tap water.

“For the first time, it is possible for water, produced by sunlight, to be even cheaper than tap water,” says Lenan Zhang, a research scientist in MIT’s Device Research Laboratory.

The team envisions a scaled-up device could passively produce enough drinking water to meet the daily requirements of a small family. The system could also supply off-grid, coastal communities where seawater is easily accessible.

Zhang’s study co-authors include MIT graduate student Yang Zhong and Evelyn Wang, the Ford Professor of Engineering, along with Jintong Gao, Jinfang You, Zhanyu Ye, Ruzhu Wang, and Zhenyuan Xu of Shanghai Jiao Tong University in China.

A powerful convection

The team’s new system improves on their previous design — a similar concept of multiple layers, called stages. Each stage contained an evaporator and a condenser that used heat from the sun to passively separate salt from incoming water. That design, which the team tested on the roof of an MIT building, efficiently converted the sun’s energy to evaporate water, which was then condensed into drinkable water. But the salt that was left over quickly accumulated as crystals that clogged the system after a few days. In a real-world setting, a user would have to replace stages on a frequent basis, which would significantly increase the system’s overall cost.

In a follow-up effort, they devised a solution with a similar layered configuration, this time with an added feature that helped to circulate the incoming water as well as any leftover salt. While this design prevented salt from settling and accumulating on the device, it desalinated water at a relatively low rate.

In the latest iteration, the team believes it has landed on a design that achieves both a high water-production rate, and high salt rejection, meaning that the system can quickly and reliably produce drinking water for an extended period. The key to their new design is a combination of their two previous concepts: a multistage system of evaporators and condensers, that is also configured to boost the circulation of water — and salt — within each stage.

“We introduce now an even more powerful convection, that is similar to what we typically see in the ocean, at kilometer-long scales,” Xu says.

The small circulations generated in the team’s new system are similar to the “thermohaline” convection in the ocean — a phenomenon that drives the movement of water around the world, based on differences in sea temperature (“thermo”) and salinity (“haline”).

“When seawater is exposed to air, sunlight drives water to evaporate. Once water leaves the surface, salt remains. And the higher the salt concentration, the denser the liquid, and this heavier water wants to flow downward,” Zhang explains. “By mimicking this kilometer-wide phenomena in small box, we can take advantage of this feature to reject salt.”

Tapping out

The heart of the team’s new design is a single stage that resembles a thin box, topped with a dark material that efficiently absorbs the heat of the sun. Inside, the box is separated into a top and bottom section. Water can flow through the top half, where the ceiling is lined with an evaporator layer that uses the sun’s heat to warm up and evaporate any water in direct contact. The water vapor is then funneled to the bottom half of the box, where a condensing layer air-cools the vapor into salt-free, drinkable liquid. The researchers set the entire box at a tilt within a larger, empty vessel, then attached a tube from the top half of the box down through the bottom of the vessel, and floated the vessel in saltwater.

In this configuration, water can naturally push up through the tube and into the box, where the tilt of the box, combined with the thermal energy from the sun, induces the water to swirl as it flows through. The small eddies help to bring water in contact with the upper evaporating layer while keeping salt circulating, rather than settling and clogging.

The team built several prototypes, with one, three, and 10 stages, and tested their performance in water of varying salinity, including natural seawater and water that was seven times saltier.

From these tests, the researchers calculated that if each stage were scaled up to a square meter, it would produce up to 5 liters of drinking water per hour, and that the system could desalinate water without accumulating salt for several years. Given this extended lifetime, and the fact that the system is entirely passive, requiring no electricity to run, the team estimates that the overall cost of running the system would be cheaper than what it costs to produce tap water in the United States.
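The scaling claim can be checked with back-of-the-envelope arithmetic. The per-stage rate comes from the paragraph above; the number of daily sunlight hours is an assumption, not a figure from the study:

```python
# Rough check of the researchers' scaling estimate. The 5 L/h per
# square-meter stage is reported above; the sun-hours figure is assumed.

RATE_L_PER_H_PER_M2 = 5.0   # reported upper-bound production rate per stage
SUN_HOURS_PER_DAY = 6.0     # assumed hours of effective sunlight per day

def daily_output_liters(area_m2):
    """Drinking water produced per day by a stage of the given area."""
    return RATE_L_PER_H_PER_M2 * area_m2 * SUN_HOURS_PER_DAY

# A one-square-meter stage (roughly the suitcase-scale device):
print(daily_output_liters(1.0))  # 30.0 liters per day
```

Thirty liters a day is consistent with the team's claim that a suitcase-sized unit could cover a small family's daily drinking-water needs.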

“We show that this device is capable of achieving a long lifetime,” Zhong says. “That means that, for the first time, it is possible for drinking water produced by sunlight to be cheaper than tap water. This opens up the possibility for solar desalination to address real-world problems.”

“This is a very innovative approach that effectively mitigates key challenges in the field of desalination,” says Guihua Yu, who develops sustainable water and energy storage systems at the University of Texas at Austin, and was not involved in the research. “The design is particularly beneficial for regions struggling with high-salinity water. Its modular design makes it highly suitable for household water production, allowing for scalability and adaptability to meet individual needs.”

Funding for the research at Shanghai Jiao Tong University was supported by the Natural Science Foundation of China.


Press Mentions

TIME

A number of MIT spinouts and research projects – including the MOXIE instrument that successfully generated oxygen on Mars, a new solar-powered desalination system and MIT spinout SurgiBox – were featured on TIME’s Best Inventions of 2023 list.

Insider reporter Katie Hawkinson explores how MIT researchers developed a new solar-powered desalination system that can remove the salt from seawater for less than the cost of U.S. tap water. Creating a device that relies on solar power “eliminates a major financial barrier, especially for low-income countries experiencing water scarcity,” Hawkinson explains.

The Hill reporter Sharon Udasin writes that MIT researchers have developed a new solar-powered desalination device that “could last several years and generate water at a rate and price that is less expensive than tap water.” The researchers estimated that “if their model was scaled up to the size of a small suitcase, it could produce about 4 to 6 liters of drinking water per hour,” writes Udasin.

The Daily Beast

MIT researchers have developed a new desalination system that uses solar energy to convert seawater into drinkable water, reports Tony Ho Tran for the Daily Beast. The device could make it possible to “make freshwater that’s even more affordable than the water coming from Americans’ kitchen faucets.”




Published on 12.4.2024 in Vol 26 (2024)

Application of AI in Multilevel Pain Assessment Using Facial Images: Systematic Review and Meta-Analysis

Authors of this article:


  • Jian Huo 1*, MSc
  • Yan Yu 2*, MMS
  • Wei Lin 3, MMS
  • Anmin Hu 2,3,4, MMS
  • Chaoran Wu 2, MD, PhD

1 Boston Intelligent Medical Research Center, Shenzhen United Scheme Technology Company Limited, Boston, MA, United States

2 Department of Anesthesia, Shenzhen People's Hospital, The First Affiliated Hospital of Southern University of Science and Technology, Shenzhen Key Medical Discipline, Shenzhen, China

3 Shenzhen United Scheme Technology Company Limited, Shenzhen, China

4 The Second Clinical Medical College, Jinan University, Shenzhen, China

*These authors contributed equally.

Corresponding Author:

Chaoran Wu, MD, PhD

Department of Anesthesia

Shenzhen People's Hospital, The First Affiliated Hospital of Southern University of Science and Technology

Shenzhen Key Medical Discipline

No 1017, Dongmen North Road

Shenzhen, 518020

Phone: 86 18100282848

Email: [email protected]

Background: The continuous monitoring and recording of patients’ pain status is a major problem in current research on postoperative pain management. Among the many original and review articles on different approaches to pain assessment, researchers have investigated how computer vision (CV) can help by capturing facial expressions. However, there is a lack of proper comparison of results between studies to identify current research gaps.

Objective: The purpose of this systematic review and meta-analysis was to investigate the diagnostic performance of artificial intelligence models for multilevel pain assessment from facial images.

Methods: The PubMed, Embase, IEEE, Web of Science, and Cochrane Library databases were searched for related publications before September 30, 2023. Studies that used facial images alone to estimate multiple pain values were included in the systematic review. A study quality assessment was conducted using the Quality Assessment of Diagnostic Accuracy Studies, 2nd edition tool. The performance of these studies was assessed by metrics including sensitivity, specificity, log diagnostic odds ratio (LDOR), and area under the curve (AUC). The intermodel variability was assessed and presented by forest plots.

Results: A total of 45 reports were included in the systematic review. The reported test accuracies ranged from 0.27 to 0.99, and the other metrics, including the mean squared error (MSE), mean absolute error (MAE), intraclass correlation coefficient (ICC), and Pearson correlation coefficient (PCC), ranged from 0.31 to 4.61, 0.24 to 2.8, 0.19 to 0.83, and 0.48 to 0.92, respectively. In total, 6 studies were included in the meta-analysis. Their combined sensitivity was 98% (95% CI 96%-99%), specificity was 98% (95% CI 97%-99%), LDOR was 7.99 (95% CI 6.73-9.31), and AUC was 0.99 (95% CI 0.99-1). The subgroup analysis showed that the diagnostic performance was acceptable, although imbalanced data were still emphasized as a major problem. All studies had at least one domain with a high risk of bias, and for 20% (9/45) of studies, there were no applicability concerns.

Conclusions: This review summarizes recent evidence on automatic multilevel pain estimation from facial expressions and compares the test accuracy of results in a meta-analysis. Current CV algorithms achieved promising performance for pain estimation from facial images. Weaknesses in current studies were also identified, suggesting that larger databases and metrics evaluating multiclass classification performance could improve future studies.

Trial Registration: PROSPERO CRD42023418181; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=418181

Introduction

The definition of pain was revised to “an unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage” in 2020 [ 1 ]. Acute postoperative pain management is important, as pain intensity and duration are critical influencing factors for the transition of acute pain to chronic postsurgical pain [ 2 ]. To avoid the development of chronic pain, guidelines were promoted and discussed to ensure safe and adequate pain relief for patients, and clinicians were recommended to use a validated pain assessment tool to track patients’ responses [ 3 ]. However, these tools, to some extent, depend on communication between physicians and patients, and continuous data cannot be provided [ 4 ]. The continuous assessment and recording of patient pain intensity will not only reduce caregiver burden but also provide data for chronic pain research. Therefore, automatic and accurate pain measurements are necessary.

Researchers have proposed different approaches to measuring pain intensity. Physiological signals, for example, electroencephalography and electromyography, have been used to estimate pain [ 5 - 7 ]. However, it was reported that current pain assessment from physiological signals has difficulties isolating stress and pain with machine learning techniques, as they share conceptual and physiological similarities [ 8 ]. Recent studies have also investigated pain assessment tools for certain patient subgroups. For example, people with deafness or an intellectual disability may not be able to communicate well with nurses, and an objective pain evaluation would be a better option [ 9 , 10 ]. Measuring pain intensity from patient behaviors, such as facial expressions, is also promising for most patients [ 4 ]. As the most comfortable and convenient method, computer vision techniques require no attachments to patients and can monitor multiple participants using 1 device [ 4 ]. However, pain intensity, which is important for pain research, is often not reported.

With the growing trend of assessing pain intensity using artificial intelligence (AI), it is necessary to summarize current publications to determine the strengths and gaps of current studies. Existing research has reviewed machine learning applications for acute postoperative pain prediction, continuous pain detection, and pain intensity estimation [ 10 - 14 ]. Input modalities, including facial recordings and physiological signals such as electroencephalography and electromyography, have also been reviewed [ 5 , 8 ]. There have also been reviews focusing on deep learning approaches [ 11 ], and AI has been applied to pain evaluation in children and infants as well [ 15 , 16 ]. However, no review has focused on multilevel pain intensity measurement, and no comparison of test accuracy results across studies has been made.

Current AI applications in pain research can be categorized into 3 types: pain assessment, pain prediction and decision support, and pain self-management [ 14 ]. We consider accurate and automatic pain assessment to be the most important area and the foundation of future pain research. In this study, we performed a systematic review and meta-analysis to assess the diagnostic performance of current publications for multilevel pain evaluation.

This study was registered with PROSPERO (International Prospective Register of Systematic Reviews; CRD42023418181) and carried out strictly following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [ 17 ].

Study Eligibility

Studies that reported AI techniques for multiclass pain intensity classification were eligible. Records including nonhuman or infant participants or 2-class pain detection were excluded. Only studies using facial images of the test participants were accepted. Studies whose reference standard was a clinically administered pain assessment tool, such as the visual analog scale (VAS) or numerical rating scale (NRS), or another pain intensity indicator were excluded from the meta-analysis. Textbox 1 presents the eligibility criteria.

Study characteristics and inclusion criteria

  • Participants: children and adults aged 12 months or older
  • Setting: no restrictions
  • Index test: artificial intelligence models that measure pain intensity from facial images
  • Reference standard: no restrictions for systematic review; Prkachin and Solomon pain intensity score for meta-analysis
  • Study design: no need to specify

Study characteristics and exclusion criteria

  • Participants: infants aged 12 months or younger and animal subjects
  • Setting: no need to specify
  • Index test: studies that use other information such as physiological signals
  • Reference standard: other pain evaluation tools, e.g., NRS, VAS, were excluded from meta-analysis
  • Study design: reviews

Report characteristics and inclusion criteria

  • Year: published between January 1, 2012, and September 30, 2023
  • Language: English only
  • Publication status: published
  • Test accuracy metrics: no restrictions for systematic reviews; studies that reported contingency tables were included for meta-analysis

Report characteristics and exclusion criteria

  • Year: no need to specify
  • Language: no need to specify
  • Publication status: preprints not accepted
  • Test accuracy metrics: studies that reported insufficient metrics were excluded from meta-analysis

Search Strategy

In this systematic review, databases including PubMed, Embase, IEEE, Web of Science, and the Cochrane Library were searched until December 2022, and no restrictions were applied. Keywords were “artificial intelligence” AND “pain recognition.” Multimedia Appendix 1 shows the detailed search strategy.

Data Extraction

Two reviewers screened titles and abstracts and selected eligible records independently, and disagreements were resolved by discussion with a third collaborator. A common data extraction sheet was prespecified and used to summarize study characteristics independently. Table S5 in Multimedia Appendix 1 shows the detailed items and explanations for data extraction. Diagnostic accuracy data were extracted into contingency tables, including true positives, false positives, false negatives, and true negatives. These data were used to calculate the pooled diagnostic performance of the different models. Some studies included multiple models, and these models were considered independent of each other.
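The per-model diagnostic metrics pooled later in the analysis follow directly from these contingency tables. As a minimal sketch (the counts below are hypothetical, and the 0.5 continuity correction is a common convention rather than one the paper specifies):

```python
import math

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and log diagnostic odds ratio (LDOR)
    from one 2x2 contingency table. A 0.5 continuity correction is
    applied to every cell to guard against zero counts."""
    tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ldor = math.log((tp / fn) / (fp / tn))
    return sensitivity, specificity, ldor

# Hypothetical counts for one model at one PSPI threshold
sens, spec, ldor = diagnostic_metrics(tp=95, fp=3, fn=5, tn=97)
```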

Study Quality Assessment

All included studies were independently assessed by 2 reviewers using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool [ 18 ]. QUADAS-2 assesses the risk of bias across 4 domains: patient selection, index test, reference standard, and flow and timing. The first 3 domains are also assessed for applicability concerns. In the systematic review, a specific extension of QUADAS-2, namely, QUADAS-AI, was used to specify the signaling questions [ 19 ].

Meta-Analysis

Meta-analyses were conducted between different AI models. Models with different algorithms or training data were considered different. To evaluate the performance differences between models, contingency tables from model validation were extracted. Studies that did not report enough diagnostic accuracy data were excluded from the meta-analysis.

Hierarchical summary receiver operating characteristic (SROC) curves were fitted to evaluate the diagnostic performance of AI models. These curves were plotted with 95% CIs and prediction regions around averaged sensitivity, specificity, and area under the curve estimates. Heterogeneity was assessed visually by forest plots. A funnel plot was constructed to evaluate the risk of bias.
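The paper pools estimates with a Bayesian bivariate model (meta4diag); as a much-simplified illustration of the pooling idea only — univariate, fixed-effect, inverse-variance weighting on the logit scale, with toy numbers — the back-transformed pooled estimate can be sketched as:

```python
import math

def pooled_logit(proportions, ns):
    """Fixed-effect inverse-variance pooling on the logit scale.
    A simplified stand-in for the Bayesian bivariate model used in
    the paper; the study proportions and sizes here are invented."""
    logits, weights = [], []
    for p, n in zip(proportions, ns):
        k = p * n + 0.5          # continuity correction
        n_adj = n + 1.0          # avoids infinite logits at p = 0 or 1
        p_adj = k / n_adj
        logit = math.log(p_adj / (1 - p_adj))
        var = 1 / k + 1 / (n_adj - k)   # variance of the logit
        logits.append(logit)
        weights.append(1 / var)
    pooled = sum(w * x for w, x in zip(weights, logits)) / sum(weights)
    return 1 / (1 + math.exp(-pooled))  # back-transform to a proportion

# Toy per-study sensitivities and sample sizes
pooled_sens = pooled_logit([0.95, 0.98, 0.92], [100, 200, 150])
```

The pooled value necessarily lies between the smallest and largest study estimates; the bivariate model additionally accounts for the correlation between sensitivity and specificity, which this sketch ignores.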

Subgroup meta-analyses were conducted to evaluate the performance differences at both the model level and task level, and subgroups were created based on different tasks and the proportion of positive and negative samples.

All statistical analyses and plots were produced using R (version 4.2.2; R Core Team) and the R package meta4diag (version 2.1.1; Guo J and Riebler A) [ 20 ].

Study Selection and Included Study Characteristics

A flow diagram representing the study selection process is shown in Figure 1. After removing 1039 duplicates, the titles and abstracts of 5653 papers were screened, and the percentage agreement of title and abstract screening was 97%. After screening, 51 full-text reports were assessed for eligibility, among which 45 reports were included in the systematic review [ 21 - 65 ]. The percentage agreement of the full-text review was 87%. Contingency tables could not be made for 40 of the included studies, and meta-analyses were conducted based on 8 AI models extracted from 6 studies. Characteristics of the individual studies included in the systematic review are provided in Tables 1 and 2. The facial feature extraction methods can be categorized into 2 classes: geometrical features (GFs) and deep features (DFs). One typical method of extracting GFs is to calculate the distance between facial landmarks; DFs are usually extracted by convolution operations. A total of 20 studies included temporal information, and most of them (18) extracted temporal information through the 3D convolution of video sequences. Feature transformation was also commonly applied to reduce training time or to fuse features extracted by different methods before inputting them into the classifier. For classifiers, support vector machines (SVMs) and convolutional neural networks (CNNs) were the most used. Table 1 presents the model designs of the included studies.


a The – symbol indicates no temporal features; + indicates time information extracted from 2 images at different times; ++ indicates deep temporal features extracted through the convolution of video sequences.

b SVM: support vector machine.

c GF: geometric feature.

d GMM: Gaussian mixture model.

e TPS: thin plate spline.

f DML: distance metric learning.

g MDML: multiview distance metric learning.

h AAM: active appearance model.

i RVR: relevance vector regressor.

j PSPI: Prkachin and Solomon pain intensity.

k I-FES: individual facial expressiveness score.

l LSTM: long short-term memory.

m HCRF: hidden conditional random field.

n GLMM: generalized linear mixed model.

o VLAD: vector of locally aggregated descriptor.

p SVR: support vector regression.

q MDS: multidimensional scaling.

r ELM: extreme learning machine.

s Labeled to distinguish different architectures of ensembled deep learning models.

t DCNN: deep convolutional neural network.

u GSM: Gaussian scale mixture.

v DOML: distance ordering metric learning.

w LIAN: locality and identity aware network.

x BiLSTM: bidirectional long short-term memory.

a UNBC: University of Northern British Columbia-McMaster shoulder pain expression archive database.

b LOSO: leave one subject out cross-validation.

c ICC: intraclass correlation coefficient.

d CT: contingency table.

e AUC: area under the curve.

f MSE: mean squared error.

g PCC: Pearson correlation coefficient.

h RMSE: root mean squared error.

i MAE: mean absolute error.

j ICC: intraclass correlation coefficient.

k CCC: concordance correlation coefficient.

l Reported both external and internal validation results and summarized as intervals.

Table 2 summarizes the characteristics of model training and validation. Most studies used publicly available databases, for example, the University of Northern British Columbia-McMaster shoulder pain expression archive database [ 57 ]. Table S4 in Multimedia Appendix 1 summarizes the public databases. A total of 7 studies used self-prepared databases. Frames from video sequences were the most used test objects, as 37 studies output frame-level pain intensity, while few measured pain intensity from whole video sequences or photos. Studies commonly redefined pain levels to have fewer classes than the ground-truth labels. For model validation, cross-validation and leave-one-subject-out validation were commonly used. Only 3 studies performed external validation. For reporting test accuracies, different evaluation metrics were used, including sensitivity, specificity, mean absolute error (MAE), mean squared error (MSE), Pearson correlation coefficient (PCC), and intraclass correlation coefficient (ICC).

Methodological Quality of Included Studies

Table S2 in Multimedia Appendix 1 presents the study quality summary, as assessed by QUADAS-2. All studies carried a risk of bias in patient selection, caused by 2 issues. First, the training data are highly imbalanced, and any method to adjust the data distribution may introduce bias. Second, the QUADAS-AI correspondence letter [ 19 ] specifies that preprocessing of images that changes the image size or resolution may introduce bias. However, the applicability concern is low, as the images properly represent the feeling of pain. Studies that used k-fold cross-validation or leave-one-out cross-validation were considered to have a low risk of bias. Although the Prkachin and Solomon pain intensity (PSPI) score was used by most of the studies, its ability to represent individual pain levels has not been clinically validated; as such, the risk of bias and applicability concerns were considered high when the PSPI score was used as the index test. As an advantage of computer vision techniques, the time interval between the index tests was short and was assessed as having a low risk of bias. Risk proportions are shown in Figure 2. Of all 315 entries, 39% (124) were assessed as high risk. In total, 5 studies had the lowest risk of bias, with 6 domains assessed as low risk [ 26 , 27 , 31 , 32 , 59 ].


Pooled Performance of Included Models

Across the 6 studies included in the meta-analysis, there were 8 different models. The characteristics of these models are summarized in Table S1 in Multimedia Appendix 2 [ 23 , 24 , 26 , 32 , 41 , 57 ]. Classification of PSPI scores greater than 0, 2, 3, 6, and 9 was selected and treated as different tasks to create contingency tables; 27 contingency tables were extracted from the 8 models. The test performance is shown in Figure 3 as hierarchical SROC curves. The combined sensitivity was 98% (95% CI 96%-99%), the specificity was 98% (95% CI 97%-99%), the LDOR was 7.99 (95% CI 6.73-9.31), and the AUC was 0.99 (95% CI 0.99-1).


Subgroup Analysis

In this study, subgroup analysis was conducted to investigate the performance differences within models. A total of 8 models were separated and summarized as a forest plot in Multimedia Appendix 3 [ 23 , 24 , 26 , 32 , 41 , 57 ]. For model 1, the pooled sensitivity, specificity, and LDOR were 95% (95% CI 86%-99%), 99% (95% CI 98%-100%), and 8.38 (95% CI 6.09-11.19), respectively. For model 2, the pooled sensitivity, specificity, and LDOR were 94% (95% CI 84%-99%), 95% (95% CI 88%-99%), and 6.23 (95% CI 3.52-9.04), respectively. For model 3, the pooled sensitivity, specificity, and LDOR were 100% (95% CI 99%-100%), 100% (95% CI 99%-100%), and 11.55 (95% CI 8.82-14.43), respectively. For model 4, the pooled sensitivity, specificity, and LDOR were 83% (95% CI 43%-99%), 94% (95% CI 79%-99%), and 5.14 (95% CI 0.93-9.31), respectively. For model 5, the pooled sensitivity, specificity, and LDOR were 92% (95% CI 68%-99%), 94% (95% CI 78%-99%), and 6.12 (95% CI 1.82-10.16), respectively. For model 6, the pooled sensitivity, specificity, and LDOR were 94% (95% CI 74%-100%), 94% (95% CI 78%-99%), and 6.59 (95% CI 2.21-11.13), respectively. For model 7, the pooled sensitivity, specificity, and LDOR were 98% (95% CI 90%-100%), 97% (95% CI 87%-100%), and 8.31 (95% CI 4.3-12.29), respectively. For model 8, the pooled sensitivity, specificity, and LDOR were 98% (95% CI 93%-100%), 97% (95% CI 88%-100%), and 8.65 (95% CI 4.84-12.67), respectively.

Heterogeneity Analysis

The meta-analysis results indicated that AI models are applicable for estimating pain intensity from facial images. However, extreme heterogeneity existed within the models except for models 3 and 5, which were proposed by Rathee and Ganotra [ 24 ] and Semwal and Londhe [ 32 ]. A funnel plot is presented in Figure 4 . A high risk of bias was observed.

Discussion

Pain management has long been a critical problem in clinical practice, and the use of AI may be a solution. For acute pain management, automatic measurement of pain can reduce the burden on caregivers and provide timely warnings. For chronic pain management, as specified by Glare et al [ 2 ], further research is needed, and measurements of pain presence, intensity, and quality are one of the issues to be solved for chronic pain studies. Computer vision could improve pain monitoring through real-time detection for clinical use and data recording for prospective pain studies. To our knowledge, this is the first meta-analysis dedicated to AI performance in multilevel pain level classification.

In this study, a model’s performance at specific pain levels was described by stacking multiple classes into one, making each task a binary classification problem. After careful selection across both medical and engineering databases, we observed promising results for AI in evaluating multilevel pain intensity from facial images, with high sensitivity (98%), specificity (98%), LDOR (7.99), and AUC (0.99). It is reasonable to believe that AI can accurately evaluate pain intensity from facial images. Moreover, the study quality and risk of bias were evaluated using an adapted QUADAS-2 assessment tool, which is a strength of this study.

To investigate the source of heterogeneity, it was assumed that a well-designed model should have similar effect sizes across different pain levels, and a subgroup meta-analysis was conducted. The funnel and forest plots exhibited extreme heterogeneity. Each model’s performance at specific pain levels was described and summarized in a forest plot. Within-model heterogeneity was observed in Multimedia Appendix 3 [ 23 , 24 , 26 , 32 , 41 , 57 ] for all but 2 models. Models 3 and 5 differed in many aspects, including their algorithms and validation methods, but both were trained with a relatively small data set in which the proportion of positive to negative classes was relatively close to 1. Training with imbalanced data is a critical problem in computer vision studies [ 66 ]; for example, in the University of Northern British Columbia-McMaster pain data set, fewer than 10 of 48,398 frames had a PSPI score greater than 13. We therefore emphasize that imbalanced data sets are one major cause of heterogeneity, resulting in the poorer performance of AI algorithms.

We tentatively propose minimizing the effect of training with imbalanced data by stacking multiple classes into one class, a method already used in several studies included in the systematic review [ 26 , 32 , 42 , 57 ]. Common methods to minimize bias include resampling and data augmentation [ 66 ]. The stacking method is also used in this meta-analysis to compare the test results of different studies, and it is applicable only when classes differ solely in intensity. A disadvantage of combining classes is that the resulting model may be insufficient for clinical practice when too few classes remain. Commonly used pain evaluation tools, such as the VAS, have 10 discrete levels. It is recommended that future studies set the number of pain levels to at least 10 for model training.
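The stacking approach amounts to thresholding multilevel labels into a binary task. A minimal sketch (the threshold mirrors the PSPI cutoffs used in the meta-analysis, but the function and data are illustrative, not taken from any included study):

```python
def stack_classes(labels, threshold):
    """Binarize multilevel pain labels: every level above the
    threshold becomes 'pain' (1), the rest 'no/low pain' (0).
    Collapsing the sparse high-intensity classes this way reduces
    the impact of class imbalance during training."""
    return [1 if level > threshold else 0 for level in labels]

pspi = [0, 0, 1, 3, 5, 9, 12]                   # toy PSPI scores
binary_task = stack_classes(pspi, threshold=3)  # -> [0, 0, 0, 0, 1, 1, 1]
```

Repeating this at several thresholds (here, PSPI > 0, 2, 3, 6, and 9) yields one binary task, and hence one contingency table, per threshold.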

This study is limited for several reasons. First, insufficient data could be included because most studies used performance metrics (e.g., mean squared error and mean absolute error) that cannot be summarized into a contingency table. To create a contingency table that can be included in a meta-analysis, a study should report the number of objects used in each pain class for model validation, along with the accuracy, sensitivity, specificity, and F1-score for each pain class. This table cannot be created if a study reports only the MAE, PCC, and other metrics commonly used in AI development. Second, a small-study effect was observed in the funnel plot, and the heterogeneity could not be minimized. Another limitation is that the PSPI score is not clinically validated and is not the only tool that assesses pain from facial expressions. There are other clinically validated pain intensity assessment methods, such as the Faces Pain Scale-revised, Wong-Baker Faces Pain Rating Scale, and Oucher Scale [ 3 ], and more databases could be created based on these tools. Finally, AI-assisted pain assessment is expected to cover larger populations, including patients who cannot communicate, for example, patients with dementia or patients with masked faces. However, only 1 study considered patients with dementia, a gap that also reflects the limited databases available [ 50 ].

AI is a promising tool for future pain research. In this systematic review and meta-analysis, one approach using computer vision to measure pain intensity from facial images was investigated. Despite some risk of bias and applicability concerns, CV models can achieve excellent test accuracy. More CV studies in pain estimation, reporting accuracy in contingency tables, and more pain databases are encouraged for future work. Specifically, the creation of a balanced public database that contains not only healthy but also nonhealthy participants should be prioritized, and recording would ideally take place in a clinical environment. Researchers are then encouraged to report validation results in terms of accuracy, sensitivity, specificity, or contingency tables, as well as the number of objects in each pain class, to enable inclusion in a meta-analysis.

Acknowledgments

WL, AH, and CW contributed to the literature search and data extraction. JH and YY wrote the first draft of the manuscript. All authors contributed to the conception and design of the study, the risk of bias evaluation, data analysis and interpretation, and contributed to and approved the final version of the manuscript.

Data Availability

The data sets generated during and analyzed during this study are available in the Figshare repository [ 67 ].

Conflicts of Interest

None declared.

Multimedia Appendix 1
PRISMA checklist, risk of bias summary, search strategy, database summary, and reported items and explanations.

Multimedia Appendix 2
Study performance summary.

Multimedia Appendix 3
Forest plot presenting the pooled performance of subgroups in the meta-analysis.

  • Raja SN, Carr DB, Cohen M, Finnerup NB, Flor H, Gibson S, et al. The revised International Association for the Study of Pain definition of pain: concepts, challenges, and compromises. Pain. 2020;161(9):1976-1982.
  • Glare P, Aubrey KR, Myles PS. Transition from acute to chronic pain after surgery. Lancet. 2019;393(10180):1537-1546.
  • Chou R, Gordon DB, de Leon-Casasola OA, Rosenberg JM, Bickler S, Brennan T, et al. Management of postoperative pain: a clinical practice guideline from the American Pain Society, the American Society of Regional Anesthesia and Pain Medicine, and the American Society of Anesthesiologists' Committee on Regional Anesthesia, Executive Committee, and Administrative Council. J Pain. 2016;17(2):131-157.
  • Hassan T, Seus D, Wollenberg J, Weitz K, Kunz M, Lautenbacher S, et al. Automatic detection of pain from facial expressions: a survey. IEEE Trans Pattern Anal Mach Intell. 2021;43(6):1815-1831.
  • Mussigmann T, Bardel B, Lefaucheur JP. Resting-state electroencephalography (EEG) biomarkers of chronic neuropathic pain. A systematic review. Neuroimage. 2022;258:119351.
  • Moscato S, Cortelli P, Chiari L. Physiological responses to pain in cancer patients: a systematic review. Comput Methods Programs Biomed. 2022;217:106682.
  • Thiam P, Hihn H, Braun DA, Kestler HA, Schwenker F. Multi-modal pain intensity assessment based on physiological signals: a deep learning perspective. Front Physiol. 2021;12:720464.
  • Rojas RF, Brown N, Waddington G, Goecke R. A systematic review of neurophysiological sensing for the assessment of acute pain. NPJ Digit Med. 2023;6(1):76.
  • Mansutti I, Tomé-Pires C, Chiappinotto S, Palese A. Facilitating pain assessment and communication in people with deafness: a systematic review. BMC Public Health. 2023;23(1):1594.
  • El-Tallawy SN, Ahmed RS, Nagiub MS. Pain management in the most vulnerable intellectual disability: a review. Pain Ther. 2023;12(4):939-961.
  • Gkikas S, Tsiknakis M. Automatic assessment of pain based on deep learning methods: a systematic review. Comput Methods Programs Biomed. 2023;231:107365.
  • Borna S, Haider CR, Maita KC, Torres RA, Avila FR, Garcia JP, et al. A review of voice-based pain detection in adults using artificial intelligence. Bioengineering (Basel). 2023;10(4):500.
  • De Sario GD, Haider CR, Maita KC, Torres-Guzman RA, Emam OS, Avila FR, et al. Using AI to detect pain through facial expressions: a review. Bioengineering (Basel). 2023;10(5):548.
  • Zhang M, Zhu L, Lin SY, Herr K, Chi CL, Demir I, et al. Using artificial intelligence to improve pain assessment and pain management: a scoping review. J Am Med Inform Assoc. 2023;30(3):570-587. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hughes JD, Chivers P, Hoti K. The clinical suitability of an artificial intelligence-enabled pain assessment tool for use in infants: feasibility and usability evaluation study. J Med Internet Res. 2023;25:e41992. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Fang J, Wu W, Liu J, Zhang S. Deep learning-guided postoperative pain assessment in children. Pain. 2023;164(9):2029-2035. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Whiting PF, Rutjes AWS, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529-536. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sounderajah V, Ashrafian H, Rose S, Shah NH, Ghassemi M, Golub R, et al. A quality assessment tool for artificial intelligence-centered diagnostic test accuracy studies: QUADAS-AI. Nat Med. 2021;27(10):1663-1665. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Guo J, Riebler A. meta4diag: Bayesian bivariate meta-analysis of diagnostic test studies for routine practice. J Stat Soft. 2018;83(1):1-31. [ CrossRef ]
  • Hammal Z, Cohn JF. Automatic detection of pain intensity. Proc ACM Int Conf Multimodal Interact. 2012;2012:47-52. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Adibuzzaman M, Ostberg C, Ahamed S, Povinelli R, Sindhu B, Love R, et al. Assessment of pain using facial pictures taken with a smartphone. 2015. Presented at: 2015 IEEE 39th Annual Computer Software and Applications Conference; July 01-05, 2015;726-731; Taichung, Taiwan. [ CrossRef ]
  • Majumder A, Dutta S, Behera L, Subramanian VK. Shoulder pain intensity recognition using Gaussian mixture models. 2015. Presented at: 2015 IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE); December 19-20, 2015;130-134; Dhaka, Bangladesh. [ CrossRef ]
  • Rathee N, Ganotra D. A novel approach for pain intensity detection based on facial feature deformations. J Vis Commun Image Represent. 2015;33:247-254. [ CrossRef ]
  • Sikka K, Ahmed AA, Diaz D, Goodwin MS, Craig KD, Bartlett MS, et al. Automated assessment of children's postoperative pain using computer vision. Pediatrics. 2015;136(1):e124-e131. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rathee N, Ganotra D. Multiview distance metric learning on facial feature descriptors for automatic pain intensity detection. Comput Vis Image Und. 2016;147:77-86. [ CrossRef ]
  • Zhou J, Hong X, Su F, Zhao G. Recurrent convolutional neural network regression for continuous pain intensity estimation in video. 2016. Presented at: 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); June 26-July 01, 2016; Las Vegas, NV. [ CrossRef ]
  • Egede J, Valstar M, Martinez B. Fusing deep learned and hand-crafted features of appearance, shape, and dynamics for automatic pain estimation. 2017. Presented at: 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017); May 30-June 03, 2017;689-696; Washington, DC. [ CrossRef ]
  • Martinez DL, Rudovic O, Picard R. Personalized automatic estimation of self-reported pain intensity from facial expressions. 2017. Presented at: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); July 21-26, 2017;2318-2327; Honolulu, HI. [ CrossRef ]
  • Bourou D, Pampouchidou A, Tsiknakis M, Marias K, Simos P. Video-based pain level assessment: feature selection and inter-subject variability modeling. 2018. Presented at: 2018 41st International Conference on Telecommunications and Signal Processing (TSP); July 04-06, 2018;1-6; Athens, Greece. [ CrossRef ]
  • Haque MA, Bautista RB, Noroozi F, Kulkarni K, Laursen C, Irani R. Deep multimodal pain recognition: a database and comparison of spatio-temporal visual modalities. 2018. Presented at: 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018); May 15-19, 2018;250-257; Xi'an, China. [ CrossRef ]
  • Semwal A, Londhe ND. Automated pain severity detection using convolutional neural network. 2018. Presented at: 2018 International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS); December 21-22, 2018;66-70; Belgaum, India. [ CrossRef ]
  • Tavakolian M, Hadid A. Deep binary representation of facial expressions: a novel framework for automatic pain intensity recognition. 2018. Presented at: 2018 25th IEEE International Conference on Image Processing (ICIP); October 07-10, 2018;1952-1956; Athens, Greece. [ CrossRef ]
  • Tavakolian M, Hadid A. Deep spatiotemporal representation of the face for automatic pain intensity estimation. 2018. Presented at: 2018 24th International Conference on Pattern Recognition (ICPR); August 20-24, 2018;350-354; Beijing, China. [ CrossRef ]
  • Wang J, Sun H. Pain intensity estimation using deep spatiotemporal and handcrafted features. IEICE Trans Inf & Syst. 2018;E101.D(6):1572-1580. [ CrossRef ]
  • Bargshady G, Soar J, Zhou X, Deo RC, Whittaker F, Wang H. A joint deep neural network model for pain recognition from face. 2019. Presented at: 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS); February 23-25, 2019;52-56; Singapore. [ CrossRef ]
  • Casti P, Mencattini A, Comes MC, Callari G, Di Giuseppe D, Natoli S, et al. Calibration of vision-based measurement of pain intensity with multiple expert observers. IEEE Trans Instrum Meas. 2019;68(7):2442-2450. [ CrossRef ]
  • Lee JS, Wang CW. Facial pain intensity estimation for ICU patient with partial occlusion coming from treatment. 2019. Presented at: BIBE 2019; The Third International Conference on Biological Information and Biomedical Engineering; June 20-22, 2019;1-4; Hangzhou, China.
  • Saha AK, Ahsan GMT, Gani MO, Ahamed SI. Personalized pain study platform using evidence-based continuous learning tool. 2019. Presented at: 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC); July 15-19, 2019;490-495; Milwaukee, WI. [ CrossRef ]
  • Tavakolian M, Hadid A. A spatiotemporal convolutional neural network for automatic pain intensity estimation from facial dynamics. Int J Comput Vis. 2019;127(10):1413-1425. [ FREE Full text ] [ CrossRef ]
  • Bargshady G, Zhou X, Deo RC, Soar J, Whittaker F, Wang H. Ensemble neural network approach detecting pain intensity from facial expressions. Artif Intell Med. 2020;109:101954. [ CrossRef ] [ Medline ]
  • Bargshady G, Zhou X, Deo RC, Soar J, Whittaker F, Wang H. Enhanced deep learning algorithm development to detect pain intensity from facial expression images. Expert Syst Appl. 2020;149:113305. [ CrossRef ]
  • Dragomir MC, Florea C, Pupezescu V. Automatic subject independent pain intensity estimation using a deep learning approach. 2020. Presented at: 2020 International Conference on e-Health and Bioengineering (EHB); October 29-30, 2020;1-4; Iasi, Romania. [ CrossRef ]
  • Huang D, Xia Z, Mwesigye J, Feng X. Pain-attentive network: a deep spatio-temporal attention model for pain estimation. Multimed Tools Appl. 2020;79(37-38):28329-28354. [ CrossRef ]
  • Mallol-Ragolta A, Liu S, Cummins N, Schuller B. A curriculum learning approach for pain intensity recognition from facial expressions. 2020. Presented at: 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020); November 16-20, 2020;829-833; Buenos Aires, Argentina. [ CrossRef ]
  • Peng X, Huang D, Zhang H. Pain intensity recognition via multi‐scale deep network. IET Image Process. 2020;14(8):1645-1652. [ FREE Full text ] [ CrossRef ]
  • Tavakolian M, Lopez MB, Liu L. Self-supervised pain intensity estimation from facial videos via statistical spatiotemporal distillation. Pattern Recognit Lett. 2020;140:26-33. [ CrossRef ]
  • Xu X, de Sa VR. Exploring multidimensional measurements for pain evaluation using facial action units. 2020. Presented at: 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020); November 16-20, 2020;786-792; Buenos Aires, Argentina. [ CrossRef ]
  • Pikulkaew K, Boonchieng W, Boonchieng E, Chouvatut V. 2D facial expression and movement of motion for pain identification with deep learning methods. IEEE Access. 2021;9:109903-109914. [ CrossRef ]
  • Rezaei S, Moturu A, Zhao S, Prkachin KM, Hadjistavropoulos T, Taati B. Unobtrusive pain monitoring in older adults with dementia using pairwise and contrastive training. IEEE J Biomed Health Inform. 2021;25(5):1450-1462. [ CrossRef ] [ Medline ]
  • Semwal A, Londhe ND. S-PANET: a shallow convolutional neural network for pain severity assessment in uncontrolled environment. 2021. Presented at: 2021 IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC); January 27-30, 2021;0800-0806; Las Vegas, NV. [ CrossRef ]
  • Semwal A, Londhe ND. ECCNet: an ensemble of compact convolution neural network for pain severity assessment from face images. 2021. Presented at: 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence); January 28-29, 2021;761-766; Noida, India. [ CrossRef ]
  • Szczapa B, Daoudi M, Berretti S, Pala P, Del Bimbo A, Hammal Z. Automatic estimation of self-reported pain by interpretable representations of motion dynamics. 2021. Presented at: 2020 25th International Conference on Pattern Recognition (ICPR); January 10-15, 2021;2544-2550; Milan, Italy. [ CrossRef ]
  • Ting J, Yang YC, Fu LC, Tsai CL, Huang CH. Distance ordering: a deep supervised metric learning for pain intensity estimation. 2021. Presented at: 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA); December 13-16, 2021;1083-1088; Pasadena, CA. [ CrossRef ]
  • Xin X, Li X, Yang S, Lin X, Zheng X. Pain expression assessment based on a locality and identity aware network. IET Image Process. 2021;15(12):2948-2958. [ FREE Full text ] [ CrossRef ]
  • Alghamdi T, Alaghband G. Facial expressions based automatic pain assessment system. Appl Sci. 2022;12(13):6423. [ FREE Full text ] [ CrossRef ]
  • Barua PD, Baygin N, Dogan S, Baygin M, Arunkumar N, Fujita H, et al. Automated detection of pain levels using deep feature extraction from shutter blinds-based dynamic-sized horizontal patches with facial images. Sci Rep. 2022;12(1):17297. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Fontaine D, Vielzeuf V, Genestier P, Limeux P, Santucci-Sivilotto S, Mory E, et al. Artificial intelligence to evaluate postoperative pain based on facial expression recognition. Eur J Pain. 2022;26(6):1282-1291. [ CrossRef ] [ Medline ]
  • Hosseini E, Fang R, Zhang R, Chuah CN, Orooji M, Rafatirad S, et al. Convolution neural network for pain intensity assessment from facial expression. 2022. Presented at: 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); July 11-15, 2022;2697-2702; Glasgow, Scotland. [ CrossRef ]
  • Huang Y, Qing L, Xu S, Wang L, Peng Y. HybNet: a hybrid network structure for pain intensity estimation. Vis Comput. 2021;38(3):871-882. [ CrossRef ]
  • Islamadina R, Saddami K, Oktiana M, Abidin TF, Muharar R, Arnia F. Performance of deep learning benchmark models on thermal imagery of pain through facial expressions. 2022. Presented at: 2022 IEEE International Conference on Communication, Networks and Satellite (COMNETSAT); November 03-05, 2022;374-379; Solo, Indonesia. [ CrossRef ]
  • Swetha L, Praiscia A, Juliet S. Pain assessment model using facial recognition. 2022. Presented at: 2022 6th International Conference on Intelligent Computing and Control Systems (ICICCS); May 25-27, 2022;1-5; Madurai, India. [ CrossRef ]
  • Wu CL, Liu SF, Yu TL, Shih SJ, Chang CH, Mao SFY, et al. Deep learning-based pain classifier based on the facial expression in critically ill patients. Front Med (Lausanne). 2022;9:851690. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ismail L, Waseem MD. Towards a deep learning pain-level detection deployment at UAE for patient-centric-pain management and diagnosis support: framework and performance evaluation. Procedia Comput Sci. 2023;220:339-347. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Vu MT, Beurton-Aimar M. Learning to focus on region-of-interests for pain intensity estimation. 2023. Presented at: 2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG); January 05-08, 2023;1-6; Waikoloa Beach, HI. [ CrossRef ]
  • Kaur H, Pannu HS, Malhi AK. A systematic review on imbalanced data challenges in machine learning: applications and solutions. ACM Comput Surv. 2019;52(4):1-36. [ CrossRef ]
  • Data for meta-analysis of pain assessment from facial images. Figshare. 2023. URL: https://figshare.com/articles/dataset/Data_for_Meta-Analysis_of_Pain_Assessment_from_Facial_Images/24531466/1 [accessed 2024-03-22]

Abbreviations

Edited by A Mavragani; submitted 26.07.23; peer-reviewed by M Arab-Zozani, M Zhang; comments to author 18.09.23; revised version received 08.10.23; accepted 28.02.24; published 12.04.24.

©Jian Huo, Yan Yu, Wei Lin, Anmin Hu, Chaoran Wu. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 12.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

ScienceDaily

After being insulted, writing down your feelings on paper then getting rid of it reduces anger

A research group in Japan has discovered that writing down one's reaction to a negative incident on a piece of paper and then shredding it or throwing it away reduces feelings of anger.

"We expected that our method would suppress anger to some extent," lead researcher Nobuyuki Kawai said. "However, we were amazed that anger was eliminated almost entirely."

This research is important because controlling anger at home and in the workplace can reduce negative consequences in our jobs and personal lives. Unfortunately, many anger management techniques proposed by specialists lack empirical research support. They can also be difficult to recall when angry.

The results of this study, published in Scientific Reports , are the culmination of years of previous research on the association between the written word and anger reduction. It builds on work showing how interactions with physical objects can control a person's mood.

For their project, Kawai and his graduate student Yuta Kanaya, both at the Graduate School of Informatics, Nagoya University, asked participants to write brief opinions about important social problems, such as whether smoking in public should be outlawed. They then told them that a doctoral student at Nagoya University would evaluate their writing.

However, the doctoral students doing the evaluation were plants. Regardless of what the participants wrote, the evaluators scored them low on intelligence, interest, friendliness, logic, and rationality. To really drive home the point, the doctoral students also wrote the same insulting comment: "I cannot believe an educated person would think like this. I hope this person learns something while at the university."

After handing out these negative comments, the researchers asked the participants to write their thoughts on the feedback, focusing on what triggered their emotions. Finally, one group of participants was told either to dispose of the paper they had written in a trash can or to keep it in a file on their desk. A second group was told either to destroy the document in a shredder or to put it in a plastic box.

The students were then asked to rate their anger after the insult and after either disposing of or keeping the paper. As expected, all participants reported a higher level of anger after receiving insulting comments. However, the anger levels of the individuals who discarded their paper in the trash can or shredded it returned to their initial state after disposing of the paper. Meanwhile, the participants who held on to a hard copy of the insult experienced only a small decrease in their overall anger.

Kawai imagines using his research to help businesspeople who find themselves in stressful situations. "This technique could be applied in the moment by writing down the source of anger as if taking a memo and then throwing it away when one feels angry in a business situation," he explained.

Along with its practical benefits, this discovery may shed light on the origins of the Japanese cultural tradition known as hakidashisara ( hakidashi refers to the purging or spitting out of something, and sara refers to a dish or plate) at the Hiyoshi shrine in Kiyosu, Aichi Prefecture, just outside of Nagoya. Hakidashisara is an annual festival where people smash small discs representing things that make them angry. Their findings may explain the feeling of relief that participants report after leaving the festival.

Story Source:

Materials provided by Nagoya University . Note: Content may be edited for style and length.

Journal Reference :

  • Yuta Kanaya, Nobuyuki Kawai. Anger is eliminated with the disposal of a paper written because of provocation . Scientific Reports , 2024; 14 (1) DOI: 10.1038/s41598-024-57916-z


COMMENTS

  1. 3481 PDFs

    This study aims to investigate the effects of credit risk management tools on the growth of commercial banks in Nepal. The study utilized a descriptive research design with quantitative research ...

  2. A Longitudinal Systematic Review of Credit Risk Assessment and Credit

    According to the findings of Onay and Öztürk (2018), the most studied topics in credit risk-related research are "statistical methods and accuracy" and "new determinants of creditworthiness." Moreover, Louzada et al. (2016) found that the most common objective of papers related to credit scoring was the proposal of new techniques.

  3. Credit scoring methods: Latest trends and points to consider

    Firstly, most research papers between 2016 and 2020 employed two or more datasets with an average of 2.75 (see Table 3).At the same time, the use of the public datasets, in comparison with private data, has been on the increase during the years (see Fig. 1, the sum of 'Public' and 'Both' categories), growing from 43% before 2006 to 65% between 2016 and 2021.

  4. Machine learning-driven credit risk: a systemic review

    Credit risk assessment is at the core of modern economies. Traditionally, it is measured by statistical methods and manual auditing. Recent advances in financial artificial intelligence stemmed from a new wave of machine learning (ML)-driven credit risk models that gained tremendous attention from both industry and academia. In this paper, we systematically review a series of major research ...

  5. Elements of Credit Rating: A Hybrid Review and Future Research Agenda

    Since 1934, 1886 research papers were added in Scopus (1114) and Web of Science (772) on "credit rating." Out of these 1886, only 62 (3.28%) research papers were added before year 2001. Between January 2001 and November 2020, 96.72% research papers were added to databases; hence, we considered this period for analysis.

  6. Artificial intelligence and bank credit analysis: A review

    Artificial intelligence (AI) is now essential for the bank of tomorrow. It is closely linked to changes in technology and consumption patterns. For the banking sector, it is a powerful tool for analysing the creditworthiness of credit applicants and anticipating customer needs. This type of system can also make the bank fairer and more responsible.

  7. Responsible Credit Risk Assessment with Machine Learning and ...

    The remainder of the paper is structured as follows: Section 2 presents related work and the context for the framework proposed in this paper. Section 3 describes the experimental setup and the framework for domain knowledge driven credit risk assessment. Section 4 describes the experimental results. Section 5 illustrates how domain knowledge could be leveraged to improve model performance in ...

  8. What are the possible future research directions for bank's credit risk

    Finally, the paper will outline the evolution of methodologies and theoretical underpinnings in credit risk management research and a landscape for possible future research directions. Prudence and efficiency in managing bank risks across different business cycles and environments would help to alleviate crises and losses.

  9. Credit Risk Research: Review and Agenda

    This paper provides a comprehensive review of scholarly research on credit risk measurement during the last 57 years applying bibliometric citation analysis and elaborates an agenda for future research. The bibliography is compiled using the ISI Web of Science database and includes all articles with citations over the period 1960-2016.

  10. Risks

    Credit risk management (CRM) is ...

  11. Full article: The relationship between internal control and credit risk

    The research paper applies a quantitative approach to empirically examine the possible nexus between internal control and credit risk. Based on earlier research, e.g., Ellis and Gené (2015, 2016), the author constructs a model in which credit risk is a function of internal control and other explanatory variables. In ...

  12. Research on Credit Risk Evaluation of Commercial Banks Based on

    Traditional credit risk forecasting models: in this paper, commonly used traditional credit risk prediction models are tested to verify their prediction accuracy, and their prediction performance is compared with that of the Artificial Neural Network model constructed in the paper. (Yong Hu et al., Procedia Computer Science 199 (2022): 1168-1176.)

  13. The effect of credit risk management and bank-specific factors on the

    The present research paper provides empirical evidence on the interconnection between credit risk and bank-specific/internal factors on FP commercial banks. To analyze the data set, first, the study applies the descriptive analysis to identify the big picture of the data, then the correlation section and at the end, regression results are ...

  14. credit risk management Latest Research Papers

    This paper examined the effects of credit risk, intellectual capital, and credit risk moderated by intellectual capital on the financial performance of fifteen listed deposit money banks (DMBs) in Nigeria from 2007 to 2016. Data were sourced from annual reports of banks and the Nigerian National Bureau of Statistics and analysed using Generalised ...

  15. Effectiveness of Credit Management System on Micro Credit ...

    The main purpose of the research is to assess the applicability of credit management principles for achieving better performance in microlending. The target sample is East African commercial banks, in conjunction with mobile operators, providing online microcredit to mobile money subscribers and commercial banks' customers.

  16. Explainable Machine Learning in Credit Risk Management

    The paper proposes an explainable Artificial Intelligence model that can be used in credit risk management and, in particular, in measuring the risks that arise when credit is borrowed through peer-to-peer lending platforms. The model applies correlation networks to Shapley values so that Artificial Intelligence predictions are grouped according to the similarity in the underlying ...

  17. Impact of risk management strategies on the credit risk faced by

    This study aims to identify risk management strategies undertaken by the commercial banks of Balochistan, Pakistan, to mitigate or eliminate credit risk. The findings of the study are significant as commercial banks will understand the effectiveness of various risk management strategies and may apply them for minimizing credit risk. This explanatory study analyses the opinions of the employees ...

  18. PDF A Study on Credit Risk Management Practices in Indian Banks

    (Singh, 2015) This paper examines the effect of credit risk management on private and public sector banks in India. Credit risk arises when customers default or fail to meet their obligations to service debt, leading to a total or partial loss. The primary cause of credit risk is poor credit risk management. ...
