
Award Winning Student Essay

Constructing a Loan Default Model for Indian Banks using CIBIL Data
Hemant Khatwani, Mridul Arora, Priyank Kumar
Banks play an important role in any economy by facilitating the growth of firms and ensuring the smooth flow of trade. Because of the nature of their businesses, firms require money from banks on an ongoing basis, to finance the purchase of fixed assets and working capital requirements, and also need banks to act as guarantors of payment when there is a time lag between the ordering of goods and their arrival. Banks, by playing the role of financial intermediaries, raise money from depositors and provide funds to businesses. In doing so they face the risk of default, which is the possibility of the borrower not repaying. The growing proportion of NPAs (Non-Performing Assets) as a percentage of advances, which measures the share of loans that have been defaulted on, prompted banks to employ quantitative models for the assessment of the probability of default. Over the past 30 years, a vast literature has emerged concerned with the development of statistical models designed to predict whether firms will fail or experience some form of financial distress, such as loan default or the non-payment of creditors. These studies have concentrated on the use of accounting information to provide a quantitative framework for the assessment of financial health. Beaver1 conducted a series of univariate tests using different financial ratios to see which of them had sufficient explanatory power to distinguish between defaulters and non-defaulters. This set the stage for an impressive body of empirical research that has evolved over the last 40 years2. These studies essentially used a discriminant analysis approach wherein the objective was to find which variables truly distinguish default and non-default firms. The research then extended to multivariate analysis, where a combination of 4-5 variables was thought to provide a better explanation. Recent times have also seen the application of logit/probit models, which assign a conditional probability to identify the chances of a firm being a defaulter/non-defaulter3.

Hemant Khatwani is a post-graduate from IIM Lucknow, specialised in Finance and Marketing. He holds a Bachelor's degree in Information Technology. His areas of interest include credit risk management and credit derivatives. hemantprat@yahoo.com

Mridul Arora is a post-graduate from IIM Lucknow, specialised in Finance and Marketing. He holds a Bachelor's degree in Chemical Engineering from IIT Chennai. His areas of interest include capital markets and investment banking. mridul.arora@gmail.com

Priyank Kumar is a post-graduate from IIM Lucknow. His areas of interest include weather derivatives and agri-business management. gm_priyank@yahoo.com

Given the number of techniques that have been used for defaulter prediction, it is important to critically review these techniques through a study of the literature before we move on to develop a model for Indian bank defaulters.

Literature Review

The financial distress models do not have a theory behind them; instead they employ an ad hoc, pragmatic approach, using easily available accounting data to predict the profile of defaulters. This approach makes the models susceptible to sample as well as time specificity. The history of these models can be traced back to univariate analysis, which took into consideration one financial ratio at a time4. The ratio is compared to a historically derived benchmark that separates failed from non-failed firms. The basic assumptions of ratio analysis are proportionality between the two variables whose ratio is calculated, as well as linearity of the relationship. These assumptions were challenged in 1980 by Whittington5, who pointed out the possibility of non-linearity and constant terms in the relationship. However, most studies on ratio analysis hold that ratios capture the relationship between financial variables adequately. Another problem that arises out of univariate analysis concerns the selection of ratios, since there is potential for conflicting classifications from different ratios. This creates the dilemma of choosing a ratio for its predictive accuracy or for its specificity to the problem.

The univariate analysis gave way to multivariate techniques during the late 1960s. These techniques use discriminant analysis, which classifies a company into one of two groups (failed/non-failed) on the basis of a statistic (Z-score) that is a weighted combination of the ratios that best separate the two groups of firms. The Z-score is derived by assigning weights to the variables such that the variance between the groups is maximised relative to the within-group variance. These studies provided high classification power and captured the information in a set of ratios simultaneously, but suffered from low predictive accuracy and high non-specificity in terms of firm type and time. Moreover, the analysis rests on the assumption that the variables are normally distributed, which is usually not the case, especially for defaulter companies. Linear discriminant analysis (LDA) is theoretically more appropriate when the covariance matrices across the variables are equal for the groups of failed and non-failed firms, while quadratic discriminant analysis (QDA) is more appropriate when the covariance matrices for the two groups are unequal6. Most studies7 find that the LDA assumption of equal covariance matrices does not hold, and therefore QDA is the more appropriate model. However, QDA suffers from low explanatory power as the number of variables increases and the sample size is small, and its results are also difficult to interpret8. Multiple discriminant analysis (MDA) techniques are useful in providing a classification matrix, but their coefficients cannot be tested for significance. An absolute test of the significance of individual variables is not practical, because the coefficients in a discriminant analysis can take on any value provided the ratios between the coefficients of the variables are maintained. Another assumption of the MDA model concerns the relative costs associated with the two types of misclassification error. Most studies, by evaluating models solely in terms of their overall predictive accuracy, assume equal costs for misclassifying failed and non-failed firms. However, the misclassification cost of a failed firm is relatively higher than that of a non-failed firm9, and this factor should therefore be incorporated in the model. To overcome some of the deficiencies in multivariate analysis, recent studies10 have used logit and probit techniques, which assign a conditional probability of an observation belonging to a category. These techniques use cumulative probability distributions. An advantage of logit/probit techniques is that they do not require demanding assumptions such as multivariate normality of the independent variables. Moreover, the individual coefficients can be tested for significance.


This analysis is based on the joint probability concept: the weights are chosen so as to maximise the joint probability of failure for the known failed firms and of non-failure for the healthy firms. However, logit/probit analysis suffers from the same predictive accuracy and specificity problems as MDA11.

To overcome the assumption of linearity that models based on discriminant analysis and logit/probit analysis suffer from, artificial neural networks have been used. The neural network is a universal function approximator that is trained using historical repayment experience, financial and accounting variables and default data. Typically, financial variables such as those mentioned in Exhibit 1 are the inputs to the model, while default data on firms serves as the output. Between the two, a suitable neural model, such as a feed-forward network based on the back propagation principle, is fitted and trained. Structural matches are found that coincide with defaulting firms and are then used to determine a weighting scheme for the predictor variables to forecast the Probability of Default (PD). The back propagation rule determines the optimum weights by minimising the mean squared error between the model output and the actual output. Each time the neural network evaluates the credit risk of a new loan opportunity, it updates its weighting scheme, so that it continually learns from experience. Thus, neural networks are flexible, adaptable systems that can incorporate changing conditions into the decision-making process. Neural networks have also been found to perform better in out-of-sample tests. However, the network may be overfit to a particular database if excessive training has taken place, resulting in poor out-of-sample estimates. Moreover, neural networks are costly to implement and maintain; because of the large number of possible connections, the network can grow prohibitively large rather quickly. Finally, neural networks suffer from a lack of transparency. Since there is no economic interpretation attached to the hidden intermediate steps, the system cannot be checked for plausibility and accuracy, and structural errors will not be detected until PD estimates become noticeably inaccurate. A minimal illustrative sketch of this approach is given at the end of this review.

To conclude our review of the techniques used to classify defaulters, it can be said that although these techniques are successful in weighting the data and provide good classification accuracy, they are sensitive to the assumptions made and suffer from non-specificity in terms of sample and time. This makes the predictive accuracy of these techniques weak.
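The feed-forward, back-propagation approach described above can be illustrated in a few lines of code. The sketch below is not the model used in this paper: it uses scikit-learn's MLPClassifier on synthetic ratios, purely to show the shape of the approach (financial variables in, estimated PD out).

```python
# Minimal sketch of a back-propagation-trained feed-forward network for PD,
# on synthetic data. The six "ratios" and the default rule are invented.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                     # six financial ratios per firm (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) < 0).astype(int)  # 1 = default

model = make_pipeline(
    StandardScaler(),                             # ratios sit on very different scales
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
model.fit(X, y)
pd_hat = model.predict_proba(X)[:, 1]             # estimated probability of default (PD)
print(pd_hat[:5])
```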

Loan Default Model for Indian Banks
Methodology for Model Decisions
The criterion event that this model is meant to predict, and thereby help prevent, is loan default. This is a form of distress less severe than bankruptcy, and is the event of ultimate interest to banks. In order to do this, we need to work back from the date of suit filing by the banks, which is available from the Credit Information Bureau (India) Limited (CIBIL) database. Although default happens well before the date the suit is filed, CIBIL unfortunately does not provide the date of default. Hence we have developed an alternative methodology (explained in the next section) to determine the date of default. A bank that wants to apply this model can easily substitute the methodology presented below with its own proprietary database, which will have the exact year of default. The statistical techniques used for the analysis are linear discriminant analysis and logistic regression; a detailed discussion of the techniques is provided in the section titled Data Analysis. The population of interest is companies that have defaulted in the past, which can be used to construct a model, based on select information, that can predict bank default. In order to get a sample representative of such a population, we looked at the suit-filed cases in the CIBIL database. The database provided data on suits filed against companies between 2003 and 2005. Of this set of firms (3,200 in number, defaulters with nationalised and private banks), only 150 were listed companies. Most of these firms belonged to the manufacturing sector.

Because of the small population of defaulter firms in the services sector on which public information was available, we restricted the analysis to manufacturing sector firms. Our analysis required us to go back from the year of suit filing to the year of default. This was done by studying two financial figures, viz. profit before depreciation, interest and tax (PBDIT) and interest (I), for all the firms of interest. The data analysis was done from 2003 backwards to 1989. For most of the firms, a consistent pattern of decreasing PBDIT was observed over the years. The year in which PBDIT first became less than the interest payments due was taken to be the year of default (a sketch of this default-dating rule is given below). Companies where this kind of pattern was not visible were taken out of the sample. Moreover, in some cases full data was not available, which led to a further reduction in the sample. In the end, we had data on 90 default firms along with their year of default. The sample of failed firms was then used to find a comparable sample of non-failed firms. It was ascertained that this sample consisted wholly of non-distressed firms by checking their defaulter status in the CIBIL database. Moreover, a basic ratio analysis of these non-distressed firms was done, including ratios such as interest coverage and the current ratio, to make sure they were financially sound and would not default within the next one year at the least. However, the absence of clear criteria to distinguish such firms poses problems, since a criterion model is precisely what we are trying to build. The final sample set included 90 defaulting firms and a corresponding number of non-defaulting firms manufacturing cotton and synthetic yarn and fabrics, drugs and pharmaceuticals, finished steel, steel products and other metal products, agricultural products, automobile ancillaries, computer hardware, plastic products, paints and varnishes, machine tools and diversified products, among others.
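The default-dating rule above is simple enough to express directly in code. The sketch below is illustrative only: the data dictionaries are hypothetical, and a bank applying the model would instead pull PBDIT and interest series from its own records.

```python
# Illustrative implementation of the default-dating rule: the year of default
# is the first year in which PBDIT falls below the interest due. Figures are
# invented (Rs crore) and the dictionary layout is an assumption.
def year_of_default(pbdit_by_year, interest_by_year):
    """Return the first year where PBDIT < interest, or None if it never happens."""
    for year in sorted(pbdit_by_year):
        if year in interest_by_year and pbdit_by_year[year] < interest_by_year[year]:
            return year
    return None

pbdit = {1999: 52.0, 2000: 41.5, 2001: 18.2, 2002: 9.6}
interest = {1999: 20.1, 2000: 22.4, 2001: 21.8, 2002: 23.0}
print(year_of_default(pbdit, interest))  # -> 2001
```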

The choice of predictor variables is affected by the lack of a robust and holistic theory of corporate failure that takes into account macro variables, the interests of different stakeholders, their preferences and the firm-level financial variables. This means that the process of failure or default has not been modelled. Most studies start with a large set of financial variables and then allow statistical methods to reduce this set. The ratios generally used measure profitability, liquidity and solvency. The problem with such mechanical approaches is statistical overfitting. Moreover, the use of accounting-based ratios makes the analysis subject to distortions arising from accounting policy choices. To mitigate this problem, some studies have used cash flow based models to determine the true characteristics of firms. Empirical research in this area12 has shown that the inclusion of cash flow variables in place of accounting variables does not lead to any significant improvement in the ability of the model to predict default in out-of-sample tests, because accounting and cash flow variables are highly correlated, so using one in place of the other makes little difference. The studies do indicate some benefit from studying the variance of cash flows, but no work has been done on this, though studies do exist on the variance of accounting data. These found that the inclusion of measures of standard deviation, standard error of the mean and coefficient of variation within the discriminant function improves its predictive power; however, the improvement could be attributed to including several years of information rather than to the function per se. It was therefore decided to exclude cash flow variables from our analysis. Some studies also include variables that capture the effect of macroeconomic conditions; however, estimating these variables for the future, once a model has been created, poses a problem.

Predictive ability is an important aspect of a model. Overfitting, which may be labelled the curse of predictive modelling, is the phenomenon in which a predictive model describes the relationship between predictors and outcome well in the training/sample set used to develop the model, but subsequently fails to provide valid predictions in out-of-sample tests. The model shows an adequate fit in the data set under study, but does not validate, that is, it does not provide accurate predictions for observations from a new dataset. This constitutes the main problem in bank default prediction models as well13.
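The standard guard against overfitting is to hold out part of the sample and judge the model only on that unseen portion, which is the approach followed later in this study. A minimal sketch of the pattern, on synthetic data, might look as follows.

```python
# Fit on a training split, judge on a holdout split; the gap between the two
# accuracies is a rough overfitting check. Data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(180, 6))
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.8, size=180) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=1)
clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print("in-sample accuracy:", accuracy_score(y_tr, clf.predict(X_tr)))
print("holdout accuracy  :", accuracy_score(y_te, clf.predict(X_te)))
```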

Data Analysis
The financial ratios used in the study (Exhibit 1) were selected based on previous studies and by conducting discussions with people in the banking industry who have been actively involved in loan appraisal (see box for an explanation of the key ratios).

Exhibit 1: Financial Ratios

Ratio  Description                                                     Abbreviation
R1     Earnings Before Interest and Taxes / Net Sales                  EBIT_Sl
R2     Retained Earnings / Total Assets                                RE_TA
R3     Earnings Before Interest and Taxes / (Total Debt + Net Worth)   EBIT_TDNW
R4     Return On Equity                                                ROE
R5     Return On Assets                                                ROA
R6     Net Profit Margin                                               NPM
R7     Working Capital / Total Assets                                  WC_TA
R8     Current Ratio                                                   CR
R9     Quick Ratio                                                     QR
R10    Cash / Current Liabilities                                      Cash_CL
R11    Cash / Earnings Before Interest, Taxes and Depreciation         Cash_Burn
R12    Cash / Working Capital                                          Cash_WC
R13    Debt / Common Equity                                            DE
R14    Earnings Before Interest and Taxes / Interest Expenses          Int_Cov
R15    Net Sales / Total Assets                                        Asset_Turn
R16    Cost of Sales / Inventory                                       Inv_Turn
R17    Sales / Receivables                                             Rec_Turn
R18    Net Fixed Assets / Total Assets                                 FA_TA
R19    Firm Size                                                       Size
R20    Working Capital / Net Worth                                     WC_NW
R21    Working Capital / Net Sales                                     WC_NetSl
R22    Quick Assets / Total Assets                                     QA_TA
R23    Earnings Before Interest and Taxes / Total Assets               EBIT_TA
R24    Quick Assets / Inventory                                        QA_Inv
R25    Net Worth / Net Sales                                           NW_NetSl
R26    Current Liabilities / Net Worth                                 CL_NW
R27    Current Assets / Total Assets                                   CA_TA
R28    Current Assets / Net Sales                                      CA_NetSl
R29    Cash / Net Sales                                                Cash_NetSl
R30    Cash / Total Assets                                             Cash_TA
R31    Net Worth / Total Assets                                        NW_TA



Interpretation of Key Financial Ratios


Return on Assets: Return on Assets is measured as Net Profits/Total Assets. This ratio is a measure of the return on owners' and creditors' money. It measures the efficiency with which resources are mobilised and utilised by a company.

Current Ratio: Current Ratio is defined as Current Assets/Current Liabilities. It measures the ability of a company to pay off its current liabilities as and when they come due.

Interest Coverage Ratio: Interest Coverage Ratio measures how well a company's earnings cover its interest payments on debt. It is defined as Earnings Before Interest and Tax (EBIT)/Interest.

Receivables Turnover Ratio: Receivables Turnover Ratio is an indicator of how quickly the firm collects its accounts receivable. Our analysis uses the ratio as Net Sales/Receivables.

Fixed Assets/Total Assets: This ratio measures the ability of a firm to secure its loan with fixed assets. The higher the fixed assets, the higher the creditworthiness of the firm.

Working Capital/Net Sales: This ratio examines the effect of net sales on the ability of the company to meet emergencies. However, it is subject to the problem that some companies have negative working capital, which can be good as well as bad depending on the situation. Generally, the lower the ratio, the better equipped the company is to meet emergencies.

Cash/Working Capital: This ratio measures the proportion of working capital that is funded by cash. A higher ratio indicates higher liquidity for the firm.

Assets Turnover Ratio: Assets Turnover Ratio is defined as Net Sales/Total Assets. It measures how well the management handles competition and how efficiently the firm uses its assets to generate sales. Failure to grow market share translates into a low or falling asset turnover.
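For concreteness, the key ratios described in the box can be computed from a handful of financial statement items. The helper below is a hypothetical sketch: the field names are illustrative assumptions and do not correspond to CIBIL or any particular database schema.

```python
# Hypothetical helper computing the key ratios from basic statement items.
# Field names and figures (Rs crore) are invented for illustration.
def key_ratios(f):
    wc = f["current_assets"] - f["current_liabilities"]   # working capital
    return {
        "ROA": f["net_profit"] / f["total_assets"],
        "CR": f["current_assets"] / f["current_liabilities"],
        "Int_Cov": f["ebit"] / f["interest_expense"],
        "Rec_Turn": f["net_sales"] / f["receivables"],
        "FA_TA": f["net_fixed_assets"] / f["total_assets"],
        "WC_NetSl": wc / f["net_sales"],
        "Cash_WC": f["cash"] / wc,
        "Asset_Turn": f["net_sales"] / f["total_assets"],
    }

firm = {
    "net_profit": 12.0, "total_assets": 240.0, "current_assets": 95.0,
    "current_liabilities": 70.0, "ebit": 30.0, "interest_expense": 11.0,
    "net_sales": 310.0, "receivables": 48.0, "net_fixed_assets": 120.0,
    "cash": 8.0,
}
print(key_ratios(firm))
```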

The model we get from discriminant analysis is not very robust if the variables are highly correlated. As can be seen in the list of ratios, many variables share the same numerator or denominator, and many of them have similar interpretations; for example, both the current ratio and the quick ratio measure the liquidity of the firm. It is therefore important to detect collinearity and address it before using the variables in the discriminant analysis. We used a well-known measure, the Variance Inflation Factor, for this purpose. The procedure and results are outlined below.

Variance Inflation Factor

Both regression analysis and discriminant analysis produce a linear model whose output is a linear combination of the explanatory or predictor variables. However, the estimated coefficients tend to become inflated or biased as the problem of multicollinearity increases. Multicollinearity refers to inter-relationships among the explanatory variables. Tolerances (TOL) and Variance Inflation Factors (VIF) measure the strength of these inter-relationships. Tolerance is 1 - R2, where R2 results from the regression of an explanatory variable on the other explanatory variables in the model. The variance inflation factor measures the inflation in the variance of a parameter estimate due to collinearity between that explanatory variable and the others. The two measures are related by VIF = 1/TOL.

To remove this problem we analysed the data in the following steps. We first calculated the VIFs of all the variables by running a regression model with default as the criterion variable and the ratios as the predictor variables. We removed the variable with the highest VIF from the model and then looked at the effect on the model. The exercise was repeated, and at each iteration the variable with the highest VIF was removed, until all variables had a VIF of less than 2. The model was run one last time by removing the variable which had the least negative effect on the model and which removed multicollinearity among the variables. Thirteen variables were removed in this way in as many steps. The variables that remained were analysed using the stepwise discriminant procedure.
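The iterative VIF screen can be sketched as follows. This is an illustrative implementation only, assuming the ratios are held in a pandas DataFrame; it uses the variance_inflation_factor routine from statsmodels and the same cut-off of 2 as the study.

```python
# Iteratively drop the ratio with the highest VIF until all VIFs <= threshold.
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

def drop_high_vif(ratios: pd.DataFrame, threshold: float = 2.0) -> pd.DataFrame:
    """Return the subset of columns whose VIFs are all at or below the threshold."""
    X = ratios.copy()
    while True:
        vifs = pd.Series(
            [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
            index=X.columns,
        )
        if vifs.max() <= threshold:
            return X
        X = X.drop(columns=vifs.idxmax())        # remove the worst offender and refit

# Illustrative use on synthetic, deliberately collinear ratios
rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(100, 3)), columns=["CR", "ROA", "FA_TA"])
df["QR"] = 0.9 * df["CR"] + rng.normal(scale=0.1, size=100)   # nearly collinear with CR
print(drop_high_vif(df).columns.tolist())
```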

Discriminant Analysis
Discriminant analysis is a statistical procedure that:

- Identifies a set of variables that best discriminates between two groups
- Identifies a new variable, which is a discriminant function of the above set of variables and provides the maximum separation or discrimination between the two groups
- Classifies future observations into one of the two groups.

This is done by choosing weights in the discriminant function such that the between-group sum of squares (S1) is maximised and the within-group sum of squares (S2) is minimised. The procedure produces a linear discriminant function Z = f(x1, x2, x3, x4, x5), where x1-x5 are the predictor variables. The function weighs each variable by estimating coefficients. The Z-score so calculated is used to classify an observation into one of two or several groups.


The results of the stepwise discriminant procedure showed that six variables, viz. Return on Assets, Current Ratio, Interest Coverage, Receivables Turnover, Fixed Assets/Total Assets and Working Capital/Net Sales, are the most important for predicting default. The cut-off for the entry of a variable was a p-value of .05, or alternatively an F-value of 3.14. We also tried other p-value cut-offs, such as .01, but there was no material change in the results.
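Stepwise discriminant analysis is a packaged routine in statistical suites such as SAS or SPSS; a rough modern stand-in, shown purely as an illustration on synthetic data, is forward sequential feature selection wrapped around linear discriminant analysis. The variable names below are assumptions for readability, not the study's data.

```python
# Forward selection of 6 predictors with LDA as the base classifier (sketch).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(3)
names = ["ROA", "CR", "Int_Cov", "Rec_Turn", "FA_TA", "WC_NetSl", "DE", "NPM"]
X = rng.normal(size=(180, len(names)))
y = (X[:, 0] + X[:, 1] - X[:, 4] + rng.normal(scale=0.7, size=180) > 0).astype(int)

selector = SequentialFeatureSelector(
    LinearDiscriminantAnalysis(), n_features_to_select=6, direction="forward", cv=5
)
selector.fit(X, y)
print([name for name, keep in zip(names, selector.get_support()) if keep])
```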

Details of the Discriminant Function


The discriminant function estimated below assumes a positive misclassification cost: the model assumes it is twice as costly to commit a Type I error, i.e. classify a default firm as non-default, than it is to commit a Type II error, i.e. classify a good firm as bad. The rationale for this is that if a bank wrongly classifies a default firm as non-default it stands to lose the entire principal loaned out, whereas if it classifies a non-default company wrongly it only loses the profit margin that could have been earned. The estimated equations are:

Score (Default) = -4.033 - .553 ROA + .627 CR + .021 Int_Cov + .206 Rec_Turn + 13.268 FA_TA + .420 WC_NetSl

Score (Non-Default) = -4.941 + 1.087 ROA + 1.21 CR + .096 Int_Cov + .314 Rec_Turn + 10.671 FA_TA + .051 WC_NetSl

After calculating the above equations, scores are computed for each category and the Z-score is obtained as:

Z-Score = Score (Non-Default) - Score (Default)

Z-Score = -.909 + 1.64 ROA + .583 CR + .074 Int_Cov + .109 Rec_Turn - 2.596 FA_TA - .369 WC_NetSl
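Purely as an illustration of how these equations would be applied, the snippet below scores a hypothetical firm with the reported coefficients and assigns it to the group with the higher score. The firm's ratio values are invented for the example.

```python
# Score a firm with the paper's reported group coefficients (illustrative use).
COEF = {
    "default":     {"const": -4.033, "ROA": -0.553, "CR": 0.627, "Int_Cov": 0.021,
                    "Rec_Turn": 0.206, "FA_TA": 13.268, "WC_NetSl": 0.420},
    "non_default": {"const": -4.941, "ROA": 1.087, "CR": 1.210, "Int_Cov": 0.096,
                    "Rec_Turn": 0.314, "FA_TA": 10.671, "WC_NetSl": 0.051},
}

def classify(ratios):
    """Return the group with the higher score, plus both group scores."""
    scores = {
        group: c["const"] + sum(c[k] * ratios[k] for k in ratios)
        for group, c in COEF.items()
    }
    return max(scores, key=scores.get), scores

firm = {"ROA": 0.05, "CR": 1.4, "Int_Cov": 2.7, "Rec_Turn": 6.5,
        "FA_TA": 0.39, "WC_NetSl": 0.12}   # hypothetical ratios
print(classify(firm))
```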

Validation

Classification Matrix

Using the function above, 88.33% of the default firms and 85% of the non-default firms were classified correctly overall.

Holdout Sample Validation

The estimated linear discriminant function was also used to classify a group of observations that had not been used to estimate the function. The initial sample of 90 default and 90 non-default firms was split in the ratio 2:1: 60 pairs were used to estimate the function while the remaining 30 formed the holdout sample. Of the default firms, 83.33% were predicted correctly, while 73.33% of the non-default firms were predicted correctly. These results were expected because, by assuming a positive misclassification cost, the model is biased towards predicting more firms as default rather than non-default. Its efficacy is therefore higher in predicting firms that are in default.

Logistic Regression

Binary responses (for example, success and failure), ordinal responses (for example, normal, mild and severe), and nominal responses (for example, major TV networks viewed at a certain hour) arise in many fields of study. Logistic regression analysis is often used to investigate the relationship between such discrete responses and a set of explanatory variables. We use this procedure to calculate the probability of default for a firm. There are two response categories, default and non-default, and they are unordered. The model predicts with what probability the value of the dependent variable will be 1 and with what probability it will be 0. More formally, we are interested in modelling the following:

P(Yi = 1 | X) = α + βX

where Yi is the event of interest, i.e. default, and X is the vector of explanatory variables, i.e. the ratios we have used. Thus, the higher the value of P(Yi = 1 | X), the higher the chance that the firm may default. To model the above probability we have used the logit model, which states that

Prob(Yi = 1) = F(α + βXi)
Prob(Yi = 0) = 1 - F(α + βXi)

The function F is called the logistic function. Thus the dependent variable is not a direct linear combination of the explanatory variables but is modelled as a function of the explanatory variables. When there is only one explanatory variable, the logistic function is

P(Y = 1) = exp(b0 + b1x) / (1 + exp(b0 + b1x))

Therefore we need to find the estimates of b0 and b1 so that the logistic function best fits the data. The logistic regression procedure scores above the discriminant analysis procedure in that it does not assume that the variables used in the model have multivariate normality and equal variance-covariance matrices. Using the stepwise logistic regression procedure, with a p-value of .05 as the cut-off for a variable to enter the model, five variables were estimated to be significant:

- Return on Assets
- Current Ratio
- Cash/Working Capital
- Asset Turnover
- Working Capital/Net Sales

The percentage of concordance is equal to 89.7%. This measure takes one default firm and one non-default firm from the sample and calculates the probability of default for each of the two firms using the logistic regression equation; the firm with the higher probability is then classified as a default firm. This exercise is repeated for each such pair. For example, in our sample containing 90 default and 90 non-default firms, the total number of pairs is 8,100, and the concordance tells us that in 89.75% of these pairs both firms were classified correctly, while in 10.7% of the pairs one of the firms was not classified correctly. Only 0.1% of the pairs were tied, i.e. the model was not able to predict which firm was to be classified in each group (the probabilities in these cases were calculated to be equal).
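A sketch of this logistic regression step is given below. It fits default probability on the five retained ratios using statsmodels, so that individual coefficients and p-values can be inspected; the data are synthetic placeholders, not the study's sample.

```python
# Logit fit of default on the five retained ratios (illustrative, synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
cols = ["ROA", "CR", "Cash_WC", "Asset_Turn", "WC_NetSl"]
X = pd.DataFrame(rng.normal(size=(180, 5)), columns=cols)
y = (X["ROA"] + 0.5 * X["CR"] + rng.normal(scale=0.8, size=180) < 0).astype(int)  # 1 = default

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary())                          # individual coefficients and p-values
pd_hat = model.predict(sm.add_constant(X))      # estimated probability of default
print(pd_hat.head())
```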

Conclusion
The research on predicting loan default has primarily relied on financial ratios as predictors of default, and has used statistical procedures to unearth the relationships between default and the financial variables. The lack of a unified theory for choosing the ratios has meant that most studies concentrate on empirical work to find the most accurate model. Until the 1980s the most commonly used method was discriminant analysis. The results of our study are in agreement with most earlier studies. The discriminant model we have used employs six ratios as predictors of corporate loan default; earlier studies have likewise settled on around 4-6 variables. The addition of more variables does not add to the explanatory power of the model, because the partial R-square (as given in Exhibit 5), which measures the incremental predictive ability of a variable, is not significant (as measured by the p-value) when an extra variable is added. The six ratios measure liquidity, profitability and leverage, all important determinants of the propensity of a firm to default on its loans. For the first time in India, information on defaulters is available in the public domain; in that respect this study is the first of its kind, since it uses data on actual defaulters in the Indian context. Logistic regression is an improvement over discriminant analysis in the sense that it does not impose strict assumptions on the data, such as multivariate normality and equal variance-covariance matrices for the two groups. Though there are techniques available to modify the discriminant function to work around these assumptions, they can never be fully corrected. The logistic technique does not require the two assumptions and therefore produces a more robust and efficacious model. In this study, all five variables had significant coefficients, and three of them were the same as those found in the discriminant procedure. The model can be used by Indian banks to predict potential loan default, analyse the creditworthiness of firms, and identify and set targets for investment risks. It will also be an effective aid in audits with respect to bankruptcy/liquidation concerns.



The logistic regression and the discriminant models can provide us with the probability or likelihood of default (a number between 0 and 1). Even if a firm is identified as a non-default firm, the number denoting the likelihood of default can be used as a benchmark or limit for giving loans, and thus a benchmark can be set for the extent of risk the bank is willing to take. For example, a bank can set a limit of 0.5 on the probability of default to keep its risk taking at a safe level. Moreover, different limits can be set for different sectors depending on the comfort level of the bank in lending to each sector. However, the study is restricted to manufacturing firms and is subject to some of the assumptions mentioned earlier in the paper. The scope for further research in this area is immense, and should primarily focus on the time-specificity problem. A time series model is suggested for this, which would take into account the movement in financial variables over a period of time rather than concentrating on one or two years before the default.

References and Notes

1 Beaver, W, 1967, Financial Ratios as Predictors of Failures, Empirical Research in Accounting: Selected Studies, Supplement to the Journal of Accounting Research, Jan.
2 Altman, E, 1968, Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy, Journal of Finance, Sept., pp. 589-609; Deakin, E B, 1972, A Discriminant Analysis of Predictors of Business Failure, Journal of Accounting Research, March, pp. 167-179; Jones, F L, 1987, Current Techniques in Bankruptcy Prediction, Journal of Accounting Literature, Vol. 6, pp. 131-164.
3 Ohlson, J S, 1980, Financial Ratios and the Probabilistic Prediction of Bankruptcy, Journal of Accounting Research, Spring, pp. 109-131; Zavgren, C V, 1988, The Association between Probabilities of Bankruptcy and Market Responses: A Test of Market Anticipation, Journal of Business Finance and Accounting, Vol. 15, No. 1, pp. 27-45.
4 Beaver, Financial Ratios as Predictors of Failures.
5 Whittington, G, 1980, Some Basic Properties of Accounting Ratios, Journal of Business Finance and Accounting, Vol. 7, No. 2, pp. 219-232.
6 Keasey, K, and R Watson, 1991, Financial Distress Prediction Models: A Review of Their Usefulness, British Journal of Management, Vol. 2, pp. 89-102.
7 Altman, E, 1980, Commercial Bank Lending: Process, Credit Scoring, and Costs of Errors in Lending, Journal of Financial and Quantitative Analysis, Nov., Vol. 15, No. 4, pp. 813-832.
8 Marks, S, and D Dunn, 1974, Discriminant Functions when Covariance Matrices are Unequal, Journal of the American Statistical Association, June, pp. 555-559.
9 This is because in the case of a failed firm the bank faces the risk of losing its entire principal, while the benefit is only the interest received, which is a fraction of the principal. Hence the downside is larger than the upside, which increases the cost of misclassifying a failed firm.
10 Ohlson, Financial Ratios and the Probabilistic Prediction of Bankruptcy; Zavgren, The Association between Probabilities of Bankruptcy and Market Responses.
11 Zavgren, The Association between Probabilities of Bankruptcy and Market Responses.
12 Casey, C, and N Bartczak, 1984, Cash Flow: It's Not the Bottom Line, Harvard Business Review, Aug., pp. 61-66.
13 Altman and Narayan, 1997, Managing Credit Risk: The Next Great Financial Challenge, John Wiley and Sons; Mester, L J, 1997, What's the Point of Credit Scoring?, Business Review, Federal Reserve Bank of Philadelphia, Sep., pp. 3-16.

Reprint No 06202


