
January 11, 2010 13:27 WSPC/173-IJITDM 00370

International Journal of Information Technology & Decision Making
Vol. 8, No. 4 (2009) 629-675
© World Scientific Publishing Company
EMPIRICAL STUDIES OF STRUCTURAL CREDIT
RISK MODELS AND THE APPLICATION IN DEFAULT
PREDICTION: REVIEW AND NEW EVIDENCE
HAN-HSING LEE

Graduate Institute of Finance
National Chiao Tung University, Hsinchu, Taiwan
hhlee@mail.nctu.edu.tw
REN-RAW CHEN
Finance and Economics, Fordham University
New York, NY 10023, USA
CHENG-FEW LEE
Department of Finance and Economics
Rutgers Business School, Rutgers University
Piscataway, NJ 08854, USA
This paper first reviews empirical evidence and estimation methods of structural credit risk models. Next, an empirical investigation of the performance of default prediction under the down-and-out barrier option framework is provided. In the literature review, a brief overview of the structural credit risk models is provided. Empirical investigations in the extant literature are described in some detail, and their results are summarized in terms of the subject and estimation method adopted in each paper. Current estimation methods and their drawbacks are discussed in detail. In our empirical investigation, we adopt the Maximum Likelihood Estimation method proposed by Duan [Mathematical Finance 10 (1994) 461-462]. This method has been shown by Ericsson and Reneby [Journal of Business 78 (2005) 707-735], through simulation experiments, to be superior to the volatility restriction approach commonly adopted in the literature. Our empirical results surprisingly show that the simple Merton model outperforms the Brockman and Turtle [Journal of Financial Economics 67 (2003) 511-529] model in default prediction. The inferior performance of the Brockman and Turtle model may be the result of its unreasonable assumption of the flat barrier.
Keywords: Structural credit risk model; estimation approach; default prediction; Maximum Likelihood Estimation (MLE).
1. Introduction
Structural credit risk modelling, or defaultable claim modelling, was pioneered by the seminal paper of Black and Scholes [1], in which corporate liabilities can be viewed as

Corresponding author.
a covered call: owning the asset but shorting a call option. This approach to modelling default claims is called the structural approach, since the model explicitly ties default risk to the firm value process and its capital structure. Recently, the structural
credit risk modelling literature has grown into an important area of research. This paper first reviews previous empirical studies and current estimation methods of structural credit risk models. While theoretically elegant, structural models generally do not perform well empirically in risky corporate bond pricing. We suggest that prediction of default probabilities or default events could be a potentially important application of structural models.^a The performance of default prediction under the down-and-out barrier option framework of structural modelling is empirically investigated in this paper.
Credit risk models can be divided into two main categories: credit pricing models and portfolio credit value-at-risk (VaR) models.^b Credit pricing models can be subdivided into two main approaches: structural-form models and reduced-form models.^c Portfolio credit VaR models, developed by banks and consultants, aim at measuring the potential loss, with a predetermined confidence interval, that a portfolio of credit exposures could suffer within a specified time horizon. These models typically employ simpler assumptions and place less emphasis on the causes of a single firm's default. Reduced-form models are mainly represented by the Jarrow-Turnbull [14] and Duffie-Singleton [15] models. These models typically assume that exogenous random variables drive defaults and do not condition default on the firm value and other structural features of the firm, such as asset value volatility and leverage. In addition to the models mentioned above, Shumway [16] argues that a good default model should be an econometric model. He contends that many theoretical works address only some aspects of why default occurs, but not all. Thus, he proposes a hazard model that incorporates both theoretical and empirical factors.^d In our empirical study, we limit our empirical analysis of default prediction to single-firm structural models.
Following the seminal work of Black and Scholes [1], Merton [18] rigorously elaborates the pricing of corporate debt. Valuation of any corporate security can be modelled as a contingent claim on the underlying value of the firm. However, the European option approach of Merton [18] ignores the possibility of failure prior to
^a Researchers and practitioners have been forecasting bankruptcy for decades. For example, Kumar and Ravi [2] present a comprehensive survey of research works published during 1968-2005, in which various statistical and intelligent techniques were applied to the bankruptcy prediction problem. Readers can also refer to the survey papers by Altman [3], Zhang et al. [4], and Shin et al. [5], and to the papers by Li et al. [6], Tseng et al. [7], and Zhang et al. [8] for recent developments and applications of intelligent techniques.
^b For a comprehensive analysis of these models, see Crouhy et al. [9], Saunders and Allen [10], Yao et al. [11], and Better et al. [12].
^c See Lando [13] for a review of reduced-form models.
^d There exists an extensive literature on default or bankruptcy prediction. Readers who are interested in this subject can refer to the papers by Shumway [16] and Duffie et al. [17]. In this paper, we have no intention of incorporating previously identified variables, such as the firm's trailing stock return, trailing S&P 500 returns, and U.S. interest rates, into our analysis; rather, we focus on various default boundary assumptions in structural models.
debt maturity, and implicitly models corporate debt and equity as path-independent securities of the underlying asset value process. Researchers, therefore, introduced the default barrier to remedy this deficiency. Black and Cox [19] first propose a model in which default can occur during the life of the bond: a company defaults on its debt if the asset value of the company falls below a certain exogenous default boundary due to bond covenants.
Subsequent researchers extend the structural approach further and incorporate more realistic assumptions. Based on the key objectives of these models, one can classify them as corporate bond pricing models and capital structure models. Corporate bond pricing models, which typically assume a stochastic interest rate, are primarily developed to price defaultable corporate debt. In order to accommodate a more realistic and complex debt structure, these models typically assume an exogenous default barrier, and bondholders receive a recovery rate times the face value of debt at maturity if a firm defaults. The Longstaff and Schwartz [20] model, for example, is the most well-known corporate bond pricing model. By contrast, capital structure models incorporate corporate finance theory and put the emphasis on the default event itself. Valuation of corporate securities and financing decisions are jointly determined, and the analyses focus on the complex relations between credit risk, risk premiums, and the firm's financing decisions, rather than on the valuation of complex credit securities. In contrast to corporate bond pricing models, the simple stationary debt structure assumption (a debt structure with time-independent payouts, as assumed in the Leland [21] model) is widely adopted in order to obtain closed-form expressions for debt, equity, and firm value.^e
Structural modelling has grown into a large literature. In this paper, we give a brief review of the development of structural models, and put more emphasis on the empirical evidence of structural models and the estimation methods currently employed by researchers. The empirical performance in pricing risky debt is generally unsatisfactory (see Refs. 23-27, among others). Even with the incorporation of jump processes, the illiquid corporate bond market still hinders structural models from accurately pricing risky debt. However, predicting the credit quality of a corporate security could be a good application of structural models because they are less affected by microstructure issues. As Leland [28] stated:

"We focus on default probabilities rather than credit spreads because (i) they are not affected by additional factors such as liquidity, tax differences, and recovery rates; and (ii) prediction of the relative likelihood of default is often stated as the objective of bond ratings."

Since recent structural models put a great deal of emphasis upon the event of bankruptcy, prediction of default probabilities or default events could be a potentially important application of structural models.
^e Readers who are interested in other structural models can refer to the survey paper by Elizalde [22].
While the default boundary is widely adopted in the literature, empirical support for the default boundary assumption was not directly tested until Brockman and Turtle [29] provided the first empirical evidence that implied default boundaries are statistically significant for a large cross-section of industrial firms. In addition, Brockman and Turtle [29] also investigate bankruptcy prediction performance under the down-and-out call (DOC) framework using industrial firm data from 1989 to 1998. Their empirical evidence shows that the failure probabilities implied by the DOC framework never underperform the well-known accounting approach, Altman's Z-score. On the other hand, in default forecasting using the Cox proportional hazard model, Bharath and Shumway [30] compare the KMV-Merton default probability with several forecasting variables, such as the naive probability estimate (without implementing the iterated procedure), market equity, and past returns. They find that the KMV-Merton probability is a marginally useful default forecaster, but it is not a sufficient statistic for default. This motivates us to incorporate the default boundary into default prediction to examine whether barrier modelling can improve default forecasting.
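The DOC framework values equity as a down-and-out call on firm assets. As a hedged sketch of the mechanics (the standard reflection formula for a continuously monitored barrier at or below the strike; function names and parameter values are ours, not Brockman and Turtle's implementation):

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, T):
    """Plain Black-Scholes call (no dividends)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def down_and_out_call(S, K, H, r, sigma, T):
    """Down-and-out call for a barrier H <= K, knocked out worthless at H.

    DOC = vanilla - down-and-in, with the down-and-in value obtained
    from the image (reflection) solution.
    """
    if S <= H:
        return 0.0  # already knocked out
    lam = (r + 0.5 * sigma**2) / sigma**2
    y = np.log(H**2 / (S * K)) / (sigma * np.sqrt(T)) + lam * sigma * np.sqrt(T)
    c_di = (S * (H / S)**(2 * lam) * norm.cdf(y)
            - K * np.exp(-r * T) * (H / S)**(2 * lam - 2)
            * norm.cdf(y - sigma * np.sqrt(T)))
    return bs_call(S, K, r, sigma, T) - c_di
```

Since survival requires the asset value to stay above H, the DOC value is below the corresponding Merton (vanilla call) value whenever H > 0, and it shrinks as the barrier rises toward the current asset value.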
However, Brockman and Turtle [29] employ a simple proxy approach to calculate asset values, approximating the market value of corporate assets as the sum of the market value of equity and the book value of liabilities. The proxy approach has been criticized by Wong and Choi [31] because the proxy forces the default barrier to be greater than the future promised payment of liabilities under the down-and-out call option framework. Therefore, we adopt a better estimation methodology, the Maximum Likelihood Estimation (MLE) method proposed by Duan [32] and Duan et al. [33], which views the observed equity time series as a transformed data set of unobserved firm values, with the theoretical equity pricing formula serving as the transformation. This method has been evaluated by Ericsson and Reneby [34] through simulation experiments, and their results show that the efficiency of the MLE is superior to the volatility restriction approach commonly adopted in the literature. Another reason to employ the MLE is that the major data required for this method, in the context of structural models, are common stock prices, which have far fewer microstructure issues than bond prices.
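To make the transformed-data idea concrete, the sketch below estimates asset volatility by MLE in the simplest (Merton) case: each observed equity value is inverted to an implied asset value, and the Gaussian likelihood of the implied asset returns is corrected by the Jacobian of the pricing transformation. This is our own minimal illustration (fixed maturity, a single zero-coupon debt issue), not the implementation used in the paper:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq, minimize_scalar

def merton_equity(V, F, r, sigma, T):
    """Equity as a European call on assets V with debt face value F due at T."""
    d1 = (np.log(V / F) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return V * norm.cdf(d1) - F * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

def implied_asset_value(E, F, r, sigma, T):
    """Invert the pricing formula; V is bracketed by [E, E + F*exp(-rT)]."""
    return brentq(lambda V: merton_equity(V, F, r, sigma, T) - E,
                  max(E, 1e-10), E + F * np.exp(-r * T))

def neg_loglik(sigma, equity, F, r, T, dt):
    """Negated transformed-data log-likelihood, drift concentrated out."""
    V = np.array([implied_asset_value(E, F, r, sigma, T) for E in equity])
    x = np.diff(np.log(V))                        # implied asset log-returns
    ll = norm.logpdf(x, x.mean(), sigma * np.sqrt(dt)).sum()
    d1 = (np.log(V[1:] / F) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return -(ll - np.log(norm.cdf(d1)).sum())     # Jacobian term: dE/dV = N(d1)
```

Given a daily equity series, `minimize_scalar(neg_loglik, bounds=(0.05, 1.0), args=(equity, F, r, T, 1/252), method='bounded').x` returns the volatility estimate; the same construction extends to the barrier model by swapping in the down-and-out pricing formula and its Jacobian.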
Employing the MLE method, we conduct an empirical investigation to compare the default forecasting ability of the Merton model and the Brockman and Turtle model. Our empirical results surprisingly show that the simple Merton model outperforms the Brockman and Turtle model in default prediction, and the difference in predictive ability is statistically significant. The results hold for the in-sample, six-month-ahead, and one-year-ahead out-of-sample tests for two alternative definitions of default: the broad definition of bankruptcy as in Brockman and Turtle [29], and an alternative definition in which bankruptcy corresponds to Chapter 11 filings. In addition, we also find that the inferior performance of the Brockman and Turtle model may be the result of its unreasonable assumption of a flat barrier. Furthermore, these results are preserved in our robustness test when we use risk-neutral default probabilities instead of physical default probabilities.
Our paper contributes to the existing literature in two respects. First, we review and summarize recent empirical results of structural models from different papers, in terms of the subject and estimation method adopted; current estimation methods and their drawbacks are discussed in detail. Second, in contrast to previous research, we adopt the theoretically superior MLE approach and empirically test the default prediction capabilities of the flat-barrier model. One of the advantages of the MLE approach is that it can jointly estimate the asset volatility and the default barrier. The default barrier has long been adopted in structural models, but its validity was not empirically investigated until the research by Brockman and Turtle [29] and Wong and Choi [31].
The paper is organized as follows. Section 2 reviews prior empirical studies of structural models. Section 3 presents the current estimation methods in the literature and the issues with these estimation approaches; a simulation study of the MLE method is also reported. Section 4 reports empirical results in default prediction. Section 5 presents the summary and concluding remarks.
2. Structural Credit Risk Models and Previous Empirical Studies
In Sec. 2.1, we review the evolution of structural credit risk models. In Sec. 2.2, we
review the empirical studies of structural credit risk models. These studies are summarized based on their subjects: bond pricing and yield prediction, CDS premium, equity returns, default prediction, and default probability estimation. At the end of this section, we present in Table 1 a summary of these empirical works in terms of their subject of research, estimation methods, and input data. A detailed description of these estimation approaches, as well as their problems, is given in Sec. 3.
2.1. Overview of structural credit risk models
Black and Scholes [1] first proposed the revolutionary idea that shareholders of the company are call option holders, with the total debt value as the strike price. This striking idea is opposed to the traditional view that shareholders are the owners of a firm. As a result, any call option model can be a structural model, given that the equity value is a call option on the underlying asset value. However, the Black-Scholes model makes the following arguably overly simplified assumptions: (1) it permits only a single debt issue that does not pay any coupon; (2) default can only occur at the maturity of the debt.
Merton [18] extended the Black-Scholes model by incorporating coupon payments. The Merton model assumes that the value of the firm, V, follows the process

dV = (αV − C) dt + σV dw,   (2.1.1)

where C is the dollar payout by the firm per unit time, either to its shareholders or debt holders, via dividends or coupons.
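The payout-adjusted process (2.1.1) is easy to simulate, which is how the simulation experiments cited later in this paper are typically set up. A hedged sketch using an Euler-Maruyama discretization of our own choosing (α and σ are the drift and volatility of the firm-value process):

```python
import numpy as np

def simulate_firm_value(V0, alpha, C, sigma, T, n_steps, n_paths, seed=0):
    """Euler-Maruyama paths of dV = (alpha*V - C) dt + sigma*V dw."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    V = np.full(n_paths, float(V0))
    for _ in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        # absorb at zero to respect limited liability
        V = np.maximum(V + (alpha * V - C) * dt + sigma * V * dw, 0.0)
    return V
```

A convenient sanity check: with σ = 0 the scheme should track the ODE dV = (αV − C)dt, whose exact solution is (V0 − C/α)e^{αT} + C/α, and with C = 0 as well it reduces to deterministic exponential growth V0 e^{αT}.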
Table 1. Summary of previous empirical studies of structural models.

Reference | Subject | Estimation method | Main input data for the estimation
Wei and Guo [23] | Credit spreads | Yield curve approach | Eurodollar and T-bill data
Anderson and Sundaresan [24] | Bond pricing and yield spreads | Asset value proxy | Yield indices of investment-grade bonds
Lyden and Saraniti [54] | Bond pricing and yield spreads | Asset value proxy | Matrix bond prices
Delianedis and Geske [25] | Bond pricing and yield spreads | Volatility restriction | Matrix bond prices
Huang and Huang [26] | Bond pricing and yield spreads | Calibration | Bond prices
Eom et al. [27] | Bond pricing and yield spreads | Asset value proxy with refined volatility estimation | Bond prices
Hsu et al. [40] | Bond pricing | GMM | Exchange-traded bond prices
Ericsson and Reneby [55] | Yield spreads | MLE | Stock prices, bond prices, dividend information
Chen et al. [56] | CDS spreads | Minimization of pricing error and absolute pricing error | CDS transaction data
Ericsson et al. [57] | CDS premia and bond pricing | MLE | Credit default swaps, bond prices
Vassalou and Xing [58] | Equity returns | KMV (simplified) | Equity prices
Brockman and Turtle [29] | Default prediction and default boundary | Asset value proxy | Equity prices
Bharath and Shumway [30] | Default prediction | KMV (simplified) | Equity prices
Chen et al. [60] | Default prediction | Volatility restriction | Equity prices
Wong and Choi [31] | Default barrier | MLE | Equity prices
Davydenko [61] | Default prediction and default boundary | Market values of bond, equity, and bank loan | Bond prices, bank loans, equity prices
Leland [28] | Default probability estimation | Calibration | Moody's corporate bond default data
Tarashev [62] | Default probability estimation | KMV (simplified) | Moody's corporate bond default data
Suo and Wang [63] | Default probability estimation | MLE | Equity and bond prices
Merton derived a partial differential equation that must be satisfied by any security whose value can be expressed as a function of firm value and time:

0 = (1/2)σ²V²F_VV + (rV − C)F_V − rF + F_t + C_y,   (2.1.2)

where F is any of the firm's securities, and C_y is the payout received or payment made by security F.
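Equation (2.1.2) can be checked numerically: with C = C_y = 0 and F taken to be equity (a European call on V), the Black-Scholes value should drive the PDE residual to zero. The finite-difference sketch below is our own; note that F_t is the calendar-time derivative, i.e. minus the derivative with respect to time-to-maturity τ:

```python
import numpy as np
from scipy.stats import norm

def bs_call(V, K, r, sigma, tau):
    """European call with time-to-maturity tau (Merton's equity, C = 0)."""
    d1 = (np.log(V / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return V * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

def pde_residual(V, K, r, sigma, tau, h=1e-3):
    """Finite-difference residual of
       0 = 0.5*sigma^2*V^2*F_VV + r*V*F_V - r*F + F_t   (C = Cy = 0)."""
    F = bs_call(V, K, r, sigma, tau)
    F_V = (bs_call(V + h, K, r, sigma, tau)
           - bs_call(V - h, K, r, sigma, tau)) / (2 * h)
    F_VV = (bs_call(V + h, K, r, sigma, tau) - 2 * F
            + bs_call(V - h, K, r, sigma, tau)) / h**2
    # calendar time runs opposite to time-to-maturity
    F_t = -(bs_call(V, K, r, sigma, tau + h)
            - bs_call(V, K, r, sigma, tau - h)) / (2 * h)
    return 0.5 * sigma**2 * V**2 * F_VV + r * V * F_V - r * F + F_t
```

The residual is zero up to discretization error at any interior point, which is a quick way to validate an implementation of any claimed closed-form solution of (2.1.2).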
Researchers have sought to overcome the second limitation imposed by the Black-Scholes model. In reality, default may happen before the maturity of the debt in many cases. The paper by Black and Cox [19], which assumed a time-dependent default barrier Ke^{−γ(T−t)}, was the first of the so-called first-passage models, which assume continuous monitoring arising from the bond covenants. These safety covenants can prevent the asset value from dropping below some critical level by triggering an early default. This idea parallels the down-and-out barrier call option in the context of option pricing. First-passage models specify default as the first time the firm's asset value hits a lower barrier, therefore allowing default to take place at any time.
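Under geometric Brownian motion, the first-passage (default) probability over a horizon T has a closed form from the reflection principle. A sketch with notation of our own choosing, for a continuously monitored flat barrier H:

```python
import numpy as np
from scipy.stats import norm

def first_passage_prob(V0, H, mu, sigma, T):
    """P(min_{0<=t<=T} V_t <= H) for GBM dV = mu*V dt + sigma*V dw.

    m is the drift of log V; the second term is the reflection correction
    that makes this exceed the terminal probability P(V_T <= H).
    """
    if H >= V0:
        return 1.0                      # already at or below the barrier
    m = mu - 0.5 * sigma**2
    b = np.log(H / V0)                  # negative log-distance to the barrier
    s = sigma * np.sqrt(T)
    return (norm.cdf((b - m * T) / s)
            + np.exp(2.0 * m * b / sigma**2) * norm.cdf((b + m * T) / s))
```

Dropping the second (reflection) term recovers the Merton-style probability of being below H only at maturity, which is why first-passage models always assign weakly higher default probabilities than the path-independent Merton model for the same barrier.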
On the other hand, Geske [35] and Geske and Johnson [36] argued that default cannot occur continuously without cash payment pressure. Thus, companies only face default when they have to make coupon or principal payments to their debt holders. This assumption matches the compound option model Geske [37] proposed, which treats equity as a compound call option. Each time the company faces a coupon payment, shareholders consider whether it is worthwhile to pay the coupon. Hence, the coupon is paid only if the company has positive value to the shareholders; otherwise, the shareholders decide not to pay the coupon and the company defaults. The no-arbitrage default condition is the point at which the company can raise new equity to pay off its debt obligation when it has positive value. As a result, there is an implied default barrier for the asset value, and Geske [35] proved that this barrier is identical to the market value of all debt.
In the line of first-passage models, later studies introduced stochastic interest rate processes and stochastic default thresholds, and the default threshold can be further classified as an exogenous or endogenous default barrier. Considering stochastic interest rate processes, Kim et al. [38] and Longstaff and Schwartz [20] assume an exogenous constant default threshold. Briys and de Varenne [39] make the default threshold stochastic by assuming the default barrier is proportional to the face value of corporate debt discounted at the stochastic interest rate. Hsu et al. [40] extend the Longstaff and Schwartz model and assume the default threshold K_t to be a stochastic process. This allows them to obtain the stochastic process of the log-solvency ratio log(V_t/K_t), and they price the risky bond through this ratio alone. In addition, they assume that at default the holder of a corporate coupon bond receives a fraction of otherwise-equivalent Treasury bonds. This enables them to price the corporate bond independently of the firm's other liabilities and thus makes a detailed description of the firm's capital structure unnecessary.
Alternatively, Leland [21] and Leland and Toft [41] assume non-stochastic interest rates but endogenously determine the default barrier, as the result of a stationary debt structure and the shareholders' choice of the default threshold that maximizes the value of the firm. Leland [21] assumes the case of perpetual debt, where cash outflows for paying continuous coupons are financed by raising equity, while Leland and Toft [41] assume that debt is continuously rolled over. Because of the time-independent stationary debt structure, these two studies are able to assume an ex ante flat default boundary. The authors can then reduce the general PDE to an ODE for all the factors that affect the optimal capital structure decision, and obtain closed-form solutions for corporate risky debt. The endogenously determined flat barrier V_B is then derived under the smooth-pasting condition

dE/dV |_{V=V_B} = 0.
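For the Leland [21] case the smooth-pasting condition yields a closed-form barrier, V_B = (1 − τ)C/(r + σ²/2), which can be verified numerically against the equity value. The sketch below uses Leland's perpetual-debt equity formula as we recall it (τ is the corporate tax rate; the parameter values are purely illustrative):

```python
import numpy as np

def leland_equity(V, C, r, sigma, tax, VB):
    """Leland (1994) equity with perpetual coupon C and bankruptcy level VB."""
    X = 2.0 * r / sigma**2
    pB = (V / VB) ** (-X)          # risk-neutral price of $1 paid at default
    return V - (1.0 - tax) * C / r + ((1.0 - tax) * C / r - VB) * pB

def leland_barrier(C, r, sigma, tax):
    """Endogenous barrier from smooth pasting: dE/dV = 0 at V = VB."""
    return (1.0 - tax) * C / (r + 0.5 * sigma**2)
```

At V_B both value matching (E(V_B) = 0) and smooth pasting (E'(V_B) = 0) hold, which is what pins the barrier down uniquely; a central-difference derivative of `leland_equity` at `leland_barrier(...)` confirms this to numerical precision.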
Later on, Leland [42] developed a model in which capital structure and investment risk are jointly determined. Agency costs, resulting from asset substitution, restrict leverage and debt maturity and increase yield spreads, but their importance is small. Huang et al. [43] consider the Leland and Toft [41] debt structure in the Longstaff and Schwartz [20] setting, and argue that the current level of the interest rate is crucial in pricing risky bonds, while the long-run mean of the interest rate process is the key to the determination of a firm's optimal capital structure. Along the line of endogenous default models, some researchers further consider strategic default, where equity holders may act strategically, forcing concessions from debt holders and paying less than the originally contracted interest payments. The models of Anderson and Sundaresan [72] and Mella-Barral and Perraudin [45] are examples.
One drawback of Merton's model and some of the first-passage models is the predictability of default. Structural models consider continuous diffusion processes for the firm's asset value and complete information about the asset value and default threshold. Hence, the actual distance from the asset value to the default threshold describes the closeness to default, which makes default a predictable event. In other words, default does not come as a surprise. Therefore, if at a given time point the asset value is far away from default, the probability of default in the short term will be close to zero, since the asset value follows a continuous diffusion and needs time to reach the default point. This contradicts observed short-term credit spreads, which are bounded away from zero. The same characteristic also implies the predictability of recovery.
Elizalde [22] summarized two ways around the predictability effects in the literature. The first line of research addressing this issue considers incomplete information about the firm value process and/or the default threshold. Investors can only infer a distribution function for these processes, which makes defaults impossible to predict. This literature includes Duffie and Lando [77], Giesecke [46], and Jarrow and Protter [47]. The second way is to add jumps to the asset value process, which implies that the asset value of the firm can suddenly drop drastically and cause a default. Hence, default is no longer a predictable event, and the credit spread increases in the short term. Another characteristic of jump models is that they convert the recovery payment at default into a random variable, owing to the fact that the firm value can drop suddenly below the default threshold. Zhou [48] and Hilberink and Rogers [49] overcome the perfectly predicted default in this way.
Finally, Tauren [50] and Collin-Dufresne and Goldstein [51] argued that, in reality, the dollar amount of a firm's liabilities does not remain constant; a constant amount would imply an unreasonable waste of the firm's debt capacity as the firm grows in value. Therefore, they proposed alternative models to reflect firms' tendency to maintain a stationary leverage ratio. The ratio V/K is modelled as mean-reverting, and the firm defaults when V/K falls to a dangerously low level. In contrast, Ju and Ou-Yang [52] assumed that a firm adjusts its capital structure periodically and that bankruptcy occurs when the firm's unlevered asset value falls below the present value of the principal adjusted for the convenience yield. The optimal capital structure and an optimal maturity are jointly determined in their model under the Vasicek [53] interest rate process.
2.2. Previous empirical studies of the structural credit risk models
2.2.1. Bond pricing, yield spreads, and CDS premium
Wei and Guo [23] make an empirical comparison of the Merton [18] and Longstaff and Schwartz [20] models, using Eurodollars as risky debt and U.S. Treasury bills as risk-free debt. The data are weekly (Thursday observations, or Friday ones if Thursday's are not available) for the year 1992; on each given date, they have 33 Treasury observations and five Eurodollar observations. The striking result shows that the Merton [18] model performs better than the much more complicated Longstaff and Schwartz [20] model. They conclude that modelling the covariance between the recovery rate and the default probability is crucial, and this covariance term is zero in the Longstaff and Schwartz model since the recovery rate of risky debt is an exogenously specified constant. They also show that the Merton model is not nested in the Longstaff and Schwartz model, and is more general in terms of the recovery rate.
Using the asset value proxy approach and aggregate time series data for the U.S. corporate bond market, Anderson and Sundaresan [24] find that the performance of endogenous default barrier models is superior to the original Merton [18] model. In contrast, Lyden and Saraniti [54] use the asset value proxy approach and the noncallable bond prices of 56 individual firms to compare the Merton [18] and Longstaff and Schwartz [20] models, and they find that both models underestimate yield spreads.
Delianedis and Geske [25] study the proportion of the credit spread that is explained by default risk under a modified Merton [18] framework. They use a corporate bond dataset for the period of November 1991 to December 1998 and employ the volatility restriction approach. Their empirical results show that default risk explains only a small fraction of credit spreads, with the rest attributable to taxes, jumps, liquidity, and market risk factors. They also include jump components in the Merton model and find that jumps may explain a portion of the residual spread, but are unlikely to explain it entirely.
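The volatility restriction approach mentioned above solves two equations in the two unknowns (V, σ_V): the pricing equation E = f(V, σ_V) and the Itô relation σ_E E = N(d1)σ_V V. A minimal sketch for the Merton case (function names are ours, and a generic root-finder stands in for the closed-form manipulations used in practice):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import fsolve

def merton_equity_and_vol(V, sigma_V, F, r, T):
    """Model-implied equity value and equity volatility for given (V, sigma_V)."""
    d1 = (np.log(V / F) + (r + 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
    d2 = d1 - sigma_V * np.sqrt(T)
    E = V * norm.cdf(d1) - F * np.exp(-r * T) * norm.cdf(d2)
    sigma_E = norm.cdf(d1) * sigma_V * V / E   # Ito: sigma_E*E = N(d1)*sigma_V*V
    return E, sigma_E

def volatility_restriction(E_obs, sigma_E_obs, F, r, T):
    """Solve E(V, sigma_V) = E_obs and sigma_E(V, sigma_V) = sigma_E_obs."""
    def eqs(x):
        V, sV = x
        E, sE = merton_equity_and_vol(V, sV, F, r, T)
        return [E - E_obs, sE - sigma_E_obs]
    V0 = E_obs + F * np.exp(-r * T)       # proxy value as a starting point
    sV0 = sigma_E_obs * E_obs / V0        # de-levered volatility guess
    return fsolve(eqs, [V0, sV0])         # (asset value, asset volatility)
```

The known weakness, discussed in Sec. 3, is that the second equation holds only instantaneously, so pairing a point-in-time equity value with a historical equity volatility estimate mixes quantities measured under different leverage levels.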
Huang and Huang [26] use several structural models to predict yield spreads, including the Longstaff and Schwartz [20] model, the strategic model, the endogenous-default model, and the stationary leverage model, as well as two models they propose: one with a time-varying asset risk premium and another with a jump-diffusion firm value process. They calibrate the inputs, including asset volatility, for each model so that target variables including leverage, the equity premium, the recovery rate, and the cumulative default probability at a single time horizon are matched. They show that the models make quite similar predictions on yield spreads. In addition, the observed yield spreads relative to Treasury bonds are considerably greater than the predicted spreads, especially for highly rated debt. Hence, they conclude that additional factors such as liquidity and taxes must be important in explaining market yield spreads.
Later on, Eom et al. [27] carry out an empirical analysis of five structural models: Merton [18], Geske [35], Longstaff and Schwartz [20], Leland and Toft [41], and Collin-Dufresne and Goldstein [51]. They test these models using bond data of firms with simple capital structures on the last trading day of each December from 1986 to 1997. They calibrate these models using the book value of total liabilities on the balance sheet as the default boundary, and calculate the corresponding asset value as the sum of the market value of equity and the book value of total debt. They then estimate the asset return volatility using the bond-implied volatility, as well as six equity return volatilities measured over different time horizons before and after the bond price observation. In contrast to previous studies, which have suggested that structural models generally predict yield spreads that are too low, the results of Eom et al. [27] show more complicated phenomena, although all of these models make significant errors in predicting the credit spread. The Merton [18] and Geske [35] models underestimate the spreads, while the Longstaff and Schwartz [20], Leland and Toft [41], and Collin-Dufresne and Goldstein [51] models overestimate the spreads.
Employing the Generalized Method of Moments (GMM), Hsu et al. [40] estimate their model using panel data of bond prices from nine NYSE-listed and traded firms. The firms in this sample are selected to satisfy criteria such as the availability of traded prices (rather than soft quotes or matrix-inferred prices) and liquidity requirements based on trading volume and daily price movement. The empirical results indicate that their model produces pricing errors only of the same size as the bid-ask spreads of the bonds. Also, in contrast to the results of Eom et al. [27], they find no difficulty pricing bonds of firms with low leverage ratios, low asset volatilities, or low durations. Therefore, they conclude that their model, combined with the GMM estimation method, produces low pricing errors and does not suffer from the pricing biases observed in prior empirical studies of existing structural models.
Using the MLE approach, Ericsson and Reneby [55] estimate yield spreads between 1 and 50 months out-of-sample with an extended version of the Leland [21] model, which allows for violation of the absolute priority rule and future debt issues. In addition to the stock price series, they also include bond price and dividend information in estimating their model. The bond sample consists of 141 U.S. corporate issues and a total of 5594 dealer quotes. Their empirical results show that, for the one-month-ahead prediction, the mean error is merely 2 basis points. This is similar to the fitting error of the reduced-form models in Ref. 76. Therefore, they conclude that the inferior performance of structural models may result from the estimation approaches used in the existing empirical studies.
Chen et al. [56] argue that many prior empirical studies, such as the one by Eom et al. [27], did not use nested models; hence, the performance differences cannot easily be attributed to any particular risk. In addition, the credit default swap (CDS) transaction data they use are superior to the interpolated data used by Wei and Guo [23] and the matrix data used by Eom et al. [27]. Moreover, unlike bond prices, CDSs are commonly thought to be less influenced by non-default factors. They test nested structural models for CDS spreads with 3496 trade observations from February 2, 2000 to April 8, 2003. Their empirical results indicate that random interest rates and random recovery are both important assumptions, while continuous default is not. Their result is also consistent with the finding of Wei and Guo [23] that the Merton model can outperform the complex Longstaff-Schwartz model.
Ericsson et al. [57] use the MLE to perform an empirical test on both CDS spreads and bond spreads, covering three structural models: Leland [21], Leland and Toft [41], and Fan and Sundaresan [77]. In contrast to previous evidence from corporate bond data, CDS premia are not systematically underestimated. Also, as expected, bond spreads are systematically underestimated, which is consistent with the fact that they are driven by significant non-default factors. In addition, they conduct regression analyses of the residuals against default and non-default proxies. Little evidence is found for any default risk component in either CDS or bond residuals, while strong evidence of non-default factors, in particular an illiquidity premium, is related to the bond residuals. They conclude that structural models are able to capture the price of credit risk in the markets but fail to price corporate bonds adequately due to omitted risks.
2.2.2. Equity returns
Vassalou and Xing [58] use the Merton [18] model to compute default measures for individual firms and assess the effects of default risk on equity returns. They adopt the KMV method to estimate the unobserved asset value and asset volatility. Their results indicate that the Fama-French factors SMB (size effect) and HML (book-to-market effect) contain some default-related information, but this is not the main reason why the Fama-French model can explain the cross-section of equity returns. The size and book-to-market effects both exist in high default risk segments of the market.
2.2.3. Default prediction
Brockman and Turtle [29] investigate the bankruptcy prediction performance under the down-and-out call (DOC) framework using a large cross-section of industrial firms for the period 1989-1998. They use the proxy approach, measuring
the market value of a firm's assets as the book value of assets less the book value of shareholders' equity, plus the market value of equity as reported in Compustat. The asset volatility is measured as the square root of four times the quarterly variance measure, where the quarterly variance measure is computed from quarterly percentage changes in asset values for each firm in the sample with at least 10 years of data. The promised debt payment is measured by all non-equity liabilities, computed as the total value of assets less the book value of shareholders' equity. Finally, the life span of each firm is set to be 10 years, and they argue, based on a robustness test, that barrier estimates are not particularly sensitive to the lifespan assumption.
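The variable construction described above can be sketched as follows. This is an illustration only, assuming hypothetical Compustat items for a single firm; the function and argument names are ours, not Brockman and Turtle's.

```python
import numpy as np

def bt_proxy_inputs(book_assets, book_equity, market_equity, quarterly_asset_values):
    """Sketch of the proxy variable construction described in the text.
    All inputs are hypothetical Compustat items for one firm."""
    # Market value of assets: book assets less book equity, plus market equity.
    asset_value = book_assets - book_equity + market_equity
    # Promised debt payment: all non-equity liabilities.
    debt = book_assets - book_equity
    # Quarterly percentage changes in the (proxy) asset value series.
    q = np.asarray(quarterly_asset_values, dtype=float)
    q_returns = np.diff(q) / q[:-1]
    # Annualized volatility: square root of four times the quarterly variance.
    asset_vol = np.sqrt(4.0 * np.var(q_returns, ddof=1))
    return asset_value, debt, asset_vol
```

The volatility annualization simply scales the quarterly variance by the four quarters per year before taking the square root.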
The empirical evidence shows that the failure probabilities implied by the DOC framework never underperform the well-known accounting approach, Altman's Z-score. Specifically, in logistic regressions including one or both of the implied failure probability and the Z-score, the DOC approach dominates the Z-score in predicting corporate failure percentages in the one-, three-, and five-year tests as well as in their size- or book-to-market-categorized tests. In addition, in the quintile-based test, the failure probability of the DOC framework also stratifies failure risk across firms and years much more effectively than the corresponding Z-score. We should note that another empirical finding by Brockman and Turtle [29] is that implied default barriers are statistically significant for a large cross-section of industrial firms. However, Wong and Choi [31] argue that it is the proxy approach of Brockman and Turtle [29] that leads to barrier levels above the value of corporate liabilities. Hence, they adopt the transformed-data MLE approach and find that default barriers are positive but not very significant in an empirical study of a large sample of industrial firms during 1993 to 2002.
Bharath and Shumway [30] examine the default predictive ability of the Merton distance to default (DD) model by studying all non-financial firms for the period 1980-2003. The method they use to estimate the expected default frequency (EDF) is the same as the iterated procedure employed by Vassalou and Xing [58]. They compare the Merton DD probability with several variables, namely the naive probability estimate (without implementing the iterated procedure), market equity, and past returns, and find that the Merton DD model does not produce sufficient statistics for the probability of default. Implied default probabilities from the CDSs and corporate bond yield spreads are only weakly correlated with the Merton DD probabilities after adjusting for agency ratings, bond characteristics, and their alternative predictors. Moreover, they find that the naive probability they propose, which captures both the functional form and the same basic inputs of the Merton DD probability, performs slightly better as a predictor in hazard models and in out-of-sample forecasts. They conclude that the Merton DD probability is a marginally useful default forecaster, but it is not a sufficient statistic for default.f

f Campbell et al. [59] also show similar results, that failure risk cannot be adequately summarized by a distance-to-default measure from the KMV-Merton model.
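A minimal sketch of such a naive probability is given below. It reproduces the functional form of the Merton DD probability with simple proxies in place of the iterated estimates; the specific debt-volatility rule (0.05 + 0.25 times equity volatility) is our recollection of the Bharath-Shumway specification and should be treated as an assumption, not their exact formula.

```python
from math import log, sqrt
from statistics import NormalDist

def naive_default_prob(E, F, sigma_E, past_return, T=1.0):
    """Naive probability in the spirit of Bharath and Shumway: same functional
    form and basic inputs as the Merton DD probability, but no iteration.
    E: market equity; F: face value of debt; sigma_E: equity volatility;
    past_return: trailing equity return used as the drift proxy.
    The 0.05 + 0.25*sigma_E debt-volatility rule is an assumption here."""
    # Naive asset volatility: value-weighted equity and debt volatilities.
    sigma_V = (E / (E + F)) * sigma_E + (F / (E + F)) * (0.05 + 0.25 * sigma_E)
    # Naive distance to default over horizon T.
    dd = (log((E + F) / F) + (past_return - 0.5 * sigma_V ** 2) * T) / (sigma_V * sqrt(T))
    return NormalDist().cdf(-dd)
```

As expected of any DD-style measure, the probability rises with leverage: a firm with equal equity and debt scores a higher naive probability than a lightly levered one, all else equal.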
Recently, Chen et al. [60] use the volatility restriction method to test five structural models, including the models of Merton, Brockman and Turtle, Black-Cox, Geske (two periods), and Longstaff-Schwartz, as well as a proposed non-parametric model. The default companies in the study are those that filed Chapter 11 between January 1985 and December 2002 with assets greater than $50 million. Their results indicate that the distributional characteristics of equity returns and endogenous recovery are two important assumptions. On the other hand, random interest rates, which play an important role in pricing credit derivatives, are not an important assumption in predicting default.
Lastly, Davydenko [61] uses a unique sample of risky firms with observed market values of equity, bonds, and bank debt to investigate whether default is associated with insufficient cash reserves relative to required payments or with low market values of assets relative to debt level. Davydenko estimates the market value of a firm's assets as the sum of the market values of bonds, bank debt, and equity. Estimates of the market value of firms' public debts are from monthly quotes from Merrill Lynch bond trading desks, for bonds included in the Merrill Lynch U.S. High Yield Master II Index (MLI) between December 1996 and March 2004. Estimates of bank loan prices are based on quotes provided by the LSTA/LPC Mark-to-Market Pricing service. In default prediction, his empirical results suggest that the simple boundary specified in terms of the face value of debt performs at least as well as more complex alternatives, such as the Leland and Toft [41] or the KMV boundary. In addition, predictions based solely on liquidity measures, the flow measures in cash flow-based models such as interest coverage and the quick ratio, are significantly less accurate than those based on asset values. However, his empirical observations indicate that liquidity shortages can precipitate default even for firms with high asset values when they are restricted from accessing external financing. Therefore, even though boundary-based default predictions can match observed average default frequencies, they misclassify a large number of firms in the cross-section.
2.2.4. Default probability estimation
Leland [28] examines the default probabilities predicted by the Longstaff and Schwartz [20] model with an exogenous default boundary, and the Leland and Toft [41] model with an endogenous default boundary. Leland uses Moody's corporate bond default data from 1970 to 2000 in his study and follows a calibration approach similar to Huang and Huang [26]. Rather than matching the observed default frequencies, Leland instead chooses common inputs across models to observe how well they match observed default statistics. The empirical results show that when costs and recovery rates are matched, the exogenous and endogenous default boundary models fit observed default frequencies equally well. The models predict longer-term default frequencies quite accurately, while shorter-term default frequencies tend to be underestimated. Thus, he suggests that a jump component should be included in asset value dynamics.
Tarashev [62] analyzes the intertemporal evolution of PDs (probabilities of default) from five structural models, including those of Leland and Toft [41], Anderson et al. [44], Longstaff and Schwartz [20], Collin-Dufresne and Goldstein [51], and Huang and Huang [26]. Tarashev uses firm-level data from Moody's for the period 1990:Q1 to 2003:Q2, and focuses on firms with BBB, BB, or B ratings. The iterated procedure by KMV is used to estimate asset volatility. The sample data are of quarterly frequency, and the average sample sizes are 78 BBB-rated, 80 BB-rated, and 67 B-rated firms. Tarashev finds that the PDs implied by the models tend to match the level of actual default rates, and the models explain a substantial portion of the variability of default rates over time. However, the models fail to fully reflect the dependence of default rates on the macroeconomic cycle. Model-based forecasts of default rates can be substantially improved by the introduction of macroeconomic variables.
Suo and Wang [63] study the empirical performance of structural models by examining the default probabilities calculated from the models for different time horizons. Following Eom et al. [27], the sample covers only firms having a single bond outstanding at the time when bond prices are observed. The models studied are Merton, Merton with stochastic interest rate, Longstaff and Schwartz [20], Leland and Toft [41], and Collin-Dufresne and Goldstein [51]. Only non-financial firms are included, and the sampling period is from January 1989 to December 2004. The sample covers a total of 55 single bonds issued by 55 firms and 6,787 weekly observations. The two-stage MLE of Duan and Simonato [64] as well as the proxy approach are used for the estimation of the different models. They find that the default probabilities for investment-grade firms predicted by Merton's model are too low, and that a stochastic interest rate can improve the performance of Merton's model. Both the Longstaff and Schwartz [20] and the Leland and Toft [41] models predict default probabilities reasonably well. However, the Collin-Dufresne and Goldstein [51] model predicts unreasonably high default probabilities for longer time horizons, and they attribute this result to its mean-reverting leverage feature.
3. Current Estimation Methods of the Structural Credit Risk Models

In Sec. 3.1, we first describe in detail the estimation procedures that have been used in the literature, and then we summarize in Sec. 3.2 the problems of these estimation approaches. In Sec. 3.3, we report our results of Monte Carlo experiments with the MLE method.
3.1. Current estimation methods
Traditionally, structural credit risk models are estimated by the volatility restriction approach or an even simpler approach such as the proxy approach. However, these two approaches and their variants lack a statistical basis, and the empirical results they produce are less convincing. Thus, a newer estimation method such as the
MLE has been introduced into empirical research on structural models. In this section, we summarize the prevailing estimation methods of structural models.
3.1.1. Maximum likelihood estimation method
Duan [32] develops a transformed-data MLE approach to estimate continuous-time models with unobservable variables using derivative prices. The obvious advantages are that (1) the resulting estimators are known to be statistically efficient in large samples; and (2) the sampling distribution is readily available for computing confidence intervals or for testing hypotheses. In the context of structural credit risk models, equity prices are derivatives of the underlying asset value process and are readily available in large samples. In this section, we first briefly summarize the transformed-data MLE approach proposed by Duan [32], and then turn to the implementation of this method in structural credit risk models.
Let X be an n-dimensional vector of unobserved variates. Assume that its density function, f(x; θ), exists and is continuously twice differentiable in both arguments. A vector of observed random variates, Y, results from a data transformation of the unobservable vector X. This transformation from R^n to R^n is a function of the unknown parameter θ ∈ Θ, and is one-to-one for every θ ∈ Θ, where Θ is an open subset of R^k.

Denote this transformation by T(·; θ), where T(·; θ) is continuously twice differentiable in both arguments. Accordingly, Y = T(X; θ) and X = T^{-1}(Y; θ). The log-likelihood function of the observed data Y is L(Y; θ). By a change of variable, the log-likelihood function for the transformed data Y can be expressed through the log-likelihood function of the unobserved random vector X, denoted L_X(·; θ), and the Jacobian, J, of the inverse transformation:

    L(Y; θ) = L_X(T^{-1}(Y; θ); θ) + ln |J(T^{-1}(Y; θ))|    (3.1)

Due to the difficulty of explicitly deriving the inverse transformation, Duan [32] further states that it is better to avoid direct computation of the Jacobian of the inverse transformation and instead employ an analytical simplification, which makes it easy to use numerical optimization routines for solving the likelihood maximization problem.
Theorem 3.1.

    L(Y; θ) = L_X(T^{-1}(Y; θ); θ) + ln |det([D_X T(X; θ)|_{X = T^{-1}(Y; θ)}]^{-1})|    (3.2)

where D_X denotes the n × n first partial derivative matrix with respect to the first argument of T(·; θ).

Theorem 3.2. If the transformation from X to Y is on an element-by-element basis, i.e. y_i = T_i(x_i; θ) for all i, then

    L(Y; θ) = L_X(x̂_i(θ), i = 1, . . . , n; θ) − Σ_{i=1}^{n} ln |dT_i(x̂_i(θ); θ)/dx_i|    (3.3)

where x̂_i(θ) = T_i^{-1}(y_i; θ).
In a typical financial application, T(X; θ) is a pricing function for the derivative contract. This pricing function defines the one-to-one mapping needed for the application of Theorem 3.2. A closed-form pricing function is sometimes available, for example, in the Merton model. However, in other, more complicated modelling contexts, a closed-form pricing function may not be available. The value of a contingent claim may then be solved for by numerical computation, for example, by finite difference methods. In sum, the log-likelihood function based on the observed variates of the derivative contract can be numerically assessed, regardless of whether or not a closed-form pricing function exists.

3.1.1.1. Implementation of the transformed-data MLE in the context of structural credit risk models (Duan et al. [33])
Step 1: Assign initial values of the parameters θ̂^(0), and compute the implied asset value time series by V̂_ih(θ̂^(0)) = T^{-1}(S_ih; θ̂^(0)), where h is the length of the time period and θ̂^(m) denotes the mth iteration. Let m = 1.

Step 2: Compute the log-likelihood function

    L(S; θ̂^(m)) = L_V(V̂_ih(θ̂^(m)), i = 1, . . . , n; θ̂^(m)) − Σ_{i=1}^{n} ln |dT(V̂_ih(θ̂^(m)); θ̂^(m))/dV_ih|    (3.4)

given in Theorem 3.2 to obtain the estimated parameters θ̂^(m).

Step 3: Compute the implied asset value time series by V̂_ih(θ̂^(m)) = T^{-1}(S_ih; θ̂^(m)), and let m = m + 1; go back to Step 2 until the maximization criterion is met.

Step 4: Use the MLE θ̂ to compute the implied asset value V̂_nh and the corresponding default probability.
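The steps above can be sketched for the Merton model, where the pricing function T(V; θ) is the equity-as-call formula and the Jacobian term is the equity delta N(d1). This is a minimal illustration under simplifying assumptions (a fixed remaining debt maturity for every observation date); the function names and the Newton-based inversion are ours.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def merton_equity(V, K, r, sigma, T):
    """Equity as a call on firm assets: the pricing function S = T(V; theta)."""
    d1 = (np.log(V / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return V * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

def implied_assets(S, K, r, sigma, T):
    """Invert the pricing function by Newton's method; dS/dV = N(d1)."""
    V = S + K * np.exp(-r * T)  # start above the root; equity is convex in V
    for _ in range(60):
        d1 = (np.log(V / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        V = V - (merton_equity(V, K, r, sigma, T) - S) / norm.cdf(d1)
    return V

def neg_log_likelihood(params, S, K, r, T, h):
    """Eq. (3.4): Gaussian likelihood of the implied log asset returns plus
    the Jacobian correction for observing equity rather than assets."""
    mu, sigma = params
    if sigma <= 1e-4:
        return np.inf
    V = implied_assets(S, K, r, sigma, T)
    R = np.diff(np.log(V))
    ll = np.sum(norm.logpdf(R, (mu - 0.5 * sigma**2) * h, sigma * np.sqrt(h)))
    ll -= np.sum(np.log(V[1:]))  # lognormal density of asset levels
    d1 = (np.log(V[1:] / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    ll -= np.sum(np.log(norm.cdf(d1)))  # Jacobian term dT/dV = N(d1)
    return -ll
```

Maximizing the likelihood, e.g. with `minimize(neg_log_likelihood, [0.05, 0.2], args=(S, K, r, T, h), method="Nelder-Mead")`, recovers both the asset drift and volatility, which is precisely what the volatility restriction method cannot do.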
Duan and Simonato [64] further develop an MLE method for two unobserved variables, namely, the firm value V_t and the instantaneous interest rate r_t. In this case, θ also contains the parameters of the interest rate process and its correlation with the firm value process. Thus, one needs to modify the log-likelihood function in Step 2 to incorporate this change.
3.1.2. Volatility restriction method
The volatility restriction method, or the JMR-RV approach, is derived from Itô's Lemma. This approach was employed by Jones et al. [78] to conduct an empirical study of Merton's [18] risky bond pricing model, and later by Ronn and Verma [80] to implement the deposit insurance model of Merton [79]. Duan et al. [33] refer to this approach as the JMR-RV estimation method. It uses some observed quantities and the corresponding restrictions derived from the theoretical model to extract point estimates of the unobserved asset value and the asset volatility parameter.g
In the context of structural credit risk modelling, equity value is an option on the asset value V_t. For example, under Merton's model,

    S_t = V_t N(d_t) − K e^{−rT} N(d_t − σ_v √T)    (3.5)

Also, by Itô's Lemma,

    σ_S = (∂S_t/∂V_t)(V_t/S_t) σ_v    (3.6)

Since S_t = g(V_t; σ_v) is a one-to-one function of V_t, the inverse exists. One can then first estimate the equity volatility σ_S using historical data, and the two remaining unknown variables (the asset value V_t and the asset volatility σ_v) can be solved from the two-equation system, Eqs. (3.5) and (3.6).
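The two-equation system can be made concrete with a short numerical sketch. The following is an illustration only, with hypothetical inputs; the function names are ours, and the equity delta ∂S/∂V = N(d1) is the Merton-model expression.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def merton_equity_and_delta(V, K, r, sigma_v, T):
    """Merton equity value S (Eq. 3.5) and its delta dS/dV = N(d1)."""
    d1 = (np.log(V / K) + (r + 0.5 * sigma_v**2) * T) / (sigma_v * np.sqrt(T))
    S = V * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma_v * np.sqrt(T))
    return S, norm.cdf(d1)

def solve_volatility_restriction(S_obs, sigma_S, K, r, T):
    """Solve Eqs. (3.5)-(3.6) for the unobserved (V, sigma_v), given the
    observed equity value and a historical equity volatility estimate."""
    def equations(x):
        V, sigma_v = x
        if V <= 0.0 or sigma_v <= 0.0:  # keep the solver in the valid region
            return [1e6, 1e6]
        S, delta = merton_equity_and_delta(V, K, r, sigma_v, T)
        return [S - S_obs,                                # Eq. (3.5)
                delta * (V / S_obs) * sigma_v - sigma_S]  # Eq. (3.6)
    x0 = [S_obs + K * np.exp(-r * T), sigma_S * S_obs / (S_obs + K)]
    return fsolve(equations, x0)
```

Note that the system delivers only point estimates of V_t and σ_v; as discussed in Sec. 3.2.1, it yields no sampling distribution and no estimate of the asset drift.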
3.1.3. The KMV estimation method
The KMV method is a simple two-step iterative algorithm which begins with an arbitrary value of the asset volatility and repeats the two steps until the convergence criterion is reached. The default barrier of the KMV method is assumed to be the sum of short-term liabilities plus one-half of long-term liabilities, i.e. K = D_ST + (1/2) D_LT. It appears that KMV considers a liability to be short-term if it is due within the horizon over which the default probability is computed. The following are the two steps going from the mth to the (m+1)th iteration, as described by Duan et al. [33]
Step 1: Compute the implied asset value time series {V̂_0(θ^(m)), V̂_h(θ^(m)), V̂_2h(θ^(m)), . . . , V̂_nh(θ^(m))} corresponding to the observed equity value data set {S_0, S_h, S_2h, . . . , S_nh}, where V̂_ih(θ^(m)) = g^{-1}(S_ih; θ^(m)).
Step 2: Compute the implied asset returns {R̂_1^(m), R̂_2^(m), . . . , R̂_n^(m)}, where R̂_i^(m) = ln(V̂_ih(θ^(m))/V̂_(i−1)h(θ^(m))), and update the asset drift and volatility parameters as follows:

    R̄^(m) = (1/n) Σ_{k=1}^{n} R̂_k^(m)

    (σ̂^(m+1))² = (1/(nh)) Σ_{k=1}^{n} (R̂_k^(m) − R̄^(m))²

    μ̂^(m+1) = (1/h) R̄^(m) + (1/2)(σ̂^(m+1))²
g A three-equation extension of this approach was used in Duan et al. [81] to implement their deposit insurance model with stochastic interest rate, where the third equation related the equity duration to the asset duration.
We should note that this procedure is only the first part of the approach described in Crosbie and Bohn [65]. Therefore, some differences should be mentioned: (1) KMV estimates volatility through the first part, and then uses Bayesian adjustments for the country, industry, and size of the firm to obtain the final estimate; (2) a model slightly more general than that of Merton [18] is used by KMV; and (3) Crosbie and Bohn [65] provide no description of how to estimate the drift parameter. Thus, Duan et al. [33] follow Vassalou and Xing [58] to compute the drift parameter estimate using a sample average of the implied asset returns.
Duan et al. [33] show that the KMV method produces a point estimate identical to the transformed-data maximum likelihood estimate under the setting of the Merton [18] model. The theoretical argument is based on a statistical tool known as the Expectation-Maximization (EM) algorithm, which is essentially an alternative way of obtaining the maximum likelihood estimate for an incomplete-data model that contains some random variables without corresponding observations.
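The two-step iteration of Sec. 3.1.3 can be sketched as follows under the Merton model. This is an illustration with hypothetical inputs, not KMV's production implementation (which, as noted above, also applies Bayesian adjustments and a more general model); the Newton inversion of the equity formula and the function names are ours.

```python
import numpy as np
from scipy.stats import norm

def merton_equity(V, K, r, sigma, T):
    d1 = (np.log(V / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return V * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

def implied_assets(S, K, r, sigma, T):
    """g^{-1}(S; theta): Newton inversion of the Merton equity formula."""
    V = S + K * np.exp(-r * T)
    for _ in range(60):
        d1 = (np.log(V / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        V = V - (merton_equity(V, K, r, sigma, T) - S) / norm.cdf(d1)
    return V

def kmv_estimate(S, D_short, D_long, r, T, h, tol=1e-8, max_iter=500):
    """Two-step KMV iteration with default point K = D_ST + 0.5 * D_LT."""
    K = D_short + 0.5 * D_long
    sigma = np.std(np.diff(np.log(S))) / np.sqrt(h)  # arbitrary starting value
    for _ in range(max_iter):
        # Step 1: implied asset values from the observed equity values.
        V = implied_assets(S, K, r, sigma, T)
        # Step 2: implied asset returns, then updated drift and volatility.
        R = np.diff(np.log(V))
        new_sigma = np.sqrt(np.sum((R - R.mean())**2) / (len(R) * h))
        mu = R.mean() / h + 0.5 * new_sigma**2
        if abs(new_sigma - sigma) < tol:
            break
        sigma = new_sigma
    return V, mu, new_sigma
```

Consistent with the EM interpretation above, the iteration converges to the same point estimate as the transformed-data MLE under the Merton model, but it delivers no sampling error for statistical inference.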
3.1.4. Other approaches
In this section, we briefly discuss approaches adopted in recent empirical structural model studies other than those presented in the first three sections. To reduce the computational complexity of the estimation problem, parameters of structural credit risk models are often calibrated, rather than estimated, from actual price data. The calibrated model is then used to fit the price data and to determine the model's performance. For example, Huang and Huang [26] use four target variables, including observed default probabilities, leverage ratios, recoveries given default, and equity premiums, to match four parameters: asset value, market price of risk, asset volatility, and recovery rate. Leland [28] follows a calibration approach similar to Huang and Huang [26]. Rather than matching observed default frequencies, Leland instead chooses common inputs across models to observe how well they match observed default statistics.
Wei and Guo [23] use an approach similar to that often used when calibrating models of the risk-free yield curve. Given data on the spread between the credit term structure of the studied security (Eurodollars in their study) and the term structure of U.S. Treasuries (T-bills in their study), they choose parameters that minimize the squared fitting error for each time period for which they have observations. This enables them to back out the asset value and implied volatility as well as other model parameters.
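The period-by-period least-squares fit can be sketched generically. The `model_spread(params, maturity)` interface below is a hypothetical stand-in for any structural model's spread function; the toy linear model is used only so that the recovered parameters are exactly verifiable.

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_cross_section(model_spread, observed_spreads, maturities, x0):
    """Choose the parameter vector minimizing the squared fitting errors of
    the credit term structure at a single observation date, in the spirit of
    yield-curve calibration.  `model_spread(params, m)` is a hypothetical
    interface: any function mapping parameters and maturity to a spread."""
    def residuals(params):
        return [model_spread(params, m) - s
                for m, s in zip(maturities, observed_spreads)]
    return least_squares(residuals, x0).x

# Toy illustration: a linear "spread model" whose parameters are recoverable.
toy = lambda p, m: p[0] + p[1] * m
mats = np.array([1.0, 3.0, 5.0, 10.0])
obs = toy((0.010, 0.002), mats)
params = calibrate_cross_section(toy, obs, mats, x0=[0.0, 0.0])
```

Because the fit is repeated independently for each date, the approach discards the time-series information in the data, which is one of the criticisms discussed in Sec. 3.2.4.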
Some studies even employ a simple proxy approach to calculate asset values. They typically approximate the market value of corporate assets by the sum of the market value of equity and the book value of liabilities. For instance, Brockman and Turtle [29] simply measure the market value of a firm's assets as the book value of assets less the book value of shareholders' equity, plus the market value of equity. Asset volatility is then calculated directly from returns of the estimated asset value. Eom et al. [27] add the book value of debt and the observed market value of equity to
estimate the asset value. Then they use a refining approach to calculate the asset volatility estimate using the relation σ_S = (∂S_t/∂V_t)(V_t/S_t)σ_v, given the historical equity return volatility σ_S and ∂S_t/∂V_t = N(d_1(K_t, t)).
Finally, Hsu et al. [40] employ the Generalized Method of Moments (GMM) to illustrate their model with selected exchange-traded bond data. They impose moment restrictions on the bond yield and the log-solvency ratio associated with the conditional mean, variance, and covariance. The parameter estimates can then be obtained by minimizing an objective function containing the time-t vector of sample moments corresponding to these restrictions.
3.2. Comparison of alternative estimation approaches
In this section, we summarize the problems of the existing estimation approaches that have been pointed out in the literature. At the end of this section, we present in Table 2 a summary of the main drawbacks of these estimation methods adopted in previous empirical studies.
3.2.1. Problems of volatility restriction approach
Duan [24] shows that the shortcoming of the volatility restriction method of Ronn and Verma [80] can be seen as follows: under their model specification, the asset price follows a lognormal process, and equity is viewed as a call option on the firm's assets. This implies that equity volatility must be stochastic. In fact, the volatility relationship used in Ronn and Verma is a redundant condition which provides a restriction only because equity volatility is inappropriately treated as a constant, calculated from historical data.
Therefore, while testing the Merton model, this approach not only assumes that the volatility of a firm's assets is constant, it also imposes the additional assumption that the volatility of stock prices is constant, which has been shown to be untrue in many empirical studies (see Ref. 66 and references therein). Therefore, even if the Merton model correctly specifies the true asset value process, the parameters of the Merton model cannot be correctly estimated by erroneously forcing the volatility of the stock process, a stochastic variable, to be a constant. Consequently, the consistency of the JMR-RV estimation method is in doubt.

Table 2. Summary of the current estimation methods in the empirical studies.

Volatility restriction: Not statistical and provides no distribution information about the parameters. The drift of the unobservable asset value process cannot be estimated.
KMV: Not statistical and cannot generate a meaningful estimate for variables other than asset drift and volatility.
Proxy: Not statistical and produces biased estimation results.
Yield curve: Ignores the information in the time series of data. Impractical to obtain a term structure of yields for individual firms given the very small number of actively traded corporate bonds.
GMM: Moment conditions are not unique, and estimates are often sensitive to the choice of moment conditions.
MLE: Only the price of equity is used. Zero equity pricing error assumption.
Moreover, since the volatility restriction approach is not statistical, it provides no distribution information about the parameters and cannot support statistical inference. In addition, Duan et al. [82] also point out that the drift of the unobservable asset value process cannot be estimated by the JMR-RV method, since the theoretical equity pricing formula does not contain the drift of the asset value process under the physical probability measure. As a result, the default probability cannot be obtained.

Ericsson and Reneby [34] also argue that the described volatility restriction effect implies that increasing stock prices result in underpriced bonds, while decreasing stock prices produce overpriced bonds. The explanation for the failure of the VR method is intuitive. A highly volatile historical stock price series translates into a high estimated asset volatility, and vice versa. This is a direct effect of solving the system of equations. However, high stock volatility is not necessarily the result of high asset volatility; it could be the result of historically high leverage. Therefore, in a situation where the asset value, and hence the stock price, has risen over the sample period, leverage and stock volatility have fallen. The negative relationship between stock prices and stock volatility was observed early in the literature by Black [83]. Historical stock volatility, computed as the average of realized volatility, is therefore higher than the current level. This in turn translates into an excessive asset volatility estimate, and thus a low bond price estimate. This is also the reason why the estimation of the VR method in their Monte Carlo experiment performs even more poorly when the financial risk, leverage, is high. The higher the leverage, the more pronounced the effect on stock volatility. In a low-leverage firm, the effect of the assumption of constant equity volatility is less severe.
Ericsson and Reneby [34] perform a simulation experiment and compare the performance of the transformed-data maximum likelihood estimators with that of the volatility restriction method (also called the JMR-RV method). They analyze model performance along two dimensions: (1) financial risk, measured by the quasi-debt ratio, the ratio of risk-free debt to the asset value at the beginning of the sample period; and (2) business risk, measured by the instantaneous volatility of the asset value. Under the settings of four scenarios of different financial and business risk levels, they test three structural models: the Black-Scholes-Merton, the Briys and de Varenne [39], and the Leland and Toft [41] models. To make the estimators comparable to each other, they estimate only asset value and volatility, although the transformed-data maximum likelihood approach allows for the estimation of other parameters. They find that the bias of the transformed-data maximum likelihood approach is negligible for practical purposes in 12 of the Monte Carlo experiments, while the VR approach exhibits an average spread error of 23%.
3.2.2. Problems of the KMV approach
Duan et al. [33] prove that the KMV method produces a point estimate identical to the transformed-data ML estimate in the context of the Merton [18] model. However, the KMV method cannot provide the sampling error of the estimate, which is crucial for statistical inference. In short, the KMV method can be regarded as an incomplete ML method. Moreover, in general, structural models may contain unknown parameters other than the firm's asset value and volatility: for example, the unknown parameters specific to the financial distress level in barrier models. In these models, the estimates of the KMV method no longer coincide with those of the EM algorithm, and therefore the KMV method cannot generate a meaningful estimate for these variables. A Monte Carlo study is then presented, and the results show that the KMV approach yields biased estimates of all the parameters. Finally, Duan et al. [33] also point out that the KMV approach may not be operationally attractive when the maximization step of the EM algorithm does not have an analytical solution.
3.2.3. Problems of proxy approach
Eom, Helwege, and Huang (EHH) [27] use the sum of the market value of equity and total debt as a proxy for the asset value of a firm. That is, V_proxy = K + S. Then, using the fact that σ_S = (∂S_t/∂V_t)(V_t/S_t)σ_v, one can compute the volatility of the assets quickly using the historical equity return volatility σ_S and ∂S_t/∂V_t = N(d_1(K_t, t)). In addition, the asset return is obtained from the average monthly change in V.
However, Wong and Li [76] show that this assumption is unreasonable even under Merton's model. Under option theory, assuming the true asset value is V_true, one finds C(V_true, K, T) = S = V_proxy − K < C(V_proxy, K, T). The inequality comes from the fact that a call option premium must be higher than its intrinsic value before the maturity date. Since a call option is an increasing function of its underlying asset, the relationship V_true < V_proxy is implied by C(V_true, K, T) < C(V_proxy, K, T). Therefore, the EHH approach overestimates the true asset value, and it yields biased estimation results. As the market value of assets has been overestimated, the predicted price of corporate bonds will be too high, and the corresponding predicted yield spread will be underestimated. This implies that the European option framework will automatically be rejected whenever the proxy approach is adopted. Wong and Li [76] also perform a Monte Carlo experiment to support this result.
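The inequality can be verified numerically. The sketch below uses hypothetical firm parameters: it prices equity at the true asset value, forms the EHH proxy, and confirms that the asset value actually consistent with the observed equity price lies below the proxy.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def call_price(V, K, r, sigma, T):
    """Black-Scholes call, i.e. Merton equity value C(V, K, T)."""
    d1 = (np.log(V / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return V * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

# Hypothetical firm: the "true" asset value prices the observed equity.
K, r, sigma, T = 60.0, 0.05, 0.25, 4.0
V_true = 100.0
S = call_price(V_true, K, r, sigma, T)   # observed equity under the model
V_proxy = S + K                          # EHH proxy: equity plus book debt
# Before maturity the call exceeds its intrinsic value, so
# C(V_true, K, T) = S = V_proxy - K < C(V_proxy, K, T), hence V_true < V_proxy.
# Inverting the pricing function recovers the asset value below the proxy.
V_implied = brentq(lambda v: call_price(v, K, r, sigma, T) - S, S, S + 2.0 * K)
```

The root-finding step recovers V_true exactly, while the proxy sits strictly above it, which is the overestimation Wong and Li describe.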
Wong and Choi [31] further criticize the proxy approach under the barrier model framework. In fact, the proxy forces the implied barrier to be positive under the down-and-out call option framework of Brockman and Turtle [29]. Wong and Choi [31] first theoretically point out the flaw of the proxy approach as follows:
Let DOC(V_true, X, H) denote the current price of a DOC option on V_true with strike price X and barrier level H. Wong and Choi show that DOC(V, X, X) > V − X by a no-arbitrage argument.h In Ref. 29, a proxy of the asset value is specified as the sum of the book value of debt and the market value of equity, i.e., V_proxy = X + S. In other words, instead of the true relationship DOC(V_true, X, H) = S, Brockman and Turtle [29] erroneously set the relationship as DOC(V_proxy, X, H) = S = V_proxy − X. Combined with the no-arbitrage argument that DOC(V_proxy, X, X) > V_proxy − X, one obtains DOC(V_proxy, X, X) > V_proxy − X = S = DOC(V_proxy, X, H). Since the DOC option price is a decreasing function of the barrier level H, we have H > X.
Therefore, employing the proxy is equivalent to presuming that the default barrier is greater than the future promised payment of liabilities. This result holds for arbitrary sets of input parameters, including industry sector, option maturity, and rebate level. Hence, it explains why the hypotheses and robustness tests of Brockman and Turtle [29] work so well. Firms are presumed to have positive barriers exceeding the book value of corporate liabilities, so there is no doubt that the implied barriers in Brockman and Turtle [29] are significantly positive with over 99% confidence.
Next, Wong and Choi [31] conduct a simulation analysis and show that the default barriers estimated by the proxy approach are biased and significantly overestimated. As predicted, the average barrier-to-asset ratios are greater than the upper bounds of the liability-to-asset ratios. In addition, the performance is also poor in the volatility estimation and in the percentage error of asset values. In contrast, the transformed-data MLE approach gives good estimates of asset barriers and asset volatilities, although the drifts are overestimated due to the survivorship issue.^i However, the estimation quality of the other parameters remains the same.
3.2.4. Problems of the yield curve approach

Bruche [67] points out the problems of the yield curve approach used by Wei and Guo [23]. First, it ignores the information in the time series of data about how yield spreads change over time, and focuses solely on the cross-sectional element. Next, it is impractical for application in credit risk models, since it is practically impossible to obtain a term structure of yields for individual firms given the very small number of actively traded corporate bonds.
^h See Ref. [31, pp. 67].
^i This is to be expected, since only surviving firms are considered in the simulation. Duan et al. [33] made some modifications to remove this bias.
3.2.5. Problems of the GMM method

GMM has the advantage that it requires specification only of certain moment conditions rather than the full density. Nonetheless, a drawback of GMM is that it often does not make efficient use of all the information in the sample. Moreover, neither the traditional econometric method nor economic theory necessarily identifies a unique set of moment conditions, and the GMM estimates are often sensitive to the choice of moment conditions (see Ref. 68 and the references therein).
3.2.6. Problems of the MLE method

Bruche [67] points out two related drawbacks of the MLE approach: (1) only the price of equity is used in model estimation, while other information, such as bond prices, credit and equity derivatives, and accounting information, is not utilized; (2) given that asset prices, including equity prices, are influenced by market microstructure effects or agency problems, it is not sensible to make the assumption of zero (observed) equity pricing error.^j
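To make the estimation approach concrete, the following sketch implements the transformed-data log-likelihood for the Merton model in the spirit of Duan's method: equity prices are mapped to implied asset values by inverting the pricing equation, and the lognormal transition density is adjusted by the Jacobian ∂S/∂V = N(d_1). This is our own illustration under simplifying assumptions (fixed maturity, constant debt level, no survivorship adjustment); the function names are ours.

```python
from math import log, sqrt, exp, pi
from statistics import NormalDist

N = NormalDist().cdf

def merton_equity(V, K, r, sigma, T):
    """Merton equity value: a Black-Scholes call on firm assets."""
    d1 = (log(V / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return V * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))

def implied_asset(S, K, r, sigma, T):
    """Invert S = merton_equity(V) for V by bisection (S < V < S + K)."""
    lo, hi = S, S + K
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if merton_equity(mid, K, r, sigma, T) < S else (lo, mid)
    return 0.5 * (lo + hi)

def merton_loglik(mu, sigma, equity, K, r, T, dt=1/252):
    """Transformed-data log-likelihood of an observed daily equity series
    (maturity held fixed at T here for simplicity)."""
    V = [implied_asset(S, K, r, sigma, T) for S in equity]
    m, s2, ll = (mu - 0.5 * sigma**2) * dt, sigma**2 * dt, 0.0
    for i in range(1, len(V)):
        x = log(V[i] / V[i - 1])
        # lognormal transition density of the implied asset value ...
        ll += -0.5 * (log(2 * pi * s2) + (x - m)**2 / s2) - log(V[i])
        # ... divided by the Jacobian dS/dV = N(d1) of the equity mapping
        d1 = (log(V[i] / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        ll -= log(N(d1))
    return ll
```

Maximizing merton_loglik over (mu, sigma), for example with a grid search or a Nelder-Mead routine, yields the MLE; the barrier model adds the barrier level as a third parameter.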
3.3. Monte Carlo experiment

We follow Duan et al. [53] and set the following parameter values to perform the simulation experiment: interest rate r = 0.05, asset drift μ_V = 0.1, asset volatility σ_V = 0.3, initial firm value V_0 = 1.0, face value of debt F = 1.0, and option maturity T = 2. The sampling period is set to 252 days a year, and the maturity is set to (2 − i/252) years for the ith data point of the simulated time series. Finally, we vary the value of the default barrier in order to examine its effect on parameter estimation.
Our results in Table 3 are based on 1,000 simulated samples following the procedure of Duan et al. [33] to mimic the daily sample of observed equity values of a surviving firm. We use the same Nelder-Mead numerical optimization algorithm (in the Matlab software package) as Wong and Choi [31], and the initial value of the barrier is set to 0.5. Our experimental results clearly show the strength, as well as the limitation, of the MLE method. The MLE method can uncover the true asset volatility and the default barrier well, and simultaneously, when the barrier hitting probability of the asset value process is not too low. However, when the true default barrier is below 0.5 in our experiment, the barrier estimates are seriously biased.
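The hitting probabilities reported in Table 3 can be reproduced in closed form: for a geometric Brownian motion, the physical probability of falling to a flat barrier H within a horizon is known analytically, and the tabulated values correspond to the one-year (252-day) sampling window rather than the two-year debt maturity. The sketch below is our own cross-check, not the paper's code; the Monte Carlo estimate uses daily monitoring and therefore comes in slightly below the continuous-monitoring value.

```python
import random
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def hit_prob(V0, H, mu, sigma, T):
    """Closed-form probability that a GBM falls to barrier H before T."""
    m = mu - 0.5 * sigma**2
    d = log(H / V0)
    return (N((d - m * T) / (sigma * sqrt(T)))
            + exp(2 * m * d / sigma**2) * N((d + m * T) / (sigma * sqrt(T))))

# Table 3 parameters, H = 0.9, over the one-year sampling window.
p = hit_prob(1.0, 0.9, 0.1, 0.3, 1.0)      # ≈ 0.6775, matching Table 3

# Monte Carlo cross-check with daily monitoring of the log-asset path.
random.seed(0)
dt, n, hits = 1 / 252, 4000, 0
for _ in range(n):
    v = 0.0                                # v = log V_t, log(1.0) = 0
    for _ in range(252):
        v += (0.1 - 0.045) * dt + 0.3 * sqrt(dt) * random.gauss(0, 1)
        if v <= log(0.9):
            hits += 1
            break
print(round(p, 4), round(hits / n, 4))
```

The daily-monitored frequency sits a few percentage points below the continuous-monitoring probability, which is the usual discretization effect for barrier crossings.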
^j Bruche [67] proposes a simulated maximum likelihood procedure, which allows one to use data on any of the firm's traded claims, including equity, bonds, and CDS, as well as balance sheet information, to improve the efficiency of the estimation. This approach explores the possibility of allowing for noise in security prices, and it can therefore avoid the bias that may be incurred by the zero equity pricing error assumption of the MLE approach. In addition, the estimation is related to the problem of recovering the latent or unobserved asset value from the observed variables; it is a problem of calculating posterior densities, or filtering.
Table 3. A Monte Carlo study of the MLE method for the Brockman and Turtle [29] model.

Model parameters: F = 1, T = 2.

                      μ_V       σ_V       H (Barrier)    Barrier Hitting Probability
True value            0.1       0.3       0.9            67.746936%
Mean                  0.36377   0.30211   0.89479
Median                0.34914   0.29857   0.89837
Standard deviation    0.21523   0.04856   0.07941

True value            0.1       0.3       0.8            39.585685%
Mean                  0.24807   0.29789   0.79156
Median                0.22296   0.29490   0.80203
Standard deviation    0.21503   0.04449   0.11039

True value            0.1       0.3       0.75           28.074173%
Mean                  0.23082   0.30232   0.69968
Median                0.17726   0.29878   0.74795
Standard deviation    0.24533   0.05624   0.18828

True value            0.1       0.3       0.7            18.671759%
Mean                  0.19528   0.29924   0.61289
Median                0.17426   0.29643   0.69106
Standard deviation    0.23842   0.03912   0.22313

True value            0.1       0.3       0.6            6.409692%
Mean                  0.11387   0.29343   0.49035
Median                0.09683   0.29164   0.57849
Standard deviation    0.26237   0.03410   0.24217

True value            0.1       0.3       0.5            1.347824%
Mean                  0.11484   0.29314   0.41125
Median                0.11833   0.29224   0.35967
Standard deviation    0.28141   0.03252   0.24325

True value            0.1       0.3       0.4            0.127036%
Mean                  0.09522   0.29244   0.41637
Median                0.07599   0.29224   0.35732
Standard deviation    0.29297   0.03222   0.24788

True value            0.1       0.3       0.0000001      0.000000%
Mean                  0.08946   0.29237   0.40017
Median                0.08844   0.29143   0.29074
Standard deviation    0.29598   0.03291   0.24124
Although the default barrier estimates are biased when the hitting probability of the asset value process is low, this is precisely what statistical theory predicts, since the value of the likelihood function is flat and insensitive to changes in the barrier level. A low barrier relative to the firm value (or a low hitting probability of the barrier) obviously implies that the barrier is immaterial. In other words, where it is exactly located does not materially affect equity values. Thus, one cannot expect to pin down the barrier using the equity time series.
One important consequence regarding the estimate of the barrier parameter is that the testable hypothesis proposed by Brockman and Turtle [29] should not be carried out using the barrier estimates. Brockman and Turtle [29] use the nested relationship between the standard call option and the down-and-out barrier option model to argue that when the default barrier is zero, the down-and-out option collapses to the standard European call option. However, due to the nature of the likelihood function of the down-and-out option framework, one cannot expect to pin down the barrier when the barrier is low relative to the asset value, i.e., when the default probability is low. When the default probability is low, the low barrier estimate can vary over a wide range, since it barely affects the likelihood function and the equity pricing results.

Fortunately, for our empirical studies in default prediction, this presents no practical difficulties. The bias in low-barrier cases can hardly affect the default probabilities of the sample firms, even when the barrier estimates vary over a wide range. Furthermore, a formal test should be carried out on default prediction capability using an alternative statistical test. In our study, we adopt the Receiver Operating Characteristic curve and Accuracy Ratio for this purpose, and we discuss them in Sec. 4.1.
4. Default Prediction of Structural Credit Risk Models and Empirical Results

In Sec. 4.1, we present the method we use to measure the capability of predicting financial distress. The structural credit risk models to be tested in our empirical study are described in Sec. 4.2, and the data and descriptive statistics are given in Sec. 4.3. In Sec. 4.4, the empirical results are reported and discussed. Robustness tests are presented in Sec. 4.5.
4.1. Measuring capability of predicting financial distress: receiver operating characteristic curve and accuracy ratio

To analyze the capability of predicting financial distress, we adopt the accuracy ratio (AR) and Receiver Operating Characteristic (ROC) method proposed by Moody's, which is also widely used in the academic literature, for example in the studies by Vassalou and Xing [58], Chen et al. [60], and Duffie et al. [17]. Stein [69, 70] argues that the power of a model to predict defaults is its ability to detect "True Default", and the capability of a model to calibrate to the data is its ability to detect "True Survival".
The ROC curve in the context of bankruptcy prediction is a plot of the cumulative probability of the survival group against the cumulative probability of the default group. Classifying a firm as default when its default probability exceeds a cut-off threshold, the survival sample contains true survivals and false defaults, and the default sample contains true defaults and false survivals. Thus, within the survival (default) group, the probabilities of true survival (default) and false default (survival) sum to unity. Figures 1 and 2 demonstrate the ROC curves: the more successfully a model sets the default and survival distributions apart, the more concave is its ROC curve. In contrast, a model with no differentiating power shows a 45-degree line in its ROC curve, since the default and survival samples overlap completely and the two distributions are, in reality, one distribution.

Fig. 1. Four models with different powers.

Fig. 2. ROC curves (cumulative probability of the default group against the cumulative probability of the survival group, for a perfect model, a model with more power, a model with less power, and a model with no power).
The key statistic in the ROC methodology, closely related to the Cumulative Accuracy Profile (CAP), is the accuracy ratio (AR). The AR is defined as the ratio of the area A of the tested model to the area A_P of the perfect model, i.e., AR = A/A_P, where 0 ≤ AR ≤ 1. Hence, the higher the AR, the more powerful the model.
In our study, we modify the approach of Chen et al. [60] as follows:^k

(1) Rank all default probabilities (P_Def) from the largest to the smallest.
(2) Compute the 100 percentiles of the default probabilities (P_Def).
(3) Divide the sample into default and survival groups.
(4) In the default group, compute the cumulative probability greater than each percentile of default probabilities. This will be plotted on the y-axis.
(5) In the survival group, compute the cumulative probability greater than each percentile of default probabilities. This will be plotted on the x-axis.
(6) Plot the ROC curve.
(7) For each structural model, repeat Steps 1-6. Calculate the Accuracy Ratio (AR) and the z-statistic of the difference in ARs between two models. The z-statistic is computed by the method of Hanley and Hajian-Tilaki [73] for comparing the areas under ROC curves.^l
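The steps above can be sketched in a few lines. The illustration below uses hypothetical scores; following the standard relation between the CAP and the ROC, it computes the AR from the area under the ROC curve as AR = 2·AUC − 1, which is one common implementation rather than the paper's own code.

```python
def roc_curve(p_default, p_survival):
    """ROC points: fraction of each group whose default probability is at
    or above each threshold (defaulters on y, survivors on x)."""
    thresholds = sorted(set(p_default) | set(p_survival), reverse=True)
    pts = [(0.0, 0.0)]
    for t in thresholds:
        x = sum(p >= t for p in p_survival) / len(p_survival)
        y = sum(p >= t for p in p_default) / len(p_default)
        pts.append((x, y))
    return pts

def auc(pts):
    """Area under the ROC curve by the trapezoidal rule."""
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

# Hypothetical scores: a model that tends to score defaulters higher.
p_def = [0.9, 0.8, 0.6, 0.5]
p_sur = [0.7, 0.4, 0.3, 0.2, 0.1]
a = auc(roc_curve(p_def, p_sur))
ar = 2 * a - 1                    # accuracy ratio from the ROC area
print(round(a, 6), round(ar, 6))  # 0.9 0.8
```

A model with no discriminating power would give AUC = 0.5 and hence AR = 0, the 45-degree line described above.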
4.2. The models

In our empirical study, we test the default prediction capability of the down-and-out call option framework. We first summarize the Merton [18] and the Brockman and Turtle [29] models, and then we present the closed-form solutions and the corresponding default probabilities of the risky debts.
4.2.1. The Merton model

In the Merton [18] model, the firm's assets are assumed to be financed by equity and a zero-coupon bond with a face value of K and maturity T. Following the accounting identity, the firm's asset value equals the sum of equity and debt, i.e., V_t = S_t + D_t, and this holds at every point in time. Since equity can be treated as a call option
^k A similar approach is adopted by Chen et al. [60] using the distance to default (DD) instead of the default probability. However, that relationship cannot be applied in the barrier option framework, since the default probability is not merely a transformation of the distance to default. Therefore, we use the default probability directly in our study. The same argument is also addressed by Leland [28].
^l The SAS program is available at http://www.medicine.mcgill.ca/epidemiology/hanley/software/.
with strike price K, the value of equity at time t ≤ T can be expressed as the call option value under the Black-Scholes framework:

    S_t(V_t, σ_V, T − t) = V_t N(d_1) − e^{−r(T−t)} K N(d_2)    (4.1)

where N(·) is the standard normal distribution function, and

    d_1 = [ln(V_t/K) + (r + σ_V²/2)(T − t)] / √(σ_V²(T − t)),
    d_2 = d_1 − √(σ_V²(T − t)).

Therefore, the value of the risky debt at time t is D_t = V_t − S_t.
Default probability of risky debt: The default probability of the risky debt at time T is the probability that the firm value at time T is lower than the face value of the bond K, i.e., P_def = P(V_T ≤ K). Note that an implicit assumption of Merton's model is that the firm can only default at time T. Since the firm's asset value process can be rewritten as d ln V_t = (μ_V − σ_V²/2) dt + σ_V dW_t, the transition density of the logarithmic asset value is normally distributed:

    ln V_T ~ N( ln V_t + (μ_V − σ_V²/2)(T − t), σ_V²(T − t) ).

Therefore,

    P_def = P(V_T ≤ K)
          = P( Z ≤ [ln K − ln V_t − (μ_V − σ_V²/2)(T − t)] / √(σ_V²(T − t)) )
          = N(−d_2′) = 1 − N(d_2′)

where

    d_2′ = [ln(V_t/K) + (μ_V − σ_V²/2)(T − t)] / √(σ_V²(T − t)).    (4.2)
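Equations (4.1) and (4.2) translate directly into code. The sketch below is our own illustration with hypothetical inputs; note how the drift switches from the risk-free rate r in the pricing equation to the asset drift μ_V in the physical default probability.

```python
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def merton_equity_and_pd(V, K, r, mu, sigma, tau):
    """Equity value (Eq. 4.1) and physical default probability (Eq. 4.2);
    tau = T - t. Pricing uses r, the default probability uses mu."""
    srt = sigma * sqrt(tau)
    d1 = (log(V / K) + (r + 0.5 * sigma**2) * tau) / srt
    equity = V * N(d1) - K * exp(-r * tau) * N(d1 - srt)
    d2p = (log(V / K) + (mu - 0.5 * sigma**2) * tau) / srt
    return equity, N(-d2p)

# Hypothetical inputs on the scale of the Monte Carlo section (V = 1).
S_eq, p_def = merton_equity_and_pd(V=1.0, K=0.7, r=0.05, mu=0.1, sigma=0.3,
                                   tau=2.0)   # S_eq ≈ 0.39, p_def ≈ 0.136
```

Equity always lies between the asset value net of the discounted debt and the asset value itself, which is a quick sanity check on any implementation.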
4.2.2. The down-and-out barrier model: the Brockman and Turtle model

Brockman and Turtle [29] adopt the barrier option formula as a tool to understand the down-and-out call (DOC) approach to corporate security valuation. In the context of a structural model, the market value of the firm's equity, S, can be expressed as

    S = V N(a) − K e^{−r(T−t)} N(a − σ_V √(T−t)) − V (H/V)^{2η} N(b)
        + K e^{−r(T−t)} (H/V)^{2η−2} N(b − σ_V √(T−t))
        + R (H/V)^{2η−1} N(c) + R (V/H) N(c − 2η σ_V √(T−t))    (4.3)
where V is the market value of the firm's assets; K is the promised future debt payment required on the pure discount bonds issued by the corporation, due at time T; H is the value of the firm's assets that triggers bankruptcy (the default barrier); R is the rebate paid to the firm's owners if the firm's asset value reaches the barrier; T − t is the time until the option expires; r is the continuously compounded riskless rate of return; and N(·) is the standard normal cumulative distribution function.
    a = [ln(V/K) + (r + σ_V²/2)(T−t)] / (σ_V √(T−t))   for K ≥ H,
        [ln(V/H) + (r + σ_V²/2)(T−t)] / (σ_V √(T−t))   for K < H,

    b = [ln(H²/(V K)) + (r + σ_V²/2)(T−t)] / (σ_V √(T−t))   for K ≥ H,
        [ln(H/V) + (r + σ_V²/2)(T−t)] / (σ_V √(T−t))   for K < H,

    c = [ln(H/V) + (r + σ_V²/2)(T−t)] / (σ_V √(T−t)),

and

    η = r/σ_V² + 1/2.
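Equation (4.3) can be coded directly from the piecewise definitions above. The sketch below is our own implementation; the final check illustrates Wong and Choi's inequality DOC(V, X, X) > V − X, which drives the implied-barrier bias discussed in Sec. 3.

```python
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def doc_equity(V, K, H, R, r, sigma, tau):
    """Equity as a down-and-out call with rebate R (Eq. 4.3); tau = T - t."""
    s = sigma * sqrt(tau)
    eta = r / sigma**2 + 0.5
    drift = (r + 0.5 * sigma**2) * tau
    if K >= H:
        a = (log(V / K) + drift) / s
        b = (log(H * H / (V * K)) + drift) / s
    else:
        a = (log(V / H) + drift) / s
        b = (log(H / V) + drift) / s
    c = (log(H / V) + drift) / s
    return (V * N(a) - K * exp(-r * tau) * N(a - s)
            - V * (H / V)**(2 * eta) * N(b)
            + K * exp(-r * tau) * (H / V)**(2 * eta - 2) * N(b - s)
            + R * (H / V)**(2 * eta - 1) * N(c)
            + R * (V / H) * N(c - 2 * eta * s))

# Wong and Choi's inequality: with the barrier at the strike, the DOC
# price exceeds intrinsic value V - X (hypothetical inputs).
V, X = 1.0, 0.8
print(doc_equity(V, X, X, 0.0, 0.05, 0.3, 2.0) > V - X)   # True
```

As the barrier shrinks toward zero, the barrier terms vanish and the formula collapses to the standard call of Eq. (4.1), the nesting relationship discussed earlier.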
Default probability of risky debt: Under the physical measure, the probability that the asset value process hits the barrier before maturity is

    P_def = N( [(h − v) − (μ_V − σ_V²/2)(T−t)] / √(σ_V²(T−t)) )
          + exp( 2(μ_V − σ_V²/2)(h − v) / σ_V² )
            × [ 1 − N( [(v − h) − (μ_V − σ_V²/2)(T−t)] / √(σ_V²(T−t)) ) ]    (4.4)

where h = ln H and v = ln V.^m
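Equation (4.4) gives only the probability of hitting the barrier before maturity; as footnote m notes, the probability of defaulting at maturity (surviving the barrier but ending with V_T ≤ K) must also be incorporated. One way to combine the two in closed form, valid for H ≤ K, is sketched below. This is our own derivation from the reflection principle, offered as an illustration rather than the paper's formula.

```python
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def bt_total_default_prob(V, K, H, mu, sigma, tau):
    """Total physical default probability in the barrier framework:
    hit H before maturity, or survive the barrier but end with V_T <= K.
    Written as 1 - P(no hit and V_T > K) via the reflection principle;
    assumes H <= K (for H > K the hitting probability Eq. 4.4 applies)."""
    m = mu - 0.5 * sigma**2
    s = sigma * sqrt(tau)
    h, k = log(H / V), log(K / V)
    return (N((k - m * tau) / s)
            + exp(2 * m * h / sigma**2) * N((2 * h - k + m * tau) / s))
```

With a negligible barrier the second term vanishes and the expression collapses to the Merton probability N(−d_2′) of Eq. (4.2), a useful sanity check; with H = K it reduces to the hitting probability of Eq. (4.4).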
4.3. Data and summary statistics

In our empirical test, equity prices are collected from CRSP (the Center for Research in Security Prices) and the financial statement information is retrieved from Compustat. The sampling period of the firms is from January 1986 to December 2005, while the quarterly accounting information is from 1984 to 2005, since some firms under financial distress stop filing financial reports long before they are delisted from the stock exchanges. The accounting information we use in our study comes from the quarterly reports of the CRSP/Compustat Merged (CCM) Database. This is to obtain the most up-to-date debt levels and payout information, especially for the defaulted firms. In our empirical test, we consider only ordinary common shares (first digit of CRSP share type code 1) and exclude certificates, American trust components, and ADRs. Our final sample covers a 20-year period from 1986 to 2005 and includes 15,607 companies.
^m Note that for the Brockman and Turtle and the Black and Cox models, a firm can default either before maturity or at maturity. Therefore, we also need to incorporate the default probability at maturity for each firm.
In our empirical test, we adopt two different definitions of default:

Definition I. The broad definition of bankruptcy by Brockman and Turtle [29], which includes firms that are delisted because of bankruptcy, liquidation, or poor performance. A firm is considered "performance delisted", as named by Brockman and Turtle, if it is given a CRSP delisting code of 400, or 550 to 585. Note that there are other firms delisted due to mergers, exchanges, or being dropped by the exchange for other reasons; these are considered survival firms.

Definition II. This definition of bankruptcy is similar to that adopted by Chen et al. [60] Default firms are collected from the BankruptcyData.com database, which includes over 2,500 public and major company filings dating back to 1986. We next match the performance-delisted firms with the samples collected from BankruptcyData.com, and add back the liquidated firms (with delisting code 400), to form our default group. All remaining firms are classified as survival firms. Note that one difference between our classification and that of Chen et al. [60] is that some of the companies that filed bankruptcy petitions but were later acquired by (or merged with) other companies (delisting code 200) are classified into the survival group.
Before proceeding to the summary statistics of our final sample of firms, we first describe our sample selection criteria. First, companies with more than one class of shares are excluded from our test. Second, since we also need accounting information in order to test these models empirically, firms without accounting information within two quarters backward from the end of the estimation period are excluded. Third, firms that were active (delisting code 100) during our sampling period but were delisted in 2006 are excluded; this ensures that survival firms with delisting code 100 are financially healthy companies. Finally, to ensure an adequate sample size for the MLE approach, we consider only companies with over 252 days of common share prices available.
Next, we report in Table 4 the main firm characteristics of our default samples, in terms of market equity value, book leverage (total liabilities divided by asset value), and market leverage (total liabilities divided by the market value of the firm). We find that, on average, firms in the default group are smaller and tend to have higher book and market leverage. In addition, the mean and median book and market leverage of the default group under Definition II are higher than those under Definition I. This is because firms that delisted without filing Chapter 11 are considered default firms under Definition I but survival firms under Definition II; such firms may not have debt levels as high as companies that filed Chapter 11. Finally, in Table 5, a summary of the default firms by industry and year is presented.
At the end of this section, we present our key inputs for the structural models. Determining the amount of debt for our empirical study is not an obvious matter. As opposed to the simplest approach, for example that of Brockman and Turtle [29], which sets the face value of debt equal to total liabilities, we adopt the rough
Table 4. Summary statistics of sample firms.
Group Number of Firms Mean Median Maximum Minimum
Default Definition I
Market equity value Survival 10729 1770.7762 173.0430 367495.1442 0.3016
Default 4878 58.1062 8.7146 36633.7544 0.0271
Book leverage Survival 10729 0.5541 0.5500 4.0093 0.0008
Default 4878 0.7628 0.6880 203.0000 0.0003
Market leverage Survival 10729 0.4321 0.3932 0.9961 0.0007
Default 4878 0.5188 0.5387 0.9997 0.0001
Default Definition II
Market equity value Survival 14244 1340.6120 81.9271 367495.1442 0.0271
Default 1363 136.7769 23.4698 36633.7544 0.3356
Book leverage Survival 14244 0.5996 0.5666 203.0000 0.0003
Default 1363 0.8253 0.7888 12.5273 0.0025
Market leverage Survival 14244 0.4389 0.4058 0.9997 0.0001
Default 1363 0.6718 0.7577 0.9995 0.0024
formula provided by Moody's KMV: the value of current liabilities including short-term debt, plus half of the long-term debt. This formula is also adopted by some researchers, such as Vassalou and Xing [58].^n
Secondly, the payout rate g captures the payouts in the form of dividends, share repurchases, and bond coupons to stockholders and bondholders.^o To estimate the payout rate, we adopt a weighted average method similar to Eom et al. [27] and Ericsson et al. [57]:

    (Interest Expenses/Total Liabilities) × Leverage + (Equity Payout Ratio) × (1 − Leverage)

where

    Leverage = Total Liabilities/(Total Liabilities + Market Equity Value).
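As a concrete illustration, the weighted average can be computed as follows. The balance-sheet figures are hypothetical; following the definition given later in this section, the equity payout ratio is the total equity payout divided by the total equity payout plus the market value of equity.

```python
def payout_rate(interest_expense, total_liabilities, equity_payout,
                market_equity):
    """Weighted-average asset payout rate: the debt coupon rate weighted
    by leverage plus the equity payout ratio weighted by (1 - leverage)."""
    leverage = total_liabilities / (total_liabilities + market_equity)
    coupon_rate = interest_expense / total_liabilities
    equity_payout_ratio = equity_payout / (equity_payout + market_equity)
    return coupon_rate * leverage + equity_payout_ratio * (1 - leverage)

# Hypothetical firm: $6 of interest on $100 of liabilities, $4 of total
# equity payout against $96 of market equity.
g = payout_rate(interest_expense=6.0, total_liabilities=100.0,
                equity_payout=4.0, market_equity=96.0)
print(round(g, 4))   # 0.0502
```

The resulting g is then fed into the payout-adjusted versions of the pricing models described in Sec. 4.2.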
For the market value of equity, we choose the number of shares outstanding times the market price per share on the day closest to the financial statement date. The equity payout rate is estimated as the total equity payout, which is the sum of
^n There are several reasons for choosing this default point. First, KMV has observed from a large sample of several hundred companies that firms default when the asset value reaches a level somewhere between the value of total liabilities and the value of short-term debt. Therefore, as argued in Ref. 9, the probability of the asset value falling below the total face value may not be an accurate measure of the actual default probability. Secondly, as pointed out by Vassalou and Xing [58], it is important to include long-term debt in the calculation because firms need to service the long-term debt, and these interest payments are part of the short-term liabilities. Furthermore, the size of the long-term debt may affect the ability of a firm to roll over its short-term debt, and in turn affect the default risk.
^o The original Merton and Brockman and Turtle models do not assume an asset payout, but one can easily be added to the models.
Table 5. Number of default firms by industry and year.
Year SIC Code
Missing 0 1 2 3 4 5 6 7 8 9 Total
Default Definition I
1986 4 1 62 21 73 8 27 11 27 9 0 243
1987 0 5 23 16 44 9 31 10 18 4 0 160
1988 0 1 26 22 53 11 31 10 40 13 0 207
1989 1 4 23 30 63 14 29 17 30 6 0 217
1990 0 1 19 22 86 16 33 21 29 10 0 237
1991 0 2 28 33 85 10 31 24 39 13 0 265
1992 0 2 53 26 84 16 45 31 38 22 0 317
1993 0 2 15 15 52 8 11 13 15 9 0 140
1994 0 0 20 13 52 12 20 19 25 8 2 171
1995 0 1 19 21 50 12 38 21 39 16 1 218
1996 0 0 7 20 42 10 33 12 17 11 2 154
1997 0 0 16 25 52 17 44 15 36 17 0 222
1998 0 2 36 45 97 25 61 35 64 32 2 399
1999 0 4 48 53 87 22 43 28 50 34 0 369
2000 0 0 10 34 71 26 50 28 53 26 0 298
2001 0 2 15 42 87 44 58 23 131 24 1 427
2002 0 0 14 31 87 45 23 31 94 17 0 342
2003 0 1 9 20 75 20 34 16 54 18 0 247
2004 0 0 3 13 23 7 16 20 20 4 0 106
2005 0 0 5 20 43 14 8 17 25 7 0 139
Total 5 28 451 522 1306 346 666 402 844 300 8 4878
Default Definition II
1986 1 0 5 3 11 3 4 2 3 1 0 33
1987 0 2 3 1 4 0 3 0 2 1 0 16
1988 0 0 3 1 4 6 7 1 0 0 0 22
1989 0 0 4 7 6 4 11 6 5 0 0 43
1990 0 0 3 3 12 4 8 7 2 1 0 40
1991 0 2 2 3 21 3 7 7 7 2 0 54
1992 0 1 5 2 10 6 13 7 3 5 0 52
1993 0 0 4 4 11 0 6 4 2 0 0 31
1994 0 0 2 4 14 3 5 3 4 2 2 39
1995 0 0 2 5 8 6 16 8 6 4 0 55
1996 0 0 5 6 7 3 16 2 2 0 0 41
1997 0 1 5 5 12 7 13 4 7 5 0 59
1998 0 1 6 13 28 7 20 12 13 11 1 112
1999 0 0 13 14 21 14 18 8 8 13 0 109
2000 0 0 6 17 28 15 30 12 21 13 0 142
2001 0 0 3 15 41 31 28 10 49 10 0 187
2002 0 0 6 9 33 27 11 12 23 8 0 129
2003 0 0 1 11 33 12 16 3 14 5 0 95
2004 0 0 2 7 7 3 8 5 7 0 0 39
2005 0 0 3 5 22 13 8 3 5 6 0 65
Total 1 7 83 135 333 167 248 116 183 87 3 1363
SIC code: 0, agriculture, forestry, and fishing; 1, mining and construction; 2 and 3, manufacturing; 4, transportation, communications, electric, gas, and sanitary services; 5, wholesale and retail trade; 6, finance, insurance, and real estate; 7 and 8, services; 9, public administration.
cash dividends, preferred dividends, and purchase of common and preferred stock,
divided by the total equity payout plus market value of equity.
Thirdly, since the four models in our study assume a constant interest rate, one needs to feed in an appropriate interest rate for model estimation. The three-month T-bill rate from the Federal Reserve website is chosen as the risk-free rate. However, the three-month T-bill rate fluctuated heavily: from a high of 9.45% in March 1989, it dropped to a low of 0.81% in June 2003, and then went back to 4.08% at the end of December 2005. Therefore, to ensure a proper discount rate for each firm across the 20-year sampling period, interest rates are estimated as the average of the 252 daily 3-month Constant Maturity Treasury (CMT) rates for each firm during the sampling period.
4.4. Empirical results

In our empirical test, we use the Nelder-Mead numerical optimization algorithm (in the Matlab software package), as adopted by Wong and Choi [31]. The input parameters for debt levels, asset payouts, and interest rates are as described in Sec. 4.3, and the option time to maturity is two years. The original Merton [18] and Brockman and Turtle [29] models do not assume an asset payout rate, but it can easily be added to the models. Discount rates are assumed to be the average risk-free rates during the equity time-series sampling period.

The delisting date of a delisted firm is simply its very last security trading day, while the delisting date of an active firm (delisting code 100) is set as the last trading day of 2005. The inputs of the equity time series for in-sample estimation are the equity values ending on the delisting date and traveling back 252 trading days. The six-month (one-year) out-of-sample estimation uses the equity time series from 377 to 126 (503 to 252) trading days before the delisting date. The sample sizes of the in-sample, six-month out-of-sample, and one-year out-of-sample tests are 15,607, 14,775, and 13,750 firms, respectively. The differences in the sample sizes come from the availability of equity trading data: as we push the estimation period backward in time, we lose some firms due to the relatively shorter lives of these companies. After numerical optimization, the final samples for the in-sample, six-month out-of-sample, and one-year out-of-sample tests include 15,598, 14,765, and 13,744 firms.^p
4.4.1. Testing results of default Definition I
We first present in Table 6 the performance of default prediction by a decile-based analysis and provide the percentages of performance delistings in each decile. Defaulting firms are sorted into deciles by their corresponding physical default probability estimates from each model; the physical default probabilities of firms
^p We lost some samples due to convergence issues in the MLE maximization process for the Brockman and Turtle model: 9, 10, and 6 firms in the in-sample, six-month out-of-sample, and one-year out-of-sample tests, respectively.
Table 6. Percentages of performance-delisting firms in each decile (default Definition I).

Decile (P_Def)       Merton      Brockman and Turtle

In-sample test: one week
1 (Large)            30.86%      30.82%
2                    28.27%      28.04%
3                    22.34%      20.80%
4                    9.46%       8.91%
5                    3.37%       4.52%
6-10 (Small)         5.71%       6.92%

Out-of-sample test: six months
1 (Large)            28.04%      27.05%
2                    24.69%      22.81%
3                    18.47%      17.36%
4                    11.03%      12.11%
5                    7.32%       7.54%
6-10 (Small)         10.46%      13.13%

Out-of-sample test: one year
1 (Large)            26.83%      25.66%
2                    22.23%      20.50%
3                    17.09%      16.78%
4                    12.25%      12.15%
5                    8.01%       8.13%
6-10 (Small)         13.59%      16.78%
for the in-sample and out-of-sample tests are computed using the estimated firm values one week (5 trading days), six months (126 trading days), and one year (252 trading days) before the delisting date, respectively. One can clearly observe that the Merton model outperforms the Brockman and Turtle model, especially in the out-of-sample predictions.

We next present the in-sample, six-month-ahead, and one-year-ahead out-of-sample ROC curves of the tested models in Figs. 3-5, respectively.
Formal statistical tests are carried out using the Accuracy Ratios (ARs) and the Z-statistics. Z-statistics of the AR differences between the Merton model and the Brockman and Turtle (BT) model are reported in parentheses in Table 7 (Panel A). In accordance with the results of the decile-based analysis, we find that the Brockman and Turtle model is clearly inferior to the Merton model. Our empirical results show that the simple Merton model surprisingly outperforms the flat barrier model in default prediction. The Z-test results indicate that the difference in prediction capability between the Merton and the flat barrier models is statistically significant, and the results hold for both in-sample and out-of-sample tests. Although the down-and-out option framework should theoretically nest the standard call option model, in practice it may not perform better in default prediction. Several possible reasons may explain our empirical results.
One possible explanation is that the continuous monitoring assumption of the flat barrier model makes it possible to default before debt maturity, and thus
Fig. 3. ROC curves: one-week in-sample test (all samples). The cumulative default probability of the default group, ordered by percentiles, is plotted against that of the survival group for the Merton and BT models.
Fig. 4. ROC curves: six-month out-of-sample test (all samples).
increases the estimated default probabilities of the survival firms. One may argue that the implied default probabilities of the default firms increase as well. However, the magnitudes of the increments need not be the same, and we do observe this in our empirical results.
[Figure: ROC curve for the one-year horizon. X-axis: default probability of the survival group ordered by percentiles; y-axis: default probability of the default group ordered by percentiles; one curve each for the Merton and BT models.]
Fig. 5. ROC curves: one-year out-of-sample test (all samples).
Table 7. Accuracy ratios and Z-statistics of physical probabilities (default Definition I).

Accuracy Ratio          One Week          Six Months        One Year
                        (In Sample)       (Out-of-Sample)   (Out-of-Sample)

Panel A: All Sample
Merton                  0.9361            0.8750            0.8424
Brockman and Turtle     0.9256 (10.2186)  0.8532 (13.5027)  0.8155 (14.1514)
In-sample one-week: 15,598 firms (10,727 survival and 4871 performance-delisting firms)
Out-of-sample 6-month: 14,765 firms (10,232 survival and 4533 performance-delisting firms)
Out-of-sample 1-year: 13,744 firms (9637 survival and 4107 performance-delisting firms)

Panel B: Financial Firms
Merton                  0.8948            0.8503            0.8327
Brockman and Turtle     0.8906 (0.9413)   0.8541 (0.7054)   0.8237 (1.6109)
In-sample one-week: 2809 firms (2409 survival and 400 performance-delisting firms)
Out-of-sample 6-month: 2694 firms (2313 survival and 381 performance-delisting firms)
Out-of-sample 1-year: 2556 firms (2195 survival and 361 performance-delisting firms)

Panel C: Non-Financial Firms
Merton                  0.9375            0.8714            0.8380
Brockman and Turtle     0.9257 (10.6700)  0.8437 (15.1404)  0.8055 (15.2183)
In-sample one-week: 12,789 firms (8318 survival and 4471 performance-delisting firms)
Out-of-sample 6-month: 12,071 firms (7919 survival and 4152 performance-delisting firms)
Out-of-sample 1-year: 11,188 firms (7442 survival and 3746 performance-delisting firms)

Note: Numbers in parentheses are the Z-statistics of the AR difference between the Brockman and Turtle model and the Merton model.
For example, the case of Alfacell Corporation, a survival firm (CRSP permanent company number 35), clearly reflects this issue, as shown in Fig. 6. Alfacell experienced a drastic downfall in share prices in 2005. However, it still survived through the end of 2006. In Fig. 6, we present the one-year market equity, the estimated firm value of the Merton model, the estimated firm value of the Brockman and Turtle model, the implied barrier, and the debt level of the KMV formula, respectively. Both models generate reasonable firm value estimates under the corresponding model assumptions. Estimated firm values of the flat barrier model are higher than those of the Merton model due to the existence of the claims of bondholders, modelled as a down-and-in option. The implied default probability of Alfacell Corporation is merely 0.04% under the Merton model, while the default probability under the flat barrier model is as high as 61.21%. This large difference comes from the implied default barrier. The debt level by the KMV formula is $1.75 million, but the implied barrier from the Brockman and Turtle model is $31.37 million! Such a high implied barrier leads to a high default probability under the flat barrier model. In contrast, default in Merton's model is related only to the debt level at debt maturity, and thus the default probability is very low. Note that to guard against the local optimum problem in the barrier estimate, we also used another optimization routine, the fmincon function in Matlab, to re-estimate the Alfacell case, but still obtained the same implied default barrier.
[Figure: time series over roughly 250 trading days for ALFACELL CORP. Y-axis: value (million dollars); x-axis: trading days. Series plotted: market equity, estimated firm value (Merton), estimated firm value (BT), debt level, and default barrier.]
Fig. 6. An illustration of the problem of the Brockman and Turtle model in the Alfacell Corporation case.
One may argue that imposing constraints on the default barrier can solve this issue. However, the high implied default barrier is a result of the return distribution of the equity value process. Imposing constraints clearly violates the fundamental principle of the maximum likelihood estimation method and hinders the MLE method from searching for the global optimum. In the case of Alfacell Corporation, the likelihood functions of the Brockman and Turtle model and the Merton model are 566.397 and 562.288, respectively. This indicates that the introduction of the barrier does improve the fitting of the return distribution of the equity value process. Furthermore, the equity pricing function of the flat barrier model in Eq. (4.3) does not pre-specify the location of the barrier. The flat default barrier can be higher than the debt level, as assumed in the Brockman and Turtle model. Accordingly, the fundamental issue is that the flat barrier assumption itself might be unreasonable and unrealistic.
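For readers who want to see the mechanics, the following sketch prices equity as a down-and-out call on firm value. It is our own illustrative implementation of the standard zero-rebate barrier-option formula for the case where the barrier lies below the strike, with hypothetical inputs; it is not Eq. (4.3) or any estimate from our sample. It shows that the barrier only strips value from equity relative to the plain Merton call, so matching a given observed equity price forces a higher implied firm value under the barrier model, as we see for Alfacell.

```python
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def bs_call(V, K, r, sigma, T):
    """Plain Black-Scholes call: equity in the Merton model."""
    d1 = (log(V / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return V * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))

def down_and_out_call(V, K, H, r, sigma, T):
    """Zero-rebate down-and-out call with a flat barrier H: equity in a
    flat-barrier model of the Brockman-Turtle type."""
    if H <= 0:
        return bs_call(V, K, r, sigma, T)  # no barrier: reduces to Merton
    assert H <= K, "this branch of the standard formula assumes H <= K"
    s = sigma * sqrt(T)
    lam = (r + 0.5 * sigma**2) / sigma**2
    x1 = log(V / K) / s + lam * s
    y1 = log(H * H / (V * K)) / s + lam * s
    return (V * N(x1) - K * exp(-r * T) * N(x1 - s)
            - V * (H / V) ** (2 * lam) * N(y1)
            + K * exp(-r * T) * (H / V) ** (2 * lam - 2) * N(y1 - s))

if __name__ == "__main__":
    V, K, r, sigma, T = 100.0, 80.0, 0.05, 0.3, 1.0  # hypothetical inputs
    for H in (0.0, 60.0, 75.0):
        print(f"barrier H = {H:5.1f}: equity = {down_and_out_call(V, K, H, r, sigma, T):.4f}")
```

Raising the barrier monotonically lowers the equity value; inverted against an observed equity price, this is exactly why the flat barrier model implies higher firm values, and why a high implied barrier translates into a high default probability.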
Another possible explanation lies in our measure of default prediction capability. The AR only preserves the ranking information of the default probabilities in our empirical test. The flat barrier model may generate a default probability distribution closer to the true default probability distribution, compared with that of the Merton model. It is the tails of the default probability distributions of the survival and default groups that truly determine the ARs. Nonetheless, one can clearly observe from the decile-based results in Table 6 that the Brockman and Turtle model does not have the same differentiating power for the default and survival groups as that of the Merton model.
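The ranking-based machinery behind the AR can be summarized in a few lines: estimate the area under the ROC curve (AUC) from the two groups' default probabilities, convert it to an accuracy ratio via AR = 2 AUC - 1, and attach a standard error. The sketch below is a simplified stand-alone version using simulated scores of our own making: it uses the Hanley-McNeil standard error of a single AUC, whereas the Z-statistics we report use the correlated-sample update of Hanley and Hajian-Tilaki [73], since both models score the same firms.

```python
import numpy as np

def auc_from_scores(pd_default, pd_survival):
    """Mann-Whitney estimate of the AUC: the probability that a randomly
    chosen default firm is assigned a higher default probability than a
    randomly chosen survival firm (ties count one half)."""
    d = np.asarray(pd_default)[:, None]
    s = np.asarray(pd_survival)[None, :]
    return float(np.mean(d > s) + 0.5 * np.mean(d == s))

def accuracy_ratio(auc):
    """Accuracy ratio of the CAP curve: AR = 2*AUC - 1."""
    return 2.0 * auc - 1.0

def hanley_se(auc, n_def, n_surv):
    """Hanley-McNeil standard error of a single AUC estimate."""
    q1 = auc / (2.0 - auc)
    q2 = 2.0 * auc**2 / (1.0 + auc)
    var = (auc * (1.0 - auc) + (n_def - 1) * (q1 - auc**2)
           + (n_surv - 1) * (q2 - auc**2)) / (n_def * n_surv)
    return float(np.sqrt(var))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated default probabilities: defaulters score higher on average.
    scores_def = rng.normal(1.0, 1.0, 400)
    scores_sur = rng.normal(0.0, 1.0, 4000)
    auc = auc_from_scores(scores_def, scores_sur)
    print(f"AUC = {auc:.4f}, AR = {accuracy_ratio(auc):.4f}, "
          f"SE = {hanley_se(auc, 400, 4000):.4f}")
```

A naive Z statistic for two models scored on independent samples would be (AUC1 - AUC2) / sqrt(SE1^2 + SE2^2); with both models scored on the same firms, the covariance term of Hanley and Hajian-Tilaki [73] must be subtracted inside the square root.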
Finally, we cannot completely rule out the local optimum possibility, since it is well known that high-dimensional optimization may not uncover the global optimum. The superior default prediction capability of the Merton model may come from better estimates of the model parameters due to its simpler likelihood function and lower dimension in the optimization procedure.
We next turn to the sub-sample analysis of financial (Table 7, Panel B) versus non-financial (Table 7, Panel C) firms. Financial companies have industry-specific high leverage ratios and thus are not well modelled in the finance literature. In out-of-sample prediction, consistent with the findings of Chen et al. [60], we find that the Brockman and Turtle model performs much better in the finance sector than in the industrial sector, while the Merton model performs better in the industrial sector. Accordingly, the difference in default prediction power between the flat barrier and the Merton models in the finance sector is no longer significant.
4.4.2. Testing results of default Definition II
In this section, we regroup our survival and default groups using a definition of bankruptcy similar to that adopted by Chen et al. [60] Following their approach, we collect default firms from the BankruptcyData.com database, which includes over 2500 public and major company filings dating back to 1986. Next, we match the performance-delisted samples with companies collected from BankruptcyData.com, and add back the liquidated firms (with delisting code 400) to form our default group. All of the remaining firms are classified as survival firms. Note that a difference between our classification and that of Chen et al. [60] is that some of the companies that filed bankruptcy petitions but were later acquired by (or merged with) other companies (delisting code: 200) are classified in the survival group. The numbers of default firms under this approach fall greatly, from 4871 to 1325 for the in-sample test and from 4533 to 1260 (4107 to 1183) for the six-month (one-year) out-of-sample tests. To conserve space, ROC curves are not reported. The decile-based analysis as well as the accuracy ratios and Z-statistics are reported in Tables 8 and 9, respectively.
From Table 9, our results still show that the Merton model outperforms the flat barrier model, and the difference in default prediction capability is statistically significant, as in Sec. 4.4.1. In addition, one can observe that all these models perform slightly worse than under the broad definition of bankruptcy. The differences are around 2% across different models and tests. The reason may be the uncertainty of bankruptcy filings by companies delisted from the stock exchange. One can use the MLE approach to capture information from the market equity values of those poorly performing and delisted firms, and obtain default probabilities for these firms. However, whether those firms will eventually file for bankruptcy may be subject to various firm-specific human and corporate factors. These issues may not easily be captured just by the dynamics of the firms' market equity values.
Table 8. Percentages of default firms in each decile (default Definition II).

Decile (P_Def)      Merton      Brockman and Turtle

In-sample test: One week
1 (Large)           56.98%      52.53%
2                   26.42%      26.64%
3                    9.36%      12.23%
4                    3.40%       3.62%
5                    0.98%       1.74%
6-10 (Small)         2.87%       3.25%

Out-of-sample test: Six months
1 (Large)           44.20%      37.73%
2                   24.07%      23.60%
3                   14.13%      15.47%
4                    7.73%       9.71%
5                    4.26%       6.08%
6-10 (Small)         5.60%       7.42%

Out-of-sample test: One year
1 (Large)           36.01%      30.35%
2                   21.98%      20.20%
3                   14.88%      17.41%
4                   12.09%      12.00%
5                    6.34%       8.12%
6-10 (Small)         8.71%      11.92%
Table 9. Accuracy ratios and Z-statistics of physical probabilities (default Definition II).

Accuracy Ratio          One Week          Six Months        One Year
                        (In Sample)       (Out-of-Sample)   (Out-of-Sample)

Panel A: All Sample
Merton                  0.9153            0.8575            0.8167
Brockman and Turtle     0.9007 (11.3992)  0.8279 (13.7357)  0.7813 (13.1887)
In-sample one-week: 15,598 firms (14,273 survival and 1325 default firms)
Out-of-sample 6-month: 14,765 firms (13,498 survival and 1267 default firms)
Out-of-sample 1-year: 13,744 firms (12,561 survival and 1183 default firms)

Panel B: Financial Firms
Merton                  0.8982            0.8679            0.8596
Brockman and Turtle     0.9009 (0.3805)   0.8713 (0.4121)   0.8597 (0.0024)
In-sample one-week: 2809 firms (2698 survival and 111 default firms)
Out-of-sample 6-month: 2694 firms (2588 survival and 106 default firms)
Out-of-sample 1-year: 2556 firms (2453 survival and 103 default firms)

Panel C: Non-Financial Firms
Merton                  0.9118            0.8481            0.8041
Brockman and Turtle     0.8946 (12.3778)  0.8129 (14.5338)  0.7614 (14.4172)
In-sample one-week: 12,789 firms (11,575 survival and 1214 default firms)
Out-of-sample 6-month: 12,071 firms (10,910 survival and 1161 default firms)
Out-of-sample 1-year: 11,188 firms (10,108 survival and 1080 default firms)

Note: Numbers in parentheses are the Z-statistics of the AR difference between the Brockman and Turtle model and the Merton model.
In Panels B and C of Table 9, the financial versus non-financial sector analyses are reported. The performances among models are similar to those under the broad definition of bankruptcy in Sec. 4.4.1. Unlike the results under the broad definition of default, not only does the Brockman and Turtle model perform much better in the finance sector, but the Merton model also performs better in the financial sector. However, the accuracy ratios of the flat barrier model are even higher than those of the Merton model in the finance sector, although the differences are not statistically significant.
4.5. Robustness test
In this section, we conduct a similar analysis using risk-neutral default probabilities instead of physical default probabilities. To conserve space, we report only accuracy ratios and Z-statistics. The results in Tables 10 and 11 show that the default prediction capabilities of the Merton [18] and the Brockman and Turtle [29] models remain the same: the Brockman and Turtle model is inferior to the Merton model in almost all tests. The only exception is the six-month-ahead prediction for the financial sector.
Table 10. Accuracy ratios and Z-statistics of risk-neutral probabilities (default Definition I).

Accuracy Ratio          One Week          Six Months        One Year
                        (In Sample)       (Out-of-Sample)   (Out-of-Sample)

Panel A: All Sample
Merton                  0.9326            0.8806            0.8503
Brockman and Turtle     0.9297 (2.664)    0.8723 (5.0499)   0.8350 (7.7156)
In-sample one-week: 15,598 firms (10,727 survival and 4871 performance-delisting firms)
Out-of-sample 6-month: 14,765 firms (10,232 survival and 4533 performance-delisting firms)
Out-of-sample 1-year: 13,744 firms (9637 survival and 4107 performance-delisting firms)

Panel B: Financial Firms
Merton                  0.8850            0.8431            0.8318
Brockman and Turtle     0.8862 (0.2633)   0.8555 (2.0205)   0.8334 (0.2605)
In-sample one-week: 2809 firms (2409 survival and 400 performance-delisting firms)
Out-of-sample 6-month: 2694 firms (2313 survival and 381 performance-delisting firms)
Out-of-sample 1-year: 2556 firms (2195 survival and 361 performance-delisting firms)

Panel C: Non-Financial Firms
Merton                  0.9342            0.8790            0.8469
Brockman and Turtle     0.9304 (3.281)    0.8654 (7.4503)   0.8254 (9.7702)
In-sample one-week: 12,789 firms (8318 survival and 4471 performance-delisting firms)
Out-of-sample 6-month: 12,071 firms (7919 survival and 4152 performance-delisting firms)
Out-of-sample 1-year: 11,188 firms (7442 survival and 3746 performance-delisting firms)

Note: Numbers in parentheses are the Z-statistics of the AR difference between the Brockman and Turtle model and the Merton model.
We next examine the default probability estimates under the physical versus risk-neutral probability measures. Duan et al. [71] claim that the transformed-data MLE approach can estimate the default probability under the physical probability measure (here, under the assumption of a constant asset risk premium). From our empirical results, we do observe higher ARs for all the models in the in-sample test under the two alternative definitions of default. In out-of-sample tests, the ARs are in general higher under physical default probabilities for default Definition II; the ARs of the Brockman and Turtle model are the exceptions. However, for default Definition I in the out-of-sample tests, the ARs show an entirely opposite pattern in the common sample and the non-financial sector: ARs of physical default probabilities are lower than those of risk-neutral probabilities. We should note that the only difference between the survival and default group classifications in the two alternative settings is that those firms delisted without filing Chapter 11 are assumed to be default firms by Brockman and Turtle [29]. In other words, the drift estimates using the equity time series for a certain period before the delisting date cannot help improve default prediction for those firms delisted without filing Chapter 11.
Table 11. Accuracy ratios and Z-statistics of risk-neutral probabilities (default Definition II).

Accuracy Ratio          One Week          Six Months        One Year
                        (In Sample)       (Out-of-Sample)   (Out-of-Sample)

Panel A: All Sample
Merton                  0.9019            0.8484            0.8141
Brockman and Turtle     0.8982 (2.4412)   0.8309 (7.8455)   0.7887 (8.7635)
In-sample one-week: 15,598 firms (14,273 survival and 1325 default firms)
Out-of-sample 6-month: 14,765 firms (13,498 survival and 1267 default firms)
Out-of-sample 1-year: 13,744 firms (12,561 survival and 1183 default firms)

Panel B: Financial Firms
Merton                  0.8908            0.8548            0.8443
Brockman and Turtle     0.9037 (1.4159)   0.8736 (2.0717)   0.8550 (1.1254)
In-sample one-week: 2809 firms (2698 survival and 111 default firms)
Out-of-sample 6-month: 2694 firms (2588 survival and 106 default firms)
Out-of-sample 1-year: 2556 firms (2453 survival and 103 default firms)

Panel C: Non-Financial Firms
Merton                  0.8963            0.8377            0.8015
Brockman and Turtle     0.8911 (3.2702)   0.8143 (9.3251)   0.7683 (10.3636)
In-sample one-week: 12,789 firms (11,575 survival and 1214 default firms)
Out-of-sample 6-month: 12,071 firms (10,910 survival and 1161 default firms)
Out-of-sample 1-year: 11,188 firms (10,108 survival and 1080 default firms)

Note: Numbers in parentheses are the Z-statistics of the AR difference between the Brockman and Turtle model and the Merton model.
We conclude that estimating the asset drift can improve default prediction, as can be seen from the in-sample testing results. Nonetheless, the out-of-sample drift estimate itself, using the equity time series six months or one year before the delisting date, may not help improve default prediction, especially for those firms delisted without filing Chapter 11. The effect of asset drift estimation on default prediction may be confined to a relatively short forecasting horizon.
5. Summary and Conclusions
This paper first reviews empirical evidence and estimation methods of structural credit risk models. An empirical investigation of the performance of default prediction under the down-and-out barrier option framework is provided next. In the literature review, a brief overview of structural credit risk models is provided. Empirical investigations by researchers are described in some detail, and their results are summarized in terms of the subject and estimation method adopted in each paper. Currently used estimation methods and their drawbacks are discussed in detail. While theoretically elegant, structural models generally do not perform well empirically in risky corporate bond pricing. However, predicting the credit quality of a corporate security could be a good application of structural models because it is less affected by micro-structure issues. Since recent structural models put a great deal of emphasis on the event of bankruptcy, we suggest that prediction of default probabilities or default events should be a potentially important application of structural models.
In our empirical investigation, we adopt the Maximum Likelihood Estimation method proposed by Duan [32] and Duan et al. [33], which views the observed equity time series as a transformed data set of the unobserved firm values, with the theoretical equity pricing formula serving as the transformation. This method has been shown by Ericsson and Reneby [34], through simulation experiments, to be superior to the volatility restriction approach commonly adopted in the literature. Since the default boundary is unknown, the Brockman and Turtle model has three unknown parameters: the asset drift, the asset volatility, and the level of the default boundary. One of the advantages of the MLE approach is that it can estimate these three model parameters simultaneously.
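To make the procedure concrete, the sketch below implements the transformed-data likelihood for the simpler Merton case (two parameters), using our own notation and simulated data rather than the paper's sample. Each observed equity value is inverted for the implied asset value; the implied asset log-returns contribute a lognormal transition density; and the Jacobian of the equity transformation, dE/dV = N(d1), corrects the likelihood. The Brockman and Turtle case adds the barrier level as a third parameter and replaces the pricing function with the down-and-out formula. Holding the time to maturity T fixed across observations is a simplifying assumption of this sketch.

```python
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist
from scipy.optimize import brentq, minimize

N = NormalDist().cdf  # standard normal CDF

def merton_equity(V, D, r, sigma, T):
    """Equity as a European call on firm value (Merton model)."""
    d1 = (log(V / D) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return V * N(d1) - D * exp(-r * T) * N(d1 - sigma * sqrt(T))

def implied_asset_value(E, D, r, sigma, T):
    """Invert the equity formula for the unobserved asset value.
    The bracket [E, E + 50*D] is valid because equity < V and equity >= V - D."""
    return brentq(lambda V: merton_equity(V, D, r, sigma, T) - E, E, E + 50 * D)

def neg_loglik(params, equity, D, r, T, dt):
    """Transformed-data likelihood in the spirit of Duan: lognormal transition
    density of the implied asset series, corrected by the Jacobian dE/dV = N(d1)."""
    mu, sigma = params
    if sigma <= 1e-4:
        return np.inf
    V = np.array([implied_asset_value(E, D, r, sigma, T) for E in equity])
    ret = np.diff(np.log(V))
    z = (ret - (mu - 0.5 * sigma**2) * dt) / (sigma * sqrt(dt))
    ll = np.sum(-0.5 * z**2 - np.log(sigma * sqrt(dt)) - 0.5 * np.log(2 * np.pi))
    ll -= np.sum(np.log(V[1:]))                       # lognormal density of levels
    d1 = (np.log(V[1:] / D) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    ll -= np.sum(np.log([N(x) for x in d1]))          # Jacobian correction
    return -ll

if __name__ == "__main__":
    # Simulate an asset path with known parameters, observe only equity.
    rng = np.random.default_rng(1)
    D, r, T, dt, mu0, sig0, n = 50.0, 0.05, 1.0, 1 / 250, 0.10, 0.30, 150
    steps = (mu0 - 0.5 * sig0**2) * dt + sig0 * np.sqrt(dt) * rng.standard_normal(n)
    V_path = 100.0 * np.exp(np.cumsum(steps))
    equity = [merton_equity(v, D, r, sig0, T) for v in V_path]
    fit = minimize(neg_loglik, x0=[0.0, 0.2], args=(equity, D, r, T, dt),
                   method="Nelder-Mead")
    mu_hat, sigma_hat = fit.x
    # The volatility estimate is typically within a few points of the true 0.30;
    # the drift estimate is far noisier, as expected over a short sample.
    print(f"estimated (mu, sigma) = ({mu_hat:.3f}, {sigma_hat:.3f})")
```

Note how the drift is estimated alongside the volatility; this is what allows physical (rather than only risk-neutral) default probabilities to be computed from the fitted parameters.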
In our simulation experiment, we uncover a limitation of the MLE method. The MLE method cannot pin down the barrier using the equity time series when the default boundary is low relative to the firm value (i.e. when the hitting probability of the default boundary is low). This is precisely what statistical theory predicts, since the likelihood function is flat and not sensitive to changes in the boundary level. However, for default prediction, this should present no practical difficulties. The bias in low barrier cases could hardly affect the default probabilities of the sample firms, even when the barrier estimates vary over a wide range.
In default prediction, our empirical results surprisingly show that the simple Merton model outperforms the Brockman and Turtle model, and the difference in predictive ability is statistically significant. The results hold for the in-sample, six-month-ahead, and one-year-ahead out-of-sample tests under two alternative definitions of default: the broad definition of bankruptcy as in Brockman and Turtle [29], and the alternative definition in which bankruptcy corresponds to Chapter 11 filings. In addition, we also find that the inferior performance of the Brockman and Turtle model may be the result of its unreasonable assumption of the flat default boundary. Furthermore, these results are preserved in our robustness test when we use risk-neutral default probabilities instead of physical default probabilities.
Finally, in this paper we empirically investigate only the default prediction capability of the flat barrier model. In future studies, we will investigate the default prediction capabilities of alternative default boundary assumptions in the structural credit risk model literature. The empirical investigation may move on to models with various default boundary assumptions, such as the exponential barrier model of Black and Cox [19], the endogenous barrier models of Leland [21] and Leland and Toft [41], and the models with stochastic interest rates of Longstaff and Schwartz [20] and Briys and de Varenne [39].
References
1. F. Black and M. Scholes, The pricing of options and corporate liabilities, J. Polit. Econ. 81 (1973) 637-654.
2. P. R. Kumar and V. Ravi, Bankruptcy prediction in banks and firms via statistical and intelligent techniques - a review, Eur. J. Oper. Res. 180 (2007) 1-28.
3. E. I. Altman, Corporate Financial Distress and Bankruptcy: A Complete Guide to Predicting and Avoiding Distress and Profiting from Bankruptcy (Wiley, New York, 1993).
4. G. Zhang, M. Y. Hu, B. E. Patuwo and D. C. Indro, Artificial neural networks in bankruptcy prediction: General framework and cross-validation analysis, Eur. J. Oper. Res. 116 (1999) 16-32.
5. K. S. Shin, T. S. Lee and H. J. Kim, An application of support vector machines in bankruptcy prediction model, Expert Syst. Appl. 28 (2005) 127-135.
6. J. P. Li, Z. Y. Chen, L. W. Wei, W. X. Xu and G. Kou, Feature selection via least squares support feature machine, Int. J. Inform. Technol. Decision Making 6(4) (2007) 671-686.
7. K. J. Tseng, Y. H. Liu and J. F. Ho, An efficient algorithm for solving a quadratic programming model with application in credit card holders' behavior, Int. J. Inform. Technol. Decision Making 7(3) (2008) 421-430.
8. Y. Zhang, L. Chen, Z. F. Zhou and Y. Shi, A geometrical method on multidimensional dynamic credit evaluation, Int. J. Inform. Technol. Decision Making 7(1) (2008) 103-114.
9. M. Crouhy, D. Galai and R. Mark, A comparative analysis of current credit risk models, J. Bank. Financ. 24 (2000) 57-117.
10. A. Saunders and L. Allen, Credit Risk Measurement (John Wiley & Sons, Inc., New York, 2002).
11. J. Yao, Z. Li and K. Ng, Model risk in VaR estimation: An empirical study, Int. J. Inform. Technol. Decision Making 5(3) (2006) 503-512.
12. M. Better, F. Glover, G. Kochenberger and H. Wang, Simulation optimization: Application in risk management, Int. J. Inform. Technol. Decision Making 7(4) (2008) 571-587.
13. D. Lando, Credit Risk Modeling: Theory and Applications (Princeton Series in Finance, 2004).
14. R. Jarrow and S. Turnbull, Pricing derivatives on financial securities subject to default risk, J. Financ. 50 (1995) 53-86.
15. D. Duffie and K. Singleton, Modeling the term structure of defaultable bonds, Rev. Financ. Stud. 12 (1999) 687-720.
16. T. Shumway, Forecasting bankruptcy more accurately: A simple hazard model, J. Bus. 74 (2001) 101-124.
17. D. Duffie, L. Saita and K. Wang, Multi-period corporate default prediction with stochastic covariates, J. Financ. Econ. 83 (2007) 635-665.
18. R. C. Merton, On the pricing of corporate debt: The risk structure of interest rates, J. Financ. 28 (1974) 449-470.
19. F. Black and J. C. Cox, Valuing corporate securities: Some effects of bond indenture provisions, J. Financ. 31 (1976) 351-367.
20. F. Longstaff and E. Schwartz, A simple approach to valuing risky fixed and floating rate debt and determining swaps spread, J. Financ. 50 (1995) 789-819.
21. H. E. Leland, Corporate debt value, bond covenants, and optimal capital structure, J. Financ. 49 (1994) 1213-1252.
22. A. Elizalde, Credit Risk Models II: Structural Models, Working Paper (CEMFI and UPNA, 2005).
23. D. Wei and D. Guo, Pricing risky debt: An empirical comparison of the Longstaff and Schwartz and Merton models, J. Fixed Income 7 (1997) 8-28.
24. R. Anderson and S. Sundaresan, A comparative study of structural models of corporate bond yields: An exploratory investigation, J. Bank. Financ. 24 (2000) 255-269.
25. G. Delianedis and R. Geske, The Components of Corporate Credit Spreads: Default, Recovery, Tax, Jumps, Liquidity, and Market Factors, Working Paper (UCLA, 2001).
26. J. Huang and M. Huang, How Much of the Corporate-Treasury Yield Spread is Due to Credit Risk? Working Paper (Penn State University and Stanford University, 2003).
27. Y. H. Eom, J. Helwege and J. Huang, Structural models of corporate bond pricing: An empirical analysis, Rev. Financ. Stud. 17 (2004) 499-544.
28. H. E. Leland, Prediction of default probabilities in structural models of debt, J. Invest. Manage. 2(2) (2004).
29. P. Brockman and H. J. Turtle, A barrier option framework for corporate security valuation, J. Financ. Econ. 67 (2003) 511-529.
30. S. T. Bharath and T. Shumway, Forecasting default with the Merton distance to default model, Rev. Financ. Stud. 21 (2008) 1339-1369.
31. H. Y. Wong and T. W. Choi, Estimating Default Barriers from Market Information, Working Paper (The Chinese University of Hong Kong and Citic Ka Wah Bank, 2006).
32. J. C. Duan, Maximum likelihood estimation using price data of the derivative contract, Math. Financ. 4 (1994) 155-167.
33. J. C. Duan, G. Gauthier and J. G. Simonato, On the Equivalence of the KMV and Maximum Likelihood Methods for Structural Credit Risk Models, Working Paper (University of Toronto, 2004).
34. J. Ericsson and J. Reneby, Estimating structural bond pricing models, J. Bus. 78 (2005) 707-735.
35. R. Geske, The valuation of corporate liabilities as compound options, J. Financ. Quant. Anal. 12 (1977) 541-552.
36. R. Geske and H. E. Johnson, The valuation of corporate liabilities as compound options: A correction, J. Financ. Quant. Anal. 19 (1984) 231-232.
37. R. Geske, The valuation of compound options, J. Financ. Econ. 7 (1979) 63-81.
38. I. Kim, K. Ramaswamy and S. Sundaresan, Does default risk in coupons affect the valuation of corporate bonds? A contingent claims model, Financ. Manage. 22 (1993) 117-131.
39. E. Briys and F. de Varenne, Valuing risky fixed rate debt: An extension, J. Financ. Quant. Anal. 32 (1997) 239-248.
40. J. C. Hsu, J. Saá-Requejo and P. Santa-Clara, Bond Pricing with Default Risk, Working Paper (UCLA and Vector Asset Management, 2003).
41. H. E. Leland and K. B. Toft, Optimal capital structure, endogenous bankruptcy, and the term structure of credit spreads, J. Financ. 51 (1996) 987-1019.
42. H. E. Leland, Agency cost, risk management, and capital structure, J. Financ. 53 (1998) 1213-1243.
43. J. Huang, N. Ju and H. Ou-Yang, A Model of Optimal Capital Structure with Stochastic Interest Rates, Working Paper (New York University, 2003).
44. R. Anderson, S. Sundaresan and P. Tychon, Strategic analysis of contingent claims, Eur. Econ. Rev. 40 (1996) 871-881.
45. P. Mella-Barral and W. Perraudin, Strategic debt service, J. Financ. 52 (1997) 531-566.
46. K. Giesecke, Default and Information, Working Paper (Cornell University, 2005).
47. R. A. Jarrow and P. Protter, Structural versus reduced form models: A new information based perspective, J. Invest. Manage. 2 (2004) 1-10.
48. C. Zhou, The term structure of credit spreads with jump risk, J. Banking Financ. 25 (2001) 2015-2040.
49. B. Hilberink and L. C. G. Rogers, Optimal capital structure and endogenous default, Financ. Stochastics 6 (2002) 237-263.
50. M. Tauren, A Model of Corporate Bond Prices with Dynamic Capital Structure, Working Paper (Indiana University, 1999).
51. P. Collin-Dufresne and R. S. Goldstein, Do credit spreads reflect stationary leverage ratios? J. Financ. 56 (2001) 1929-1957.
52. N. Ju and H. Ou-Yang, Capital structure, debt maturity, and stochastic interest rates, J. Bus. 79 (2006) 2469-2502.
53. O. Vasicek, An equilibrium characterization of the term structure, J. Financ. Econ. 5 (1977) 177-188.
54. S. Lyden and D. Saraniti, An Empirical Examination of the Classical Theory of Corporate Security Valuation (Bourne Lyden Capital Partners and Barclays Global Investors, San Francisco, CA, 2001).
55. J. Ericsson and J. Reneby, An empirical study of structural credit risk models using stock and bond prices, J. Fixed Income 13 (2004) 38-49.
56. R. Chen, F. J. Fabozzi, G. Pan and R. Sverdlove, Sources of credit risk: Evidence from credit default swaps, J. Fixed Income 16(3) (2006) 7-21.
57. J. Ericsson, J. Reneby and H. Wang, Can Structural Models Price Default Risk? Evidence from Bond and Credit Derivative Markets, Working Paper (McGill University and Stockholm School of Economics, 2006).
58. M. Vassalou and Y. Xing, Default risk in equity returns, J. Financ. 59 (2004) 831-868.
59. J. Y. Campbell, J. Hilscher and J. Szilagyi, In Search of Distress Risk, Working Paper (Harvard University, 2004).
60. R. Chen, S. Hu and G. Pan, Default Prediction of Various Structural Models, Working Paper (Rutgers University, National Taiwan University, and National Ping-Tung University of Sciences and Technologies, 2006).
61. S. A. Davydenko, When Do Firms Default? A Study of the Default Boundary, Working Paper (University of Toronto, 2007).
62. N. A. Tarashev, An empirical evaluation of structural credit-risk models, Int. J. Central Banking 4 (2008) 1-53.
63. W. Suo and W. Wang, Assessing Default Probabilities from Structural Credit Risk Models, Working Paper (Queen's University, 2009).
64. J. C. Duan and J. G. Simonato, Maximum likelihood estimation of deposit insurance value with interest rate risk, J. Empir. Financ. 9 (2002) 109-132.
65. P. Crosbie and J. Bohn, Modeling default risk, Moody's KMV Technical Document (2003), http://www.defaultrisk.com/pp model 35.htm.
66. R. F. Engle and V. K. Ng, Measuring and testing the impact of news on volatility, J. Financ. 48 (1993) 1749-1778.
67. M. Bruche, Estimating Structural Bond Pricing Models via Simulated Maximum Likelihood, Working Paper (London School of Economics, 2005).
68. J. D. Hamilton, Time Series Analysis (Princeton University Press, New Jersey, 1994).
69. R. M. Stein, Benchmarking Default Prediction Models: Pitfalls and Remedies in Model Validation (Moody's KMV white paper, 2002).
70. R. M. Stein, The relationship between default prediction and lending profits: Integrating ROC analysis and loan pricing, J. Banking Financ. 29 (2005) 1213-1236.
71. J. C. Duan, G. Gauthier, J. G. Simonato and S. Zaanoun, Estimating Merton's Model by Maximum Likelihood with Survivorship Consideration, Working Paper (University of Toronto, 2003).
72. R. Anderson and S. Sundaresan, Design and valuation of debt contracts, Rev. Financ. Stud. 9 (1996) 37-68.
73. J. A. Hanley and K. O. Hajian-Tilaki, Sampling variability of nonparametric estimates of the areas under receiver operating characteristic curves: An update, Academic Radiology (1997) 49-58.
74. H. Y. Wong and K. L. Li, On Bias of Testing Merton's Model, Working Paper (The Chinese University of Hong Kong, 2006).
75. D. Duffie and D. Lando, Term structure of credit spreads with incomplete accounting information, Econometrica 69 (2001) 633-664.
76. G. Duffee, Estimating the price of default risk, Rev. Financ. Stud. 12 (1999) 197-226.
77. H. Fan and S. Sundaresan, Debt valuation, renegotiations and optimal dividend policy, Rev. Financ. Stud. 13 (2000) 1057-1099.
78. E. Jones, S. Mason and E. Rosenfeld, Contingent claims analysis of corporate capital structures: An empirical investigation, J. Financ. 39 (1984) 611-627.
79. R. C. Merton, An analytical derivation of the cost of deposit insurance and loan guarantees, J. Banking Financ. 1 (1977) 3-11.
80. E. I. Ronn and A. K. Verma, Pricing risk-adjusted deposit insurance: An option-based model, J. Financ. 41 (1986) 871-895.
81. J. C. Duan, A. Moreau and C. W. Sealey, Deposit insurance and bank interest rate risk: Pricing and regulatory implications, J. Banking Financ. 19 (1995) 1091-1108.
82. J. C. Duan, G. Gauthier, J. G. Simonato and S. Zaanoun, Estimating Merton's model by maximum likelihood with survivorship consideration, Working Paper (University of Toronto, 2003).
83. F. Black, Studies of stock price volatility changes, Proceedings of the 1976 Meetings of the Business and Economics Statistics Section, American Statistical Association (1976) 177-181.