
Journal of Financial Regulation and Compliance

Value at risk: a critical overview


Robert Sollis
Article information:
To cite this document:
Robert Sollis, (2009),"Value at risk: a critical overview", Journal of Financial Regulation and Compliance,
Vol. 17 Iss 4 pp. 398 - 414
Permanent link to this document:
http://dx.doi.org/10.1108/13581980911004370

THEMED PAPER

Value at risk: a critical overview

Robert Sollis
Newcastle University Business School, University of Newcastle upon Tyne, Newcastle upon Tyne, UK

Abstract
Purpose – A misplaced reliance on value at risk (VaR) has been identified in the media as one of the main causes of the current financial crisis, and the recently published Turner Review by the UK Financial Services Authority concurs. The purpose of this paper is to present an introductory overview of VaR and its weaknesses that will be easily understood by non-technical readers.

Design/methodology/approach – Simple numerical examples utilising real and simulated data are
employed to reinforce the main arguments.
Findings – This paper explains that some of the main approaches employed by banks for computing
VaR have serious weaknesses. These weaknesses have contributed to the current financial crisis.
Research limitations/implications – Consistent with the introductory nature of this paper, the
empirical research is limited to simple examples.
Practical implications – The evidence here suggests that if VaR is to play a major role under future
financial regulation then research is required to develop improved estimation techniques and
backtesting procedures.
Originality/value – This paper differs from many academic papers on VaR by assuming only a very
basic knowledge of mathematics and statistics.
Keywords Risk management, Regulation, Value analysis
Paper type General review

1. Introduction
Value at risk (VaR) is an estimate of the worst possible monetary loss from a financial
investment over a future time-period (e.g. one-day, one-month). For most financial
investments, the return over a future period (e.g. the one-day percent return) is a
random variable. Generally, therefore, the actual future return will to some extent
always differ from any estimate of the future return. Hence, a VaR statement has a
confidence level attached, where “confidence” is defined in terms of the probability that
the actual monetary loss will not be greater than the VaR. For example, the statement "the one-day VaR on a particular investment is £1 million with a 99 percent confidence level" means that there is a 99 percent probability that the actual loss associated with the investment over the next day will not be worse than £1 million (and a 1 percent probability that the actual loss will be worse than £1 million).
VaR is the main statistical technique used by banks for modelling financial risk. Unsurprisingly, in light of the current financial crisis, VaR has recently received a lot of attention in the media, with much of this attention focusing on the weaknesses of the technique. Indeed, some authors blame VaR entirely for the financial crisis:

Yet a method heavily grounded on those same quantitative and theoretical principles, called Value at Risk, continued to be widely used. It was this that was to blame for the crisis [. . .] Remove Value-at-Risk books from the shelves – quickly (Taleb and Triana, 2008)[1].
Regulatory authorities are now also arguing that a misplaced reliance on VaR is partly responsible for the current financial crisis. For example, Section 1.1(iv) of the recently published Turner Review of the crisis by the UK Financial Services Authority is titled "Misplaced reliance on sophisticated maths" and states:

There are, however, fundamental questions about the validity of VaR as a measure of risk (Turner, 2009, p. 22, Chapter 1: "What went wrong?", Section 1.1(iv)).
The weaknesses of VaR are rightly being focused on in the media and are beginning to be
seriously discussed by regulators. However, much of what has appeared in the media on
VaR is slightly fuzzy, since VaR is a statistical technique and most if not all of the
commentators are not statisticians[2]. Many of the pieces contain misinterpretations or
lack sufficient detail to be clear, while others overuse hyperbole leading to a false picture
of exactly how VaR is employed and what the important weaknesses really are. Consider
for example the following quote from the New York Times (January 2, 2009), which is typical of recent press comments:
VaR uses this normal distribution curve to plot the riskiness of a portfolio. But it makes certain
assumptions. VaR is often measured daily and rarely extends beyond a few weeks, and because
it is a very short-term measure, it assumes that tomorrow will be more or less like today. Even
what’s called “historical VaR” – a variation of standard VaR that measures potential portfolio
risk a year or two out, only uses the previous few years as its benchmark (Nocera, 2009).
The truth is that VaR can be calculated assuming that portfolio returns are normally
distributed. But VaR can also be calculated assuming that portfolio returns are not
normally distributed. In fact, under the Basel II Capital Accord that regulates risk
management for banks in the EU, there is no formal regulatory restriction on the type
of probability distribution that has to be used to calculate VaR. Although many
analysts will have chosen to use a normal distribution, some will have chosen
non-normal distributions. Furthermore, the suggestion that it is standard practice for
historical VaR to be calculated using a small sample of recent data and then used by
analysts to measure risk years in the future is an exaggeration. VaR calculated using
historical data is only a sensible measure of future risk if the assumption that the
future will be similar to the relevant historical period is acceptable, which for some
assets over short periods it might well be, while for other assets and for longer periods
it will definitely not be; and the majority of analysts know this fact. Econometric
techniques can be used to calculate VaR that attempt to take account of those aspects
of the future that are not the same as the past, although any good undergraduate
economist would not recommend placing too much weight on a specific econometric
forecast "a year or two out" since it is well-known that the accuracy of econometric forecasts decays as the horizon increases (in the jargon of Econometrics 101, the
“standard error bands” increase).
The suggestion that historical VaR is always based on small historical samples is
also rather misleading. It is the choice of the analyst and often the regulator whether the
sample used to calculate VaR is short or long, and obviously it depends on the
availability of historical data[3]. Furthermore, whilst an analyst may choose, or through lack of data be forced, to calculate VaR using a small historical sample, regulators
typically require that the VaR models employed by banks are tested for robustness (with
backtesting) and that stress testing is used to examine the potential portfolio impact of
extreme events that might not have occurred in the sample used to calculate VaR[4].
These points are to some degree a defence of VaR, although this is not the overarching aim of this paper, which is a critical overview. However, it is important that the weaknesses of VaR are clearly and objectively presented. Furthermore, it is important to understand that VaR does not claim to be able to prevent financial crises from happening, and anyone who fully understands the technique would not claim otherwise. To reiterate, since financial loss over a future period is a random variable the actual loss could be worse than the VaR. When in practice a bank's loss is worse
than the associated VaR, then on its own this does not necessarily mean that the VaR is
wrong in some way or that the methodology is flawed. In the example above, because
the VaR is given at the 99 percent confidence level it is accepted by the analyst stating
the VaR that there is a 1 percent probability the loss could be worse than £1 million.
Thus, the VaR approach to measuring risk does not claim to capture the effect of all
very rare events on financial investments. Note however that I do not recommend that
VaR should continue to be used in the way it has been. The current financial crisis
indicates that financial regulation and risk management have failed around the world on
a catastrophic scale. Numerous banks and other financial institutions which have
relied on VaR have not had sufficient capital to cover losses, particularly those losses
associated with the collapse in value of securitised assets. Using the argument above,
one could make the point that this does not necessarily mean that VaR has failed,
because the events leading to the losses might be very rare events in the sense that the
events are in the 1 percent of events that a 99 percent VaR does not claim to cover.
However, the scale and depth of the problem suggests that VaR has been consistently
underestimated across the financial sector, and for some time. And it is explained later
in this paper how one of the most popular approaches to calculating VaR will tend to
underestimate the true VaR for many financial assets. Therefore, I believe that some
key aspects of the VaR methodology are seriously flawed, and that the failure of banks
and regulators to recognise these flaws and act to correct the VaR models being
employed is to some extent responsible for the current financial crisis.
This paper is deliberately as non-technical as possible, with the aim that it will be as
easy to read as some of the recent media pieces on VaR. Given the technical nature of
the subject, this is a difficult balancing act, but hopefully the main points in the paper
will be clear to those who might not have the technical knowledge typically assumed in
the majority of specialist academic papers and textbooks on VaR[5].

2. How is VaR calculated?


There are numerous approaches to calculating VaR but in the majority of cases one of
the following three main approaches is employed[6]:
(1) variance-covariance (VCV) approach;
(2) historical simulation (HS) approach; and
(3) Monte Carlo simulation (MCS) approach.
The VCV approach has a long history by VaR standards with its origins in research
carried out at Bankers Trust in the mid-1980s, and well-developed risk management
services to help banks calculate VaR using the VCV approach are available.
In particular, the RiskMetrics service originally developed by JP Morgan gives the
necessary parameters and information required for calculating VaR using the VCV
approach for many different types of assets and portfolios[7].
The VCV approach assumes that asset returns are conditionally normally distributed random variables that are independent across time. This means that standardised asset returns (where "standardising" refers to subtracting the mean and dividing by the conditional standard deviation) are independent standard normal random variables[8]. Note that there is an important statistical distinction between "conditional" and "unconditional" probability distributions, and for any asset these can be different (see RiskMetrics (2001) for further details). However, rather than go into technical details, for simplicity we assume here that both conditional and unconditional distributions for the standardised returns have the same form and that standard deviations are constant. Therefore, we can ignore the distinction and consequently will for the most part drop the prefixes "conditional" and "unconditional."
If standardised asset returns are standard normal random variables then the calculation of VaR for a single asset can be done using a basic statistics textbook. We know from statistical theory the probabilities of observing values of standard normal random variables within particular intervals. For example, we know that there is a 5 percent probability of observing a standard normal random variable more negative than −1.645, and a 1 percent probability of observing a standard normal random variable more negative than −2.326. The standard normal distribution (for short, the z-distribution) is shown in Figure 1 along with an alternative distribution for comparison: a t-distribution with five degrees of freedom (for short, the t(5)-distribution)[9]. Both have a mean of zero and are symmetric around the mean. The area under each distribution represents probability, so the total area under each distribution is unity (i.e. the probability of observing values for the variables between the lowest possible value and the highest possible value is 1, or in percentage terms 100 percent). Note that the t(5)-distribution has fatter tails than the z-distribution, indicating that the probability of observing extreme values is higher for the t(5)-distribution than for the z-distribution.
[Figure 1. Standard normal distribution and t-distribution with five degrees of freedom]

In light of the above consider the following simple example: a UK bank invests £100 million in the shares of the UK company A and decides to calculate the immediate one-day VaR for this investment at the 99 percent confidence level. Assume that the standard deviation of the one-day share A returns is 0.5 percent (0.005). If the VCV
approach is used then standardised returns are assumed to be standard normal, and so it follows from the table of probabilities for a standard normal distribution that over the next day the standardised return is not expected to be more negative than −2.326 with a 99 percent confidence level. Therefore, the actual return is not expected to be more negative than the standardised return −2.326 multiplied by the standard deviation plus the mean (the actual return being the "un-standardised" standardised return). Assuming for simplicity that the mean return is zero, this gives −2.326 × 0.005 = −0.01163[10]. So the actual return associated with share A over the next day is not expected to be more negative than −1.163 percent at the 99 percent confidence level. It follows straightforwardly that over the next day the worst possible loss at the 99 percent confidence level associated with the bank's investment is the mark-to-market value of the investment (£100 million) multiplied by the relevant threshold return (−0.01163), resulting in −£1.163 million. This is not quite the VaR because it is convention when stating VaR to state it as a positive number. So, for this example, the one-day VaR at the 99 percent confidence level is £1.163 million. In practice the standard deviation for asset returns is an unknown parameter that has to be estimated by the analyst. It can be estimated from a sample of historical data using a simple formula.
Calculating the VaR for portfolios of assets is also straightforward with the VCV
approach. If the individual standardised asset returns are standard normal random
variables then it follows nicely that the standardised portfolio returns are standard
normal random variables (see Jorion (2007, Chapter 7) for further details). Furthermore,
simple analytical formulas can be used to calculate the standard deviation of the
portfolio allowing for covariance between the assets[11]. To calculate the VaR of the
portfolio, the same approach is taken as with the single asset example above, but using
the portfolio standard deviation in place of the single asset standard deviation. For a
portfolio of two shares A and B, the relevant formula for the standard deviation of the
portfolio is:
σp = √(aA² σA² + aB² σB² + 2 aA aB ρAB σA σB)    (1)

where:
aA – the proportion of the total amount invested in share A;
aB – the proportion of the total amount invested in share B;
σA – the standard deviation of share A returns;
σB – the standard deviation of share B returns;
ρAB – the correlation coefficient (a parameter that describes the direction and strength of the correlation between share A and share B returns).
Since the parameters σA, σB, and ρAB are unknowns, estimates are required. Again, they can be estimated using historical sample data on the assets in question.
Consider extending the simple single asset one-day VaR example above to two shares: shares A and B. Assume that the bank invests £50 million in share A and £50 million in share B. Assume that the standard deviation of the daily returns for share A and share B is 0.5 percent, and that ρAB = −0.2. Then using equation (1), the portfolio standard deviation is:

σp = √(0.5² × 0.5² + 0.5² × 0.5² + 2 × 0.5² × (−0.2) × 0.5²) = 0.316 percent    (2)

Assuming normality of standardised portfolio returns over the next day, the standardised portfolio return is not expected to be more negative than −2.326 with a 99 percent confidence level. Assuming for simplicity that the mean portfolio return is zero, the actual portfolio return is not expected to be more negative than −2.326 × 0.00316 = −0.00735. It follows straightforwardly that the one-day VaR for the portfolio at the 99 percent confidence level is £735,000.
Clearly, the VaR in this portfolio example is less than the VaR in the single asset example despite the same overall amount being invested and despite the fact that the shares have the same standard deviation as in the single asset case. This is because the shares are negatively correlated, which reduces the risk of the portfolio relative to an investment in just one or other of the shares, demonstrating the benefits of diversification from the VaR perspective[12].
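The portfolio calculation can be sketched in the same way. The snippet below writes equation (1) in matrix form so that it works for any number of assets; it again assumes normality and zero mean returns, and the parameter values are those of the two-share example above.

```python
import numpy as np
from scipy.stats import norm

def vcv_portfolio_var(position_value, weights, sds, corr, confidence=0.99):
    """One-day VCV VaR for a portfolio, assuming normal standardised returns and
    zero mean returns; weights and sds are per asset, corr is the correlation matrix."""
    w, s = np.asarray(weights), np.asarray(sds)
    cov = np.outer(s, s) * np.asarray(corr)   # covariance matrix
    port_sd = np.sqrt(w @ cov @ w)            # equation (1) in matrix form
    return -position_value * norm.ppf(1 - confidence) * port_sd

# Two-share example from the text: 50/50 split, 0.5 percent daily sd each, correlation -0.2
corr = [[1.0, -0.2], [-0.2, 1.0]]
print(vcv_portfolio_var(100_000_000, [0.5, 0.5], [0.005, 0.005], corr))  # approximately 735,000
```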


Rather than assuming a particular probability distribution for asset returns and
applying relevant analytical formulas as in the VCV approach, the HS approach
involves calculating VaR using historical data on the asset (or assets) in question. Note
that the true probability distribution for asset returns is unknown. In the VCV
approach, it is assumed that returns are normally distributed; in the HS approach it is
assumed that the true distribution is unknown, but that the empirical distribution is a
good proxy.
To calculate the one-day VaR for a £100 million investment in share A using the HS
approach an analyst would need to collect daily data on the closing price of share A over
a sample period. Using this sample data, the daily returns can be easily calculated[13].
To work out the VaR at the 99 percent confidence level, all the analyst needs to do is to
reorder the sample of daily returns from most negative to most positive, pick the return
value at the first percentile and multiply by the mark-to-market value of the investment.
A simple example using the Financial Times Stock Exchange (FTSE) 100 index is given in
Section 4.2. Calculating VaR for a portfolio of assets using the HS approach is also
straightforward. Historical data are collected on the relevant assets in the portfolio and
the historical portfolio returns are calculated. Then by reordering the portfolio returns
and choosing the relevant percentile value the VaR of interest can be calculated.
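As a rough sketch of the HS calculation, assuming a history of daily returns is available (computed, for example, as log price differences as in note [13]); the prices below are hypothetical and in practice a much longer history would be used.

```python
import numpy as np

def hs_var(position_value, returns, confidence=0.99):
    """One-day historical simulation VaR: the empirical quantile of past returns
    multiplied by the mark-to-market value of the position."""
    threshold_return = np.percentile(returns, 100 * (1 - confidence))  # e.g. first percentile
    return -position_value * threshold_return

# Hypothetical daily closing prices; in practice a long history would be used
prices = np.array([100.0, 101.2, 100.5, 102.0, 101.1, 99.8, 100.9])
returns = np.diff(np.log(prices))   # r_t = ln(p_t) - ln(p_(t-1)), as in note [13]
print(hs_var(100_000_000, returns))
```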
The MCS approach is less widely used than the VCV and HS approaches, most
likely because it is not as straightforward and it can be computationally expensive[14].
Monte Carlo simulation in this context refers to computer simulation of “pseudo” asset
returns from an assumed probability distribution for asset returns. As with the VCV
approach an assumption about the unknown true probability distribution for asset
returns is made. However, the key difference is that the analyst is not restricted to a
normal distribution. For example, an analyst using the MCS approach could assume
that asset returns are t-distributed[15]. Therefore, using the MCS approach the analyst
can calculate VaR allowing for extreme returns to have a higher probability than under
the assumption of normality.
To calculate the one-day VaR for a single asset using the MCS approach an analyst
starts by choosing a probability distribution for the daily asset returns in question to
proxy the unknown true probability distribution; say they choose a t-distribution with
five degrees of freedom (a t(5)-distribution). Using a computer the analyst generates a
large number of data points from the t(5)-distribution, say 10,000 points. These 10,000
data points are assumed to be actual relevant data points for the asset, i.e. pseudo-returns (even though they are not real returns but simulated using a computer, the analyst pretends that they are real returns). Given this assumption, to work out the relevant VaR the analyst employs the HS approach, but using the pseudo-returns rather than the actual historical data. Thus, the one-day VaR with a 99 percent confidence level can be calculated by reordering the sample of pseudo-returns from most negative to most positive, picking the return value at the first percentile and multiplying by the mark-to-market value of the investment. The extension to portfolio VaR is the same as in the HS approach but using the portfolio pseudo-returns.
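A minimal sketch of the MCS calculation, assuming for illustration that standardised returns follow a t(5)-distribution scaled by the asset's daily standard deviation, with a zero mean and 10,000 simulated points as in the description above.

```python
import numpy as np

def mcs_var(position_value, daily_sd, df=5, confidence=0.99, n_sims=10_000, seed=0):
    """One-day Monte Carlo simulation VaR assuming t(df)-distributed standardised
    returns: simulate pseudo-returns, then apply the HS calculation to them."""
    rng = np.random.default_rng(seed)
    # Standardised pseudo-returns from a t(df) distribution, scaled by the daily
    # standard deviation (zero mean assumed, as in the examples above)
    pseudo_returns = daily_sd * rng.standard_t(df, size=n_sims)
    threshold_return = np.percentile(pseudo_returns, 100 * (1 - confidence))
    return -position_value * threshold_return

# GBP 100 million position with a 0.5 percent daily standard deviation
print(mcs_var(100_000_000, 0.005))
```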

3. VaR in Basel II
If the worst possible loss over a future period associated with financial investments could be calculated with a confidence level as high as 99 percent and the results were accurate, then this information would be extremely useful for both banks and regulators. In particular, it could be used to help ensure that banks have sufficient capital to cover actual losses (up to a 99 percent confidence level). Conversely, if the worst possible loss over a future period associated with financial investments was calculated and the results were not accurate but were erroneously considered to be accurate, then such information could be extremely destabilising for banks, regulators, and for the wider economy.
For the past decade, regulators have allowed banks to employ VaR as their primary
statistical risk management tool, particularly for managing market risk. In many
countries, banks have to calculate VaR and report these amounts to regulators under
law. It is indicative of the trust that has been placed in VaR by banks, regulators and
governments that VaR is the main internal risk management technique allowed under
the Basel II Capital Accord – which has now been implemented into EU law and has
been partially implemented in the USA[16]. Basel II splits bank risk into three
categories:
(1) credit risk (the risk that borrowers will default);
(2) market risk (the risk that the market prices of assets change for the worse); and
(3) operational risk (essentially any other detrimental risk not covered by
categories (1) and (2), such as the risk of loss due to fraud).

Clearly, all three risk categories are crucial to the profitability of banks and extreme
events in one or more of these risk categories could lead to rapid bankruptcy. Banks are
allowed to use two general approaches to quantify credit risk and market risk, either a
particular standardised approach designed by the regulator or an internal approach
designed by the bank and approved by the regulator. VaR is used by many banks to
help measure credit risk under the internal approach, but it is used by virtually all
banks to measure market risk under the internal approach.
The Basel II document gives detailed information on the VaR techniques and the
time horizons and backtesting procedures that banks under its jurisdiction must
employ. For example, Basel II requires that banks employing an internal approach to
managing market risk calculate VaR on a daily basis. More specifically:
Each bank must meet, on a daily basis, a capital requirement expressed as the higher of (i) its
previous day’s value-at-risk number measured according to the parameters specified in this
section and (ii) an average of the daily value-at-risk measures on each of the preceding sixty
business days, multiplied by a multiplication factor[17].
For market risk, Basel II states that the 99 percent confidence level be used, and that the
VaR is for a minimum ten-day holding period. Banks are allowed to use time-scaling
when calculating VaR, which means that, for example, the daily VaR can be scaled up to the ten-day VaR by multiplying by √10[18].
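The quoted capital rule and the square-root-of-time scaling can be illustrated as follows. The multiplication factor of three used here is illustrative only (Basel II specifies a minimum factor subject to supervisory add-ons), and the VaR figures are hypothetical.

```python
import math

def ten_day_var(one_day_var):
    """Scale a one-day VaR to a ten-day VaR using the square-root-of-time rule."""
    return one_day_var * math.sqrt(10)

def market_risk_capital(previous_day_var, preceding_60_day_vars, multiplier=3.0):
    """Capital requirement in the spirit of the rule quoted above: the higher of
    yesterday's VaR and the multiplier times the average VaR over the preceding
    sixty business days (the multiplier value here is illustrative)."""
    average_var = sum(preceding_60_day_vars) / len(preceding_60_day_vars)
    return max(previous_day_var, multiplier * average_var)

print(ten_day_var(1_163_000))                            # ten-day VaR from a GBP 1.163m one-day VaR
print(market_risk_capital(3_700_000, [3_500_000] * 60))  # here the averaged term binds
```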
4. So, what is wrong with VaR?
4.1 The VCV approach
The main problem with the VCV approach is that for many assets, standardised returns are
probably not standard normal random variables[19]. What is the consequence of assuming
normality if the assumption is incorrect? The consequence is that the VaR might be a poor
estimate of the true VaR, where the true VaR is the relevant threshold in the unknown true
probability distribution for the asset or portfolio loss. Going back to the simple single asset example used above: if the one-day standardised returns for share A are not normally distributed but in fact are t(5)-distributed, then rather than the worst standardised return at the 99 percent confidence level being −2.326, the worst standardised return at the 99 percent confidence level is −3.365. Consequently, the worst actual return at the 99 percent confidence level for share A is −3.365 × 0.005 = −0.01683, and the true one-day VaR is £1.683 million, not £1.163 million. Clearly, by incorrectly assuming that standardised returns have a standard normal distribution when in fact they have a t(5)-distribution, VaR is underestimated.
For many financial assets, a common feature of historical data on their standardised
returns is that like the t(5)-distribution above, the empirical distribution has fatter tails
than the standard normal distribution suggesting that the unknown true distribution has
fatter tails than the standard normal distribution. Therefore, as in the simple example
above, assuming normality could for many assets be leading to an underestimate of the
true VaR. Regarding the choice of VaR approach, Basel II states:
No particular type of model is prescribed [. . .] banks will be free to use models based, for
example, on variance-covariance matrices, historical simulations, or Monte Carlo
simulations[20].
Thus, Basel II allows banks to use VaR models based on the assumption of normality
despite it quite probably being an erroneous assumption in many cases.

4.2 The HS approach


The HS approach improves on the VCV approach since it drops the assumption of
normality. In fact, in the HS approach no assumptions are made regarding the
probability distribution for asset returns since the VaR is calculated directly from
historical data on the asset or assets in question. If the true distribution is non-normal
then the HS approach might be more accurate than the VCV approach which assumes
normality (although this is not guaranteed). However, a serious problem with the HS
approach is its sensitivity to changes in the sample size employed. It can be shown that
with a judicious choice of sample size a risky asset can easily be made to look much
less risky than it actually is. Furthermore, because of its sample sensitivity the HS
approach will tend to be procyclical in the sense that in strong bull markets VaR
calculated using the HS approach will tend to be low, meaning that VaR-linked
minimum capital requirements will fall, allowing banks to expand lending. This feature of the HS approach potentially increases the risk of asset price "bubbles"[21].
Consider the following simple example. The daily FTSE 100 returns are shown in Figure 2 for the period April 2, 1984-December 29, 2006. Using the full sample, the one-day VaR at the 99 percent confidence level for a £100 million investment in the index calculated using the HS approach is £2.86 million. However, if data over the period January 2, 2004-December 29, 2006 (approximately the last 750 observations) are used, the VaR is £0.88 million. If only one year of data is employed, January 2, 2006-December 29, 2006 (approximately, the last 250 observations), then the VaR is just £0.40 million!

[Figure 2. FTSE 100 returns for the period April 2, 1984-December 29, 2006]
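The mechanics of this sensitivity can be sketched as follows. The return series below is randomly generated rather than the actual FTSE 100 data, so the differences between windows will be smaller than in the example above (the real effect comes from calm and volatile periods in the historical record); the point is only that the same calculation applied to different windows of one return history gives different VaR figures.

```python
import numpy as np

def hs_var(position_value, returns, confidence=0.99):
    """Historical simulation VaR, as in the earlier sketch."""
    return -position_value * np.percentile(returns, 100 * (1 - confidence))

# Randomly generated stand-in for a long daily return history, oldest to newest;
# real FTSE 100 data would be loaded from historical prices in practice
rng = np.random.default_rng(1)
returns = 0.01 * rng.standard_t(5, size=5750)

print(hs_var(100_000_000, returns))          # full sample
print(hs_var(100_000_000, returns[-750:]))   # roughly the last three years
print(hs_var(100_000_000, returns[-250:]))   # roughly the last one year
```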
The reason why these VaR amounts are so different is that the smaller sample periods
cover a bull market with low volatility, while the full sample period includes the big
negative returns and higher volatility associated with the dotcom crash, the Asian
financial crisis and most notably Black Monday (October 19, 1987). The sample periods
used here have been deliberately chosen to illustrate the sensitivity of the HS technique
to sample size and in practice a bank will typically stress test to reveal the impact of
events such as Black Monday on their portfolio, even if they did not take place in the
sample period used to compute VaR. However, this example is a clear illustration of the
sensitivity of the HS approach to including or excluding particular data points which is
particularly worrying given that in practice there is no significant restriction on a bank’s
choice of sample size when calculating VaR using the HS approach. The advice given by
Basel II on the choice of sample size when calculating VaR is:
The choice of historical observation period (sample period) for calculating value-at-risk will be
constrained to a minimum length of one year. For banks that use a weighting scheme or other
methods for the historical observation period, the “effective” observation period must be at
least one year (that is, the weighted average time lag of the individual observations cannot be
less than six months)[22].
The argument in favour of using small samples when calculating VaR with the HS approach is that it might give a more accurate estimate of the true VaR, since the parameters of the conditional probability distribution for returns are likely to be changing (e.g. the standard deviation). However, from the simple example above the
danger with employing small samples is obvious – it could easily lead to VaR being
underestimated by a large amount (or indeed overestimated)[23].

4.3 The MCS approach


Recall that the true probability distribution for asset returns is unknown and when calculating VaR using the MCS approach the analyst has to assume a probability distribution. Hence, the main weakness with the MCS approach is that the assumed probability distribution for asset returns might be significantly different to the true distribution. If the assumed distribution has thinner tails than the true distribution then, ceteris paribus, the calculated VaR will be an underestimate of the true VaR. If the assumed distribution has fatter tails than the true distribution then, ceteris paribus, the
calculated VaR will be an overestimate of the true VaR. Other features of the assumed
distribution relative to the true distribution are relevant as well. For example, if the
true distribution is skewed to one side but the assumed distribution is symmetrical
then it is also likely that VaR will be underestimated or overestimated depending on the
direction of the skew.
Consider the following simple example. Assume that the true probability
distribution for daily asset returns (in percent) is the t-distribution with one degree
of freedom (a t(1)-distribution). Assume that the analyst using the MCS approach to
calculate VaR incorrectly assumes that the appropriate distribution is the t-distribution
with five degrees of freedom (a t(5)-distribution). A graph of the two distributions is
shown in Figure 3. Clearly the t(1)-distribution has much fatter tails than the
t(5)-distribution.
[Figure 3. t-distribution with one degree of freedom and t-distribution with five degrees of freedom]

Consequently, in this example the MCS VaR will underestimate the true VaR. In this case the true one-day VaR at the 99 percent confidence level for a £100 million investment is £31.821 million. The calculated VaR using the MCS approach with 10,000 replications is £3.305 million. Clearly, in this case the MCS VaR massively underestimates the true VaR. This hypothetical example is deliberately extreme, to illustrate a point; however, analysts employing the MCS approach do not know the true probability distribution for the asset or assets in question and so the probability of
misspecifications on a similar scale is not zero. Note that statistical techniques exist for
testing the likelihood that one probability distribution is more appropriate than
another, however Basel II does not formally require that such techniques be employed.
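A sketch of this misspecification example: the true VaR is computed from the known t(1) quantile, while the MCS VaR is computed under the wrong t(5) assumption. The simulated figure will differ slightly from the £3.305 million reported above because it depends on the particular random draws.

```python
import numpy as np
from scipy.stats import t

position_value = 100_000_000

# True one-day VaR if daily percentage returns are t(1)-distributed
true_var = -position_value * t.ppf(0.01, df=1) / 100       # roughly GBP 31.8 million

# MCS VaR when the analyst wrongly assumes a t(5) distribution for percentage returns
rng = np.random.default_rng(0)
pseudo_returns = rng.standard_t(5, size=10_000) / 100
mcs_var = -position_value * np.percentile(pseudo_returns, 1)  # roughly GBP 3.3 million

print(true_var, mcs_var)
```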
The weaknesses discussed above are just a subset of a much larger set of
weaknesses. For example, VaR is endogenous to the wider economy and can therefore
be destabilising (“endogenous” in this context means that current VaR estimates can
affect future risk, which can affect future VaR estimates, etc. leading to potentially
destabilising cycles); there are fundamental problems with the conventional
time-scaling approach used; VaR is not sub-additive, meaning that the sum of the
VaR for individual assets can be less than the VaR for a portfolio of those assets;
standard deviations and correlations can change unexpectedly at times of stress. Some
academic financial economists have been warning of these weaknesses for many years
(Daníelsson et al., 2001; Daníelsson, 2002; Daníelsson and Zigrand, 2006), which raises the question of why national and international financial regulation has not already been amended to take account of these weaknesses. The answer to this question lies partly
with the fact that while academic economists have been very critical of the VaR
approach, the financial sector has been much less critical. And it is understandable
why. The simple examples given above demonstrate that it probably has been
relatively easy for many banks to underestimate the true VaR while adhering
to regulatory criteria. For banks under the jurisdiction of Basel II, minimum capital
requirements are directly linked to VaR. The flexibility to manipulate actual capital
reserves that VaR provides is not something which a profit maximising bank in a
competitive environment has any incentive to eliminate.

4.4 The problem with backtesting


Backtesting is designed to identify the problems with predictive statistical methods
such as VaR. Although Basel II enforces a backtesting procedure for VaR
unfortunately the procedure can suffer from “low power” (where here “power” is a
statistical term which means the probability of a correct rejection of a false null
hypothesis, here the null hypothesis being that the VaR approach is valid), and size
distortion (where “size” is the probability of an incorrect rejection of a true null
hypothesis). Thus, while in theory the backtesting procedure might be expected to perform well, in practice there can be quite a high probability that it will lead to
poor VaR models being accepted by the regulator and conversely good VaR models
being rejected. The basic details of the main backtesting procedure used to evaluate
VaR models by banks under the remit of Basel II are:
The next step in specifying the backtesting program concerns the nature of the backtest itself,
and the frequency with which it is to be performed. The framework adopted by the
Committee, which is also the most straightforward procedure for comparing the risk
measures with the trading outcomes, is simply to calculate the number of times that the
trading outcomes are not covered by the risk measures (“exceptions”). For example, over 200
trading days, a 99% daily risk measure should cover, on average, 198 of the 200 trading
outcomes, leaving two exceptions[24].
So, a large number of "exceptions" from applying the VaR model to historical data generally indicates that it should be rejected by the regulator. To illustrate the potential statistical problems associated with this type of backtesting procedure, the following simulation experiments are undertaken involving the one-day VaR on a single asset investment at the 99 percent confidence level over a period of one year.
Experiment 1. A sample of 250 daily pseudo-returns (in percent) is simulated from a t(5)-distribution. We then record whether pseudo-return 1 is more negative than the relevant 99 percent VaR return calculated assuming that percent returns are standard normal random variables (i.e. −2.326). This is repeated for pseudo-return 2, 3, etc. across the full 250 days of the sample (i.e. approximately one year of trading days). Note that for this simulated data the VaR model is invalid since it assumes that returns are normally distributed, whereas the probability distribution for the simulated returns is non-normal.
Rather than just doing this experiment once, it is repeated 10,000 times to build up the empirical distribution for the number of exceptions assuming normality when the true probability distribution for returns is a t(5)-distribution. Utilising this empirical distribution, the probability that the number of exceptions will be greater than 1, 3, 10, and 30 is estimated. If a regulator uses "greater than # exceptions" as a criterion to determine whether the bank's VaR model is rejected, these probabilities can be interpreted as the probability that the regulator will reject the VaR model when 1, 3, 10, or 30 exceptions, respectively, is used as the cut-off point. Note that in theory, given that the VaR is calculated over 250 data points, if the VaR model is valid we would expect on average approximately three exceptions at the 99 percent confidence level (since 1 percent of 250 is 2.5).
Experiment 2. The same as Experiment 1 but using pseudo-returns generated from
a t(10)-distribution.
Experiment 3. The same as Experiment 1 but using pseudo-returns generated from
a t(15)-distribution.
Experiment 4. This experiment is the opposite of Experiments 1-3 in the sense that
rather than calculating VaR using an invalid VaR model, a valid model is used. More
specifically, we generate 250 pseudo-returns from a standard normal distribution and
then calculate exceptions as in Experiments 1-3. This is repeated 10,000 times and the
probability that the number of exceptions will be greater than 1, 3, 10, and 30 is
estimated. The results of the experiments are shown in Table I.
In Experiment 1, the true distribution for returns is a t-distribution but the VaR is
calculated assuming a standard normal distribution. Therefore, the VaR model is
invalid and if the backtesting procedure is effective we would expect to see that the
probability of rejecting the VaR model is high, particularly when a low number of
exceptions is used as the cut-off point. In Table I, this is exactly what we do see.

Table I. Probability of rejecting the VaR model (probability that the number of exceptions exceeds the cut-off)

Experiment   True distribution   Assumed distribution   >1       >3       >10      >30
1            t(5)                z                      0.9985   0.9732   0.2340   0.0000
2            t(10)               z                      0.9700   0.7876   0.0210   0.0000
3            t(15)               z                      0.9335   0.6373   0.0045   0.0000
4            z                   z                      0.7287   0.2513   0.0001   0.0000

Notes: t(n), t-distribution with n degrees of freedom; z, standard normal distribution

If three exceptions are used as the cut-off point the probability that the VaR model would be rejected by the regulator is 97.3 percent. Equivalently, there is a 2.7 percent probability
that the regulator would incorrectly accept the invalid VaR model. Therefore, for this
example the backtesting procedure appears to work well as there is a high probability
that the invalid VaR model will be identified and rejected.
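A rough sketch of how experiments of this kind might be reproduced is given below. The sample length, number of repetitions and cut-off follow the description in the text, but the exact implementation used for Table I is not given in the paper, so the code is illustrative.

```python
import numpy as np

def rejection_probability(df, cutoff, n_days=250, n_reps=10_000, seed=0):
    """Probability that a regulator rejects the (normal-based) VaR model when more
    than `cutoff` exceptions are observed against the -2.326 threshold, given that
    daily percentage returns are actually t(df)-distributed."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_reps):
        pseudo_returns = rng.standard_t(df, size=n_days)
        exceptions = np.sum(pseudo_returns < -2.326)
        rejections += exceptions > cutoff
    return rejections / n_reps

# Experiment 1 with a cut-off of three exceptions; compare with 0.9732 in Table I
print(rejection_probability(df=5, cutoff=3))
```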
When the results from Experiments 2 and 3 are considered, however, an important potential weakness of this backtesting procedure becomes clear: low power. In these experiments the pseudo-returns are generated from t-distributions that are closer to the normal distribution than in Experiment 1 because higher degrees of freedom are assumed. But they are still distinctly non-normal. However, in Experiment 2, when three exceptions is used as the cut-off point the probability that the invalid VaR model would be rejected by the regulator is only 78.8 percent. In Experiment 3, this probability falls even further to just 63.7 percent; hence, for this example there is a 36.3 percent probability that the regulator using this backtesting procedure would accept an invalid VaR model. Clearly, the power of the backtesting procedure falls quite rapidly as the degrees of freedom increases.
When the results from Experiment 4 are considered, a further potential weakness
with this backtesting procedure emerges: size distortion. Given that in this experiment
the VaR model is valid, with three exceptions used as the cut-off point the VaR model
should only be rejected with a low probability. However, the actual probability of a
rejection is 25.1 percent. Therefore, in this case there is a 25.1 percent probability that a
valid VaR model will be spuriously rejected by the regulator using this backtesting
procedure. To reduce the probability of a spurious rejection below 25.1 percent, the
regulator could simply increase the number of exceptions used as the cut-off point; for
example from three to ten exceptions where the probability of a spurious rejection in
Experiment 4 is only 0.01 percent. Note however that if the regulator did this, and the
VaR model was actually invalid as in Experiments 1-3, then the probability of a correct
rejection of the invalid VaR model drops significantly. If ten exceptions is used as the
cut-off point and Experiment 3 represents a true risk management scenario then the
probability that the regulator will reject the invalid VaR model is virtually zero!
These simulation experiments are deliberately simple and are not designed to be a
true reflection of a risk management scenario faced by a bank, but are designed to
illustrate the potential statistical problems that can arise with this type of backtesting
procedure. The reason why these statistical problems might occur is that the backtesting
results are based on sample data and are therefore dependent on numerous sample
specific factors. Consequently, in practice the actual performance of the backtesting
procedure can differ greatly to what might be expected from statistical theory.
Basel II does recognise that these statistical difficulties exist and a similar set of
simulation experiments are reported in the Basel II document to illustrate the difficulties.
Consequently, rather than use a single cut-off point to determine the acceptance or
rejection of a bank’s VaR model Basel II employs a green, yellow, red “traffic light”
approach which defines regions of exceptions that are acceptable (e.g. the green region is
0-4). This recognises uncertainty and gives the regulator some flexibility, however in
light of the ease with which VaR can be manipulated (e.g. the FTSE example in
Section 4.2) it would in many cases be extremely easy for a bank whose VaR model is at
the edge of one unacceptable region to move it into another more acceptable region prior
to reporting its results to the regulator.
4.5 VaR, derivatives, and credit
Over the past two decades, there has been a huge increase in the trading of derivatives
and in the past decade a significant increase in the trading of credit derivatives with
securitised assets as the reference entity (where “securitised assets” are individually
illiquid cash-flow producing assets such as mortgages which are re-packaged into
securities and sold on to investors). The collapse in value of securitised assets with
cash-flows linked to US sub-prime mortgages and the consequent impact on the
derivatives and credit derivatives markets is the “risk event” which is responsible for
some of the biggest losses experienced by banks from late-2007 through to the present.
Calculating VaR for portfolios of derivatives and credit derivatives is beyond the scope
of this paper. However, some of the approaches used have their origins in the VaR
approaches discussed above. Since many derivatives have non-linear payoff profiles
(e.g. the change in an option price as a result of a change in the price of the underlying
asset is not constant), the VCV approach needs to be extended. The extended VCV
approach for derivatives relies on mathematical approximations of the relevant
non-linear relationships which can contain quite large errors, and consequently an
extended MCS approach is often preferred. The weaknesses of both the VCV approach
and MCS approaches discussed above are therefore to some degree also applicable
when VaR is calculated for derivatives and credit derivatives.

5. Conclusions
This critical overview has introduced the main approaches to calculating VaR and
discussed the main weaknesses of these approaches. These can be summarised as
follows:
• The VCV approach to VaR, which is popular because of its simplicity and because of the support offered by RiskMetrics, is seriously flawed. It is based on the assumption that asset returns are conditionally normally distributed. For many assets, returns are probably not conditionally normally distributed and for these assets the VCV approach will tend to lead to an underestimate of the true VaR.
• The HS approach to VaR, which is also popular because of its simplicity, improves on the VCV approach since asset returns are not assumed to be conditionally normally distributed. However, the approach is also seriously flawed. By altering the sample size employed it is possible to get wildly different estimates of VaR. The technique is quite open to manipulation so as to achieve low VaR amounts unless sufficiently well-regulated.
• The MCS approach is less popular than the VCV and HS approaches since computationally it is much more expensive, particularly for large portfolios. The MCS approach suffers from a similar weakness to the VCV approach in the sense that the assumed probability distribution for asset returns could be incorrect. However, it has the advantage that the analyst is not restricted to assuming a normal distribution.

In addition to these main weaknesses, although regulators should be commended for formally requiring backtesting, it is shown here that the backtesting procedure enforced by Basel II could lead to invalid VaR models being accepted and valid VaR models being rejected with quite a high probability. Furthermore, the fact that the criteria for a VaR model being accepted by the regulator are known by banks prior to their backtesting results being calculated and reported to the regulator is less than ideal given the ease with which VaR can be manipulated.
In light of these and other weaknesses, it is very difficult to give any support to the continued use of the VCV and HS approaches to calculating VaR in their current form, or the backtesting procedure under Basel II. If they are to be allowed at all under future regulation then it is imperative that ways of developing and extending the VCV and HS approaches to deal with the weaknesses detailed in this paper are found. In the longer term, serious consideration needs to be given to the issue of whether all financial regulation based on statistical models estimated using actual data is misguided and potentially destabilising because of the endogeneity of the models to the data generation process. This point is raised in Daníelsson et al. (2001), and covered in more detail in Daníelsson (2002) and Daníelsson et al. (2004). In light of the current financial crisis, which has occurred despite the increasing use of quantitative risk management techniques such as VaR, it is a point that urgently requires further research by financial economists and macroeconomists.

Notes
1. Taleb (2008) has been particularly critical of VaR and of quantitative risk management
generally (for further details see his popular book The Black Swan: The Impact of the Highly
Improbable).
2. Even the Turner Review gets it slightly wrong; it should be “misplaced reliance on
sophisticated statistics”!
3. While many banks use between one and three years' worth of data when calculating VaR it is easy to find exceptions; e.g. in 2006, UBS employed five years of historical data to calculate daily VaR at the 99 percent confidence level for market risk. The results are given in the Annual Reporting document available at: www.ubs.com/1/e/investors/annual_reporting2006/handbook/0018/0025.html
4. “Backtesting” refers to testing the accuracy and robustness of a predictive statistical
technique over a historical sample period – where the true outcomes are known.
5. The standard academic text is Jorion (2007). The number of academic papers and textbooks on VaR is too numerous to mention all here. The web site www.gloriamundi.org/ has references to many good textbooks and papers on VaR. For examples of excellent high-level academic papers on specialist aspects of VaR see Duffie and Pan (2001), Engle and Manganelli (2004), Daníelsson et al. (2004), and Daníelsson and Zigrand (2006). For those wanting a more advanced overview than is the aim of this paper see Duffie and Pan (1997) or Daníelsson (2002).
6. VaR techniques can also be grouped into two classes; the VCV and MCS approaches are in
the “parametric” class and the HS approach is in the “non-parametric” class. The VCV
approach is also referred to as the “delta-normal” method.
7. The publicly available RiskMetrics (1996, 2001) documents have comprehensive details on
the service including mathematical and statistical assumptions made.
8. “Standard deviation” is just a parameter that measures the spread of the random variable
around its mean. If the standard deviation is big, then the random variable is widely spread; if it is small, then it is tightly packed.
9. Here “degrees of freedom” can be thought of as a scaling parameter which alters the shape of
the distribution. The lower the degrees of freedom, the fatter the tails of the distribution.
10. The assumption that asset returns have a mean of zero is often acceptable when calculating VaR for short horizons, but it might not be acceptable for VaR over long horizons. Banks and regulators typically calculate VaR for short horizons (e.g. daily VaR), and so it has become standard practice in many textbooks on VaR to always assume a mean of zero.
11. These analytical formulas are standard algebraic representations that have been used for
many years in portfolio theory (Markowitz, 1952).
12. One might expect that calculating VaR for a portfolio using the VCV approach would be
extremely difficult if there were large numbers of assets in the portfolio and if investments
matured at different times. However, these complexities can be easily dealt with using
matrix algebra and by mapping cash flows to standardised instruments. See RiskMetrics
(1996, 2001) for further details.
13. The formula is r_t = ln(p_t) − ln(p_(t−1)), where p_t is the closing price on day t and ln denotes the natural logarithm.
14. For the MCS approach the computational costs in terms of time taken to calculate the VaR,
the training required for analysts, and the computer power required are much greater than
for the VCV and HS approaches.
15. Either the standardised or the raw returns (in percent).
16. The two relevant EU Directives are “Directive 2006/49 EC” on the capital adequacy of
investment firms and institutions and “Directive 2006/48/EC” which relates to the taking up
and pursuit of the business of credit institutions. See www.federalreserve.gov/newsevents/
press/bcreg/20080626b.htm for further information on the level of implementation in the USA.
17. Bank for International Settlements, Basel Committee on Banking Supervision (2006), Section
VI. Market Risk – The Internal Models Approach, 4. Quantitative Standards, (i), p. 196.
18. For further details see, Bank for International Settlements, Basel Committee on Banking
Supervision (2006), particularly Section VI. Note that time-scaling in this way is only valid if
one-day asset returns are identically and independently distributed.
19. For many assets, returns and standardised returns are probably neither conditionally nor
unconditionally normally distributed random variables.
20. Bank for International Settlements, Basel Committee on Banking Supervision (2006,
Quantitative Standards(f), p. 96).
21. The Turner Review stresses the importance of avoiding procyclicality in future regulation
(Turner, 2009, p. 59, Chapter 2: “What to do”).
22. Bank for International Settlements, Basel Committee on Banking Supervision (2006,
Quantitative Standards(d), p. 195).
23. In this example, we do not know what the true conditional and unconditional distributions
are so we do not know whether these VaR amounts are actually underestimates or
overestimates of the true VaR.
24. Bank for International Settlements, Basel Committee on Banking Supervision (2006), Annex
10a, Supervisory Framework for the use of “Backtesting” in Conjunction with the Internal
Models Approach to Market Risk Capital Requirements, II. Description of the backtesting
framework, 21, p. 312.

References
Bank for International Settlements, Basel Committee on Banking Supervision (2006), "Section VI.
Market risk – the internal models approach”, International Convergence of Capital
Measurement and Capital Standards: A Revised Framework, Comprehensive Version,
Bank for International Settlements, Basel Committee on Banking Supervision, Basel,
available at: www.bis.org/publ/bcbs128.pdf
Daníelsson, J. (2002), "The emperor has no clothes: limits to risk modelling", Journal of Banking & Finance, Vol. 26, pp. 1273-96.
Daníelsson, J. and Zigrand, J.-P. (2006), "On time-scaling of risk and the square-root-of-time rule", Journal of Banking & Finance, Vol. 30, pp. 2701-13.
Daníelsson, J., Shin, H.S. and Zigrand, J.-P. (2004), "The impact of risk regulation on price dynamics", Journal of Banking & Finance, Vol. 28, pp. 1069-87.
Daníelsson, J., Embrechts, P., Goodhart, C., Keating, C., Muennich, F., Renault, O. and Shin, H.S. (2001), "An academic response to Basel II", Special Paper No. 130, LSE Financial Markets Group, London School of Economics, London.
Duffie, D. and Pan, J. (1997), "An overview of value at risk", Journal of Derivatives, Spring, pp. 7-49.
Duffie, D. and Pan, J. (2001), "Analytical value at risk with jumps and credit risk", Finance and Stochastics, Vol. 5, pp. 155-80.
Engle, R. and Manganelli, S. (2004), “CAViaR: conditional autoregressive value at risk by
regression quantiles”, Journal of Business and Economic Statistics, Vol. 22, pp. 367-81.
Jorion, P. (2007), Value at Risk: The New Benchmark for Managing Financial Risk, 3rd ed.,
McGraw-Hill, New York, NY.
Markowitz, H.M. (1952), “Portfolio selection”, Journal of Finance, Vol. 7, pp. 77-91.
Nocera, J. (2009), “The strange death of risk management”, The New York Times, January 2,
available at: www.nytimes.com (accessed May 1, 2009).
RiskMetrics (1996), Technical Document, JP Morgan, New York, NY.
RiskMetrics (2001), Return to RiskMetrics: The Evolution of a Standard, RiskMetrics Group,
New York, NY.
Taleb, N.N. (2008), The Black Swan: The Impact of the Highly Improbable, Penguin, New York, NY.
Taleb, N.N. and Triana, P. (2008), “Bystanders to this financial crime were many”, Financial
Times, December 7, available at: www.ft.com (accessed May 1, 2009).
Turner, A. (2009), The Turner Review: A Regulatory Response to the Global Banking Crisis,
Financial Services Authority, London.

About the author


Robert Sollis is a Professor of Financial Economics at Newcastle University Business School,
University of Newcastle upon Tyne. His research and teaching interests lie in the application of
econometrics and statistics in banking, finance, and risk management. Robert Sollis can be
contacted at: Robert.Sollis@ncl.ac.uk
