
QUANTITATIVE RESEARCH / JULY 2010

Authors
Bernd Scherer, PhD* (Professor of Finance at EDHEC Business School)
Bala Balachander, PhD (857-383-5880, bbalachander@capitaliq.com)
Ruben Falk (212-438-0648, rfalk@capitaliq.com)
Brian Yen, PhD (617-530-8107, byen@capitaliq.com)

Introducing Capital IQ's Fundamental US Equity Risk Models


Investment management firms, institutional investors, sophisticated individual investors, investment consultants, and regulators increasingly focus on the risk management capabilities of investment management firms. This is not surprising, as investors ultimately receive a return on risk, not a return on assets.1 Even small amounts invested in highly volatile assets might expose investors to higher risks than large amounts of safe assets. Leveraged portfolios could be invested in a relatively small amount of net assets, but might still contain much bigger risks than larger yet unleveraged portfolios. Under this view, asset management becomes risk management. However, risk management needs formal risk management models to make sure the conjectured consequence consistently follows the assumed cause when analyzing portfolios, i.e. when managing risks. The purpose of this paper is to document the process of building and testing a fundamental US equity risk model across a number of short to medium term forecast horizons. The paper is organized as follows: Section 1 reviews typical risk model applications; Section 2 discusses the relative merits of alternative forms of multifactor risk models; Section 3 documents data and methodology; Section 4 describes the chosen test metrics; Section 5 presents our results; and Section 6 concludes.

1 See CLARKE et al. (2002) and SCHERER (2000) for more details.

Bernd Scherer, PhD (bernd.scherer@edhec.edu) is Professor of Finance at EDHEC Business School, London, and a member of EDHEC Risk.


1 The Need for Risk Management Models


Portfolio risk models are typically used for several purposes. The first purpose is to aggregate risks across a large number of individual holdings into a single risk metric. In order to aggregate risks (both across time and across assets), all commercially available risk models, at an appropriate stage, use the assumption of multivariate normality. While it is trivial to take issue with multivariate normality, the alternatives for large portfolios are less clear, as higher order co-moments require a vast amount of data, are difficult to estimate, and are subject to much more estimation error. For large, diversified, factor neutral (to ensure diversification and the removal of trending and leptokurtic factor bets) long/short portfolios, the central limit theorem will offer a reasonable approximation to normality.

The second use for portfolio risk models is to forecast portfolio risks. These risk figures are routinely used to comply with internal, client, or regulatory guidelines. Taking more risk than an investor is able to tolerate might force him to abandon an otherwise sensible investment strategy at a time of stress, which is usually not the best time to sell. Conversely, taking too little risk is equally undesirable, as investors might be unable to reach their investment targets even though their investment views have been proven correct.2

A third use for risk management models is the decomposition of position risks, a technique that has recently been rebranded under the popular name of risk budgeting.3 For this to become a meaningful exercise, risks have to be decomposed in the same way they are taken by portfolio managers. Risk budgeting requires identifying and interpreting individual risk factors as well as choosing risk factors (variables that explain why a group of stocks tends to move together) consistent with the portfolio manager's view of the world, i.e. his investment process. Composite risk factors that are built without considering the way asset managers make decisions are not helpful in this respect.

Fourth, risk models are used as inputs for portfolio optimization. For this purpose, a good risk model needs to correctly separate factor risks and residual risks. For the optimizer, residual risks (risks that are uncorrelated with each other) are good risks because they quickly diversify away. Factor risks, on the contrary, are bad risks, as they are shared by all assets and hence will not diversify away completely. If, however, a risk model incorrectly identifies risks as residual risks that are in reality still affected in a systematic fashion by a hidden risk factor (unknown to the risk model), then the speed of diversification will be strongly overestimated. Thus, what looks like a low risk portfolio is actually much riskier than what the portfolio (risk) manager expected.4

A fifth purpose, somewhat incidental to the primary purpose of managing risk, is to use the risk factor returns and exposures to decompose the sources of portfolio returns. Along with the risk attribution, this can provide useful feedback on the portfolio construction process as related to the ex-post effect of intended or unintended factor bets.
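To make the decomposition idea concrete, the following minimal sketch (not the production model) computes position level risk contributions from a given covariance matrix; the three asset weights and covariance values are purely illustrative.

```python
import numpy as np

def risk_contributions(w, cov):
    """Percentage contribution of each position to total portfolio volatility.

    Marginal contribution_i = (cov @ w)_i / sigma_p; the weighted marginal
    contributions sum exactly to total portfolio volatility.
    """
    w = np.asarray(w, dtype=float)
    sigma_p = np.sqrt(w @ cov @ w)        # total portfolio volatility
    marginal = cov @ w / sigma_p          # marginal risk contributions
    contrib = w * marginal                # position risk contributions
    return sigma_p, contrib / sigma_p     # shares sum to one

# purely illustrative three-asset example
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = np.array([0.5, 0.3, 0.2])
sigma, shares = risk_contributions(w, cov)
print(sigma, shares)                      # shares sum to 1.0
```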

2 If our only task was to arrive at a risk forecast for a given portfolio, then we would not necessarily want to employ a factor risk model that needs to rely on multivariate normality to keep the mathematics for risk attribution (which we do not need to make a risk forecast) and portfolio optimization (which we do not need as the investor looks at a given portfolio) tractable. Instead we could define a time series of back-casted log portfolio returns, $\{ r_t \}_{t=1}^{n}$, computed from today's (date $n$) portfolio weights, $w_{n,i}$, for $i = 1, \ldots, k$ assets,

$$r_t = \ln\left( 1 + \sum_{i=1}^{k} w_{n,i} R_{t,i} \right),$$

where $R_{t,i}$ denotes the simple return of asset $i$ at time $t$. We can view this as a pseudo history of portfolio returns conditional on today's weights. This is identical to a historical simulation of today's portfolio. A univariate portfolio risk model is essentially a financial time series model that works directly on the back-casted return series without the need to calculate individual asset variances and co-variances.

3 See SCHERER (2010) for a review of the most recent techniques.
4 See STEFEK/LEE (2008) or RENSHAW (2008) for more elaborate versions of this argument.
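The back-casted return series of footnote 2 is straightforward to compute. A minimal sketch, where the weight vector and the T x k history of simple asset returns are hypothetical stand-ins for real portfolio data:

```python
import numpy as np

def backcast_portfolio_returns(weights_today, simple_returns):
    """Pseudo history of log portfolio returns conditional on today's weights.

    simple_returns: T x k array of historical simple asset returns R_{t,i}
    weights_today:  length-k vector of today's portfolio weights w_{n,i}
    Returns r_t = ln(1 + sum_i w_{n,i} * R_{t,i}) for every historical date t.
    """
    weights_today = np.asarray(weights_today, dtype=float)
    return np.log1p(simple_returns @ weights_today)

# illustrative use with random data standing in for a real return history
rng = np.random.default_rng(0)
R = rng.normal(0.0, 0.01, size=(500, 4))      # 500 days, 4 assets
w = np.array([0.4, 0.3, 0.2, 0.1])
r = backcast_portfolio_returns(w, R)
monthly_vol = r.std() * np.sqrt(21)           # simple scaling for illustration
```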


2 Multifactor Equity Risk Models


2.1 The Need for Conditional Risk Models

The goal of a variance based multifactor risk model is to forecast the conditional covariance matrix by imposing some structure on the unconditional covariance matrix as efficiently and accurately as possible. The need for developing multifactor risk models arises from the shortcomings of the unconditional covariance matrix, which needs $n(n+1)/2$ parameters to estimate, i.e. $n$ variances on the main diagonal and the remaining upper or lower triangular half. This is practically infeasible for large portfolios as we simply do not have enough data. Imagine an investment universe with $n = 2000$ assets. We would need at least $2000 \cdot 1999 / 2 \approx 2$ million data points for our matrix to be positive semi-definite (i.e. to be of rank 2000). The larger the number of assets (relative to the number of observations), the more severe the estimation error (an ill-conditioned covariance matrix) and hence the portfolio instability that arises. This is where we need to impose a (factor) model on the data. The idea (hope) of imposing a parametric model is to reduce estimation error. However, we will only benefit from its structure as long as this structure is largely correct. What "largely" means is an empirical question.

2.2 A Generic Model

In general, a factor model consists of factor loadings (also called betas or exposures), factor realizations (factor returns), and residual returns that allow us to link factor and residual volatility to individual stocks and aggregate risks within portfolios.5 All linear multifactor risk models share the same principal structure

(1) $$\Sigma = B \, \Omega_{ff} \, B^T + \Delta$$

where $B$ represents an $n \times k$ matrix of factor loadings, $\Omega_{ff}$ the $k \times k$ covariance matrix of factor returns, and $\Delta$ an $n \times n$ diagonal residual risk matrix.6

Factor models differ in how factor loadings and factor returns are estimated and/or pre-specified. Managing factor risk is extremely important, as factor risk tends to be both leptokurtic and trending, while residual risk tends to diversify away quickly (at least for a good factor model, i.e. when residual risks cannot be jointly described by some hidden factor). As outlined above, factor models try to reduce estimation error by imposing some structure to focus on the meaningful correlations rather than on noise. If correct, this will be beneficial for hedging and optimization applications.
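The structure in (1) is easy to translate into code. The sketch below, with purely illustrative inputs, assembles portfolio risk from factor loadings, a factor covariance matrix, and residual variances, and splits it into the factor part that is shared across assets and the residual part that diversifies away:

```python
import numpy as np

def portfolio_risk(w, B, omega_ff, resid_var):
    """Split portfolio variance into factor and residual parts under (1).

    B:         n x k factor loadings
    omega_ff:  k x k factor covariance matrix
    resid_var: length-n vector of residual variances (diagonal of Delta)
    """
    w = np.asarray(w, dtype=float)
    x = B.T @ w                                  # portfolio factor exposures
    factor_var = x @ omega_ff @ x                # shared, slow to diversify
    residual_var = np.sum(w**2 * resid_var)      # idiosyncratic, diversifies fast
    return factor_var, residual_var, np.sqrt(factor_var + residual_var)

# illustrative numbers only
rng = np.random.default_rng(1)
n, k = 100, 3
B = rng.normal(0.0, 1.0, size=(n, k))
omega_ff = np.diag([0.04, 0.01, 0.01])
resid_var = np.full(n, 0.09)
w = np.full(n, 1.0 / n)                          # equally weighted portfolio
print(portfolio_risk(w, B, omega_ff, resid_var))
```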

5 CONNOR et al. (2009), ZANGARI (2003), CONNOR (1995) and MACQUEEN (2003) provide very comprehensive reviews of the details of multivariate equity risk modeling. DiBARTOLOMEO/WARRICK (2005) extend factor based risk models to include market sentiment.

6 Zero off-diagonal elements in $\Delta$ are assumed rather than estimated. If we did not impose this restriction, there would be no dimension reduction: we would still have to estimate an $n \times n$ residual covariance matrix. However, if our model captures all relevant risk factors, then the off-diagonal elements are close to zero. Risk model providers sometimes use hybrid risk models to explain non-zero off-diagonal entries in terms of hidden factors.


A generic factor model for $i = 1, \ldots, n$ stocks across $t = 1, \ldots, T$ time periods and $j = 1, \ldots, k$ factors can be written as a general panel regression model

(2) $$\begin{pmatrix} r_{1t} \\ \vdots \\ r_{it} \\ \vdots \\ r_{nt} \end{pmatrix} = \begin{pmatrix} \beta_{11t} & \cdots & \beta_{1jt} & \cdots & \beta_{1kt} \\ \vdots & & \vdots & & \vdots \\ \beta_{i1t} & \cdots & \beta_{ijt} & \cdots & \beta_{ikt} \\ \vdots & & \vdots & & \vdots \\ \beta_{n1t} & \cdots & \beta_{njt} & \cdots & \beta_{nkt} \end{pmatrix} \begin{pmatrix} f_{1t} \\ \vdots \\ f_{jt} \\ \vdots \\ f_{kt} \end{pmatrix} + \begin{pmatrix} \varepsilon_{1t} \\ \vdots \\ \varepsilon_{it} \\ \vdots \\ \varepsilon_{nt} \end{pmatrix}$$

Here $r_{it}$ denotes the return of stock $i$ at date $t$, and $\beta_{ijt}$ stands for the factor exposure of stock $i$ to factor $j$ at time $t$. Factor returns are given by $f_{jt}$, while $\varepsilon_{it}$ captures the unexplained, i.e. residual, risk. All returns are demeaned such that (2) does not require an intercept. Of course this model offers too much generality to be of any practical relevance. We will show in the next sections how different risk models impose different constraints on (2).

2.3 Cross Sectional Models

Cross sectional models drop the time index $t$ in equation (2) to arrive at the following cross sectional regression

(3) $$\begin{pmatrix} r_{1} \\ r_{2} \\ \vdots \\ r_{n} \end{pmatrix} = \begin{pmatrix} \beta_{11} & \beta_{12} & \cdots & \beta_{1k} \\ \beta_{21} & \beta_{22} & \cdots & \beta_{2k} \\ \vdots & \vdots & & \vdots \\ \beta_{n1} & \beta_{n2} & \cdots & \beta_{nk} \end{pmatrix} \begin{pmatrix} f_{1} \\ f_{2} \\ \vdots \\ f_{k} \end{pmatrix} + \begin{pmatrix} \varepsilon_{1} \\ \varepsilon_{2} \\ \vdots \\ \varepsilon_{n} \end{pmatrix}$$

The cross section of stock returns is regressed against selected individual stock characteristics like industry membership, capital structure leverage, book to price ratio, stock momentum, etc. Each characteristic $\beta_{ij}$ represents a data point that is assumed to be known. Factor returns $f_1, \ldots, f_k$, in contrast, are unknown and need to be estimated (this also applies to residual returns $\varepsilon_1, \ldots, \varepsilon_n$). For each time period we run a new cross sectional regression and store the estimated factor and residual returns. This method allows us to build a time series of factor and residual returns to estimate the factor covariance matrix $\Omega_{ff}$ and the residual covariance matrix $\Delta$. The asset covariance matrix can then be calculated as $\Sigma = B \, \Omega_{ff} \, B^T + \Delta$.
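A minimal sketch of one cross sectional estimation step as in (3), using ordinary least squares and randomly generated stand-ins for the exposure and return data; looping over dates and storing the estimates yields the factor and residual covariance matrices:

```python
import numpy as np

def cross_sectional_step(r, B):
    """One date of a cross sectional model as in (3).

    r: length-n vector of (demeaned) stock returns for the date
    B: n x k matrix of known characteristics/exposures for the same date
    Returns estimated factor returns f_hat and residual returns eps_hat.
    """
    f_hat, *_ = np.linalg.lstsq(B, r, rcond=None)   # OLS factor returns
    eps_hat = r - B @ f_hat                         # residual returns
    return f_hat, eps_hat

# repeat over dates, store the estimates, then form the covariance matrices
rng = np.random.default_rng(2)
n, k, T = 200, 5, 250
factor_rets, resid_rets = [], []
for t in range(T):
    B_t = rng.normal(size=(n, k))                   # stand-in exposures
    r_t = B_t @ rng.normal(0, 0.01, size=k) + rng.normal(0, 0.02, size=n)
    f_hat, eps_hat = cross_sectional_step(r_t, B_t)
    factor_rets.append(f_hat)
    resid_rets.append(eps_hat)
omega_ff = np.cov(np.array(factor_rets), rowvar=False)   # k x k factor covariance
resid_var = np.array(resid_rets).var(axis=0)             # diagonal of Delta
```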

2.4 Time Series Models

The oldest and most intuitive factor models are time series models with pre-specified factor returns, i.e. the time series of factor returns is given, while factor exposures (and residual returns) need to be estimated. In the notation of (2), we run for each of the $i = 1, \ldots, n$ stocks a time series regression of the form

(4) $$\begin{pmatrix} r_{i1} \\ r_{i2} \\ \vdots \\ r_{iT} \end{pmatrix} = \begin{pmatrix} f_{11} & f_{21} & \cdots & f_{k1} \\ f_{12} & f_{22} & \cdots & f_{k2} \\ \vdots & \vdots & & \vdots \\ f_{1T} & f_{2T} & \cdots & f_{kT} \end{pmatrix} \begin{pmatrix} \beta_{i1} \\ \beta_{i2} \\ \vdots \\ \beta_{ik} \end{pmatrix} + \begin{pmatrix} \varepsilon_{i1} \\ \varepsilon_{i2} \\ \vdots \\ \varepsilon_{iT} \end{pmatrix}$$


Time series models are designed to explain the variation of individual returns (the left hand side in (4)) from a set of independent variables (the right hand side). The covariance matrix of asset returns is calculated from $\Sigma = \hat{B} \, \Omega_{ff} \, \hat{B}^T + \Delta$, where the $n \times k$ matrix of factor loadings (betas), $\hat{B}$, is estimated from $n$ time series regressions with $k$ factors each.
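The corresponding time series estimation of (4) runs one regression per stock against the pre-specified factor returns. A minimal sketch on simulated data (not the production estimator, which also considers lagged factors and robust variants as discussed in Section 3.5):

```python
import numpy as np

def estimate_loadings(asset_returns, factor_returns):
    """Time series estimation of B as in (4): one OLS regression per stock.

    asset_returns:  T x n matrix of (demeaned) stock returns
    factor_returns: T x k matrix of pre-specified factor returns
    Returns the n x k loading matrix B_hat and the T x n residual matrix.
    """
    B_hat, *_ = np.linalg.lstsq(factor_returns, asset_returns, rcond=None)
    residuals = asset_returns - factor_returns @ B_hat
    return B_hat.T, residuals                     # lstsq returns a k x n matrix

# illustrative data only
rng = np.random.default_rng(3)
T, n, k = 500, 50, 4
F = rng.normal(0, 0.01, size=(T, k))
true_B = rng.normal(size=(n, k))
R = F @ true_B.T + rng.normal(0, 0.02, size=(T, n))
B_hat, eps = estimate_loadings(R, F)
```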

2.5 Time Series versus Cross Sectional: Which Model Is Best?

Currently most risk management software providers use cross sectional models. We believe this is not necessarily the best choice, for the following reasons:

1. Time series models are designed to explain the variation in asset returns, in other words risk.7 Cross sectional models, in contrast, explain the variation in cross sectional returns, in other words conditional asset means. This is different from modeling risk and is part of the legacy of cross sectional models, which were first developed for the purpose of forecasting returns. We believe cross sectional models are transformed alpha models, not original risk models. FAMA/FRENCH (1993) themselves argue that a time series model is more appropriate to estimate risk.8

2. Time series models will clearly estimate individual beta exposures with (estimation) error:

$$\hat{\beta}_{ij}^{OLS} = \beta_{ij}^{true} + \varepsilon_{ij}$$

However, this estimation error does little harm at the portfolio beta level, as it simply diversifies away when we add up individual beta exposures:

$$\beta_{j}^{portfolio} = \frac{1}{n} \sum_{i=1}^{n} \hat{\beta}_{ij} = \frac{1}{n} \sum_{i=1}^{n} \beta_{ij}^{true} + \frac{1}{n} \sum_{i=1}^{n} \varepsilon_{ij} \approx \frac{1}{n} \sum_{i=1}^{n} \beta_{ij}^{true}$$

given that betas are individually unbiased (we assumed equal weighting of individual assets); see the simulation sketch after this list. This is in stark contrast to cross sectional models. If individual betas (the independent variables) are measured with error, the resulting regression coefficients from regressing the cross section of returns against the cross section of betas, i.e. the factor returns, are biased.9 However, errors in factor returns do not diversify. They affect the whole portfolio and remain on the databank (of factor returns) forever.

3. Cross sectional models are plagued with econometric problems. Regressions suffer from heteroskedasticity, i.e. not all observations carry the same information. Observations with large errors should be weighted less, but there is little available theory that could guide model builders. Also, multivariate outliers are difficult to spot and might have a nontrivial effect on factor returns. Since misestimated factor returns do not diversify away, some risk model providers use robust regressions instead. Robust regressions tend to move factor risks to residual risks. The fact that residual risks diversify quickly may lead to problems if the suspected outlier is not an outlier but an informative data point instead.

4. Cross sectional models use unrealistic and extreme sector sensitivities that can only take values of zero or one. However, multiproduct firms have different exposures to different industries. If this is not correctly specified, the resulting estimates of factor returns will be misestimated and hence risk forecasts are likely to be wrong. To illustrate the point, consider the return of a given industry at a given point in time. This industry return equals the regression coefficient on a dummy variable with unit exposure if the stock belongs to the given industry and zero otherwise. These pre-specified unitary industry exposures are most certainly wrong, as stocks within an industry have different exposures to the risk factor running through the industry. Upstream and downstream firms in the oil industry, for example, will have different exposures (betas) to the oil factor, i.e. they offer different leverage on the oil industry.


Providers of cross sectional risk models try to mitigate this problem with finer and finer industry definitions and multi-industry assignments. However, the problem of measuring industry exposures using a scale that is pre-defined and discrete rather than dynamic and continuous cannot be fully addressed in cross sectional models.

5. In cross sectional models all stocks within a sector are generally assumed to respond identically to an industry shock. In our view, this makes cross sectional models difficult to apply for hedging purposes. Every cash neutral allocation within an industry exhibits zero industry risk exposure in the logic of cross sectional models, while a long/short portfolio of high versus low industry beta stocks will in reality exhibit sector risk.

6. Risk estimates from cross sectional models depend on the industry universe used. As a result, there is no one size fits all cross sectional model. Risk model providers seem to bias their estimation of factor returns to where their book of business is (which makes business sense, of course). In contrast, factor returns in a time series model are given, while beta exposures are calculated on a stock by stock basis without any consideration of the wider universe.

7 See ROSENBERG (1974) as the first reference for multifactor risk models.

8 Another way of saying this is that factor returns from cross sectional models are actually market implied factor returns and represent more what the market might have thought about a factor return than what it truly was. Suppose we calculate the betas of all stocks with respect to oil price changes and use these betas as a stock characteristic. Regressing this characteristic against next period stock returns yields a market implied return on oil that will generally differ from the realized return on oil.

9 This is a standard result in econometrics far beyond risk model estimation. See KENNEDY (2003, chapter 9) for more detail.
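The diversification argument in point 2 is easy to verify by simulation. The sketch below adds unbiased noise to hypothetical true betas and shows that the equally weighted portfolio beta is barely affected:

```python
import numpy as np

# Unbiased but noisy stock-level betas average out at the portfolio level
# (equal weighting), so the aggregate estimation error shrinks roughly with
# 1/sqrt(n). All numbers are purely illustrative.
rng = np.random.default_rng(4)
n = 500                                        # stocks in the portfolio
true_beta = rng.normal(1.0, 0.3, size=n)       # true exposures to one factor
estimation_error = rng.normal(0.0, 0.5, size=n)
beta_ols = true_beta + estimation_error        # noisy stock-level estimates

portfolio_beta_true = true_beta.mean()
portfolio_beta_est = beta_ols.mean()           # error term averages towards zero
print(portfolio_beta_true, portfolio_beta_est)
print(abs(portfolio_beta_est - portfolio_beta_true))   # small relative to 0.5
```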

However, these relative advantages come at a cost. Time series beta estimates might lack the responsiveness of cross sectional models. Cross sectional models have little problem with firms that change their capital structure overnight. In contrast, it might take a time series model some time to recognize a change in sensitivities (betas). If true betas change fast, then a time series regression approach might be at a disadvantage, as it averages over a (too) long history, while pre-specified betas are more responsive if correctly specified. In practice, these disadvantages are not meaningful for fast updating short to medium term risk models (two years of daily data or less) as opposed to long term models (monthly updating over five years of data).

2.6 Statistical Models

Instead of using time series or cross sectional models, some investors use statistical models.10 These models attempt to simultaneously estimate factor returns and factor exposures. While they have a good in sample fit, they tend to suffer from poor out of sample performance. Their biggest advantage (and disadvantage) is that they use few economic priors to identify factor risks. Hence some would say that statistical models come from Mars. While this might look like a good idea if we model an unknown emerging market with little fundamental data available, it seems like a waste of knowledge where priors are widely available. After all, we might know that company A is a bank while company B is a pharmaceutical company. Statistical factors (also called blind factors for that reason) might need many data points to establish that the two belong to different industries. Empirical tests by MILLER (2006) have shown that statistical models have a poor record in identifying changing risk structures. Using statistical models in isolation might therefore not be a very good idea. Instead, they are increasingly used to safeguard against hidden factors in residual returns. This idea gives rise to so called hybrid models that try to offer the best of both worlds. Using fundamental factors (either in cross sectional or time series models) provides context for risk and portfolio management as well as aligning the covariance matrix with return forecasts. On the other hand, applying a statistical factor model to the residual returns imposes coverage of un-modeled common factors, and thus safeguards against overoptimistic ex ante diversification of residual risk.

10 See CONNOR/KORAJCZYK (1986) for the mathematics of statistical factor models. A textbook exposition is found in ZIVOT/WANG (2006).
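A minimal sketch of the statistical safeguard described above: applying a PCA to the residual returns of a fundamental model and checking how much residual variance the leading principal components explain. The inputs are simulated and only illustrate the idea:

```python
import numpy as np

def hidden_factor_check(residuals, n_components=2):
    """Apply a statistical (PCA) model to residual returns.

    residuals: T x n matrix of residual returns from the fundamental model.
    Returns the share of residual variance explained by the leading principal
    components; a large share hints at an un-modeled common factor.
    """
    X = residuals - residuals.mean(axis=0)
    _, s, _ = np.linalg.svd(X, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return explained[:n_components].sum()

# illustrative: residuals contaminated by one hidden common factor
rng = np.random.default_rng(5)
T, n = 500, 100
hidden = rng.normal(0, 0.01, size=T)
eps = rng.normal(0, 0.02, size=(T, n)) + np.outer(hidden, rng.normal(size=n))
print(hidden_factor_check(eps))    # noticeably above the 1/n benchmark
```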


3 Building a Fundamental US Time Series Risk Model


3.1 Methodology

Our US time series model is constructed from factor series of (i) market returns, (ii) fundamental style factor returns calculated from our Alphaworks factor library, and (iii) GICS industry returns. The model is constructed after ensuring that the three factor groups are mutually independent (or orthogonal). This ensures the quality of model estimates and the interpretability of results. Risk attribution is also facilitated, as there are no covariance terms left to be allocated among marginal risk contributions. The exact order of orthogonalization is flexible in practice. By default, we start with market returns as the most important source of variation. We then regress style factor returns against market returns and take the residuals from this regression as market neutral style returns, i.e. as style returns after the market factor has been taken out. We finally proceed by calculating market and style neutral industry returns by regressing industry returns against style and market returns, again using the residuals from this regression as pure industry returns (as sketched in the short code example below). This order ensures that the loadings on our comprehensive style factors take precedence in the interpretation of portfolio exposures. Nevertheless, the desired order of imposing independence among the factor groups may differ across managers. For instance, if sector exposure is a primary concern, we can construct a variation of the model in which industry factors take precedence over style factors. The order of independence does not affect the quality of the risk forecast. It only affects the interpretation of marginal risk contributions (i.e. risk attribution), but this is the reason for choosing a particular ordering in the first place.

3.2 Time Horizon

Given the speed at which market events unfolded in 2008 and 2009, we decided to focus on short to medium term risk models to help investors avoid blowouts in monthly returns that would endanger their client franchise. While shorter term risk models are more responsive to changes in risk,11 users of risk models often get concerned about the turnover induced by changing risk forecasts. They should not be. Suppose we had a perfect forecast of future one month volatility. The volatility of one month volatility is itself about 6%, and a perfect forecast would of course show the same volatility. Should we ignore this perfect forecast on the grounds of transaction costs (TC)? Probably not, as we need to separate risk forecasting from portfolio construction. First we should make our best forecast, and then we worry about how to implement it. Both tasks are linked by a transaction cost model: if the disutility from having too much risk is outweighed by TC, investors will not trade. Statements that turnover should be included in risk model evaluation sound prudent but lack decision theoretic underpinning and a practical objective function. In fact, TC should not be managed by a risk model. Investors should trade because the higher risk could lead to losses next month, and precisely these monthly losses are what investors are concerned with. For investors who rebalance monthly, even if they knew that risk would drop the month after next, they should still trade today based on the current monthly risk forecast. After all, a risk model based on 30 years of equally weighted data will be very stable in its risk forecasts as a result of the averaging process, but this is an artifact of the averaging going on inside the model and probably has little relevance for the rebalancing frequency and investment horizon of most investors. In general, we delegate the trade-off between risk and transaction costs to a transaction cost model combined with portfolio construction.

11 For short term risk models, we expect style and industry factors to be less dominant than for a long term risk model.
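A minimal sketch of the sequential orthogonalization described in Section 3.1, using simulated market, style, and industry return blocks as stand-ins for the real factor series; each block is replaced by its residual from a regression on the blocks that precede it:

```python
import numpy as np

def residualize(block, against):
    """Return the part of each column of `block` that is orthogonal to `against`.

    block:   T x m matrix of factor returns to be purified
    against: T x p matrix of factor returns already in the model
    """
    coef, *_ = np.linalg.lstsq(against, block, rcond=None)
    return block - against @ coef

# hypothetical inputs: market (T x 1), style (T x s), industry (T x g) returns
rng = np.random.default_rng(6)
T = 500
market = rng.normal(0, 0.01, size=(T, 1))
style = 0.5 * market + rng.normal(0, 0.01, size=(T, 8))
industry = 0.8 * market + rng.normal(0, 0.01, size=(T, 10))

style_pure = residualize(style, market)                        # market neutral styles
industry_pure = residualize(industry, np.hstack([market, style_pure]))
factors = np.hstack([market, style_pure, industry_pure])       # orthogonalized blocks
```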


3.3 Data

The US Fundamental Risk Model was specifically designed to take advantage of state-of-the-art factor construction and data aggregation by drawing on our extensive Alphaworks factor library and proprietary data collection. The model is the first of its kind, to our knowledge, to be entirely built using our in-house Point-In-Time data sources, ensuring the highest level of historical accuracy during backtesting and simulation. We believe the model's style factors better reflect the key building blocks typically used in alpha generation and portfolio construction by managers. They are therefore more relevant for portfolio analysis and risk attribution. Our model uses eight style factors compiled from the Alphaworks library of more than 130 thoroughly researched individual component signals, described in Table 1.

Analyst Expectation (AE), 11 signals: Earnings & Sales Forecast; Earnings Surprise; Analyst Diffusion; Analyst Revision
Capital Efficiency (CE), 10 signals: Return on Equity & Capital; Leverage & Interest Coverage; Issuance & Buybacks
Earnings Quality (EQ), 26 signals: Balance Sheet Accruals; Working Capital & Asset Turnover; Capital Expenditure and R&D Intensity; Margins; Payout Ratio
Historical Growth (HG), 31 signals: 1 & 3-year growth of Operating & Free Cash Flow, Earnings, and Margins
Price Momentum (PM), 20 signals: 1, 6, 9 & 12-Month Price Momentum; Technical indicators over various time frames (MACD, RSI, Slope, 52 Week High/Low)
Size (SE), 2 signals: Log of Market Cap. & Sales
Valuation (VA), 34 signals: Reported & Forward Earnings Yield; Dividend Yield; Book to Price; Sales, EBITDA & Cash Flow to Enterprise Value; Inverse PEGY
Volatility (VOL): Realized Volatility; CAPM Beta; Distance from High to Low (1 & 12 months); Short Interest & Trading Volume

Table 1: Style factor description from the Alphaworks library. Each style category is made up of a number of long/short cash neutral signal portfolios. Each signal portfolio is a long/short cash neutral (arbitrage) portfolio derived from a univariate sort: long the top 33% of stocks according to the chosen characteristic and short the bottom 33%.

For factor exposures, factor returns, and stock specific risk, the model covers 6,000 US equities that have been or are major index constituents. The data used for the purposes of this paper start in 1992 and end in 2009. Sector returns are calculated at the GICS 2 level. The model is estimated in real time with no look-ahead bias. We do not employ macroeconomic data (factors) like inflation, commodity prices, interest rates, or consumption. Macroeconomic variables typically affect groups of stocks (interest rates move banks, consumption moves retail, oil prices move the oil industry, exchange rates move export stocks, etc.) and hence have little explanatory power once we have included sector and industry effects. In fact, out of sample performance typically deteriorates when they are included.

3.4 Estimation Frequency

The covariance matrix and factor exposures are estimated on daily returns data, and the fundamental style factors are calculated on Point-In-Time data with no look-ahead bias for restatements of accounting data. The historical model is estimated monthly for back-testing purposes in order to ensure comparability with currently existing cross sectional risk models, while an ongoing production model can be estimated daily based on daily Point-In-Time data. Daily data usually exhibit considerable autocorrelation and cross autocorrelation.12 We therefore correct the EWMA (Exponentially Weighted Moving Average) covariance matrix of factor returns with the method proposed by NEWEY/WEST (1987).

3.5 Degrees of Freedom

So far we have argued the generic case for time series versus cross sectional models and described our base data set. However, a successful implementation requires many more choices.

Estimation universe. Factor returns can be estimated on the complete universe (all 6,000 US stocks that have been or are part of a major index), the Russell 3000, or the S&P 1500. Additionally, we could use various weighting schemes on these universes: equal weighting, market weighting, or log size weighting have different effects on the impact of idiosyncratic returns from large stocks on factor returns. Equally importantly, any stale pricing of illiquid stocks that might enter the top or bottom 33% of our long/short style portfolios will have an effect on the sensitivity and explanatory power of our style factors. If returns don't respond contemporaneously to daily news (but with a lag because the stocks are not traded every day), betas will be artificially low.13

Dimension reduction. It is not advisable to run a time series regression with 130 individual style factors that exhibit high intra-group correlations. Regression results will suffer from multicollinearity (factor exposures are difficult to identify) and low degrees of freedom (factor exposures cannot be estimated with great precision). Moreover, risk decomposition will not be informative with this number of individual factors. We therefore reduce the dimensionality within each style factor group and use principal component analysis (which maximizes common variance) or factor analysis (which maximizes common correlation). Alternatively, instead of the optimized weighting schemes implicit in principal component analysis or factor analysis, we can use a simple yet robust equal weighting of factor returns. All schemes will help us filter out the noise in individual factor time series and save a considerable number of degrees of freedom in our time series regressions.

Estimation of factor exposure. The natural choice for time series regressions is ordinary least squares (OLS). However, we can also use stepwise regressions to filter out the important factors for each stock rather than picking up noise from insignificant factors, although stepwise regressions might overfit within sample with little out of sample stability. Given that there might be too little factor sensitivity if some stocks exhibit stale pricing, we can accommodate this in our time series regressions by adding lagged factor returns and calculating factor sensitivity as the sum of lagged beta exposures.14 Finally, we can also use robust OLS to safeguard against multivariate outliers.

Estimation of factor risk and residual risk covariance matrices. It is well known that equity volatilities cluster in time, i.e. days of high volatility are more likely to be followed by high than by low volatility. Given our objective to build a short to medium term risk model (we want to forecast monthly portfolio risks), we need to calibrate our model forecasts to this time horizon by choosing (potentially different) weighting schemes for factor correlations, factor variances, and residual risks.

Currency factors. Despite estimating a single US equity market model, it might be advisable to separate out currency risk as a distinct component of the risk budget.
12 For international models, these problems compound as data on a databank are not time synchronous. One day returns between US and AUD have virtually no overlap of trading hours.

13 See SCHOLES/WILLIAMS (1977) on the impact of stale pricing.

14 See ASNESS et al. (2001) for an application of this idea to hedge fund data.

Capital IQ Quantitative Reseach

July 2010 / Introducing Capital IQs Fundamental US Equity Risk Models

Even if investors hold purely domestic US stocks, they might be exposed to exchange rate risk, either via the translation of international revenue streams or via the strategic impact of an appreciating exchange rate on a company's foreign competitors (GM gets hurt if the USD appreciates against the yen, as this makes Toyota imports more competitive). Even if fund managers do not manage these implicit exchange rate exposures, they are still part of a company's factor structure. Leaving them out might result in biased regressions and a misleading decomposition of portfolio risk. Being long the export sector of an economy will tilt portfolio risk towards currencies. On the other hand, currency exposures might be difficult to estimate reliably.15

It is not feasible to test all permutations of risk models on 15 years of daily data. We therefore focus on 28 model variations, summarized in Table 2. This selection relies on qualitative judgment, but we believe it spans the space of reasonable models well. The models that we focused on share the following set of default parameters (the variations among the models are captured in Table 2):

- We used a two year moving window of historical asset and factor returns data.
- Sector returns were computed at the GICS 2 level.
- The Alphaworks individual style returns were computed as the weighted top/bottom tertile (3-quantile) return spread of a long/short factor portfolio.
- The models tested below employed different universes and weighting schemes for the factor and sector returns.
- The default dimensionality reduction scheme within each style group was equal weighting.
- We also employed a default currency factor block of EUR, CHF, GBP, and JPY daily returns.
- Exposures were estimated using OLS regression by default, and the factor correlation matrix, factor variances, and specific risks were estimated using exponentially weighted moving averages (half-lives specified in Tables 2 and 3) with a Newey-West correction window of three days.
- By default, models were estimated from 1996 through 2008 for testing purposes.

Departures from these defaults are noted in Table 2.

15 See DALES/MEESE (2001), who find that optimal currency hedge ratios are extremely unstable.
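The following sketch illustrates an exponentially weighted factor covariance estimate with a Newey-West correction of the kind described in Sections 3.4 and 3.5. It is a simplified illustration only: the production model applies separate half-lives to factor correlations, factor variances, and specific risks, which this single-half-life version does not attempt.

```python
import numpy as np

def ewma_newey_west_cov(F, half_life, nw_lags=3):
    """Exponentially weighted factor covariance with a Newey-West correction.

    F: T x k matrix of daily factor returns, most recent observation last.
    half_life sets the exponential decay, nw_lags the correction window, and
    Bartlett weights damp the lagged cross products.
    """
    F = np.asarray(F, dtype=float)
    T = F.shape[0]
    lam = 0.5 ** (1.0 / half_life)
    w = lam ** np.arange(T - 1, -1, -1)
    w /= w.sum()
    Fd = F - F.mean(axis=0)                       # demean
    cov = (Fd * w[:, None]).T @ Fd                # lag-0 EWMA covariance
    for lag in range(1, nw_lags + 1):
        bartlett = 1.0 - lag / (nw_lags + 1.0)
        gamma = (Fd[lag:] * w[lag:, None]).T @ Fd[:-lag]
        cov += bartlett * (gamma + gamma.T)
    return cov

# illustrative call, e.g. the 60-day variance half-life used by several models
rng = np.random.default_rng(7)
F = rng.normal(0, 0.01, size=(504, 12))           # two years of daily data
cov_f = ewma_newey_west_cov(F, half_life=60, nw_lags=3)
```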


Model  Factor Correlation Half-Life (days)  Factor Variance Exp. Wtg. Half-Life (days)  Factor Return Universe  Factor Weighting  FX Block
M1   480  90  R3000   Equal      Y
M2   480  90  R3000   Equal      Y
M3   480  90  R3000   Equal      Y
M4   480  90  R3000   Equal      Y
M5   240  90  R3000   Equal      Y
M6   240  45  R3000   Equal      Y
M7   120  45  R3000   Equal      Y
M8   480  45  R3000   Equal      Y
M9   240  45  SP1500  Equal      Y
M10  480  90  SP1500  Equal      Y
M11  480  90  SP1500  logMktCap  Y
M12  240  60  SP1500  logMktCap  Y
M13  240  45  SP1500  logMktCap  Y
M14  240  30  SP1500  logMktCap  Y
M15  240  90  SP1500  logMktCap  Y
M16  120  45  SP1500  logMktCap  Y
M17  240  60  SP1500  logMktCap  Y
M18  240  60  SP1500  logMktCap  Y
M19  240  60  SP1500  logMktCap  Y
M20  240  60  SP1500  logMktCap  N
M21  240  60  SP1500  logMktCap  N
M22  240  60  SP1500  logMktCap  N
M23  240  60  SP1500  logMktCap  N
M24  240  60  SP1500  logMktCap  Y
M25  240  60  SP1500  logMktCap  N
M26  240  60  SP1500  logMktCap  N
M27  240  60  SP1500  logMktCap  N
M28  240  60  SP1500  logMktCap  N

In addition, individual models vary along the following other parameters: PCA for dimension reduction (top 2 PCs, equal weighted); robust regression for exposure estimation; a 4 year window of historical data; GICS3 sectors with an SP1500 sector universe (equal weighted); GICS3 sectors with an R3000 sector universe (logMktCap weighted); factor analysis for dimension reduction; an added 1 day difference of log(VIX), orthogonalized in the first or second block; inclusion of the emerging currencies KRW/THB/BRL; a test period of 1992-2008; and PCA for dimension reduction (top PC) with a test period of 1992-2008.

Table 2: Model variations for the US short term fundamental risk model. We test 28 variations for our US equity risk model along the described degrees of freedom.


4 Risk Model Evaluation


4.1 Test Metrics

Evaluating risk models requires a test metric. A widely used test looks for bias in risk model forecasts. A risk model is said to be unbiased if its forecasts neither consistently under- nor overestimate realized volatility. This so called bias test is based on the series of portfolio returns rescaled by forecasted volatility. Let us denote the volatility forecast from a given risk model made at time $t$ by $\sigma_t^f$, the demeaned return over the risk forecast horizon by $r_t$, and the series of rescaled (or better, standard normalized) portfolio returns by

(5) $$z_t = \frac{r_t}{\sigma_t^f}.$$

The bias test statistic is given by

$$bias = \sqrt{\frac{1}{T} \sum_{t=1}^{T} z_t^2}.$$

This is in fact the standard deviation of the standard normalized portfolio returns. Under the null hypothesis that our risk forecast is correct, we would expect this standard deviation to be one, i.e. portfolio returns have been correctly rescaled to a standard normal variable. For normally and independently distributed portfolio returns, the test statistic is distributed as $T \cdot bias^2 \sim \chi^2_T$, or approximately as $bias \sim N\!\left(1, \tfrac{1}{2T}\right)$ for large $T$. A bias test statistic larger (smaller) than unity indicates that the risk model underestimates (overestimates) risk. The bias test is able to test a risk model on a stand-alone basis with a natural outcome (a value close to one is good). This compares favorably with more elaborate statistics like mean squared error, for which we do not have an easy calibration of what actually represents a good outcome (in other words, there is no standardization).

While the bias test is the statistic most commonly reported by risk model providers, it suffers from at least four shortcomings. First, it is not clear that an unbiased forecast is always preferable. A decision maker is likely to prefer a model with a small bias if it offers a small variance in exchange. Unbiased hence is not necessarily better. This bias-variance tradeoff is well known and captured by the so called mean square error. Second, the bias test cannot be used to compare two risk models with different forecasting errors. If forecasts are unbiased but noisy, the bias test statistic will be downward biased. Third, the bias test is not a test of prediction accuracy. It only provides an indication of average over- or under-prediction, and this average can be largely misleading, as periods of under-prediction might simply be followed by periods of over-prediction. Worse, if the true risk fluctuates over the estimation/forecast period, the bias test might reject the risk model even if the risk model was correctly specified. Fourth, and practically very important, the bias test is very sensitive to outliers. Large return realizations will bias the model towards an underestimation of risk, especially when return levels vary widely, as is the case in our factor test portfolios.16 This led BRINER/CONNOR (2008) to remove influential data points when calculating the bias statistic. Instead of removing data points, we suggest using the risk ratio statistic given in

(6) $$risk\_ratio = \frac{1}{T} \sum_{t=1}^{T} \frac{\sigma_t^r}{\sigma_t^f},$$

where $\sigma_t^r$ denotes realized risk. The risk ratio statistic is close in concept to the bias test but less affected by extreme returns (no squaring) than the bias statistic, while both converge to one as samples get large.17

Given the shortcomings of the (risk-ratio) bias test, we also use the DIEBOLD/MARIANO (1995) test to compare alternative risk model specifications. Unlike the risk ratio test, the DIEBOLD/MARIANO (DM) test is specifically designed to compare the outcomes of two risk forecasting models.
16 These test portfolios are described in Table 1.

17 The risk ratio statistic unfortunately still inherits the remaining problems of the bias test.
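A minimal sketch of the two statistics, the bias statistic built from the rescaled returns in (5) and the risk ratio in (6), computed on simulated forecasts and realizations:

```python
import numpy as np

def bias_statistic(returns, vol_forecasts):
    """Bias test statistic: std. dev. of returns rescaled by forecasted vol (5)."""
    z = np.asarray(returns) / np.asarray(vol_forecasts)
    return np.sqrt(np.mean(z**2))

def risk_ratio(realized_vol, vol_forecasts):
    """Risk ratio statistic (6): average of realized over forecasted volatility."""
    return np.mean(np.asarray(realized_vol) / np.asarray(vol_forecasts))

# illustrative only: a forecast that is unbiased on average
rng = np.random.default_rng(8)
T = 120                                       # monthly observations
sigma_f = np.full(T, 0.04)                    # forecasted monthly volatility
r = rng.normal(0.0, 0.04, size=T)             # demeaned realized returns
sigma_r = np.abs(rng.normal(0.04, 0.005, size=T))   # stand-in realized vols
print(bias_statistic(r, sigma_f), risk_ratio(sigma_r, sigma_f))
```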


We first need to define a loss function for each risk model. Typical loss functions to quantify the forecast error of a risk model include the absolute forecast error or the squared forecast error. We then calculate a time series of loss differences. Suppose we are concerned about squared differences between realized and forecasted risk (i.e. we assume a quadratic loss function). Define

$$l_t^i = \left( \sigma_t^{f,i} - \sigma_t^r \right)^2 \quad \text{for model } i = 1, 2$$

as the squared difference between the risk forecasted under model $i$ and realized risk $\sigma_t^r$. The DM test looks at whether $d_t = l_t^1 - l_t^2$ is statistically different from zero. This can be conveniently tested by running a regression of $d_t$ against a vector of ones,18

(7) $$d_t = l_t^1 - l_t^2 = \alpha + e_t.$$

However, given that $l_t^1 - l_t^2$ is likely to be highly auto-correlated (overestimation of risk this month is likely to be followed by another overestimation next month) as well as heteroskedastic, we estimate this linear regression with a HAC adjustment.19

4.2 Choice of Test Portfolios

When evaluating a risk model, we need to decide both on the test metrics involved as well as on the test portfolios employed. If, for example, a risk model misses a particular factor, then this will only be exposed in testing if the assets in a chosen test portfolio also have exposure to this missing factor. There are several routes we can choose for selecting test portfolios. The first and most common approach is selecting widely available benchmark portfolios. This does not expose the tester to the critique of having selected unreasonable portfolios. We will use standard market benchmarks in the form of benchmarks without tilt (S&P 100, S&P 500, S&P MidCap 400, S&P SmallCap 600, S&P Super Composite 1500) as well as tilted benchmarks (S&P 500 Growth, S&P 500 Value, S&P MidCap 400 Growth, S&P MidCap 400 Value, S&P SmallCap 600 Growth, S&P SmallCap 600 Value), all with monthly views throughout the testing period. We refer to these as Benchmark Portfolios. The second method draws portfolios completely at random. While this will search the whole space of admissible portfolios (not a trivial task at all in the presence of investment constraints), it will also create portfolios that are unlikely to ever be relevant to investors.20 For this reason we did not include any truly random portfolios in our testing. The third method (also included in our testing) is to create active (long/short) portfolios that are meaningful for different investor types. We deliberately create active portfolios along typical signals as acted upon by different investors, using the metrics defined in Table 1. This allows us to test how vulnerable portfolio risk models are to their definition of risk factors. Risk models using a covariance matrix that is ill aligned with alpha factors will find it difficult to assess how uncorrelated their residual risk really is. It also allows us to test whether different investors would choose different risk models or whether a particular risk model is superior regardless of investment style. For each of the eight styles, we calculate a concentrated (50 names on one side) and a diversified (200 names on one side) portfolio, giving a total of 16 portfolios with monthly views throughout the testing period. We refer to these as Factor Portfolios. The Benchmark Portfolios and the Factor Portfolios collectively form the Test Portfolios.

18 All risk model calculations as well as model evaluations are performed in R.

19 Other risk model specification tests include the calculation of min-variance portfolios or the creation of long/short hedge portfolios.

20 See BURNS (2006) on using random portfolios.
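The DM comparison of Section 4.1 can be sketched as a regression of the loss differential on a constant with HAC standard errors. The example below uses statsmodels and simulated volatility forecasts; it illustrates the test mechanics, not the evaluation code used in the paper (which, per footnote 18, is written in R):

```python
import numpy as np
import statsmodels.api as sm

def dm_test(vol_forecast_1, vol_forecast_2, realized_vol, maxlags=3):
    """Diebold-Mariano style comparison of two volatility forecasts.

    Squared-error losses are differenced and regressed on a constant with
    HAC (Newey-West) standard errors; the t-value on the constant is returned.
    A positive value means model 2 has the lower mean squared error.
    """
    l1 = (np.asarray(vol_forecast_1) - np.asarray(realized_vol)) ** 2
    l2 = (np.asarray(vol_forecast_2) - np.asarray(realized_vol)) ** 2
    d = l1 - l2
    ones = np.ones_like(d)
    res = sm.OLS(d, ones).fit(cov_type="HAC", cov_kwds={"maxlags": maxlags})
    return res.tvalues[0]

# illustrative data: model 2 tracks realized risk more closely than model 1
rng = np.random.default_rng(9)
realized = np.abs(rng.normal(0.04, 0.01, size=150))
f1 = realized + rng.normal(0, 0.010, size=150)
f2 = realized + rng.normal(0, 0.004, size=150)
print(dm_test(f1, f2, realized))              # expected to be clearly positive
```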


4.3 Calculating Realized Volatility

Unfortunately, true volatility is not observable.21 All we observe is realized volatility. We need to estimate true volatility (the standard deviation that the random number generator of life has used) from observed data. The positive news is that subdividing our observation period into smaller and smaller grids will reduce estimation error, which is what we desire when we try to estimate volatility. The negative news is that high frequency data often exhibit high autocorrelation, which in turn does not allow us to use the square root of time rule to aggregate daily risk estimates into monthly or annual numbers. High autocorrelation leads to underestimated time aggregated risk figures. If positive returns today are more likely to be followed by positive returns tomorrow, then the return distribution will spread out wider than what you would expect under the independence assumption. A widely used adjustment is to use the square root of the variance ratio in addition to the square root of time. To transform daily volatilities into monthly numbers we would calculate

$$\sigma_{monthly} = \sigma_{daily} \cdot \sqrt{21} \cdot \sqrt{VR},$$

where the variance ratio ($VR$) equals the variance of monthly returns divided by 21 times the variance of daily returns.22 However, given that our test portfolios considerably change risk and structure every month, we need to abstain from using the VR to correct for potential autocorrelation in daily returns. We simply do not have the data to estimate it, and hence we use the standard deviation of daily returns over the next month as an estimate of realized risk.

21 For low frequency data this is true, while for high frequency data (intraday, 5 minute intervals) realized volatility can be reliably estimated.

22 See CAMPBELL et al. (1997) for a detailed description of the variance ratio.
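A minimal sketch of the volatility scaling described above, with and without the variance ratio correction; the simulated daily returns merely stand in for real portfolio data:

```python
import numpy as np

def monthly_vol_from_daily(daily_returns, monthly_returns=None):
    """Scale daily volatility to a monthly horizon.

    Without monthly data, use the square root of time (21 trading days).
    With a history of monthly returns, also apply the square root of the
    variance ratio VR = Var(monthly) / (21 * Var(daily)) to correct for
    autocorrelation in daily returns.
    """
    daily_vol = np.std(daily_returns, ddof=1)
    scaled = daily_vol * np.sqrt(21.0)
    if monthly_returns is None:
        return scaled
    vr = np.var(monthly_returns, ddof=1) / (21.0 * np.var(daily_returns, ddof=1))
    return scaled * np.sqrt(vr)

# illustrative use with simulated returns
rng = np.random.default_rng(10)
daily = rng.normal(0, 0.01, size=252)
monthly = daily.reshape(12, 21).sum(axis=1)    # crude monthly aggregation
print(monthly_vol_from_daily(daily), monthly_vol_from_daily(daily, monthly))
```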


5 Risk Model Calibration


This section describes our process of calibrating risk models to the data as well as to the portfolio manager's risk horizon. Calibration is important as we have many degrees of freedom that could potentially affect the quality of our risk forecasts. It is also part of a customization process that risk-aware clients require to tailor risk model forecasts to their portfolio revision frequency.

5.1 A Short-Term Investor

Suppose we aim to find a risk model for a one month horizon. Is this a reasonable exercise, i.e. who would be interested in such short horizon risk forecasts? First, all investors with frequent decision making (volatility capture, as opposed to longer term investors that try to catch structural alphas) and high turnover will be interested in short term models. For them it makes little sense to estimate the six month risk of a portfolio that will have changed 50% of its composition in a month. Second, all portfolio managers of open ended funds that are invested in less liquid products will want to avoid increases in short term risks. Redemptions will create liquidity costs, as even unleveraged funds always have a short side: the client. In order to come close to the above situation, we take our 16 long/short factor portfolios. They represent management styles that can be found among active portfolio managers, and they exhibit large turnover with widely changing risk characteristics, thereby introducing a significant random component into the testing.

We start with the risk ratio variant of the bias test because of its popularity and intuitive appeal with practitioners. For each model variation we calculate the risk ratio statistic for our factor portfolios. The results are given in Figure 1. We see that all model versions do well on average in the sense that their forecasts appear unbiased, and there is little evidence to choose among models on the basis of the risk ratio alone, except perhaps that models M5 to M8 and M18 to M19 seem to do slightly better. Almost all model versions have underestimated the risk of the Size factor portfolios, but the average underestimation was limited to a maximum of 20% (e.g. a realized risk of 12% for a 10% risk forecast), which has little relevance for practical purposes.

Figure 1: Heat map for risk ratio statistic. For each model variation and factor portfolio, we report the risk-ratio statistic (average of monthly realized volatility divided by monthly forecasted volatility) according to (6).


Figure 2: Average risk ratio statistic for groups of test portfolios: factor portfolios (16 long/short factor portfolios), value tilted benchmark portfolios (6 value and growth tilts from standard benchmarks), and size tilted benchmark portfolios (5 portfolios covering the S&P 100, S&P 500, S&P 1500, and S&P mid and small cap indices).

In order to arrive at a reference for the variation in risk ratios, we compare the average risk ratio across all factor portfolios with the average risk ratio for the value and size tilted benchmark portfolios in Figure 2. Here we see that the variation in risk ratios is considerably larger for the factor portfolios. This is not surprising, given that our factor portfolios are designed to exhibit extreme factor risks, while the (tilted) benchmark portfolios are in general very well diversified portfolios with few extremes in factor exposure. However, as already mentioned in Section 4.1, a bias test statistic of one is not necessarily an indication of a good risk model. Investors might be better off using a slightly biased model that in return tracks realized risk very closely. If forecasts always equal realized risk + 0.5%, then this would be preferable to an unbiased risk model that overestimates or underestimates risk by 5%. Being biased might be the better alternative. Consequently, we use the previously described DM test to compare our model versions relative to each other to see whether we can find a model that outperforms all other models.

Figure 3: Median DIEBOLD/MARIANO HAC adjusted MSE test statistics of model M14 against all other models. Positive entries mean the competing model exhibits a higher mean squared error than model M14.

It turns out that such a model exists. In our framework this is M14, which distinguishes itself from the other models by a short half-life (30 days) for all variance estimates (factor and residual) and a 240 day half-life for factor correlations. Given that we are looking for a short to medium term horizon risk model, this again meets our intuition: higher weights should be given to the more recent data if the objective is to forecast short term volatility. The results are given in Figure 3, which reports the median HAC adjusted t-value from the regression $d_t = l_t^{M_i} - l_t^{M14} = \alpha + e_t$ with MSE as the loss function. All t-values are in favor of M14, with 14 out of 28 (about 50%) significant at an 80% confidence interval (i.e. a t-value of more than 1.3). The marked underperformance of the remaining risk models can be directly attributed to the variance half-life, which seems to be the most important calibration parameter for very short time horizon forecasts.

However, as shown in the next section, longer half-lives are relevant for longer forecast horizons, and for a half-life of 60 days we can make the following observations:

- Using factor analysis or PCA (M21 and M28 respectively) appears to yield some improvement over equal weighting of style component returns (although not significant at the 80% confidence level).
- Weighting individual stock returns by the log of market capitalization rather than weighting them equally, for computing style factor returns, improves the forecasting performance, albeit only slightly.
- Interestingly, the results do not bear out that using the Russell 3000 rather than the S&P 1500 as the estimation universe improves the performance of the risk model, even when the test portfolios are drawn from the Russell 3000.
- Also of note is that finer grained GICS level 3 industry factors do not lend any forecasting accuracy improvement compared to using GICS level 2.
- Including currency factors or the VIX as macro style factors does not improve the risk model.

Finally, Figure 4 shows the time series of forecasted and realized risk for M14, i.e. the annualized one-month forecasted and realized volatility for the S&P 500 as a test portfolio.

[Figure 4 chart: Forecasted and Realized Risk for the S&P 500. Monthly risk (annualized), realized versus forecast series, 1992 to 2008.]
Figure 4: Forecasted versus realized risks for S&P 500. Our model forecasts realized portfolio risks very well and responds rapidly to changes in the risk environment. Deviations from the model forecast are short-lived, and risk forecasts appear unbiased with the possible exception of late 2008. While we could make our forecasts even more responsive to more fully capture the risk dynamics of the credit crunch, the price for this would be deteriorating performance in all other periods.


5.2 Variation in Time Horizon

How does our calibration result change if we alter the investor's forecasting horizon? Phrased differently: how good is one calibration for different time horizons? Is there a sweet spot that works well for a range of time horizons, i.e. is there a specification that works well for, let's say, a one to 12 month horizon? What is the value of calibrating a risk model to your specific time horizon? Our starting point for this calibration exercise is the observation made in the previous section: varying the half-life for variance estimation seems to matter most. Given that we are looking at robustness, we take a slightly different (and computationally less demanding) approach than in Section 5.1. Rather than finding the best model for each time horizon, we start with a particular choice of a 90 day half-life, and then look at the impact of different half-lives and forecasting horizons on the DM test taken relative to the 90 day half-life model. The results are given in Table 3.

Improvement in median MSE t-statistic for the DM test (across all portfolios), by variance half-life (days):

Forecast Horizon (months)    30      45      60      90     120     240
 1                         2.40    1.90    1.21    0.00   -0.15   -0.45
 3                         1.67    1.51    0.95    0.00   -0.21   -0.52
 6                        -0.38    0.13    0.04    0.00   -0.52   -1.28
12                        -0.39   -0.14    0.01    0.00   -0.23   -0.75

Table 3: Improvement in the DM test (mean squared error) median t-statistic versus the 90 day half-life model. Standard benchmark portfolios (size and value tilted) serve as test portfolios; positive values indicate an improvement over the 90 day half-life model.

The 60-day half-life model does well for the six and 12 month horizons (although the 45-day half-life does slightly better for the six month horizon), whereas shorter or longer half-lives worsen performance for the one-year forecast horizon. However, for shorter forecast horizons, the 30-day half-life model does much better, and the difference in median t-value becomes very large. For a one month forecast horizon, there is less difference between the 90 and 240 day half-lives because both models are equally bad at forecasting risk over very short horizons.23 Given the empirical evidence above, we conclude that calibration does matter. Models that do very well for a one month horizon are significantly different from models that are optimal for a six to 12 month horizon.

23 We repeat this exercise for the mean absolute error (MAE) version of the DM test, and the results remain very much the same.


6 Summary
This paper provides a framework for building, testing, and calibrating a multifactor US equity risk model using a time series approach rather than the cross sectional approach taken by most commercial vendors. Not only do we have strong a priori reasons to suspect that a time series model is better suited to forecasting risk, but a time series model also allows us to draw on our extensive Alphaworks factor library and proprietary data collection. Our model is the first of its kind, to our knowledge, to be entirely built using our in-house Point-In-Time data sources, ensuring the highest level of historical accuracy during backtesting and simulation. We also believe the model's style factors better reflect the key building blocks typically used in alpha generation and portfolio construction. They should therefore be more relevant for portfolio analysis, risk attribution, and portfolio optimization. The final litmus test, however, is the out of sample performance of our model forecasts. In a real time learning exercise (only data up to time t are used to forecast risks from t to t+1), we show that our models can be easily calibrated to different time horizons and provide unbiased forecasts of realized portfolio risks across a broad range of test portfolios. We also show that calibration matters, using the DM test to find the best model version among a set of unbiased models. There is no model that does equally well for one to 12 month forecast horizons, and the differences in model performance are significant.

For more information on the Capital IQ Equity Risk Models please contact Ruben Falk at rfalk@capitaliq.com.


Literature

ASNESS C., R. KRAIL and J. LIEW (2001), Do Hedge Funds Hedge?, The Journal of Portfolio Management, pp. 6-19.
BRINER B. and G. CONNOR (2008), How much structure is best? A comparison of market model, factor model and unstructured equity covariance matrices, Journal of Risk, pp. 3-30.
BURNS P. (2006), Random Portfolios for Evaluating Trading Strategies, http://www.burnsstat.com/pages/working.html
CAMPBELL J.Y., A.W. LO and A.C. MACKINLAY (1997), The Econometrics of Financial Markets, Princeton University Press.
CONNOR G. (1995), The Three Types of Factor Models: A Comparison of Their Explanatory Power, Financial Analysts Journal.
CONNOR G. and R. KORAJCZYK (1986), Performance Measurement with the APT: A New Framework for Analysis, Journal of Financial Economics.
CONNOR G., R. KORAJCZYK and L. GOLDBERG (2009), Portfolio Risk Management, Princeton University Press.
DALES A. and R. MEESE (2001), Strategic Currency Hedging, Journal of Asset Management, pp. 9-21.
DiBARTOLOMEO D. and S. WARRICK (2005), Making covariance based portfolio risk models sensitive to the rate at which markets reflect new information, in: KNIGHT J. and S. SATCHELL (eds.), Linear Factor Models, Elsevier Finance.
DIEBOLD F.X. and R.S. MARIANO (1995), Comparing Predictive Accuracy, Journal of Business & Economic Statistics.
FAMA E. and K. FRENCH (1993), Common Risk Factors in the Returns on Stocks and Bonds, Journal of Financial Economics.
MACQUEEN (2003), The Structure of Multifactor Equity Risk Models, Journal of Asset Management.
MILLER G. (2006), Needles, Haystacks, and Hidden Factors, Journal of Portfolio Management.
NEWEY W.K. and K.D. WEST (1987), A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix, Econometrica, pp. 703-708.
RENSHAW A. (2008), Using Axioma's Alpha Factor Method to Correct the Misalignment of Alpha Model and Risk Model Factors, Axioma Research Paper 009.
ROSENBERG B. (1974), Extra-market components of covariance among security prices, Journal of Financial and Quantitative Analysis, pp. 263-274.
SCHERER B. (2000), Preparing the Best Risk Budget, Risk, December, pp. 30-33.
SCHERER B. (2010), Risk Budgeting and Portfolio Choice, 4th edition, Riskwaters: London.


SCHOLES M. and J.T. WILLIAMS (1977), Estimating Betas from Nonsynchronous Data, Journal of Financial Economics.
STEFEK D. and J.H. LEE (2008), Do Risk Factors Eat Alphas?, The Journal of Portfolio Management, Summer, v34 n4, pp. 12-25.
ZANGARI P. (2003), Equity factor risk models, in: Modern Investment Management: An Equilibrium Approach, chapter 20, pp. 334-395, John Wiley & Sons.
ZIVOT E. and J. WANG (2006), Modeling Financial Time Series with S-Plus, Springer.

This document was prepared by the Capital IQ Quantitative Research group. Capital IQ is a division of Standard & Poor's. The information contained in this document is subject to change without notice. Capital IQ cannot guarantee the accuracy, adequacy or completeness of the information and is not responsible for any errors or omissions or for results obtained from use of such information. Capital IQ makes no warranties of merchantability or fitness for a particular purpose. In no event shall Standard & Poor's be liable for direct, indirect or incidental, special or consequential damages resulting from the information here, regardless of whether such damages were foreseen or unforeseen. This material is not intended as an offer or solicitation for the purchase or sale of any security or other financial instrument. Securities, financial instruments or strategies mentioned herein may not be suitable for all investors. Any opinions expressed herein are given in good faith, are subject to change without notice, and are only correct as of the stated date of their issue. Prices, values, or income from any securities or investments mentioned in this report may fall against the interests of the investor and the investor may get back less than the amount invested. The information contained in this report does not constitute advice on the tax consequences of making any particular investment decision. This material does not take into account your particular investment objectives, financial situations or needs and is not intended as a recommendation of particular securities, financial instruments or strategies to you, nor is it considered to be investment advice. Before acting on any recommendation in this material, you should consider whether it is suitable for your particular circumstances and, if necessary, seek professional advice. Capital IQ Quantitative Research is analytically and editorially independent from any other analytical group at Standard & Poor's, including Standard & Poor's Ratings. © 2010 Capital IQ, a division of Standard & Poor's. All rights reserved. Redistribution, reproduction and/or photocopying in whole or in part is prohibited without written permission. STANDARD & POOR'S, Capital IQ and S&P are registered trademarks of The McGraw-Hill Companies, Inc.
