Panel Data Econometrics: Empirical Applications
About this ebook

Panel Data Econometrics: Empirical Applications introduces econometric modelling. Written by experts from diverse disciplines, the volume uses longitudinal datasets to illuminate applications for a variety of fields, such as banking, financial markets, tourism and transportation, auctions, and experimental economics. Contributors emphasize techniques and applications, and they accompany their explanations with case studies, empirical exercises and supplementary code in R. They also address panel data analysis in the context of productivity and efficiency analysis, where some of the most interesting applications and advancements have recently been made.

  • Provides a vast array of empirical applications useful to practitioners from different application environments
  • Accompanied by extensive case studies and empirical exercises
  • Includes empirical chapters accompanied by supplementary code in R, helping researchers replicate findings
  • Represents an accessible resource for diverse industries, including health, transportation, tourism, economic growth, and banking, where researchers are not always econometrics experts
Language: English
Release date: June 20, 2019
ISBN: 9780128158609

    Panel Data Econometrics - Mike Tsionas

    General Introduction

    Panel data always have been at the center of econometric research and have been used extensively in applied economic research to refute a variety of hypotheses. The chapters in these two volumes represent, to a large extent, much of what has been accomplished in the profession during the last few years. Naturally, this is a selective presentation and many important topics have been left out because of space limitations. The books cited at the end of this Introduction, however, are well known and provide more details about specific topics. The coverage extends from fixed and random effect formulations to nonlinear models and cointegration. Such themes have been instrumental in the development of modern theoretical and applied econometrics.

    Panel data are used quite often in applications, as we see in Volume 2 of this book. The range of applications is vast, extending from industrial organization and labor economics to growth, development, health, banking, and the measurement of productivity. Although panel data provide more degrees of freedom, their proper use is challenging. The modeling of heterogeneity cannot be confined to fixed and random effect formulations; slope heterogeneity also has to be considered. Dynamic formulations are highly desirable, but they are challenging both because of estimation issues and because unit roots and cointegration cannot be ignored. Moreover, causality issues figure prominently, although they seem to have received less attention than in time-series econometrics. Relative to time-series or cross-section econometrics, the development of specification tests for panel data has been slower.

    The chapters in these two volumes show the great potential of panel data for both theoretical and applied research. There are more opportunities as more problems arise, particularly when practitioners and economic theorists get together to discuss the empirical refutation of their theories or conjectures. In my view, opportunities are likely to arise from three different areas: the interaction of econometrics with game theory and industrial organization; the prominence of both nonparametric and Bayesian techniques in econometrics; and structural models that explain heterogeneity beyond the familiar paradigm of fixed/random effects.

    1 Detailed Presentation

    In Chapter 1, Stephen Hall provides background material about econometric methods that is useful in making this volume self-contained.

    In Chapter 2, Jeffrey M. Wooldridge and Wei Lin study testing and estimation in panel data models with two potential sources of endogeneity: correlation of covariates with time-constant unobserved heterogeneity and correlation of covariates with time-varying idiosyncratic errors. In the linear case, they show that two control function approaches allow us to test exogeneity with respect to the idiosyncratic errors while being silent on exogeneity with respect to heterogeneity. The linear case suggests a general approach for nonlinear models. The authors consider two leading cases of nonlinear models: an exponential conditional mean function for nonnegative responses and a probit conditional mean function for binary or fractional responses. In the former case, they exploit the full robustness of the fixed effects Poisson quasi-MLE; for the probit case, they propose correlated random effects.

    In Chapter 3, William H. Greene and Qiushi Zhang point out that the panel data linear regression model has been studied exhaustively in a vast body of literature that originates with Nerlove (1966) and spans the entire range of empirical research in economics. This chapter describes the application of panel data methods to some nonlinear models such as binary choice and nonlinear regression, where the treatment has been more limited. Some of the methodology of linear panel data modeling can be carried over directly to nonlinear cases, while other aspects must be reconsidered. The ubiquitous fixed effects linear model is the most prominent case of this latter point. Familiar general issues, including dealing with unobserved heterogeneity, fixed and random effects, initial conditions, and dynamic models, are examined. Practical considerations, such as incidental parameters, latent class and random parameters models, robust covariance matrix estimation, attrition, and maximum simulated likelihood estimation, are considered. The authors review several practical specifications that have been developed around a variety of specific nonlinear models, including binary and ordered choice, models for counts, nonlinear regressions, stochastic frontier, and multinomial choice models.

    In Chapter 4, Jeffrey S. Racine and Christopher F. Parmeter provide a survey of nonparametric methods for estimation and inference in a panel data setting. Methods surveyed include profile likelihood, kernel smoothers, and series and sieve estimators. The practical application of nonparametric panel-based techniques is less prevalent than that of nonparametric density and regression techniques, and the material covered in this chapter should facilitate their adoption by practitioners.

    In Chapter 5, Kien Tran and Levent Kutlu review recent developments in panel stochastic frontier models that allow for heterogeneity, endogeneity, or both. Specifically, they discuss consistent estimation of the models’ parameters as well as of observation-specific technical inefficiency.

    In Chapter 6, Stefanos Dimitrakopoulos and Michalis Kolossiatis discuss how Bayesian techniques can be used to estimate the Poisson model, a well-known panel count data model, with exponential conditional mean. In particular, they focus on the implementation of Markov Chain Monte Carlo methods to various specifications of this model that allow for dynamics, latent heterogeneity and/or serial error correlation. The latent heterogeneity distribution is assigned a nonparametric structure, which is based on the Dirichlet process prior. The initial conditions problem also is addressed. For each resulting model specification, they provide the associated inferential algorithm for conducting posterior simulation. Relevant computer codes are posted as an online supplement.

    In Chapter 7, Chihwa Kao and Fa Wang review and explain the techniques used in Hahn and Newey (2004) and Fernandez-Val and Weidner (2016) to derive the limit distribution of the fixed effects estimator of semiparametric panels when the time dimension tends to infinity jointly with the cross-section dimension. The techniques of these two papers are representative, and understanding how they work is a good starting point. Under a unified framework, this chapter points out the difficulties in extending from models with a fixed-dimensional parameter space to panels with individual effects, and from panels with individual effects to panels with both individual and time effects, and explains how Hahn and Newey (2004) and Fernandez-Val and Weidner (2016) solve them.

    In Chapter 8, Bo Honore and Ekaterini Kyriazidou study the identification of multivariate dynamic panel data logit models with unobserved fixed effects. They show that in the pure VAR(1) case (without exogenous covariates) the parameters are identified with as few as four waves of observations and can be estimated consistently at rate square-root-n with an asymptotic normal distribution. Furthermore, they show that the identification strategy of Honore and Kyriazidou (2000) carries over in the multivariate logit case when exogenous variables are included in the model. The authors also present an extension of the bivariate simultaneous logit model of Schmidt and Strauss (1975) to the panel case, allowing for contemporaneous cross-equation dependence both in static and dynamic frameworks. The results of this chapter are of particular interest for short panels, that is, for small T.

    In Chapter 9, Subal Kumbhakar and Christopher F. Parmeter notice that, in the last 5 years, we have seen a marked increase in panel data methods that can handle unobserved heterogeneity, persistent inefficiency, and time-varying inefficiency. Although this advancement has opened up the range of questions and topics for applied researchers, practitioners, and regulators, there are various estimation proposals for these models and, to date, no comprehensive discussion about how these estimators work or compare to one another. This chapter lays out in detail the various estimators and how they can be applied. Several recent applications of these methods are discussed, drawing connections from the econometric framework to real applications.

    In Chapter 10, Peter Pedroni discusses the challenges that shape panel cointegration techniques, with an emphasis on the challenge of maintaining the robustness of cointegration methods when temporal dependencies interact with both cross-sectional heterogeneities and dependencies. It also discusses some of the open challenges that lie ahead, including the challenge of generalizing to nonlinear and time varying cointegrating relationships. The chapter is written in a nontechnical style that is intended to make the information accessible to nonspecialists, with an emphasis on conveying the underlying concepts and intuition.

    In Chapter 11, P.A.V.B. Swamy, Peter von zur Muehlen, Jatinder S. Mehta, and I-Lok Chang show that estimators of the coefficients of econometric models are inconsistent if their coefficients and error terms are not unique. They present models having unique coefficients and error terms, with specific applicability to the analyses of panel data sets. They show that the coefficient on an included nonconstant regressor of a model with unique coefficients and error term is the sum of bias-free and omitted-regressor bias components. This sum, when multiplied by the negative ratio of the measurement error to the observed regressor, provides a measurement-error bias component of the coefficient. This result is important because one needs the bias-free component of the coefficient on the regressor to measure the causal effect of an included nonconstant regressor of a model on its dependent variable.

    In Chapter 12, Arne Henningsen and Geraldine Henningsen give practical guidelines for the analysis of panel data with the statistical software R. They start by suggesting procedures for exploring and rearranging panel data sets and for preparing them for further analyses. A large part of this chapter demonstrates the application of various traditional panel data estimators that frequently are used in scientific and applied analyses. They also explain the estimation of several modern panel data models such as panel time series models and dynamic panel data models. Finally, this chapter shows how to use statistical tests to test critical hypotheses under different assumptions and how the results of these tests can be used to select the panel data estimator that is most suitable for a specific empirical panel data analysis.

    In Chapter 13, Robin Sickles and Dong Ding empirically assess the impact of capital regulations on capital adequacy ratios, portfolio risk levels, and cost efficiency for banks in the United States. Using a large panel data set of US banks from 2001 to 2016, they first estimate the model using two-step generalized method of moments (GMM) estimators. After obtaining residuals from the regressions, they propose a method to construct the network based on clustering of these residuals. The residuals capture the unobserved heterogeneity that goes beyond systematic factors and banks' business decisions affecting their levels of capital, risk, and cost efficiency, and thus represent unobserved network heterogeneity across banks. They then reestimate the model in a spatial error framework. Comparisons of fixed effects and GMM fixed effects models with spatial fixed effects models provide clear evidence of the existence of unobserved spatial effects in the interbank network. The authors find that a stricter capital requirement causes banks to reduce investments in risk-weighted assets but, at the same time, to increase holdings of nonperforming loans, suggesting unintended effects of higher capital requirements on credit risks. They also find that the amount of capital buffers has an important impact on banks' management practices even when regulatory capital requirements are not binding.

    In Chapter 14, Geraint Johnes and Jill Johnes survey applications of panel data methods in the economics of education. They focus first on studies that have applied a difference-in-difference approach (using both individual and organization level data). Then they explore the way in which panel data can be used to disentangle age and cohort effects in the context of investigating the impact of education on subsequent earnings. The survey next examines the role of panel data in assessing education peer effects and intergenerational socioeconomic mobility. The review ends by looking at adaptations of methods to assess efficiency in a panel data context, and at dynamic discrete choice models and their importance in the context of evaluating the likely effects of policy interventions. The survey is intended to highlight studies that are representative of the main areas in which the literature has developed, rather than to be encyclopedic.

    In Chapter 15, corresponding author Scott Atkinson analyzes panel data studies of the most widely examined energy consumption industries: electric power, railroads, and airlines. For electric power, the choices between utility-level and plant-level data, cross-sectional and panel data, and pooled-data analysis and fixed-effects (FE) estimation generally make little difference. A consensus also exists across estimates of cost, profit, and distance functions, and of systems that include these functions. Generally, studies reject homogeneous functional forms and find nearly constant returns to scale (RTS) for the largest firms. Residual productivity growth declines over time to small, positive levels, and substantial economies of vertical integration exist. Cost savings can accrue from a competitive generating sector. Controversy remains regarding the Averch-Johnson effect and the relative efficiency of publicly owned versus privately owned utilities. Railroads exhibit increasing RTS, substantial inefficiencies, and low productivity growth. Airlines operate close to constant RTS and enjoy modest productivity growth. Substantial inefficiencies decrease with deregulation. A valuable alternative to FE estimation is a control function approach to modeling unobserved productivity.

    In Chapter 16, Georgia Kosmopoulou, Daniel Nedelescu, and Fletcher Rehbein survey commonly used methods and provide some representative examples in the auction literature in an effort to highlight the value of panel data techniques in the analysis of experimental data obtained in the laboratory.

    In Chapter 17, Paul D. Allison, Richard Williams, and Enrique Moral-Benito point out that panel data make it possible both to control for unobserved confounders and to allow for lagged, reciprocal causation. Trying to do both at the same time, however, leads to serious estimation difficulties. In the econometric literature, these problems have been solved by using lagged instrumental variables together with the generalized method of moments (GMM). In this chapter, the authors show that the same problems can be solved by maximum likelihood estimation implemented with standard software packages for structural equation modeling (SEM). Monte Carlo simulations show that the ML-SEM method is less biased and more efficient than the GMM method under a wide range of conditions. ML-SEM also makes it possible to test and relax many of the constraints that typically are embodied in dynamic panel models.

    In Chapter 18, Rico Merkert and Corinne Mulley note that panel data have been widely used for analyzing both the demand and supply sides of transport operations. Obtaining true panels at the international level, however, appears to be difficult for various reasons. For the demand side, their review of the transport literature demonstrates that pseudo panel data can be treated as if they were true panel data. For the supply side, this difficulty results in many studies using unbalanced panels instead. In terms of methods, they find that the DEA approach overcomes the problem of conflicting KPIs when considering overall cost efficiency, while providing a robust tool for implementing change through an understanding of the key determinants of efficiency. Their case study of the determinants of urban and regional train operator efficiency shows that the spatial context matters for the sample composition of DEA panel analysis in transport and that separating the panel into context-specific subsamples can produce more robust results.

    In Chapter 19, David Humphrey outlines the problems encountered when using banking panel data. Workarounds and solutions to these problems are noted. Although many of these problems occur when selecting and obtaining a panel data set, others are specific to the topics investigated, such as bank scale and scope economies, technical change, frontier efficiency, competition, and productivity. Illustrative results from published studies on these topics also are reported.

    In Chapter 20, Christoph Siebenbrunner and Michael Sigmund point out that financial contagion describes the cascading effects that an initially idiosyncratic shock to a small part of a financial system can have on the entire system. They use two types of quantile panel estimators to test whether certain bank-specific drivers used by leading regulatory authorities are good predictors of such extreme events, in which small shocks to one part of the system can cause the collapse of the entire system. Comparing the results of the quantile estimation to a standard fixed-effects estimator, they conclude that quantile estimators are better suited for describing the distribution of systemic contagion losses. Comparing the results to the aforementioned regulations, they offer several recommendations for improvement.

    In Chapter 21, Keshab Bhattarai reviews applications of panel data models. The process of substitution of labor by capital, as discussed in Karabarbounis and Neiman (2014) and Piketty (2014), has increased the capital share, causing a reduction in the labor share of about 10%. The chapter also studies the impacts of trade and aid on economic growth. Fixed and random effect estimates show that investment, rather than aid, was a factor contributing to growth. Exports tied to aid are always harmful for the growth of recipient countries. Although the evidence is mixed for individual economies, there appear to be trade-offs between unemployment and inflation in the panel of Organisation for Economic Co-operation and Development (OECD) countries, as shown by the random and fixed effect models in which the Hausman test favors the random effects model. A simple VAR model with two lags on inflation and unemployment shows persistence of inflation and unemployment rates among the OECD economies. The ratio of investment to GDP (gross domestic product) is a significant determinant of growth rates across OECD countries, and FDI contributes positively to growth. Regression results are robust on the grounds of stationarity and cointegration criteria. Threshold panel models developed by Hansen (1997) and Caner and Hansen (2004) show how to study regime changes occurring in the real world.

    In Chapter 22, Andrew Jones, Apostolos Davillas, and Michaela Benzeval add to the literature about the income-health gradient by exploring the association of short-term and long-term income with a wide set of self-reported health measures and objective nurse-administered and blood-based biomarkers, as well as employing estimation techniques that allow for analysis beyond the mean. The income-health gradients are greater in magnitude in cases of long-run rather than cross-sectional income measures. Unconditional quantile regressions reveal that the differences between the long-run and the short-run income gradients are more evident toward the right tails of the distributions, where both higher risk of illnesses and steeper income gradients are observed.

    In Chapter 23, Steve Ongena, Andrada Bilan, Hans Degryse, and Kuchulain O'Flynn review the data, econometric techniques, and estimates relating to two recent and salient developments in the banking industry: securitization and globalization. The traditional banking market has become wider in its business models, through securitization, and in its geographical dispersion, through global operations. Both developments have brought new challenges for the understanding of basic questions in banking. Questions such as what determines credit flows, or what the channels of transmission of monetary policy are, have recently been addressed through this new lens. This review establishes that access to micro data has enabled researchers to arrive at increasingly better identified and more reliable estimates.

    In Chapter 24, Claire Economidou, Kyriakos Drivas, and Mike Tsionas develop a methodology for stochastic frontier models of count data that allows for technological and inefficiency-induced heterogeneity in the data and endogenous regressors. They derive the corresponding log-likelihood function and the conditional mean of inefficiency to estimate technology regime-specific inefficiency. They apply the proposed methodology to the US states to assess efficiency and growth patterns in the production of new knowledge. The findings support the existence of two distinct innovation classes with different implications for their members’ innovation growth.

    In Chapter 25, Emmanuel Mamatzakis and Mike Tsionas propose a novel approach to identify life satisfaction, and thereby happiness, within a latent variables model for British Household Panel Survey longitudinal data. By doing so, they overcome issues related to the measurement of happiness. To observe happiness, they employ a Bayesian inference procedure organized around Sequential Monte Carlo (SMC)/particle filtering techniques. Happiness efficiency captures the optimal happiness individuals could achieve should they use their resource endowments efficiently. In addition, they propose to take into account individual-specific characteristics by estimating happiness efficiency models with individual-specific thresholds to happiness. This is the first study to relax the restriction that happiness efficiency, and thereby inefficiency, is time-invariant. Key to happiness is having certain personality traits: being agreeable and being an extrovert assist efforts to enhance happiness efficiency. On the other hand, being neurotic impairs happiness efficiency.

    In Chapter 26, Vasso Ioannidou and Jan de Dreu study how the introduction of an explicit deposit insurance scheme in Bolivia in 2001 affected depositors’ incentives to monitor and discipline their banks for risk-taking. They find that after the introduction of the explicit deposit insurance scheme, the sensitivity of deposit interest rates and volumes to bank risk is reduced significantly, consistent with a reduction in depositor discipline. This effect operates mainly through large depositors, the class of depositors who were sensitive to their banks’ risk in the first place. The authors also find that the decrease in depositor discipline is larger, the higher the insurance coverage rate. Deposit interest rates and volumes become almost completely insensitive to bank risk when the insurance coverage is higher than 60%. The results provide support for deposit insurance schemes with explicit deposit insurance limits per depositor.

    In Chapter 27, Sarantis Kalyvitis, Sofia Anyfantaki, Margarita Katsimi, and Eirini Thomaidou review the growing empirical literature that explores the determinants of export prices at the firm level. They first present evidence from empirical studies that link firm export pricing to destination characteristics (gravity-type models). They then explore the main implications of channels that can generate price differentiation, such as quality customization, variable markups, exchange rate pass-through, and financial frictions. A newly compiled panel data set from Greek exporting firms is used to present evidence from regressions with export price as the dependent variable and to show how the main economic hypotheses derived in theoretical models are nested in empirical specifications.

    In Chapter 28, Almas Heshmati and Nam Seok Kim investigate the relationship between economic growth and democracy by estimating a nation’s production function specified as static and dynamic models using panel data. In estimating the production function, they use a single time trend, multiple time trends, and general index formulations of the translog production function to capture time effects representing technological changes of unknown form. In addition to these unknown forms, implementing the technology shifters model enabled this study to identify possible known channels between economic growth and democracy. Empirical results based on panel data for 144 countries observed from 1980 to 2014 show that democracy had a robust positive impact on economic growth. Credit guarantees are one of the most significant positive links between economic growth and democracy. In order to check the robustness of these results, a dynamic model constructed with a flexible adjustment speed and a target level of GDP also is tested.

    In Chapter 29, Almas Heshmati, Esfandiar Maasoumi, and Biwei Su examine the evolution of well-being (household income) of Chinese households over time, and its determinants. They study (stochastic) dominance relations based on China Health and Nutrition Survey (CHNS) data. They reveal a profile of general mobility/inequality and relative welfare in China over time and among population subgroups. The authors report that from 2000 to 2009, welfare improved steadily along with Chinese economic development and growth. Pairwise comparison of subgroups reveals that there is no uniform ranking by household type, gender of household head, or age cohort. The married and non-child-rearing groups second-order dominate the single/divorced and child-rearing groups. Comparisons across subgroups with different educational levels and household sizes suggest that groups with higher education and smaller household size tend to be better off than their counterparts. Longitudinal data allow estimation of permanent incomes, which smooth out short-term fluctuations. Treating the data as a time series of cross sections also avoids imposing constant partial effects over time and across groups, which is appropriate given the observed heterogeneity in this population. Individual/group-specific components are allowed and subsumed in conditional dominance rankings, rather than identified by panel data estimation methods.

    In Chapter 30, Mike G. Tsionas, Konstantinos N. Konstantakis, and Panayotis G. Michaelides present a production function based on a family of richly parameterized semi-parametric artificial neural networks, on which all the properties that modern production theory dictates can be imposed. Under this approach, the specification is a universal approximator to any arbitrary production function. All measures of interest, such as elasticities of substitution, technical efficiency, returns to scale, and total factor productivity, also are derived easily. The authors illustrate the proposed specification using data for sectors of the US economy. The proposed specification performs very well, and the US economy is characterized by approximately constant RTS and moderate TFP, a finding that is consistent with previous empirical work.

    General References

    Baltagi B.H. Econometric analysis of panel data. second ed. John Wiley & Sons; 2001.

    Cameron A.C., Trivedi P.K. Microeconometrics using Stata. Rev. ed. Stata Press; 2010.

    Greene W.H. Econometric analysis. seventh ed. Upper Saddle River, NJ: Prentice Hall; 2012.

    Hsiao C. Analysis of panel data. second ed. Cambridge University Press; 2002.

    Wooldridge J.M. Econometric analysis of cross section and panel data. second ed. Cambridge, MA: MIT Press; 2010.

    Chapter 13

    Capital Regulation, Efficiency, and Risk Taking: A Spatial Panel Analysis of US Banks

    Dong Ding and Robin C. Sickles, Department of Economics, Rice University, Houston, TX, United States

    Abstract

    In this study, we empirically assess the impact of capital regulations on capital adequacy ratios, portfolio risk levels, and cost efficiency for US banks. Using a large panel data set of US banks from 2001 to 2016, we first estimate the model using two-step generalized method of moments (GMM) estimators. After obtaining residuals from the regressions, we propose a method to construct the network based on the clustering of these residuals. The residuals capture the unobserved heterogeneity that goes beyond systematic factors and banks’ business decisions affecting their levels of capital, risk, and cost efficiency and therefore represent unobserved network heterogeneity across banks. We then estimate the model in a spatial error framework. Comparisons of fixed effects and GMM fixed effects models with spatial fixed effects models provide clear evidence of the existence of unobserved spatial effects in the interbank network. We find that a stricter capital requirement causes banks to reduce investments in risk-weighted assets but, at the same time, to increase holdings of nonperforming loans, suggesting unintended effects of higher capital requirements on credit risks. We also find that the amount of capital buffers has an important impact on banks’ management practices even when regulatory capital requirements are not binding.

    Keywords

    Spatial error model; Spatial weight matrix; Bank risk

    Chapter Outline

    1 Introduction

    2 Regulatory Background

    3 Hypotheses and Models

    3.1 The Relationships Among Capital, Risk, and Efficiency: Theoretical Hypotheses

    3.2 Empirical Model

    4 Estimation

    4.1 Endogeneity

    4.2 Spatial Correlation

    4.3 Correction for Selection Bias

    5 Data

    6 Results

    6.1 Estimation of Cost Efficiency

    6.2 GMM Results for the Full Sample

    6.3 The Spatial Effects

    6.4 Robustness Checks

    7 Concluding Remarks

    References

    1 Introduction

    Since the process of bank deregulation started in the 1970s, the supervision of banks has relied mainly on the minimum capital requirement. The Basel Accord emerged as an attempt to create an international regulatory standard for how much capital banks should maintain to protect against different types of risk. The financial crisis of 2007–09 revealed that, despite numerous refinements and revisions during the last two decades, the existing regulatory frameworks are still inadequate for preventing banks from taking excessive risks. The crisis also highlighted the importance of interdependence and spillover effects within financial networks.

    To prevent future crises, economists and policymakers must understand the dynamics of the intertwined banking systems and the underlying drivers of banks’ risk-taking to better assess risks and adjust regulations. Theoretical predictions about whether more stringent capital regulation curtails or promotes banks’ risk-taking behavior are ambiguous. It is ultimately an empirical question how banks behave in the light of capital regulation. This chapter seeks to investigate the drivers of banks’ risk-taking in the United States and to test how banks respond to an increase in capital requirements.

    Many empirical studies test whether increases in capital requirements force banks to increase or decrease risks (Aggarwal & Jacques, 2001; Barth, Caprio, & Levine, 2004; Camara, Lepetit, & Tarazi, 2013; Demirguc-Kunt & Detragiache, 2011; Jacques & Nigro, 1997; Lindquist, 2004; Rime, 2001; Shrieves & Dahl, 1992; Stolz, Heid, & Porath, 2003). For example, Shrieves and Dahl (1992) and Jacques and Nigro (1997) suggest that capital regulations have been effective in increasing capital ratios and reducing asset risks for banks with relatively low capital levels. They also find that changes in risk and capital levels are positively related, indicating that banks that have increased their capital levels over time also have increased their risk appetite. Other studies, such as Stolz et al. (2003) and Van Roy (2005), however, report a negative effect of capital on the levels of risk taken by banks. Overall, both theoretical and empirical studies are not conclusive as to whether more stringent capital requirements reduce banks’ risk-taking.

    A different strand of literature provides evidence that efficiency is also a relevant determinant of bank risk. In particular, Hughes, Lang, Mester, Moon, et al. (1995) link risk-taking and banking operational efficiency together and argue that higher loan quality is associated with greater inefficiencies. Kwan and Eisenbeis (1997) link bank risk, capitalization, and measured inefficiencies in a simultaneous equation framework. Their study confirms the belief that these three variables are jointly determined. Additional studies about capital, risk, and efficiency are conducted by Williams (2004), Altunbas, Carbo, Gardener, and Molyneux (2007), Fiordelisi, Marques-Ibanez, and Molyneux (2011), Deelchand and Padgett (2009), and Tan and Floros (2013).¹ Taken together, these two strands of the empirical literature about banking business practices imply that bank capital, risk, and efficiency are all related.

    The third strand of literature deals with applying spatial econometrics to model banking linkages and the transmission of shocks in the financial system. Although spatial dependence has been studied extensively in a wide range of fields, such as regional and urban economics, environmental sciences, and geographical epidemiology, it is not yet widely used in finance, although some applications exist in empirical finance. For example, Fernandez (2011) tests for spatial dependency by formulating a spatial version of the capital asset pricing model (S-CAPM). Craig and Von Peter (2014) find significant spillover effects between German banks’ probabilities of distress and the financial profiles of connected peers through a spatial probit model. Other studies such as Asgharian, Hess, and Liu (2013), Arnold, Stahlberg, and Wied (2013), and Weng and Gong (2016) analyze spatial dependencies in stock markets. The empirical literature, however, appears to be silent on the effects of financial regulation on risk when spatial dependence is taken into account. Banks’ behaviors are likely to be inherently spatial. Ignoring these spatial correlations would lead to model misspecification and, consequently, biased parameter estimates.

    In this chapter, we combine these different strands of literature. Using a large sample of US banking data from 2001 to 2016, we empirically assess the impact of capital regulation on capital adequacy ratios, portfolio risk levels, and efficiency of banks in the United States under spatial frameworks. The sampling period includes banks that report their balance sheet data according to both the original Basel I Accord and the Basel II revisions (effective from 2007 in the United States), up to the most recent available quarter, 2016-Q3. More precisely, this chapter addresses the following questions: To what extent are banks’ risk-taking behaviors and cost efficiency sensitive to capital regulation? How do capital buffers affect a bank's capital ratios, the level of risk it is willing to take on, and its cost efficiency? How do the results change when spatial interactions among the observed banks are taken into account?

    This chapter makes several contributions to the discussion about bank capital, risk, and efficiency and has important policy implications. First, this analysis provides an empirical investigation linking capital regulation to bank risk-taking, capital buffers, and bank efficiency in a spatial setting. The introduction of spatial dependence allows us to determine the importance of network externalities after controlling for bank-specific effects and macroeconomic factors.

    Second, this chapter proposes a new approach for creating a spatial weights matrix. The key challenge in investigating spatial effects among banks is in defining the network, or in other words, constructing the spatial weights matrix. A spatial weights matrix usually is constructed in terms of the geographical distance between neighbors. In financial markets, however, geographical distance is not necessarily meaningful, given that most transactions are performed electronically. We propose a method to construct a spatial weights matrix based on clustering of residuals from regressions. The residuals aim to capture the unobserved heterogeneity that goes beyond systematic factors and banks’ own idiosyncratic characteristics and can be interpreted as a representation of unobserved network heterogeneity.
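    One way such a residual-based weights matrix could be built is sketched below; the simulated residuals, the k-means step, and the number of clusters are assumptions chosen for illustration, not the authors' exact procedure.

```r
# Sketch: build a spatial weights matrix from clustered regression residuals.
# Banks whose average residuals fall in the same cluster are treated as neighbors.
set.seed(1)
n_banks  <- 200
bank_id  <- rep(seq_len(n_banks), each = 12)           # 12 quarters per bank (illustrative)
resid_sim <- rnorm(length(bank_id))                    # stand-in for first-stage residuals
resid_by_bank <- tapply(resid_sim, bank_id, mean)      # average residual per bank
cl <- kmeans(resid_by_bank, centers = 5)$cluster       # cluster banks on their residuals
W  <- outer(cl, cl, "==") * 1                          # w_ij = 1 if banks share a cluster
diag(W) <- 0                                           # no self-links
W  <- W / pmax(rowSums(W), 1)                          # row-normalize; guard empty rows
```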

    Third, this study employs a significantly larger and more recent data set than previous studies that used data only up to 2010. In addition, because Basel III maintains many of the defining features of the previous accords, this study will shed light on how a more risk-sensitive capital regulation (i.e., Basel III) could influence banks’ behaviors in the United States after the financial crisis.

    The rest of the chapter is organized as follows. Section 2 lays out the regulatory background of this study. Section 3 explains the model. Section 4 outlines the estimation methodology and addresses several econometric issues. Section 5 describes the data. Section 6 presents and discusses the empirical findings and Section 7 presents conclusions.

    2 Regulatory Background

    The purpose of the Basel Committee on Banking Supervision is two-fold: to provide greater supervision of the international banking sector and to promote competition among banks internationally by having them comply with the same regulatory standards (Jablecki et al., 2009). All three Basel Accords are informal treaties, and members of the Basel Committee are not required to adopt the rules as national law. For example, the United States adopted Basel II standards only for its 20 largest banking organizations in 2007. Regardless, the accords have led to greater universality in global capital requirements, even in countries that are not on the Basel Committee.

    Basel I, implemented in 1988, was designed to promote capital adequacy among banks internationally by establishing an acceptable ratio of capital to total risk-weighted assets. Specifically, Basel I required the ratio between regulatory capital and the sum of risk-weighted assets to be greater than 8%. This has become an international standard, with more than 100 countries adopting the Basel I framework. The first Basel Accord divided bank capital into two tiers to guarantee that banks hold enough capital to handle economic downturns. Tier 1 capital, the more important of the two, consists largely of shareholders' equity. Tier 2 consists of items such as subordinated debt securities and reserves. The primary weakness of Basel I was that capital requirements were associated only with credit risk and did not include operational or market risk. Additionally, risk weights assigned to assets were fixed within asset categories, which created incentives for banks to engage in regulatory capital arbitrage. For example, all commercial loans were assigned the same risk weight category (100%) regardless of the inherent creditworthiness of the borrowers, which tended to reduce the average quality of bank loan portfolios.

    Basel II was published in June 2004 and was introduced to combat regulatory arbitrage and improve bank risk management systems. The Basel II Accord was much more complex and risk-sensitive than Basel I and placed greater emphasis on banks’ own assessment of risk. Basel II was structured in three pillars: Pillar 1 defined the minimum capital requirements; Pillar 2 was related to the supervisory review process; and Pillar 3 established the disclosure requirements about the financial condition and solvency of institutions. Basel II made several prominent changes to Basel I, primarily in regard to how risk-weighted assets were to be calculated. In addition to credit risk, Basel II extended the risk coverage to include a capital charge for market and operational risk. The total risk-weighted assets ($RWA_T$) were calculated as:

    $$RWA_T = RWA_C + 12.5 \times (MRC + ORC)$$

    where $RWA_C$ denotes the risk-weighted assets for credit risk, $MRC$ is the market risk capital charge, and $ORC$ is the operational risk capital charge, with the factor 12.5 (the reciprocal of the 8% minimum ratio) converting the capital charges into risk-weighted-asset equivalents.

    Basel II also allowed banks to use internal risk models to determine the appropriate risk weights of their own assets, after approval by regulators. Additionally, Basel II calculated the risk of assets held in trading accounts using a Value at Risk approach, which takes into account estimates of potential losses based on historical data.

    In the aftermath of the financial crisis of 2007–2009, the Basel Committee revised its capital adequacy guidelines, which became Basel III (BCBS, 2011). The primary additions to Basel II were higher capital ratios for both Tier 1 and Tier 2 capital, the introduction of liquidity requirements, and the incorporation of a leverage ratio to shield banks from miscalculations in risk weightings, along with higher risk weightings of trading assets. As shown in Table 1, although the minimum regulatory capital ratio remained at 8%, the components constituting the total regulatory capital had to meet certain new criteria. A capital conservation buffer of 2.5% was introduced to encourage banks to build up capital buffers during normal times. Liquidity risk also received much attention in Basel III. A liquidity coverage ratio (LCR) and a net stable funding ratio (NSFR) were implemented in 2015 and became a minimum standard applicable to all internationally active banks on a consolidated basis on January 1, 2018 (BIS, 2018). Midway through the Basel III consultative process, the United States enacted the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) in 2010. The Dodd-Frank Act is generally consistent with Basel III but further addressed systemic risk by identifying a set of institutions as systemically important financial institutions (SIFIs). The Dodd-Frank Act imposed more stringent capital requirements on these SIFIs and required them to undertake periodic stress tests (DFAST) to ensure these institutions are well capitalized in aggregate stress scenarios.

    Table 1. Source: Bank for International Settlements, http://www.bis.org/bcbs/basel3.htm.

    In spite of these changes, critics argue that the same issues that plagued Basel II, such as incorrect risk weights and ease of circumvention, remain prominent in Basel III. Regulatory capital requirements should, in principle, be sufficiently attuned to the riskiness of bank assets. Vallascas and Hagendorff (2013), however, find a low risk sensitivity of capital requirements, which enables banks to build up capital buffers by under-reporting their portfolio risk. Because the risk-weighting methodology remained essentially unchanged in Basel III, banks still have the incentive to game the system by holding securities that might unexpectedly prove disastrous (Lall, 2012).

    3 Hypotheses and Models

    3.1 The Relationships Among Capital, Risk, and Efficiency: Theoretical Hypotheses

    The prevalence of a minimum capital requirement is based primarily on the assumption that banks are prone to engage in moral hazard behavior. The moral hazard hypothesis is the classical problem of excessive risk-taking when another party is bearing part of the risk and cannot easily charge for that risk. Because of asymmetric information and a fixed-rate deposit insurance scheme, the theory of moral hazard predicts that banks with low levels of capital have incentives to increase risk-taking in order to exploit the value of their deposit insurance (Kane, 1995). The moral hazard problem is particularly relevant when banks have high leverage and large assets. According to the too-big-to-fail argument, large banks, knowing that they are so systemically important and interconnected that their failure would be disastrous to the economy, might count on a public bailout in case of financial distress. Therefore, they have incentives to take excessive risks and exploit the implicit government guarantee. In addition, the moral hazard hypothesis predicts that inefficiency is related positively to risks because inefficient banks are more likely to extract larger deposit insurance subsidies from the FDIC to offset part of their operating inefficiencies (Kwan & Eisenbeis, 1996). This suggests the following hypothesis.

    Hypothesis 1

    There exists a negative relationship between capital/efficiency and risk, because banks with higher leverage and lower efficiency have incentives to take higher risks to exploit existing flat deposit insurance schemes.

    With regard to the relationship between cost efficiency and risks, Berger and DeYoung (1997) outline and test the bad luck, bad management, and skimping hypotheses using Granger causality tests. Under the bad luck hypothesis, external exogenous events lead to increases in problem loans for the banks. The increases in risk incur additional costs and managerial efforts. Therefore, cost efficiency is expected to fall after the increase in problem loans. Under the bad management hypothesis, managers fail to control costs, which results in low cost efficiency, and they perform poorly at loan underwriting and monitoring. These underwriting and monitoring problems eventually lead to high numbers of nonperforming loans as borrowers fall behind in their loan repayments. Therefore, the bad management hypothesis implies that lower cost efficiency leads to an increase in problem loans. The skimping hypothesis, however, implies a positive Granger-causation from measured efficiency to problem loans. Under the skimping hypothesis, banks skimp on the resources devoted to underwriting and monitoring loans, reducing operating costs and increasing cost efficiency in the short run. In the long run, however, nonperforming loans increase as poorly monitored borrowers fall behind in loan repayments.

    Milne and Whalley (2001) develop a continuous-time dynamic option pricing model that explains the incentives of banks to hold their capital buffers above the regulatory required minimum. The capital buffer theory states that adjustments in capital and risk depend on banks’ capital buffers. It predicts that, after an increase in the regulatory capital requirement (the same impact as a direct reduction in the capital buffer), capital and risk initially are related negatively as long as capital buffers are low, and after a period of adjustment when banks have rebuilt their capital buffers to some extent, capital and risk become positively related. This leads to the following hypothesis.

    Hypothesis 2

    The coordination of capital and risk adjustments depends on the amount of capital buffer that a bank holds. Well-capitalized banks adjust their buffer capital and risk positively while banks with a low capital buffer try to rebuild an appropriate capital buffer by raising capital and simultaneously lowering risk.

    3.2 Empirical Model

    Taken together, these studies and the models on which they are based imply that bank capital, risk, and efficiency are determined simultaneously and can be expressed in general terms as:

    $$CAP_{it} = f(RISK_{it}, EFF_{it}, X_{it}), \quad RISK_{it} = g(CAP_{it}, EFF_{it}, X_{it}), \quad EFF_{it} = h(CAP_{it}, RISK_{it}, X_{it}) \tag{1}$$

    where $X_{it}$ is a vector of bank-specific variables.

    Following Shrieves and Dahl (1992), we use a partial adjustment model to examine the relationship between changes in capital and changes in risk. Shrieves and Dahl (1992) point out that capital and risk decisions are made simultaneously and are interrelated. In the model, observed changes in bank capital ratios and risk levels are decomposed into two parts: a discretionary adjustment and an exogenously determined random shock such that:

    $$\Delta CAP_{i,t} = \Delta^{d} CAP_{i,t} + \epsilon_{i,t}, \qquad \Delta RISK_{i,t} = \Delta^{d} RISK_{i,t} + \mu_{i,t} \tag{2}$$

    where $\Delta CAP_{i,t}$ and $\Delta RISK_{i,t}$ are the observed changes in capital and risk, respectively, for bank $i$ in period $t$; $\Delta^{d} CAP_{i,t}$ and $\Delta^{d} RISK_{i,t}$ represent the discretionary adjustments in capital and risk; and $\epsilon_{i,t}$ and $\mu_{i,t}$ are exogenous random shocks. Banks aim to achieve optimal capital and risk levels but might not be able to reach their desired levels quickly. Therefore, banks can adjust capital and risk levels only partially toward the target levels. The discretionary adjustments in capital and risk are modeled in a partial adjustment framework:

    $$\Delta^{d} CAP_{i,t} = \alpha \left( CAP^{*}_{i,t} - CAP_{i,t-1} \right), \qquad \Delta^{d} RISK_{i,t} = \beta \left( RISK^{*}_{i,t} - RISK_{i,t-1} \right) \tag{3}$$

    where $\alpha$ and $\beta$ are the speeds of adjustment; $CAP^{*}_{i,t}$ and $RISK^{*}_{i,t}$ are the optimal (target) levels of capital and risk; and $CAP_{i,t-1}$ and $RISK_{i,t-1}$ are the actual levels of capital and risk in the previous period.

    Substituting Eq. (3) into Eq. (2), and accounting for the simultaneity of capital and risk decisions, the changes in capital and risk can be written as:

    $$\Delta CAP_{i,t} = \alpha \left( CAP^{*}_{i,t} - CAP_{i,t-1} \right) + \epsilon_{i,t}, \qquad \Delta RISK_{i,t} = \beta \left( RISK^{*}_{i,t} - RISK_{i,t-1} \right) + \mu_{i,t} \tag{4}$$

    Eq. (4) shows the observed changes in capital and risk are a function of the target capital and risk levels, the lagged capital and risk levels, and any random shocks. Examples of exogenous shocks to the bank that could influence capital or risk levels include changes in regulatory capital standards or macroeconomic conditions.
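    As a rough illustration of the partial adjustment mechanism in Eqs. (2)-(4), the following sketch simulates capital ratios that each period move a fraction $\alpha$ toward a bank-specific target and then recovers $\alpha$ from the simulated data; all numbers and object names are illustrative assumptions, not the chapter's data or estimates.

```r
# Sketch: partial adjustment of capital toward a bank-specific target level.
set.seed(1)
n <- 500; t_len <- 10
alpha    <- 0.4                                   # assumed speed of adjustment
cap_star <- rnorm(n, mean = 0.12, sd = 0.02)      # target capital ratios
cap      <- matrix(NA, n, t_len)
cap[, 1] <- rnorm(n, 0.10, 0.02)
for (s in 2:t_len) {
  cap[, s] <- cap[, s - 1] + alpha * (cap_star - cap[, s - 1]) + rnorm(n, 0, 0.005)
}
d_cap <- as.vector(cap[, 2:t_len] - cap[, 1:(t_len - 1)])   # observed changes
gap   <- as.vector(cap_star - cap[, 1:(t_len - 1)])         # lagged gap to the target
coef(lm(d_cap ~ gap - 1))                                   # estimate is close to 0.4
```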

    3.2.1 Network Model

    Shocks that affect banks’ decisions are likely to spill over to other banks, creating systemic effects. Following Denbee, Julliard, Li, and Yuan (2017), we model the network effect on banks’ capital and risk-holding decisions as a shock propagation mechanism in which banks’ decisions depend upon how the individual bank's shock propagates to its direct and indirect neighbors.

    We decompose banks’ decisions into a function of observables and an error term that captures the spatial spillover generated by the network:

    $$Y_{it} = X_{it}\beta + u_{it} \tag{5}$$

    $$u_{it} = \lambda \sum_{j} w_{ij}\, u_{jt} + \epsilon_{it} \tag{6}$$

    where $Y_{it}$ denotes bank $i$'s capital and risk holding decisions, $X_{it}$ are observables, $\lambda$ is a spatial autoregressive parameter, and $w_{ij}$ are the network weights. The network component $u_{it}$ is thus modeled as a residual term.² The vector of shocks to all banks at time $t$ can be rewritten in matrix form as:

    $$u_{t} = \lambda W u_{t} + \epsilon_{t}, \qquad u_{t} = (I - \lambda W)^{-1} \epsilon_{t},$$

    and expanding the inverse matrix as a power series yields:

    $$(I - \lambda W)^{-1} = I + \lambda W + \lambda^{2} W^{2} + \lambda^{3} W^{3} + \cdots \equiv M(\lambda W) \tag{7}$$

    $$u_{t} = M(\lambda W)\, \epsilon_{t} \tag{8}$$

    where $\lambda$ can also be interpreted as a network multiplier effect; we require $|\lambda| < 1$ for stability. The matrix $M(\lambda W)$ measures all direct and indirect effects of a shock to bank $i$ on bank $j$.

    The network impulse-response function of banks’ capital and risk holdings to a one standard deviation shock $\sigma_{i}$ to a given bank $i$ is given by:

    $$M(\lambda W)\, e_{i}\, \sigma_{i},$$

    where $e_{i}$ is the $i$th unit vector. The average network multiplier resulting from a unit shock equally spread across the $n$ banks can be expressed as:

    $$\frac{1}{n}\, \mathbf{1}' M(\lambda W)\, \mathbf{1},$$

    where $\mathbf{1}$ is an $n \times 1$ vector of ones.
    A positive $\lambda$ indicates an amplification effect: a shock to any bank is amplified by the banking network. A negative $\lambda$, by contrast, indicates a dampening effect on shock transmission.
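    The toy computation below illustrates the multiplier matrix $M(\lambda W)$, its power-series expansion, and the average multiplier; the weight matrix, $\lambda$, and the shock size are arbitrary choices for illustration only.

```r
# Sketch: network multiplier M(lambda*W) = (I - lambda*W)^(-1) and its power series.
set.seed(1)
n <- 4
W <- matrix(runif(n * n), n, n); diag(W) <- 0
W <- W / rowSums(W)                       # row-normalize the network weights
lambda <- 0.5                             # |lambda| < 1 for stability
M <- solve(diag(n) - lambda * W)          # exact multiplier matrix
M_series <- diag(n); Wk <- diag(n)
for (k in 1:30) { Wk <- Wk %*% W; M_series <- M_series + lambda^k * Wk }
max(abs(M - M_series))                    # truncation error is negligible
# Response of all banks to a one-s.d. shock at bank 1, and the average multiplier
sigma1 <- 0.01
M %*% c(sigma1, rep(0, n - 1))            # direct plus indirect effects of the shock
sum(M %*% rep(1 / n, n))                  # equals 1/(1 - lambda) for row-normalized W
```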

    3.2.2 Measures of Capital and Risk

    Given the regulatory capital requirements associated with Basel I, II, and III, capital ratios are measured in three ways: the Tier 1 risk-based ratio, the total risk-based ratio, and the Tier 1 leverage ratio. The Tier 1 risk-based capital ratio is the ratio of core capital to risk-weighted assets, where core capital consists largely of common stock and disclosed reserves or retained earnings. Tier 2 capital includes revaluation reserves, hybrid capital instruments, subordinated term debt, general loan-loss reserves, and undisclosed reserves. The total risk-based ratio is the ratio of Tier 1 plus Tier 2 capital to risk-weighted assets. The Tier 1 leverage ratio is the ratio of Tier 1 capital to total assets. The higher the ratio, the higher the capital adequacy.

    The literature suggests a number of alternatives for measuring bank risk. The most popular measures are the ratio of risk-weighted assets to total assets (RWA) and the ratio of nonperforming loans to total loans (NPL). The ratio of risk-weighted assets is the regulatory measure of bank portfolio risk and was used by Shrieves and Dahl (1992), Jacques and Nigro (1997), Rime (2001), Aggarwal and Jacques (2001), Stolz et al. (2003), and many others. The standardized approach to calculating risk-weighted assets involves multiplying the amount of an asset or exposure by the standardized risk weight associated with that type of asset or exposure. Typically, a high proportion of RWA indicates a higher share of riskier assets. Since its inception, the risk-weighting methodology has been criticized because it can be manipulated (for example, via securitization). NPL therefore is used as a complementary risk measure because it might contain information on risk differences between banks that is not captured by RWA. Nonperforming loans are measured as loans past due 90 days or more plus nonaccrual loans and reflect the ex post outcome of lending decisions. Higher values of the NPL ratio indicate that banks took higher lending risks ex ante and, as a result, have accumulated more bad loans ex post.
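    Under assumed, illustrative column names (not the chapter's actual variable definitions), these capital and risk measures could be computed as follows.

```r
# Sketch: capital adequacy and portfolio risk measures from bank-level data.
set.seed(1)
banks <- data.frame(
  tier1_capital        = runif(6, 8, 15),
  tier2_capital        = runif(6, 1, 4),
  risk_weighted_assets = runif(6, 80, 120),
  total_assets         = runif(6, 120, 200),
  loans_past_due_90    = runif(6, 0.5, 2),
  nonaccrual_loans     = runif(6, 0.2, 1),
  total_loans          = runif(6, 60, 110)
)
banks$tier1_rb_ratio <- banks$tier1_capital / banks$risk_weighted_assets
banks$total_rb_ratio <- (banks$tier1_capital + banks$tier2_capital) / banks$risk_weighted_assets
banks$tier1_leverage <- banks$tier1_capital / banks$total_assets
banks$rwa_ratio      <- banks$risk_weighted_assets / banks$total_assets
banks$npl_ratio      <- (banks$loans_past_due_90 + banks$nonaccrual_loans) / banks$total_loans
```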

    3.2.3 Variables Affecting Changes in Capital, Risk and Efficiency

    The target capital ratio and risk level are not observable and typically are modeled as depending on a set of observable bank-specific variables, as we do in our analysis. Loan loss provisions (LLPs) as a percentage of assets are included as a proxy for asset quality. A higher level of LLPs indicates an expectation of more trouble in the bank's portfolio and a resulting greater need for capital, and therefore might capture ex ante credit risk or expected losses. The loan-to-deposit ratio (LTD) is used commonly to assess a bank's liquidity. If the ratio is too high, the bank might not have enough liquidity to cover unforeseen funding requirements; conversely, if the ratio is too low, the bank might not be earning as much as it otherwise could. Size likely will affect a bank's capital ratios, efficiency, and level of portfolio risk, because larger banks tend to have larger investment opportunity sets and easier access to capital markets. For these reasons, they have been found to hold lower capital ratios than their smaller counterparts (Aggarwal & Jacques, 2001). We include the natural log of total assets as the proxy for bank size. Bank profitability is expected to have a positive effect on bank capital if the bank prefers to increase capital through retained earnings. Profitability is measured by return on assets (ROA) and return on equity (ROE).

    Macroeconomic shocks, such as a recession and falling housing prices, also can affect capital ratios and portfolios of banks. To capture the effect of common macroeconomic shocks that might have affected capital, efficiency, and risk during the period of study, the annual growth rate of real US GDP and Case-Shiller Home Price Index are included as controls. Crisis is a dummy variable that takes the value of 1 if the year is 2007, 2008, or 2009.

    The regulatory pressure variable describes the behavior of banks close to or below the regulatory minimum capital requirement. Capital buffer theory predicts that an institution approaching the regulatory minimum capital ratio might have incentives to boost capital and reduce risk to avoid the regulatory cost triggered by a violation of the capital requirement. We compute the capital buffer as the difference between the total risk-weighted capital ratio and the regulatory minimum of 8%. Consistent with previous work, we use a dummy variable REG to signify the degree of regulatory pressure that a bank is under. Because most banks hold a positive capital buffer, we use the 10th percentile of the capital buffer over all observations as the cutoff point. The dummy REG is set equal to 1 if the bank's capital buffer is less than the cutoff value, and zero otherwise. To test the previous predictions, we interact the dummy REG with variables of interest. For example, in order to capture differences in the speeds of adjustment of low- and high-buffer banks, we interact REG with the lagged dependent variables $Cap_{t-1}$ and $Risk_{t-1}$. To assess differences in short-term adjustments of capital and risk that depend on the degree of capitalization, we interact the dummy REG with $\Delta Risk$ and $\Delta Cap$ in the capital and risk equations, respectively. A summary of the variable descriptions is presented in Table C.2 in the
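    A minimal sketch of the capital buffer and the regulatory pressure dummy REG, continuing the illustrative `banks` data frame from the previous sketch, is given below; the column names remain assumptions.

```r
# Sketch: capital buffer and regulatory pressure dummy REG.
banks$buffer <- banks$total_rb_ratio - 0.08                 # buffer over the 8% minimum
cutoff       <- quantile(banks$buffer, probs = 0.10)        # 10th percentile cutoff
banks$REG    <- as.integer(banks$buffer < cutoff)           # 1 if buffer below the cutoff
# In a real panel, the interaction terms described in the text would then be formed as
# REG * lagged capital, REG * lagged risk, REG * change in risk, and REG * change in capital.
```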
