Study Shows Bank Risk-Assessment Tool Not Responding Adequately to Market Fluctuations
A new study from North Carolina State University indicates that regulators need to do more to ensure that banks compute their Value-at-Risk (VaR) in a way that adequately reflects fluctuations in financial markets. The study finds that the tests regulators use do not detect when VaR models fail to account for significant swings in the market. This matters because VaRs are key risk-assessment tools that financial institutions use to determine how much capital they need to keep on hand to cover potential losses.
“Failing to modify the VaR to reflect market fluctuations is important,” study co-author Dr. Denis Pelletier says, “because it could lead to a bank exhausting its on-hand cash reserves.” Pelletier, an assistant professor of economics at NC State, says, “Problems can come up if banks miscalculate their VaR and have insufficient funds on hand to cover their losses.”
A VaR is a way to measure the risk exposure of a company’s portfolio. Economists estimate the range of potential future losses and attach a statistical probability to those losses; for example, there may be a 10 percent chance that a company could lose $1 million. The VaR is generally defined as the loss level that a portfolio has only a one percent chance of exceeding.
In other words, the VaR is not quite the worst-case scenario, but it is close. The smaller a company’s VaR, the less risk its portfolio is exposed to. If a company’s portfolio is valued at $1 billion, for example, a VaR of $15 million indicates significantly less risk than a VaR of $25 million.
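The study evaluates VaR forecasts rather than prescribing how banks should produce them, but the definition above can be made concrete. The short Python sketch below uses hypothetical figures and the common historical-simulation approach (not any method from the study) to compute a 99 percent one-day VaR as the loss that past daily profit-and-loss exceeded only one percent of the time.

    import numpy as np

    def historical_var(pnl, level=0.01):
        """One-day VaR by historical simulation.

        pnl   : past daily profit-and-loss values (losses are negative)
        level : tail probability, e.g. 0.01 for a 99% VaR
        Returns the VaR as a positive dollar amount.
        """
        pnl = np.asarray(pnl, dtype=float)
        # The 1st percentile of the P/L distribution is the cutoff loss;
        # report it as a positive number.
        return -np.percentile(pnl, 100 * level)

    # Example with simulated (hypothetical) daily P/L for a large portfolio
    rng = np.random.default_rng(0)
    daily_pnl = rng.normal(loc=0.0, scale=6e6, size=500)
    var_99 = historical_var(daily_pnl, level=0.01)
    print(f"99% one-day VaR: ${var_99 / 1e6:.1f} million")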
The NC State study indicates that regulators could use additional tests to detect when the models banks rely on fail to accurately assess the statistical probability of losses in financial markets. The good news, Pelletier says, is that the models banks use tend to be overly conservative, meaning they rarely lose more than their VaR. The bad news is that those models do not adjust the VaR quickly when the market is in turmoil, so when banks do lose more than their VaR (a “violation”), the violations tend to come several times within a short period.
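The paper compares formal statistical backtests (the abstract below mentions duration-based tests and the conditional autoregressive VaR test of Engle and Manganelli). Purely as an illustration of the clustering problem, and not the authors’ procedure, the sketch below applies a standard Markov-style independence check to a 0/1 “hit” series marking the days on which losses exceeded the VaR; a small p-value suggests that violations arrive in clusters rather than independently.

    import numpy as np
    from scipy.stats import chi2

    def independence_test(hits):
        """Markov-style test for clustering of VaR violations.

        hits : 0/1 sequence, 1 on days the loss exceeded the VaR.
        Returns the likelihood-ratio statistic and its p-value.
        """
        hits = np.asarray(hits, dtype=int)
        pairs = list(zip(hits[:-1], hits[1:]))
        n00 = sum(1 for a, b in pairs if a == 0 and b == 0)
        n01 = sum(1 for a, b in pairs if a == 0 and b == 1)
        n10 = sum(1 for a, b in pairs if a == 1 and b == 0)
        n11 = sum(1 for a, b in pairs if a == 1 and b == 1)

        pi01 = n01 / max(n00 + n01, 1)          # P(violation | no violation yesterday)
        pi11 = n11 / max(n10 + n11, 1)          # P(violation | violation yesterday)
        pi = (n01 + n11) / max(len(pairs), 1)   # unconditional violation rate

        def loglik(p, zeros, ones):
            # Bernoulli log-likelihood, treating 0 * log(0) as 0
            ll = 0.0
            if zeros:
                ll += zeros * np.log(1.0 - p)
            if ones:
                ll += ones * np.log(p)
            return ll

        ll_null = loglik(pi, n00 + n10, n01 + n11)
        ll_alt = loglik(pi01, n00, n01) + loglik(pi11, n10, n11)
        lr = -2.0 * (ll_null - ll_alt)
        return lr, chi2.sf(lr, df=1)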
This could have serious consequences, Pelletier explains. “For example, if a bank has a VaR of $100 million it would keep at least $300 million in reserve, because banks are typically required to keep three to five times the VaR on hand in cash as a capital reserve. So it could afford a bad day – say, $150 million in losses. However, it couldn’t afford several really bad days in a row without having to sell illiquid assets, putting the bank further in distress.”
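To make the arithmetic in that example explicit, here is a minimal sketch (a hypothetical helper, not taken from the study) that checks whether a cash reserve set at a multiple of the VaR survives a run of daily losses.

    def reserve_shortfall(var, losses, multiplier=3.0):
        """Track a cash reserve against a run of daily losses.

        var        : one-day VaR (positive dollar amount)
        losses     : daily losses (positive numbers), in order
        multiplier : reserve held as a multiple of VaR (the release cites 3 to 5)
        Returns the day on which cumulative losses exhaust the reserve,
        or None if the reserve survives the whole run.
        """
        reserve = multiplier * var
        total = 0.0
        for day, loss in enumerate(losses, start=1):
            total += loss
            if total > reserve:
                return day
        return None

    # Pelletier's example: VaR of $100 million, $300 million reserve.
    print(reserve_shortfall(100e6, [150e6]))                 # None: one bad day is absorbed
    print(reserve_shortfall(100e6, [150e6, 120e6, 110e6]))   # 3: several bad days exhaust it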
Various regulatory authorities, such as the Federal Deposit Insurance Corporation, require banks to calculate their VaR on a daily basis. Pelletier says the new study indicates that these authorities need to do more to ensure that banks use dynamic models and do not face multiple VaR violations in a row.
The study, “Evaluating Value-at-Risk Models with Desk-Level Data,” was co-authored by Pelletier, Jeremy Berkowitz of the University of Houston and Peter Christoffersen of McGill University. The study will be published in a forthcoming special issue of Management Science on interfaces of operations and finance.
– shipman –
Note to editors: The study abstract follows.
“Evaluating Value-at-Risk Models with Desk-Level Data”
Authors: Denis Pelletier, North Carolina State University; Jeremy Berkowitz, University of Houston; Peter Christoffersen, McGill University.
Published: Forthcoming, Management Science special issue on interfaces of operations and finance
Abstract: We present new evidence on disaggregated profit and loss (P/L) and value-at-risk (VaR) forecasts obtained from a large international commercial bank. Our data set includes the actual daily P/L generated by four separate business lines within the bank. All four business lines are involved in securities trading and each is observed daily for a period of at least two years. Given this unique data set, we provide an integrated, unifying framework for assessing the accuracy of VaR forecasts. We use a comprehensive Monte Carlo study to assess which of these many tests have the best finite-sample size and power properties. Our desk-level data set provides important guidance for choosing realistic P/L-generating processes in the Monte Carlo comparison of the various tests. The conditional autoregressive value-at-risk test of Engle and Manganelli (2004) performs best overall, but duration-based tests also perform well in many cases.