Why bank risk models failed

Avinash Persaud, Intelligence Capital
4 April 2008

Financial supervision arguably failed to prevent today’s turmoil because it relied upon the very price-sensitive risk models that produced the crisis. This article calls for an ambitious departure from trends in modern financial regulation to correct the problem.

Greenspan and others have questioned why risk models, which are at the centre of financial supervision, failed to avoid or mitigate today’s financial turmoil. There are two answers to this, one technical and the other philosophical. Neither is complex, but many regulators and central bankers chose to ignore them both.

The technical explanation is that the market-sensitive risk models used by thousands of market participants work on the assumption that each user is the only person using them. This was not a bad approximation in 1952, when the intellectual underpinnings of these models were being developed at the Rand Corporation by Harry Markowitz and George Dantzig. This was a time of capital controls between countries, the segmentation of domestic financial markets and – to get the historical frame right – it was the time of the Morris Minor, with its top speed of 59 mph.

In today’s flat world, market participants from Argentina to New Zealand have the same data on the risk, returns and correlation of financial instruments, and use standard optimization models, which throw up the same portfolios to be favoured and those not to be. Market participants do not stare helplessly at these results. They move into the favoured markets and out of the unfavoured. Enormous cross-border capital flows are unleashed. But under the weight of the herd, favoured instruments cannot remain undervalued, uncorrelated and low-risk. They are transformed into the precise opposite.
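The herding mechanism can be illustrated with a minimal mean-variance sketch. All the numbers below are invented for illustration; the point is only that anyone who feeds the same estimates of returns, volatilities and correlations into the standard Markowitz optimizer recovers the same weights, so every user tilts towards the same ‘favoured’ assets at once.

```python
import numpy as np

# Hypothetical shared inputs: every participant downloads the same
# expected excess returns and covariance matrix for three asset classes.
mu = np.array([0.04, 0.06, 0.09])            # expected excess returns
vol = np.array([0.05, 0.12, 0.20])           # volatilities
corr = np.array([[1.0, 0.2, 0.1],
                 [0.2, 1.0, 0.3],
                 [0.1, 0.3, 1.0]])
cov = corr * np.outer(vol, vol)              # covariance matrix

# Standard mean-variance result: the optimal risky portfolio is
# proportional to inv(cov) @ mu -- regardless of who runs the model.
raw = np.linalg.solve(cov, mu)
weights = raw / raw.sum()                    # normalise to sum to one

print(np.round(weights, 3))
# Every user of the same inputs gets the same weights, so the asset
# with the best estimated risk/return trade-off is bought by the
# entire herd simultaneously.
```

In this example the first asset, with the best estimated risk/return trade-off, attracts roughly four-fifths of the portfolio for every single user of the model, which is exactly the uniform tilt the article describes.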
When a market participant’s risk model detects a rise in risk in his or her portfolio, perhaps because of some random rise in volatility, and he or she tries to reduce exposure, many others are trying to do the same thing at the same time with the same assets. A vicious cycle ensues as prices fall vertically, prompting further selling. Liquidity vanishes down a black hole. The degree to which this occurs has less to do with the precise financial instruments and more with the depth of diversity of investors’ behaviour. Paradoxically, the observation of areas of safety in risk models creates risks, and the observation of risk creates safety. Quantum physicists will note a parallel with Heisenberg’s uncertainty principle.
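The vicious cycle can be caricatured in a few lines of simulation. Everything here is an invented toy (the risk limit, the price-impact and volatility-feedback coefficients are arbitrary): each participant targets an exposure of risk limit divided by measured volatility, so a volatility shock forces everyone to shed the same assets at once, the joint selling moves the price, and the price move feeds back into measured volatility.

```python
# Toy sketch of the sell-off spiral (all parameters are invented).
risk_limit = 100.0     # value-at-risk style limit per participant
vol = 0.10             # measured volatility before the shock
price = 100.0
impact = 0.05          # price impact per unit of aggregate net selling
feedback = 0.0001      # sensitivity of measured volatility to forced sales

position = risk_limit / vol   # initial holding consistent with the limit

vol *= 1.5  # a random volatility shock hits every identical model at once
for step in range(5):
    target = risk_limit / vol        # the model now demands a smaller book
    selling = position - target      # the whole herd sells the excess together
    price -= impact * selling        # joint selling drives the price down...
    vol += feedback * selling        # ...which pushes measured volatility up
    position = target
    print(f"step {step}: price={price:6.2f} vol={vol:.3f} sold={selling:6.1f}")
```

In this toy run the initial shock alone would justify a single round of selling, but each round of selling raises measured volatility and so justifies the next: the spiral sells roughly half again as much as the shock itself required, and the price ends well below where the shock alone would have left it.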
Policy-makers cannot claim to be surprised by all of this. The observation that market-sensitive risk models, increasingly integrated into financial supervision in a prescriptive manner, were going to send the herd off the cliff edge was made soon after the last round of crises.1 Many policy officials in charge today responded then that these warnings were too extreme to be considered realistic. The reliance on risk models to protect us from crisis was always foolhardy.

In terms of solutions, there is only space to observe that if we rely on market prices in our risk models and in value accounting, we must do so on the understanding that in rowdy times central banks will have to become buyers of last resort of distressed assets to avoid systemic collapse. This is the approach upon which we have stumbled. Central bankers now consider mortgage-backed securities as collateral for their loans to banks. But the asymmetry of being a buyer of last resort without also being a seller of last resort during the unsustainable boom will only condemn us to cycles of instability.

The alternative is to try to avoid booms and crashes through regulatory and fiscal mechanisms which counter the incentives that induce traders and investors to place highly leveraged bets on what the markets currently believe is a ‘sure thing’. This sounds fraught with regulatory risks, and policy-makers are not as ambitious as they once were. We no longer walk on the moon. Of course, President Kennedy’s 1961 ambition to get to the moon within the decade was partly driven by a fear of the Soviets getting there first. Regulatory ambition should be set now, while the fear of the current crisis is fresh and not when the crisis is over and the seat belts are working again.
1 Avinash Persaud (2000), ‘Sending the Herd off the Cliff Edge: the Disturbing Interaction between Herding and Market-sensitive Risk Management Models’, Jacques de Larosière Prize Essay, Institute of International Finance, Washington, DC.
Blame the models

Jon Danielsson, London School of Economics
8 May 2008

In response to financial turmoil, supervisors are demanding more risk calculations. But model-driven mispricing produced the crisis, and risk models do not perform during crisis conditions. The belief that a really complicated statistical model must be right is merely foolish sophistication.

A well-known US economist, drafted during the second world war to work in the US Army meteorological service in England, got a phone call from a general in May 1944 asking for the weather forecast for Normandy in early June. The economist replied that it was impossible to forecast weather that far into the future. The general wholeheartedly agreed but nevertheless needed the number now for planning purposes.

Similar logic lies at the heart of the current crisis. Statistical modelling increasingly drives decision-making in the financial system, while at the same time significant questions remain about model reliability and whether market participants trust these models. If we ask practitioners, regulators or academics what they think of the quality of the statistical models underpinning pricing and risk analysis, their response is frequently negative. At the same time, many of these same individuals have no qualms about an ever-increasing use of models, not only for internal risk control but especially for the assessment of systemic risk and therefore the regulation of financial institutions.1 To have numbers seems to be more important than whether the numbers are reliable. This is a paradox. How can we simultaneously mistrust models and advocate their use?

What’s in a rating?

Understanding this paradox helps understand both how the crisis came about and the frequently inappropriate responses to the crisis. At the heart of the crisis is the quality of ratings on SIVs. These ratings are generated by highly sophisticated statistical models. Subprime mortgages have generated most headlines. That is of course simplistic.
A single asset class worth only $400 billion should not be able to cause such turmoil.

1 For example, see Nassim Taleb (2007), Fooled by Randomness: the Hidden Role of Chance in Life and the Markets, Harmondsworth: Penguin Books.
And indeed, the problem lies elsewhere, with how financial institutions packaged subprime loans into SIVs and conduits and the low quality of their ratings.

The main problem with the ratings of SIVs was the incorrect risk assessment provided by rating agencies, who underestimated the default correlation in mortgages by assuming that mortgage defaults are fairly independent events. Of course, at the height of the business cycle that may be true, but even a cursory glance at history reveals that mortgage defaults become highly correlated in downturns. Unfortunately, the data samples used to rate SIVs often were not long enough to include a recession.

Ultimately this implies that the quality of SIV ratings left something to be desired. However, the rating agencies have an 80-year history of evaluating corporate obligations, which does give us a benchmark to assess the ratings quality. Unfortunately, the quality of SIV ratings differs from the quality of ratings of regular corporations. A AAA for a SIV is not the same as a AAA for Microsoft.

And the market was not fooled. After all, why would a AAA-rated SIV earn 200 basis points above a AAA-rated corporate bond? One cannot escape the feeling that many players understood what was going on but happily went along. The pension fund manager buying such SIVs may have been incompetent, but he or she was more likely simply bypassing restrictions on buying high-risk assets.

Foolish sophistication

Underpinning this whole process is a view that sophistication implies quality: a really complicated statistical model must be right. That might be true if the laws of physics were akin to the statistical laws of finance. However, finance is not physics; it is more complex (Danielsson, 2002). In physics the phenomena being measured do not generally change with measurement. In finance that is not true. Financial modelling changes the statistical laws governing the financial system in real time.
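The independence error in the ratings is easy to make concrete with a small simulation. The numbers are assumed for illustration (1,000 mortgages, a 2% annual default probability, an asset correlation of 0.15), and the correlated case uses a one-factor Gaussian latent-variable model of the general kind that portfolio credit models employ: a common ‘state of the economy’ factor shifts every loan’s conditional default probability at once.

```python
import numpy as np
from statistics import NormalDist

# Illustrative pool: 1,000 mortgages, 2% annual default probability,
# and an assumed asset correlation of 0.15 through one common factor.
rng = np.random.default_rng(seed=1)
nd = NormalDist()
n_loans, p_default, rho = 1000, 0.02, 0.15
n_sims = 10_000
c = nd.inv_cdf(p_default)          # latent-variable default threshold

# Case 1: defaults are independent coin flips, as the ratings assumed.
indep_rate = rng.binomial(n_loans, p_default, size=n_sims) / n_loans

# Case 2: a common factor z shifts every loan's conditional default
# probability, so defaults cluster when the economy turns down.
z = rng.standard_normal(n_sims)
cond_p = np.array([nd.cdf((c - np.sqrt(rho) * zi) / np.sqrt(1 - rho))
                   for zi in z])
corr_rate = rng.binomial(n_loans, cond_p) / n_loans

bad = 0.05  # a year in which more than 5% of the pool defaults
print("P(>5% defaults), independent:", (indep_rate > bad).mean())
print("P(>5% defaults), correlated: ", (corr_rate > bad).mean())
```

Under independence, a 5% pool default rate would require the default count to sit nearly seven standard deviations above its mean, so the simulated probability is essentially zero; with even this modest correlation the same event occurs in several per cent of simulated years. That is exactly the clustering that a short, boom-era data sample hides.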
The reason is that market participants react to measurements and therefore change the underlying statistical processes. The modellers are always playing catch-up with each other. This becomes especially pronounced when the financial system gets into a crisis. This is a phenomenon we call endogenous risk, which emphasizes the importance of interactions between institutions in determining market outcomes. Day to day, when everything is calm, we can ignore endogenous risk. In crisis, we cannot. And that is when the models fail.

This does not mean that models are without merits. On the contrary, they have a valuable use in the internal risk management processes of financial institutions, where the focus is on relatively frequent small events. The reliability of models designed for such purposes is readily assessed by a technique called backtesting, which is fundamental to the risk management process and is a key component in the Basel Accords.

Most models used to assess the probability of small frequent events can also be used to forecast the probability of large infrequent events. However, such extrapolation is inappropriate. Not only are the models calibrated and tested with particular events in mind, but it is impossible to tailor model quality to large infrequent events or to assess the quality of such forecasts.
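The limits of backtesting can be put in rough numbers. The sketch below assumes a 250-day trading year and a rule of thumb that about ten observed exceedances are needed before the exception rate says anything about the model; both are conventional but assumed here. A 99% one-day VaR should be exceeded about 2.5 times a year, so a few years of data can confront the model with evidence. A 99.9% annual loss estimate is exceeded once per thousand years and can never be confronted with data.

```python
# How long must we observe a model before backtesting says anything?
def years_to_backtest(confidence, horizon_days, target_exceedances=10,
                      trading_days=250):
    """Years of data needed to expect `target_exceedances` breaches of a
    VaR stated at `confidence` over a `horizon_days` horizon."""
    obs_per_year = trading_days / horizon_days
    exceed_prob = 1.0 - confidence      # chance of a breach per observation
    return target_exceedances / (exceed_prob * obs_per_year)

# Daily 99% VaR (typical internal risk management): testable in a few years.
print(round(years_to_backtest(0.99, horizon_days=1), 1))     # -> 4.0

# Annual 99.9% loss (the 'once in a thousand years' number): untestable.
print(round(years_to_backtest(0.999, horizon_days=250), 1))  # -> 10000.0
```

The same arithmetic applies to tail value-at-risk and expected shortfall at extreme confidence levels: the numbers can always be produced, but no realistic sample is long enough to verify them.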
Taken to the extreme, I have seen banks required to calculate the risk of annual losses once every thousand years, the so-called 99.9% annual losses. However, the fact that we can get such numbers does not mean the numbers mean anything. The problem is that we cannot backtest at such extreme frequencies. Similar arguments apply to many other calculations, such as expected shortfall or tail value-at-risk. Fundamental to the scientific process is verification, in our case backtesting. Neither the 99.9% models nor most tail value-at-risk models can be backtested, and therefore cannot be considered scientific.

Demanding numbers

We do, however, see increasing demands from supervisors for exactly the calculation of such numbers as a response to the crisis. Of course the underlying motivation is the worthwhile goal of trying to quantify financial stability and systemic risk. However, exploiting the banks’ internal models for this purpose is not the right way to do it. The internal models were not designed with this in mind, and to do this calculation is a drain on the banks’ risk management resources. It is the lazy way out. If we do not understand how the system works, generating numbers may give us comfort. But the numbers do not imply understanding.

Indeed, the current crisis took everybody by surprise in spite of all the sophisticated models, all the stress testing and all the numbers. I think the primary lesson from the crisis is that the financial institutions that had a good handle on liquidity risk management came out best. It was management and internal processes that mattered – not model quality. Indeed, the problem created by the conduits cannot be solved by models, but the problem could have been prevented by better management and especially better regulations.
With these facts increasingly understood, it is incomprehensible to me why supervisors are increasingly advocating the use of models in assessing the risk of individual institutions and financial stability. If model-driven mispricing enabled the crisis to happen, what makes us believe that future models will be any better?

Therefore one of the most important lessons from the crisis has been the exposure of the unreliability of models and the importance of management. The view frequently expressed by supervisors that the solution to a problem like the subprime crisis is Basel II is not really true. The reason is that Basel II is based on modelling. What is missing is for the supervisors and the central banks to understand the products being traded in the markets and have an idea of the magnitude, potential for systemic risk and interactions between institutions and endogenous risk, coupled with a willingness to act when necessary. In this crisis the key problem lies with bank supervision and central banking, as well as with the banks themselves.

Reference

Danielsson, Jon (2002), ‘The Emperor has No Clothes: Limits to Risk Modelling’, Journal of Banking and Finance 26 (7), pp. 1273–96.