Risk Calculation: Scenario Analysis and Value at Risk Analysis

As explained in our last post on risk bearing capacity, risk may be defined as the probability of occurrence of a negative event. Risk calculation, on the other hand, refers to a procedure designed to measure the financial losses that the institution in question would incur should the unfavourable event occur. For this purpose, two methods can be used: scenario analysis and value-at-risk (VaR) analysis.

Scenario Analysis

This method uses available historical market and/or internal data to construct scenarios for the possible development of default rates. Like VaR, which is discussed in the next section, scenario analysis considers two assumed situations: the normal case scenario, in which losses are assumed to develop at a rate equal to the average of a given historical period under review, and the worst case scenario, which assumes extreme losses. The two scenarios are used together to determine the range of potential losses in portfolio value. Incurred losses may depend on multiple factors, such as default risk and collateral risk. The normal case scenario then calculates the minimum potential loss by assuming a default rate and collateral depreciation equal to the averages of the preceding year, while the worst case scenario calculates losses assuming default and collateral depreciation rates equal to the maxima of that year. Because it is based on a small number of historical events, scenario analysis has limited explanatory power. Financial institutions that rely on it for risk controlling usually have to accept less precise results than they would obtain with the VaR approach.
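As a rough illustration, the normal and worst case calculations described above can be sketched in Python. All figures (exposure, default rates, collateral depreciation, collateral share) are hypothetical placeholders, not data from any real portfolio:

```python
# Illustrative scenario analysis for a credit portfolio.
# All inputs below are hypothetical; in practice they would come
# from the institution's historical default and collateral data.

exposure = 100_000_000  # total portfolio exposure

# Monthly default rates and collateral depreciation rates
# observed over the preceding year (hypothetical values).
default_rates = [0.010, 0.012, 0.011, 0.015, 0.013, 0.009,
                 0.014, 0.011, 0.012, 0.016, 0.010, 0.013]
collateral_depreciation = [0.05, 0.06, 0.04, 0.08, 0.05, 0.04,
                           0.07, 0.05, 0.06, 0.09, 0.05, 0.06]

def scenario_loss(default_rate, depreciation, exposure, collateral_share=0.5):
    """Loss = defaulted exposure minus the recoverable collateral value."""
    defaulted = exposure * default_rate
    recovery = defaulted * collateral_share * (1 - depreciation)
    return defaulted - recovery

# Normal case: averages of the period under review.
avg_dr = sum(default_rates) / len(default_rates)
avg_dep = sum(collateral_depreciation) / len(collateral_depreciation)
normal_case = scenario_loss(avg_dr, avg_dep, exposure)

# Worst case: maxima of the period under review.
worst_case = scenario_loss(max(default_rates), max(collateral_depreciation),
                           exposure)

print(f"Normal case loss: {normal_case:,.0f}")
print(f"Worst case loss:  {worst_case:,.0f}")
```

The two results bracket the range of potential losses; note that the whole range rests on just twelve historical observations, which is the limited explanatory power mentioned above.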

Value at Risk Analysis

VaR states the maximum loss that will not be exceeded with a certain probability (the confidence level) over a given horizon (the holding period). To determine the value at risk, a confidence level is chosen that reflects the probability that the calculated maximum loss will not be exceeded within the holding period. The confidence level usually lies between 95% and 99.95%, meaning that higher losses remain possible, but with a probability of occurrence between 5% and 0.05%. The holding period states the horizon over which the losses can occur and is derived from the liquidity of the assets observed. To calculate the credit VaR, the distribution of potential losses in the credit portfolio must be determined. For this purpose, assumptions are made about the future development of the default rate and the exposure at default (the credit amount outstanding at the time of default, minus proceeds from collateral and the debtor's estate). Value at risk states the loss amount within the chosen confidence level, but it offers no prediction of the probability distribution of losses beyond that level. Moreover, it usually does not take into account extreme market movements such as those that would occur in an economic crisis with extremely high default rates. For this reason, many have proposed that VaR analysis be complemented by stress tests, which calculate value fluctuations under the assumption of extreme market movements. The performance of stress tests is considered a prerequisite for the approval of an IRB approach under Basel II/III.
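A minimal Monte Carlo sketch of a credit VaR calculation, assuming a homogeneous hypothetical portfolio with independent defaults (real models would use correlated defaults, varying exposures, and estimated recovery rates):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical portfolio parameters.
n_loans = 500
exposure_per_loan = 200_000   # exposure at default per loan, net of collateral
pd_per_loan = 0.02            # assumed one-year default probability
confidence = 0.99             # chosen confidence level
n_sims = 10_000               # number of simulated one-year loss scenarios

# Simulate the portfolio loss distribution: in each scenario,
# each loan defaults independently with probability pd_per_loan.
losses = []
for _ in range(n_sims):
    defaults = sum(1 for _ in range(n_loans) if random.random() < pd_per_loan)
    losses.append(defaults * exposure_per_loan)

# The credit VaR is the loss at the chosen quantile of the
# simulated distribution; losses beyond it occur with
# probability 1 - confidence.
losses.sort()
var = losses[int(confidence * n_sims) - 1]
expected_loss = sum(losses) / n_sims

print(f"Expected loss: {expected_loss:,.0f}")
print(f"{confidence:.0%} credit VaR: {var:,.0f}")
```

A stress test in this framework would simply rerun the simulation with crisis-level assumptions, for example a sharply higher default probability, to probe the tail that the VaR figure itself does not describe.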

Although shifting to a value-at-risk process seems advantageous, it is essential for institutions to determine what additional cost would be incurred in implementing VaR, and what additional benefit would be derived from the more effective management that such an implementation enables. Any transition from scenario analysis to VaR should be supported by this cost-benefit analysis.
