Exposure Simulation Part III / CVA for Bermudan Swaptions

In this post we are going to simulate the exposures of a Bermudan swaption (with physical settlement) using a backward induction (aka American Monte Carlo) method. Such exposure simulations are often needed in counterparty credit risk management. We will use these exposures to calculate the Credit Value Adjustment (CVA). One could easily extend this notebook to calculate the PFE or other xVAs. The implementation is based on the paper “Backward Induction for Future Values” by Dr. Antonov et al.

We assume that the value of the derivative and the default probability of the counterparty are independent (no wrong-way risk). In our example the netting set consists of only one Bermudan swaption (but it can easily be extended). For simplicity we assume a flat yield term structure.

In our example we consider a 5Y Euribor 6M payer swap with a fixed rate of 3 percent. The Bermudan swaption allows us to enter this swap on each fixed rate payment date.

Just as a short reminder, the unilateral CVA of a derivative is given by:

CVA = (1-R) \int_0^Tdf(t)EE(t)dPD(t),

with recovery rate R, portfolio maturity T (the latest maturity of all deals in the netting set), discount factor df(t) at time t, the expected exposure EE(t) of the netting set at time t and the default probability PD(t).

We can approximate the integral by the discrete sum

CVA = (1-R) \sum_{i=1}^n df(t_i)EE(t_i)(PD(t_{i-1})-PD(t_i)).

In one of my previous posts we calculated the CVA of a plain vanilla swap. We performed the following steps (a minimal toy sketch of these steps follows the list):

  1. Generate a time grid T
  2. Generate N paths of the underlying market values which influence the value of the derivative
  3. For each point in time calculate the positive expected exposure
  4. Approximate the integral
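
The following is a minimal toy sketch of these four steps in numpy; the Gaussian state variable, the linear mark-to-market profile and the flat discount and hazard rates are made up purely for illustration and have nothing to do with the calibrated model used in the notebook.

import numpy as np

# toy illustration of the four steps above (not the notebook code)
np.random.seed(42)
time_grid = np.linspace(0.0, 5.0, 61)                     # 1. time grid
N = 1000                                                   # 2. number of paths
dt = np.diff(time_grid, prepend=0.0)
# driftless Gaussian state variable as a stand-in for the simulated risk factor
x = np.cumsum(np.sqrt(dt) * np.random.randn(N, len(time_grid)), axis=1)
# made-up mark-to-market profile: linear in the state, decaying towards maturity
npv_paths = x * (time_grid[-1] - time_grid)
EE = np.mean(np.maximum(npv_paths, 0.0), axis=0)           # 3. positive expected exposure
# 4. approximate the CVA integral (flat 1 percent rate, 2 percent hazard rate, 40 percent recovery)
df = np.exp(-0.01 * time_grid)
PD = 1.0 - np.exp(-0.02 * time_grid)
CVA = (1.0 - 0.4) * np.sum(df[1:] * EE[1:] * np.diff(PD))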

As in the plain vanilla case we will use a short rate process and assume it is already calibrated.

The first two steps are exactly the same as in the plain vanilla case.

But how can we calculate the expected exposure of the swaption?

In the T-forward measure the expected exposure is given by:

EE(t) = P(0,T) E^T[\frac{1}{P(t,T)} \max(V(t), 0)]

with V(t) the future value of the Bermudan swaption at time t. The npv of a physically settled swaption can be negative at time t if the option has been exercised before time t and the underlying swap has a negative npv at time t.
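
A sketch of the corresponding Monte Carlo estimator: the expectation is replaced by an average over the simulated paths. The array names V, P_t_T and P0_T are hypothetical and not taken from the notebook.

import numpy as np

def expected_exposure(V, P_t_T, P0_T):
    # V     : (paths, times) array of simulated future values V(t_i)
    # P_t_T : (paths, times) array of zero bond prices P(t_i, T) along each path
    # P0_T  : the time-zero discount bond price P(0, T)
    # EE(t_i) is estimated as P(0,T) * mean over paths of max(V(t_i), 0) / P(t_i, T)
    return P0_T * np.mean(np.maximum(V, 0.0) / P_t_T, axis=0)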

For each simulated path we need to calculate the npv of the swaption V(t_i, x_i) conditional on the state x_i at time t_i.

In the previous post we saw a method to calculate the npv of a Bermudan swaption. But for that calculation we used a Monte Carlo simulation. This would again result in a nested Monte Carlo simulation, which is, for obvious reasons, not desirable.

If we have a look at the future value, we see that it is very much the same as the continuation value in the Bermudan swaption pricing problem.

But instead of calculating the continuation value only on the exercise dates, we now calculate it at each time in our grid. We use the same regression-based approximation.

We approximate the continuation value through some function of the current state of the stochastic process x_i:

V(t_i, x_i) \approx f(x_i).

A common choice for the function is of the form:

f(x) = a_0 + a_1 g_1(x)+ a_2 g_2(x) + \dots a_n g_n(x)

with g_i(x) a polynomial of degree i.

The coefficients of this function f are estimated by ordinary least squares.
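
A minimal sketch of this regression step, assuming the simulated states and the corresponding future values on one grid date are already available as numpy arrays (the function and variable names are hypothetical):

import numpy as np

def fit_continuation(states, future_values, degree=3):
    # design matrix with columns 1, x, x^2, ..., x^degree
    basis = np.vander(states, degree + 1, increasing=True)
    # ordinary least squares estimate of the coefficients a_0, ..., a_degree
    coeffs, *_ = np.linalg.lstsq(basis, future_values, rcond=None)
    return np.polynomial.Polynomial(coeffs)

# usage on one grid date: f = fit_continuation(x_i, y_i); cont_value_hat = f(x_i)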

We can almost reuse the code from the previous post, only a few extensions are needed.

We need to add more points in time to our time grid (we add a few more points after the maturity to get nicer plots):

date_grid = [today + ql.Period(i, ql.Months) for i in range(0,66)] + calldates + fixing_dates
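
One possible way to turn this date grid into a sorted, de-duplicated grid of year fractions (a sketch; the day counter is an assumption and the notebook may do this step differently):

import numpy as np

# convert the dates into year fractions and remove duplicates (np.unique also sorts)
day_counter = ql.Actual365Fixed()
time_grid = np.unique([day_counter.yearFraction(today, d) for d in date_grid])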

When we exercise the option we enter the underlying swap. Therefore we need to replace the swaption npvs with the underlying swap npvs for all points in time after the exercise time on each path:

if t in callTimes:
    # exercise if the exercise value exceeds the estimated continuation value
    cont_value = np.maximum(cont_value_hat, exercise_values)
    # on exercised paths the swaption turns into the underlying swap from this time on
    swaption_npvs[cont_value_hat < exercise_values, i:] = swap_npvs[cont_value_hat < exercise_values, i:].copy()

For our example we can observe the following simulated exposures:

[Figure: simulated exposure paths of the Bermudan swaption]

If we compare it with the exposures of the underlying swap, we can see that some exposure paths coincide. This is the case when the swaption has been exercised. In some of these cases we also observe negative npvs after exercising the option.

[Figure: simulated exposure paths of the underlying swap]

For the CVA calculation we need the positive expected exposure, which can be easily calculated:

swap_npvs[swap_npvs<0] = 0
swaption_npvs[swaption_npvs<0] = 0
EE_swaption = np.mean(swaption_npvs, axis=0)
EE_swap = np.mean(swap_npvs, axis=0)

[Figure: expected exposure of the Bermudan swaption and the underlying swap]

Given the default probability of the counterparty as a default term structure, we can now calculate the CVA for the Bermudan swaption:

# Calculation of the default probs
defaultProb_vec = np.vectorize(pd_curve.defaultProbability)
dPD = defaultProb_vec(time_grid[:-1], time_grid[1:])

# Calculation of the CVA
recovery = 0.4
CVA = (1-recovery) * np.sum(EE_swaption[1:] * dPD)

As usual the complete notebook is available here.

CVA Calculation with QuantLib and Python

Today I am going to present a way to calculate the credit value adjustment (CVA) for a netting set of plain vanilla interest rate swaps. This Monte Carlo method is based on the code example from my previous post about the expected exposure and PFE calculation, and the first steps will be exactly the same.

What is the credit value adjustment (CVA)?

The credit value adjustment is the difference between the risk-free price of a netting set and the price which takes the possibility of a default of the counterparty into account. A netting set is a portfolio of deals with one counterparty for which you have a netting agreement, which means you are allowed to offset positive against negative market values. A netting agreement will therefore reduce your exposure and hence the counterparty credit risk. If the time of default and the value of the portfolio are independent, then the CVA is given by

CVA = (1-R) \int_0^Tdf(t)EE(t)dPD(t),

with recovery rate R, portfolio maturity T (the latest maturity of all deals in the netting set), discount factor df(t) at time t, the expected exposure EE(t) at time t and the default probability PD(t).
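
Before we turn to the Monte Carlo approach, here is a toy illustration of the netting effect mentioned above (made-up numbers, not from the notebook):

import numpy as np

# made-up deal npvs in one netting set
deal_npvs = np.array([5.0, -3.0, 1.0])
exposure_with_netting = max(deal_npvs.sum(), 0.0)              # 3.0
exposure_without_netting = np.maximum(deal_npvs, 0.0).sum()    # 6.0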

The Monte Carlo approach

In my previous post we have seen a Monte Carlo method to estimate the expected exposure EE(t) at time t on a given time grid 0=t_0<\dots<t_n=T. We can reuse this estimator to approximate the integral by the discrete sum

CVA = (1-R) \sum_{i=1}^n df(t_i)EE(t_i)(PD(t_{i-1})-PD(t_i)).

As we see in the formula, we need two more ingredients to calculate the CVA: the discounted expected exposure and the default probabilities of the counterparty.

Calculate the discounted expected exposure

To convert the expected exposure at time t into its present value expressed in time-zero dollars, we only need to add a few more lines of code.

vec_discount_function = np.vectorize(t0_curve.discount)
discount_factors = vec_discount_function(time_grid)

We use today's yield curve to calculate the discount factor for each point on our grid. With the numpy function vectorize we generate a vectorized wrapper of the discount method of the QuantLib YieldTermStructure. This vectorized version can be applied to an array of ql.Dates or times instead of a scalar input. Basically it's equivalent to the following code snippet:

discount_factors = np.zeros(time_grid.shape)
for i in range(len(time_grid)):
	time = time_grid[i]
	discount_factors[i] = t0_curve.discount(time)

As the numpy documentation states, the np.vectorize function is provided primarily for convenience, not for performance.

After generating the market scenarios and pricing the netting set under each scenario, we calculate the discounted NPVs for each deal in the portfolio:

# Calculate the discounted npvs
discounted_cube = np.zeros(npv_cube.shape)
for i in range(npv_cube.shape[2]):
    discounted_cube[:,:,i] = npv_cube[:,:,i] * discount_factors
# Netting
discounted_npv = np.sum(discounted_cube, axis=2)

[Figure: discounted NPV paths of the netting set]

Using the discounted npv cube we can calculate the discounted expected exposure:

# Calculate the exposure and discounted exposure
E = portfolio_npv.copy()
dE = discounted_npv.copy()
E[E<0] = 0
EE = np.sum(E, axis=0)/N
dE[dE<0] = 0
dEE = np.sum(dE, axis=0)/N

[Figure: discounted exposure paths]

[Figure: discounted expected exposure]

The default probability of the counterparty

To derive the default probability one could either use market implied quotes (e.g. CDS spreads) or rating information (e.g. based on historical observations). We assume that the survival probability is given by sp(t)=\exp(-\int_0^t \lambda(s)\,ds) with a deterministic, piecewise constant hazard rate function \lambda(t). Given a grid of dates d_0<\dots<d_n and the corresponding backward-flat hazard rates \lambda_0,\dots,\lambda_n we can use the QuantLib HazardRateCurve to build a default curve.

# Setup Default Curve
pd_dates = [today + ql.Period(i, ql.Years) for i in range(11)]
hzrates = [0.02 * i for i in range(11)]
pd_curve = ql.HazardRateCurve(pd_dates, hzrates, ql.Actual365Fixed())
pd_curve.enableExtrapolation()

QuantLib provides a whole range of different DefaultTermStructure types. You can either bootstrap a default curve from CDS quotes or build an interpolated curve, as we do here, by combining one of the many interpolators (linear, backward flat, etc.) with one of the possible traits (hazard rate, default probability, default density).

[Figure: default curve]

With the default term structure we can calculate the probability of a default between the times t_i and t_{i+1} for all i in our time grid.

# Calculation of the default probs
defaultProb_vec = np.vectorize(pd_curve.defaultProbability)
dPD = defaultProb_vec(time_grid[:-1], time_grid[1:])

Again we use the numpy function vectorize to apply a scalar function to an array. The method defaultProbability takes two times as input, t and T. It returns the probability of default between t and T.
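
As a quick sanity check (illustrative arguments, not output from the notebook), the same quantity can also be obtained from the survival probabilities:

# probability of a default between year 1 and year 2
p_1_2 = pd_curve.defaultProbability(1.0, 2.0)
# equivalently via the survival probabilities
p_1_2_check = pd_curve.survivalProbability(1.0) - pd_curve.survivalProbability(2.0)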

Now we have all pieces together and the following code snippet gives us the CVA of our netting set:

# Calculation of the CVA
recovery = 0.4
CVA = (1-recovery) * np.sum(dEE[1:] * dPD)

You can download the code as an IPython (Jupyter) notebook from here or just clone my repository (IPythonscripts) on GitHub.

If you want to read more about QuantLib I would recommend having a look at the blog and book “Implementing QuantLib” by Luigi. Another fantastic blog, “Fooling around with QuantLib” by Peter, has a very good and detailed post about the Gsr model. Actually, Peter has implemented this model in C++ and contributed it to QuantLib.

I hope you have enjoyed reading my post and will have fun playing around with the notebook. In one of my following posts I will extend this simulation by adding a new product class to the netting set: European and Bermudan swaptions.

So long.