Pricing Bermudan Options in TensorFlow – Learning an optimal Early Exercise Strategy

In the previous post we used TensorFlow to price some exotic options like Asian and Barrier Options and used the automatic differentiation feature to calculate the greeks of the options.

Today we will see how to price a Bermudan option in TensorFlow with the Longstaff-Schwartz algorithm (a.k.a. American Monte Carlo). For simplicity we assume a Bermudan equity call option without dividend payments in a Black-Scholes-Merton world.

The complete Jupyter notebook can be found on GitHub.

Let's start with a brief recap of Bermudan options and the Longstaff-Schwartz pricing method. For a more detailed and formal description have a look at my previous posts about Bermudan Swaptions and the American Monte Carlo Simulation for CVA calculations, or at the book ‘Monte Carlo Methods in Financial Engineering’ by Paul Glasserman.

A Bermudan option gives the holder the right to exercise the option not only at maturity but also earlier, on pre-specified dates. At each exercise date the holder has to decide whether it's better to exercise the option now (exercise value) or to wait for a later exercise date (continuation value). In other words, he has to find an optimal exercise strategy (the early exercise boundary of the option) by checking at each date whether the exercise value is higher than the continuation value. There are some trivial cases:

1) If the option is not in the money it's always better to wait, and
2) if the option hasn't been exercised before the last exercise date, the Bermudan option becomes a European option.

Therefore the price of the Bermudan has to be at least as high as the price of the corresponding European option (because the European exercise right is included).

One approach to price the option is a Monte Carlo simulation, but the problem is the calculation of the continuation value. On each simulated path we can easily calculate the exercise value of the option at each exercise date given the state of the path, but the continuation value is a conditional expectation given the current underlying price on the path. The naive approach is to calculate this expectation with another Monte Carlo simulation, but then we need to run a simulation for each exercise date on each path. These kinds of nested simulations become very slow.
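
The quantity we need at each exercise date t_i is the (discounted) conditional expectation of the option value at the next date:

C_{t_i}(s) = E\left[ e^{-r \Delta t} V_{t_{i+1}}(S_{t_{i+1}}) \mid S_{t_i} = s \right] .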

Longstaff-Schwartz method

One solution is the Longstaff-Schwartz method. The basic idea is to approximate the continuation value with a linear regression model.
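
The continuation value is approximated by a linear combination of basis functions \phi_k of the state,

C_{t_i}(s) \approx \sum_{k=1}^{M} \beta_k \phi_k(s) ,

where the coefficients \beta_k are fitted by regressing the option values at t_{i+1} on the path states at t_i.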

The pricing consists of two phases:

In the learning phase we go backward through time, starting with the last exercise date (which is trivial to price). Then we move to the previous exercise date and fit a linear model to predict the continuation values (the option values at time t+1 which we just calculated) given the state of the paths (usually a system of polynomials of the state). We use the prediction to decide whether to exercise or not and update the value of the option at time t (either the exercise or the continuation value). We repeat these steps until we reach the first exercise date.

After we have learned the models to predict the continuation values, we use these linear models in the pricing phase. We generate a new set of random paths and go forward through time, and at each exercise date we decide whether to exercise depending on the exercise value and the predicted continuation value.

There are two ways to fit the linear model: one could solve the normal equations or use gradient descent. I decided to go for gradient descent; it gives us the opportunity to exchange the linear model for more sophisticated models like a deep network.
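
For reference, the normal equations would give the regression coefficients in closed form,

\beta = (X^T X)^{-1} X^T y ,

with X the feature matrix and y the vector of target continuation values.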

I use three helper functions. The first function get_continuation_function creates the TensorFlow operators needed for training a linear model at an exercise date, and the second function feature_matrix_from_current_state creates a feature matrix for the model from the path values at a given time step. I use Chebyshev polynomials up to degree 3 as features; as we can see below, the fit could be better. Feel free to play with it and adjust the code.
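
These two helpers are not shown in the post. Below is a minimal sketch of how they could look; the placeholder shapes, initialisation, learning rate and the in-graph normalisation are my assumptions, the actual notebook may differ:

def get_continuation_function():
    # Linear model y_hat = X w + b, trained with MSE loss via gradient descent
    input_x = tf.placeholder(tf.float32, [None, 3])
    input_y = tf.placeholder(tf.float32, [None, 1])
    w = tf.Variable(tf.random_normal([3, 1]))
    b = tf.Variable(tf.zeros([1]))
    y_hat = tf.add(tf.matmul(input_x, w), b)
    loss = tf.reduce_mean(tf.squared_difference(y_hat, input_y))
    train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
    return (input_x, input_y, train, w, b, y_hat)

def feature_matrix_from_current_state(state):
    # Chebyshev polynomials T_1, T_2, T_3 of the path state
    X = tf.stack([state, 2*state**2 - 1, 4*state**3 - 3*state], axis=1)
    # Normalise per feature column (matching the normalisation in the training step)
    mean, var = tf.nn.moments(X, axes=[0])
    return (X - mean) / tf.sqrt(var)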

The third helper function pricing_function creates the computational graph for the pricing. For each call date it generates the needed linear model and training operator with the helper functions and stores them in the list training_functions.

    previous_exercises = 0
    npv = 0
    for i in range(number_call_dates-1):
        (input_x, input_y, train, w, b, y_hat) = get_continuation_function()
        training_functions.append((input_x, input_y, train, w, b, y_hat))
        X = feature_matrix_from_current_state(S_t[:, i])
        contValue = tf.add(tf.matmul(X, w), b)
        continuationValues.append(contValue)
        inMoney = tf.cast(tf.greater(E_t[:,i], 0.), tf.float32)
        exercise = tf.cast(tf.greater(E_t[:,i], contValue[:,0]), tf.float32) * inMoney
        exercises.append(exercise)
        # We can only exercise if we haven't exercised on an earlier date
        exercise = exercise * (1-previous_exercises)
        previous_exercises += exercise
        npv += exercise*E_t[:,i]

    # Last exercise date: exercise every path that is in the money and still alive
    inMoney = tf.cast(tf.greater(E_t[:,-1], 0.), tf.float32)
    exercise = inMoney * (1-previous_exercises)
    npv += exercise*E_t[:,-1]
    npv = tf.reduce_mean(npv)
    greeks = tf.gradients(npv, [S, r, sigma])

The npv operator takes, on each path, the exercise value at the chosen exercise date and averages over all paths. At each date we exercise if the exercise value is greater than the predicted continuation value and the option is in the money. We keep track of previous exercise decisions since we can only exercise the option once.

In the actual pricing function bermudanMC_tensorFlow we execute the graph to create the paths and the exercise values for the training paths, then iterate backward through the training_functions and learn the linear model for each exercise date.

        # Backward iteration to learn the continuation value approximation for each call date
        for i in range(n_exercises-1)[::-1]:
            (input_x, input_y, train, w, b, y_hat) = training_functions[i]
            y = exercise_values[:, i+1:i+2]
            X = paths[:, i]
            # Chebyshev polynomials T_1, T_2, T_3 as features, normalised per column
            X = np.c_[X**1, 2*X**2-1, 4*X**3 - 3*X]
            X = (X - np.mean(X, axis=0)) / np.std(X, axis=0)
            # Train only on paths where the option is in the money
            for epoch in range(80):
                _ = sess.run(train, {input_x:X[exercise_values[:,i]>0],
                                     input_y:y[exercise_values[:,i]>0]})
            cont_value = sess.run(y_hat, {input_x:X, input_y:y})
            exercise_values[:, i:i+1] = np.maximum(exercise_values[:, i:i+1], cont_value)
            plt.figure(figsize=(10,10))
            plt.scatter(paths[:,i], y)
            plt.scatter(paths[:,i], cont_value, color='red')
            plt.title('Continuation Value approx')
            plt.ylabel('NPV t_%i'%i)
            plt.xlabel('S_%i'%i)

We take the prices of the underlying at time i, calculate the Chebyshev polynomials and store them in the predictor matrix X. The option value at the following exercise date (which we computed in the previous backward step) is our target y. We normalise the features and train the linear model with gradient descent over 80 epochs, excluding all samples where the option is not in the money (there is no decision to make there). After we have the prediction for the continuation value we update the value of the option at time i to be the maximum of the exercise value and the predicted continuation value.

After we have learned the weights of our models we can execute the npv and greeks operators to calculate the npv and the derivatives of the npv.

Pricing Example

Assume the spot is at 100, the risk-free interest rate is 3% and the implied Black vol is 20%. Let's price a Bermudan call with a strike of 120, yearly exercise dates and maturity in 4 years.
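
The exact signature of bermudanMC_tensorFlow is defined in the notebook; assuming it takes the market data, the number of call dates and the number of paths, the pricing call could look like this sketch:

# Hypothetical call -- argument names and order are assumptions, see the notebook
npv, greeks = bermudanMC_tensorFlow(S_0=100., strike=120., time_to_expiry=4.,
                                    implied_vol=0.2, riskfree_rate=0.03,
                                    number_call_dates=4, n_sims=100000)
print('NPV:', npv)
print('dNPV/dS, dNPV/dr, dNPV/dsigma:', greeks)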

The fit of our continuation value approximation on the training set.

The runtime is between 7 and 10 seconds for a Bermudan option with 4 exercise dates. We could speed it up by using the normal equations instead of gradient descent.

So that's it for today. As usual, the complete source code is available for download as a notebook on GitHub. I think TensorFlow makes it very easy to try other approximation methods, since we can make use of TensorFlow's deep learning abilities. So please feel free to play with the source code and experiment with the feature matrix (other polynomials, or higher degrees) or try another model (maybe a deep network) to get a better fit of the continuation values. I will play a bit with it and come back to this in a later post with some results of alternative approximation approaches.

So long.

TensorFlow meets Quantitative Finance: Pricing Exotic Options with Monte Carlo Simulations in TensorFlow

While writing my previous post about fraud detection with logistic regression in TensorFlow and gradient descent methods, I had the idea to use TensorFlow for the pricing of path-dependent exotic options. TensorFlow uses automatic (algorithmic) differentiation (AD) to calculate the gradients of the computational graph. So if we implement a closed pricing formula in TensorFlow we should get the greeks for ‘free’. With ‘free’ I mean without implementing a closed formula (if available) or a numerical method (like finite differences) to calculate the greeks. In theory the automatic differentiation should cost only a small constant factor more than a single pricing, and this cost should be independent of the number of greeks we calculate.

Monte Carlo simulations can benefit a lot from AD: when each pricing is computationally costly (a full simulation) and we have many risk drivers, the calculation of greeks becomes very challenging. Imagine an interest rate derivative for which we want the delta, gamma and mixed gammas for each pillar on the yield curve; with bump-and-revalue we would need many revaluations.

For simplicity we focus on equity options in a Black-Scholes model world, since I want to focus on the numerical implementation in TensorFlow. Today we will look into the Monte Carlo pricing of plain vanilla options, geometric Asian options and down-and-out barrier options. All options in this post are calls. For an introduction to TensorFlow have a look at my previous post on logistic regression in TensorFlow.

You will need TensorFlow 1.8 to run the notebook (which is available on my GitHub).

Plain Vanilla Call

Let's start with a plain vanilla call. Let me give a brief introduction to options for the non-quant readers; if you are familiar with options, skip this part and go directly to the implementation.

A call gives us the right to buy a unit of an underlying (e.g. a stock) at a pre-agreed price (the strike) at a future time (the option maturity), regardless of the actual price at that time. So the value of the call at option maturity is

C = \max(S_T-K,0) ,

with S_T the price of the underlying at option maturity T and strike K. Clearly we wouldn't exercise the option if the actual price is below the strike, so it expires worthless (out of the money).

It's very easy to calculate the price of the option at maturity, but we are more interested in today's price before maturity.

Black, Scholes and Merton had a great idea how to solve this problem (a Nobel prize worthy idea) and came up with a closed-form pricing formula.

We will first implement the closed formula and then implement the pricing with a Monte Carlo Simulation.

We don't have an analytical pricing formula for all kinds of options and models. A very generic method to price options is the Monte Carlo simulation. The basic idea is to simulate many possible (random) evolutions (realisations) of the underlying price (paths), price the option on each of these paths and approximate the price with the average of the simulated option prices. Depending on the stochastic process of the underlying in the model we may need numerical approximations for the paths.
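
For a call this means

\text{price} \approx e^{-rT} \frac{1}{N} \sum_{i=1}^{N} \max(S_T^{(i)} - K, 0) ,

with S_T^{(i)} the simulated price of the underlying at maturity on the i-th path.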

For a more detailed introduction to derivatives have a look at the book ‘Options, Futures and Other Derivatives’ by Prof. John C. Hull.

Let's assume the current stock price is 100, the strike is 110 and the maturity is 2 years from now. The current risk-free interest rate is 3% and the implied market vol is 20%.

Let's implement the Black-Scholes pricing formula in Python. First the imports:

import numpy as np
import tensorflow as tf
import scipy.stats as stats
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm


def blackScholes_py(S_0, strike, time_to_expiry, implied_vol, riskfree_rate):
    S = S_0
    K = strike
    dt = time_to_expiry
    sigma = implied_vol
    r = riskfree_rate
    Phi = stats.norm.cdf
    d_1 = (np.log(S_0 / K) + (r+sigma**2/2)*dt) / (sigma*np.sqrt(dt))
    d_2 = d_1 - sigma*np.sqrt(dt)
    return S*Phi(d_1) - K*np.exp(-r*dt)*Phi(d_2)

blackScholes_py(100., 110., 2., 0.2, 0.03)
9.73983632580859

Today's price is about 9.74, and pricing with the closed formula is super fast: less than 200 microseconds.

Plain Vanilla Call in TensorFlow

Now let's implement the same pricing formula in TensorFlow. We use the concept of a function closure: the outer function constructs the static TensorFlow graph, and the inner function (which we return) lets us feed our parameters into the graph, executes it and returns the results.

def blackScholes_tf_pricer(enable_greeks = True):
    # Build the static computational graph
    S = tf.placeholder(tf.float32)
    K = tf.placeholder(tf.float32)
    dt = tf.placeholder(tf.float32)
    sigma = tf.placeholder(tf.float32)
    r = tf.placeholder(tf.float32)
    Phi = tf.distributions.Normal(0.,1.).cdf
    d_1 = (tf.log(S / K) + (r+sigma**2/2)*dt) / (sigma*tf.sqrt(dt))
    d_2 = d_1 - sigma*tf.sqrt(dt)
    npv =  S*Phi(d_1) - K*tf.exp(-r*dt)*Phi(d_2)
    target_calc = [npv]
    if enable_greeks:
        greeks = tf.gradients(npv, [S, sigma, r, K, dt])
        # Calculate mixed 2nd order greeks for S (esp. gamma, vanna) and sigma (esp. volga)
        dS_2ndOrder = tf.gradients(greeks[0], [S, sigma, r, K, dt])
        dsigma_2ndOrder = tf.gradients(greeks[1], [S, sigma, r, K, dt])
        dr_2ndOrder = tf.gradients(greeks[2], [S, sigma, r, K, dt]) 
        dK_2ndOrder = tf.gradients(greeks[3], [S, sigma, r, K, dt]) 
        dT_2ndOrder = tf.gradients(greeks[4], [S, sigma, r, K, dt])
        target_calc += [greeks, dS_2ndOrder, dsigma_2ndOrder, dr_2ndOrder, dK_2ndOrder, dT_2ndOrder]
    
    # Function to feed in the input and calculate the computational graph
    def execute_graph(S_0, strike, time_to_expiry, implied_vol, riskfree_rate):
        with tf.Session() as sess:
            res = sess.run(target_calc, 
                           {
                               S: S_0,
                               K : strike,
                               r : riskfree_rate,
                               sigma: implied_vol,
                               dt: time_to_expiry})
        return res
    return execute_graph

tf_pricer = blackScholes_tf_pricer()
tf_pricer(100., 110., 2., 0.2, 0.03)

[9.739834,
 [0.5066145, 56.411205, 81.843216, -0.37201464, 4.0482087],
 [0.014102802, 0.5310427, 2.8205605, -0.012820729, 0.068860546],
 [0.5310427, -1.2452297, -6.613867, 0.030063031, 13.941332],
 [2.8205605, -6.613866, 400.42563, -1.8201164, 46.5973],
 [-0.012820728, 0.030063028, -1.8201163, 0.011655207, -0.025798593],
 [0.06886053, 13.941331, 46.597298, -0.025798589, -0.62807804]]

We get the price and all first and second order greeks (the first list contains [delta, vega, rho, dNPV/dK, dNPV/ddt], the following lists the corresponding second order derivatives). The code runs in roughly 2.3 seconds on my laptop with greeks and roughly 270 ms without.

tf_pricer = blackScholes_tf_pricer(True)
%timeit tf_pricer(100., 110., 2., 0.2, 0.03)

2.32 s ± 304 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)


tf_pricer = blackScholes_tf_pricer(False)
%timeit tf_pricer(100., 110., 2., 0.2, 0.03)

269 ms ± 13.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

We calculated 30 different greeks and one npv, so a bump-and-revalue approach would need roughly 31 times the time of a single pricing. So that's definitely a speed-up, but compared to the ‘pure’ Python implementation it is awfully slow. Still, it's easy to implement and may reduce development time.

Monte Carlo Simulation with TensorFlow

In the Black-Scholes model the underlying price follows a geometric Brownian motion, and we know the distribution of the prices in the future given the current price, the risk-free interest rate and the implied volatility of the underlying. So we are able to sample future prices directly.
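
Concretely, each simulation step uses the exact log-normal transition of the geometric Brownian motion,

S_{t+\Delta t} = S_t \exp\left( (r - \sigma^2/2) \Delta t + \sigma \sqrt{\Delta t} Z \right), \quad Z \sim N(0,1) ,

which is exactly what the cumulative product in the graph below implements.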

We use a helper function create_tf_graph_for_simulate_paths to build the computational graph in TensorFlow, which calculates the future prices given the required inputs and normally distributed random numbers dw with zero mean and a standard deviation of one. The function returns the operator to calculate the random price paths and the placeholders we need to feed in our parameters.

We assume we need to calculate the prices at equidistant points in time. The number of columns of the random matrix dw defines the number of time steps on each path and the number of rows the number of different paths.

For example a random matrix with shape (1000, 10) would give 1000 paths with the prices of the underlying at times T/10, 2T/10, … T.

def create_tf_graph_for_simulate_paths():
    S = tf.placeholder(tf.float32)
    K = tf.placeholder(tf.float32)
    dt = tf.placeholder(tf.float32)
    T = tf.placeholder(tf.float32)
    sigma = tf.placeholder(tf.float32)
    r = tf.placeholder(tf.float32)
    dw = tf.placeholder(tf.float32)
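    # Exact GBM scheme: cumulative product of the log-normal increments along each path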
    S_T = S * tf.cumprod(tf.exp((r-sigma**2/2)*dt+sigma*tf.sqrt(dt)*dw), axis=1)
    return (S, K, dt, T, sigma, r, dw, S_T)

To test this, we create another function which uses the function above to create the computational graph and returns a function which feeds our parametrisation into the graph, runs it and returns the simulated paths.

def make_path_simulator():
    (S,K, dt, T, sigma, r, dw, S_T) = create_tf_graph_for_simulate_paths()
    def paths(S_0, strike, time_to_expiry, implied_vol, riskfree_rate, seed, n_sims, obs):
        if seed != 0:
            np.random.seed(seed)
        stdnorm_random_variates = np.random.randn(n_sims, obs)
        with tf.Session() as sess:
            timedelta = time_to_expiry / stdnorm_random_variates.shape[1]
            res = sess.run(S_T, 
                           {
                               S: S_0,
                               K : strike,
                               r : riskfree_rate,
                               sigma: implied_vol,
                               dt : timedelta,
                               T: time_to_expiry,
                               dw : stdnorm_random_variates
                         })
            return res
    return paths

path_simulator = make_path_simulator()
paths = path_simulator(100, 110.,  2, 0.2, 0.03, 1312, 10, 400)
plt.figure(figsize=(8,5))
_= plt.plot(np.transpose(paths))
_ = plt.title('Simulated Path')
_ = plt.ylabel('Price')
_ = plt.xlabel('TimeStep') 


Now we are going to price our plain vanilla call. We will use the same design pattern as above (function closure). The function creates the computational graph and returns a function that generates the random numbers, feeds our parameters into the graph and runs it.

Actually we just extend the previous function with the payoff of the call on each path, payout = tf.maximum(S_T[:,-1] - K, 0), discount it, and add some extra lines to calculate the greeks.

def create_plain_vanilla_mc_tf_pricer(enable_greeks = True):
    (S,K, dt, T, sigma, r, dw, S_T) = create_tf_graph_for_simulate_paths()
    payout = tf.maximum(S_T[:,-1] - K, 0)
    npv = tf.exp(-r*T) * tf.reduce_mean(payout)
    target_calc = [npv]
    if enable_greeks:
        greeks = tf.gradients(npv, [S, sigma, r, K, T])
        target_calc += [greeks]
    def pricer(S_0, strike, time_to_expiry, implied_vol, riskfree_rate, seed, n_sims):
        if seed != 0:
            np.random.seed(seed)
        stdnorm_random_variates = np.random.randn(n_sims, 1)
        with tf.Session() as sess:
            timedelta = time_to_expiry / stdnorm_random_variates.shape[1]
            res = sess.run(target_calc, 
                           {
                               S: S_0,
                               K : strike,
                               r : riskfree_rate,
                               sigma: implied_vol,
                               dt : timedelta,
                               T: time_to_expiry,
                               dw : stdnorm_random_variates
                         })
            return res
    return pricer

plain_vanilla_mc_tf_pricer = create_plain_vanilla_mc_tf_pricer()
plain_vanilla_mc_tf_pricer(100, 110, 2, 0.2, 0.03, 1312, 1000000)
[9.734369, [0.5059894, 56.38598, 81.72916, -0.37147698, -0.29203108]]
%%timeit
plain_vanilla_mc_tf_pricer(100, 110, 2, 0.2, 0.03, 1312, 1000000)

392 ms ± 20.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)


plain_vanilla_mc_tf_pricer = create_plain_vanilla_mc_tf_pricer(False)
%timeit plain_vanilla_mc_tf_pricer(100, 110, 2, 0.2, 0.03, 1312, 1000000)

336 ms ± 24.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

The MC pricing is faster than the analytical solution in TensorFlow (OK, we only have first order derivatives) and the extra cost for the greeks is negligible. I really wonder why a MC simulation is faster than a closed formula.

Let's try some exotic (path-dependent) options.

Exotic path-dependent options

Asian Options

The fair value of an Asian option depends not only on the price of the underlying at option maturity but also on its values at predefined times during the lifetime of the option. An Asian call pays the difference between the average price and the strike, or nothing.

An Asian option which uses the geometric mean of the underlying prices is also called a geometric Asian option.
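
With observation times t_1, \dots, t_n = T the payoff of the geometric Asian call is

C = \max\left( \left( \prod_{i=1}^{n} S_{t_i} \right)^{1/n} - K, 0 \right) .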

We follow the previous design and only make some extensions to the payoff and the pricing method (which now needs the number of observation points).

def create_geometric_asian_mc_tf_pricer(enable_greeks = True):
    (S,K, dt, T, sigma, r, dw, S_T) = create_tf_graph_for_simulate_paths()
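    # Geometric average of the path prices: dt/T equals 1/obs, so this is the n-th root of the product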
    A = tf.pow(tf.reduce_prod(S_T, axis=1),dt/T)
    payout = tf.maximum(A - K, 0)
    npv = tf.exp(-r*T) * tf.reduce_mean(payout)
    target_calc = [npv]
    if enable_greeks:
        greeks = tf.gradients(npv, [S, sigma, r, K, T])
        target_calc += [greeks]
    def pricer(S_0, strike, time_to_expiry, implied_vol, riskfree_rate, seed, n_sims, obs):
        if seed != 0:
            np.random.seed(seed)
        stdnorm_random_variates = np.random.randn(n_sims, obs)
        with tf.Session() as sess:
            timedelta = time_to_expiry / obs
            res = sess.run(target_calc, 
                           {
                               S: S_0,
                               K : strike,
                               r : riskfree_rate,
                               sigma: implied_vol,
                               dt : timedelta,
                               T: time_to_expiry,
                               dw : stdnorm_random_variates
                         })
            return res
    return pricer

geom_asian_mc_tf_pricer = create_geometric_asian_mc_tf_pricer()
geom_asian_mc_tf_pricer(100, 110, 2, 0.2, 0.03, 1312, 100000, 8)
[4.196899, [0.3719058, 30.388206, 33.445602, -0.29993924, -89.84526]]

The pricing is a bit slower than the plain vanilla case: about 470 ms with greeks and 319 ms without. Again the extra cost is negligible.

Down and Out Options

Now let's try a down-and-out call. A down-and-out call behaves like a normal call, but if the price of the underlying touches or falls below a certain level (the barrier B) at any time until the expiry of the option, then it becomes worthless, even if it is in the money at expiry.

A down-and-out call is cheaper than the plain vanilla call, since there is a risk that the option gets knocked out before expiry. It can be used to reduce hedging costs.
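
The payoff can be written as

C = \mathbf{1}\left\{ \min_{0 \le t \le T} S_t > B \right\} \max(S_T - K, 0) .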

In the Black-Scholes model there is again a closed formula to calculate the price. See https://people.maths.ox.ac.uk/howison/barriers.pdf or https://warwick.ac.uk/fac/soc/wbs/subjects/finance/research/wpaperseries/1994/94-54.pdf .

We want to price this kind of option in TensorFlow with a Monte Carlo simulation and let TensorFlow calculate the path derivatives with automatic differentiation.

Because of the nature of the product it is quite difficult to calculate the npv and greeks in a Monte Carlo simulation. Obviously we have a bias in our price, since we check the barrier only at discrete time points while the barrier has to hold in continuous time. The closer we get to the barrier, the more we will deviate from the analytical price and greeks. The MC price will tend to overestimate the value of the option. See the slides from Mike Giles about smoking adjoints (automatic differentiation): https://people.maths.ox.ac.uk/gilesm/talks/quant08.pdf.
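
To reduce this discrete-monitoring bias, the Monte Carlo implementations below shift the barrier upward with the continuity correction of Broadie, Glasserman and Kou,

B_{\text{shift}} = B e^{0.5826 \sigma \sqrt{\Delta t}} ,

so that checking the shifted barrier at the discrete time points approximates the continuously monitored barrier.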

First we implement the analytical solution in ‘pure’ Python (actually we rely heavily on numpy) and in TensorFlow.

def analytical_downOut_py(S_0, strike, time_to_expiry, implied_vol, riskfree_rate, barrier):
    S = S_0
    K = strike
    dt = time_to_expiry
    sigma = implied_vol
    r = riskfree_rate
    alpha = 0.5 - r/sigma**2
    B = barrier
    Phi = stats.norm.cdf
    d_1 = (np.log(S_0 / K) + (r+sigma**2/2)*dt) / (sigma*np.sqrt(dt))
    d_2 = d_1 - sigma*np.sqrt(dt)
    bs = S*Phi(d_1) - K*np.exp(-r*dt)*Phi(d_2)
    d_1a = (np.log(B**2 / (S*K)) + (r+sigma**2/2)*dt) / (sigma*np.sqrt(dt))
    d_2a = d_1a - sigma*np.sqrt(dt)
    # Reflection principle: the exponent is 2*alpha = 1 - 2r/sigma**2
    reflection = (S/B)**(2*alpha) * ((B**2/S)*Phi(d_1a) - K*np.exp(-r*dt)*Phi(d_2a))
    return max(bs - reflection, 0)



def analytical_downOut_tf_pricer(enable_greeks = True):
    S = tf.placeholder(tf.float32)
    K = tf.placeholder(tf.float32)
    dt = tf.placeholder(tf.float32)
    sigma = tf.placeholder(tf.float32)
    r = tf.placeholder(tf.float32)
    B = tf.placeholder(tf.float32)
    Phi = tf.distributions.Normal(0.,1.).cdf
    d_1 = (tf.log(S / K) + (r+sigma**2/2)*dt) / (sigma*tf.sqrt(dt))
    d_2 = d_1 - sigma*tf.sqrt(dt)
    bs_npv =  S*Phi(d_1) - K*tf.exp(-r*dt)*Phi(d_2)
    d_1a = (tf.log(B**2 / (S*K)) + (r+sigma**2/2)*dt) / (sigma*tf.sqrt(dt))
    d_2a = d_1a - sigma*tf.sqrt(dt)
    # Reflection principle: the exponent is 1 - 2r/sigma**2
    reflection = (S/B)**(1-2*r/sigma**2) * ((B**2/S)*Phi(d_1a) - K*tf.exp(-r*dt)*Phi(d_2a))
    npv = bs_npv - reflection
    target_calc = [npv]
    if enable_greeks:
        greeks = tf.gradients(npv, [S, sigma, r, K, dt, B])
        # Calculate 2nd order greeks for S (esp. gamma, vanna)
        dS_2ndOrder = tf.gradients(greeks[0], [S, sigma, r, K, dt, B])
        #dsigma_2ndOrder = tf.gradients(greeks[1], [S, sigma, r, K, dt, B])
        target_calc += [greeks, dS_2ndOrder]
    # Function to feed in the input and calculate the computational graph
    def price(S_0, strike, time_to_expiry, implied_vol, riskfree_rate, barrier):
        """
        Returns the npv, delta, vega and gamma
        """
        with tf.Session() as sess:
            
            res = sess.run(target_calc, 
                           {
                               S: S_0,
                               K : strike,
                               r : riskfree_rate,
                               sigma: implied_vol,
                               dt: time_to_expiry,
                               B : barrier})
        if enable_greeks:
            return np.array([res[0], res[1][0], res[1][1], res[2][0]])
        else:
            return res
    return price

analytical_downOut_py(100., 110., 2., 0.2, 0.03, 80)
9.3684

And again the TensorFlow implementation is super slow compared to the Python one: 344 µs (pure Python) vs 1.63 s (TensorFlow with greeks).

We use the Python implementation to visualise the npv of the option depending on the barrier.


def surface_plt(X,Y,Z, name='', angle=70):
    fig = plt.figure(figsize=(13,10))
    ax = fig.gca(projection='3d')
    surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
                           linewidth=0, antialiased=False)
    fig.colorbar(surf, shrink=0.5, aspect=5)
    ax.set_title('Down-And-Out Option (%s vs Barrier)' % name)
    ax.view_init(35, angle)
    ax.set_xlabel(name)
    ax.set_ylabel('barrier')
    ax.set_zlabel('price')
    plt.savefig('npv_%s_barrier.png' % name, dpi=300)
    

T = np.arange(0.01, 2, 0.1)
S = np.arange(75, 110, 1)
V = np.arange(0.1, 0.5, 0.01)
B = np.arange(75, 110, 1)
X, Y = np.meshgrid(S, B)
vectorized_do_pricer = np.vectorize(lambda s,b : analytical_downOut_py(s, 110., 2, 0.2, 0.03, b))
Z = vectorized_do_pricer(X, Y)
surface_plt(X,Y,Z,'spot', 70)

X, Y = np.meshgrid(V, B)
vectorized_do_pricer = np.vectorize(lambda x,y : analytical_downOut_py(100, 110., 2, x, 0.03, y))
Z = vectorized_do_pricer(X, Y)
surface_plt(X,Y,Z,'vol', 120)

X, Y = np.meshgrid(T, B)
vectorized_do_pricer = np.vectorize(lambda x,y : analytical_downOut_py(100, 110., x, 0.2, 0.03, y))
Z = vectorized_do_pricer(X, Y)
surface_plt(X,Y,Z,'time', 120)

Now let's implement the Monte Carlo pricing, first as a pure Python version.

def monte_carlo_down_out_py(S_0, strike, time_to_expiry, implied_vol, riskfree_rate, barrier, seed, n_sims, obs):
    if seed != 0:
        np.random.seed(seed)
    stdnorm_random_variates = np.random.randn(n_sims, obs)
    S = S_0
    K = strike
    dt = time_to_expiry / stdnorm_random_variates.shape[1]
    sigma = implied_vol
    r = riskfree_rate
    B = barrier
    # See Advanced Monte Carlo methods for barrier and related exotic options by Emmanuel Gobet
    B_shift = B*np.exp(0.5826*sigma*np.sqrt(dt))
    S_T = S * np.cumprod(np.exp((r-sigma**2/2)*dt+sigma*np.sqrt(dt)*stdnorm_random_variates), axis=1)
    non_touch = (np.min(S_T, axis=1) > B_shift)*1
    call_payout = np.maximum(S_T[:,-1] - K, 0)
    npv = np.mean(non_touch * call_payout)
    return np.exp(-time_to_expiry*r)*npv

monte_carlo_down_out_py(100., 110., 2., 0.2, 0.03, 80., 1312, 10000, 500)

And now the TensorFlow version.

def create_downout_mc_tf_pricer(enable_greeks = True):
    (S,K, dt, T, sigma, r, dw, S_T) = create_tf_graph_for_simulate_paths()
    B = tf.placeholder(tf.float32)
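    # Barrier shift (Broadie-Glasserman-Kou continuity correction for discrete monitoring)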
    B_shift = B * tf.exp(0.5826*sigma*tf.sqrt(dt))
    non_touch = tf.cast(tf.reduce_all(S_T > B_shift, axis=1), tf.float32)
    payout = non_touch * tf.maximum(S_T[:,-1] - K, 0)
    npv = tf.exp(-r*T) * tf.reduce_mean(payout)
    target_calc = [npv]
    if enable_greeks:
        greeks = tf.gradients(npv, [S, sigma, r, K, T])
        target_calc += [greeks]
    def pricer(S_0, strike, time_to_expiry, implied_vol, riskfree_rate, barrier, seed, n_sims, obs):
        if seed != 0:
            np.random.seed(seed)
        stdnorm_random_variates = np.random.randn(n_sims, obs)
        with tf.Session() as sess:
            timedelta = time_to_expiry / obs
            res = sess.run(target_calc, 
                           {
                               S: S_0,
                               K : strike,
                               r : riskfree_rate,
                               sigma: implied_vol,
                               dt : timedelta,
                               T: time_to_expiry,
                               B: barrier,
                               dw : stdnorm_random_variates
                         })
            return res
    return pricer

downout_mc_tf_pricer = create_downout_mc_tf_pricer()
downout_mc_tf_pricer(100., 110., 2., 0.2, 0.03, 80., 1312, 10000, 500)
[9.34386, [0.47031397, 54.249084, 75.3748, -0.34261367, -0.2803158]]

The pure Python MC needs 211 ms ± 8.95 ms per npv vs 886 ms ± 23.4 ms per loop for the TensorFlow MC with 5 greeks. So it's not as bad as the performance of the closed formula implementation suggests.

There are many things we could try now: implement other options, try different models (processes), implement a discretisation scheme to solve a stochastic process numerically in TensorFlow, or improve the barrier option pricing with more advanced Monte Carlo techniques (Brownian bridge). But that's it for now.

For me it was quite fun to implement the Monte Carlo simulations and do some simple pricing in TensorFlow. It was very surprising and unexpected that the analytical implementations are so slow compared to pure Python; in contrast, the Monte Carlo simulation in TensorFlow seems quite fast. Maybe the bad performance of the closed formula pricings is due to my coding skills.

Nevertheless TensorFlow offers an easy to use autograd (greeks) feature, the extra cost in a MC simulation is negligible, and the automatic differentiation is faster than finite differences as soon as we need more than 4-6 greeks. And in some cases we can be nearly as fast with 5 greeks as pure Python with a single npv, as seen in the barrier example (approx. 1 s for TensorFlow (npv and 5 greeks) vs 200 ms for Python (single npv)). So I assume we can be faster than a pure Python implementation when we need to calculate many greeks (pillars on a yield curve or vol surface).

Maybe we can get even more performance out of it if we run it on GPUs. I have the idea to try Bermudan options in TensorFlow and I also want to give PyTorch a try. Maybe I will pick this up in a later post.

The complete source code is also on GitHub.

So long…