## Discrete barrier options with Crank-Nicolson (plus help)

This is a recent addition to the pricer's PDE solver, which only handled continuously-monitored barriers when I first wrote it. Back then I hadn't even tried to solve for discrete barriers, probably because I had read that the finite difference scheme I had chosen (Crank-Nicolson) has serious problems with them. But on this second visit I thought I should give it a try, not least because the changes required to the code were minimal and all the "tricks" that could make it work were already implemented. We've seen that smoothing techniques help eliminate the spurious oscillations that CN can produce when there is a discontinuity in the payoff (initial conditions), or one imposed periodically in the first derivative of the solution in the case of Bermudan-style options. So it's worth trying them in the case of discrete barriers as well, where a discontinuity in the solution itself is imposed at every monitoring date.
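To make the smoothing idea concrete, here is a minimal sketch of grid-cell averaging applied to a discontinuous payoff (the function name and the midpoint-rule quadrature are my own illustrative choices, not the pricer's actual code): each nodal value is replaced by the payoff's average over the grid cell surrounding that node, which removes the high-frequency content that triggers CN's oscillations.

```python
import numpy as np

def cell_average_payoff(x, payoff):
    # Replace each nodal payoff value by its average over the grid
    # cell surrounding the node (half cells at the two ends), using
    # a simple midpoint-rule quadrature on 40 sub-points per cell.
    v = np.empty_like(x)
    for i in range(len(x)):
        lo = x[i] if i == 0 else 0.5 * (x[i - 1] + x[i])
        hi = x[i] if i == len(x) - 1 else 0.5 * (x[i] + x[i + 1])
        m = lo + (np.arange(40) + 0.5) * (hi - lo) / 40.0
        v[i] = payoff(m).mean()
    return v
```

Applied to, say, a digital payoff with the jump sitting on a node, the averaged initial condition takes the value 0.5 at that node instead of jumping from 0 to 1, which is exactly the kind of pre-smoothing that keeps CN well-behaved.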

We can also use non-uniform grids, which should help reduce the accuracy loss due to the solution's steep slope near the barriers. For the other option types handled by the pricer, non-uniform grids always have their highest density around the strike or a knock-out barrier and the user cannot override this. The non-uniformity is also kept fairly mild, the overall aim being to keep the average error low across the whole asset range plot. In contrast, for discrete barriers I let the grid cluster (have its highest density) around the asset spot specified by the user (within some limits). In theory this should maximize accuracy at the actual spot for which we are pricing the option, at the expense of accuracy far from it. My brief testing shows that this is generally the case when the volatility is relatively low and/or the monitoring frequency is high, i.e. when the discontinuity cannot diffuse far from the barrier. When this is not the case, the advantage of a non-uniform grid becomes less clear and one may get a lower error for the chosen spot using a uniform grid instead. For this reason the built-in grid generator will generally opt for a grid that is more non-uniform in the first case and less so in the second. You can use the pricer to experiment by changing the location of the grid cluster (there is only one cluster/area of higher grid density, by the way) and see how it affects the numerical solution's error plot. This of course assumes that you have previously calculated a solution on a very high-resolution grid, which the pricer lets you save as a reference. Error plots can then be readily created for any subsequent solution at lower resolutions.
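A minimal sketch of such a spot-centred grid, using the common sinh stretching (the function and its `strength` parameter are illustrative stand-ins, not the pricer's actual generator; `strength` plays the role of the non-uniformity dial discussed above):

```python
import numpy as np

def clustered_grid(x_min, x_max, x_cluster, n, strength=5.0):
    # Non-uniform grid on [x_min, x_max] with its single area of
    # highest density around x_cluster, built via the standard sinh
    # stretching of a uniform grid in the transformed variable u.
    # Larger `strength` clusters more aggressively; strength -> 0
    # recovers a (nearly) uniform grid.
    scale = (x_max - x_min) / strength
    u = np.linspace(np.arcsinh((x_min - x_cluster) / scale),
                    np.arcsinh((x_max - x_cluster) / scale), n)
    return x_cluster + scale * np.sinh(u)
```

Because sinh is nearly linear around zero and grows exponentially away from it, the mapped nodes are packed tightly near `x_cluster` and spread out towards the boundaries, while the endpoints land exactly on `x_min` and `x_max`.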

So then: can Crank-Nicolson, coupled with Rannacher time-stepping, discontinuity smoothing (via grid cell averaging in this case), and non-uniform grids, be used to price discrete barrier options safely and accurately? As shown below, the answer is yes.

First we look at the valuation of a Bermudan knock-out barrier option with weekly monitoring and exercise, on a grid of 255 asset points with 5 time steps per week. As the first two slides show, the weekly-imposed discontinuity causes the original CN scheme to exhibit oscillations in the option value itself (the sensitivities/derivatives are of course complete garbage and not shown here). Things get uglier when a non-uniform grid is used, which as mentioned above clusters points around the asset spot (in this case near the barrier). This is because the asset-step to time-step ratio (dx/dt) becomes smaller near the barrier (remember that as dx/dt gets larger the spurious oscillations diminish and eventually disappear). As slides 3 and 4 show, the periodic application of a Rannacher step after every monitoring/exercise date smooths things out neatly, all the way up to the Gamma.

The last four slides show why we would like to be able to use non-uniform grids for discrete barrier pricing. For this European down-and-out put with weekly monitoring, a grid of 250 asset points with 8 time steps per week was used. Slides 5 and 6 show that in this case, on a uniform grid, the original CN scheme is problem-free and gives exactly the same error (dominated by the spatial discretization) as CN with the Rannacher treatment. When the grid is made non-uniform (slide 7) in order to better resolve the area of high solution gradient (the barrier, known a priori), the numerical error decreases drastically. This only holds when CN with the Rannacher treatment is used though: as the last slide shows, the original CN produces a significantly higher error very near the barrier. This is because dx/dt there eventually becomes smaller than the scheme can handle, and in the absence of any extra damping the spurious oscillation problem starts to kick in again.
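The overall scheme can be sketched end-to-end. This is a stripped-down illustration under simplifying assumptions, not the pricer's code: uniform grid, zero Dirichlet boundaries, dense linear algebra, Black-Scholes dynamics, and hypothetical parameter names throughout. It does show the one structural point that matters here: after every monitoring date, where the knock-out re-introduces a discontinuity, the first CN step is replaced by two half-size fully implicit (backward Euler) steps. Conveniently, a full CN step and a half-size implicit Euler step share the same left-hand matrix I - (dt/2)L, so the Rannacher restart is just two extra solves.

```python
import numpy as np

def price_discrete_barrier_put(s0=100.0, k=100.0, barrier=80.0, r=0.05,
                               sigma=0.3, t=1.0, n_monitor=52,
                               steps_per_period=5, n_space=251, s_max=300.0):
    # Discretely monitored down-and-out put under Black-Scholes:
    # Crank-Nicolson in time with a Rannacher restart (two half-size
    # implicit Euler steps) after every monitoring date.  Uniform grid
    # and v = 0 Dirichlet boundaries at both ends keep the sketch short.
    s = np.linspace(0.0, s_max, n_space)
    ds = s[1] - s[0]
    dt = t / (n_monitor * steps_per_period)

    # Black-Scholes spatial operator L on the interior nodes, built
    # from central differences (dense here purely for brevity).
    i = np.arange(1, n_space - 1)
    lo = 0.5 * sigma**2 * s[i]**2 / ds**2 - 0.5 * r * s[i] / ds
    mid = -(sigma**2) * s[i]**2 / ds**2 - r
    up = 0.5 * sigma**2 * s[i]**2 / ds**2 + 0.5 * r * s[i] / ds
    L = np.diag(mid) + np.diag(lo[1:], -1) + np.diag(up[:-1], 1)
    I = np.eye(n_space - 2)
    A = I - 0.5 * dt * L   # LHS of a CN step AND of a dt/2 implicit Euler step
    B = I + 0.5 * dt * L   # explicit half of the CN step

    v = np.maximum(k - s, 0.0)      # put payoff at expiry
    for _ in range(n_monitor):      # stepping backwards from expiry
        v[s <= barrier] = 0.0       # knock-out applied at each monitoring date
        # Rannacher restart: two half-size fully implicit steps replace
        # the first CN step after the freshly imposed discontinuity...
        v[1:-1] = np.linalg.solve(A, v[1:-1])
        v[1:-1] = np.linalg.solve(A, v[1:-1])
        # ...then ordinary Crank-Nicolson for the rest of the period.
        for _ in range(steps_per_period - 1):
            v[1:-1] = np.linalg.solve(A, B @ v[1:-1])
    return float(np.interp(s0, s, v))
```

A production version would of course use a tridiagonal (Thomas) solve, proper boundary conditions, cell averaging of the payoff, and a non-uniform grid as discussed above; the point of the sketch is only the placement of the Rannacher steps relative to the monitoring dates.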

The drawback of using the Rannacher procedure here is that it replaces one time step with two or four, and this is required every time a discontinuity is (re-)introduced into the solution. So when there are many monitoring dates the calculation time increases, and the quadratic time convergence of CN is presumably eroded. Nor is it 100% guaranteed to always smooth things out completely: one can find some unlikely/unrealistic scenarios where some oscillations persist. In any case, careful testing should be done for all the payoffs and parameters we are likely to encounter. There are of course other finite difference schemes, second order in time or higher, that do not suffer from spurious oscillations, but I have not tried them and so have no idea of their performance relative to the approach adopted here.