Monte Carlo pricing of options with early exercise (Longstaff-Schwartz algorithm)
Analyzing the performance of the Longstaff-Schwartz algorithm [1] in general is not easy, as it depends on the particular pricing problem. It is nevertheless probably the most popular technique in practice for handling the early-exercise feature in Monte Carlo pricing, so it certainly works well enough in many cases. One can play around with the pricer to get a first feel for what to expect, at least for the simple options priced here. In most cases you will find there is no reason to use the higher basis-function (b.f.) specs available: even at the lowest setting the accuracy can be more than satisfactory, provided the regression base (the number of paths used for the regression) is not too small.
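To make the mechanics concrete, here is a minimal sketch of the Longstaff-Schwartz least-squares Monte Carlo pricer for a Bermudan put under geometric Brownian motion. This is not the standalone pricer discussed in the text; all parameter values and the choice of simple polynomial basis functions (1, S, S², …) are illustrative assumptions. The `n_basis` and `n_paths` arguments correspond to the "b.f. spec" and "regression base" discussed above.

```python
import numpy as np

def lsm_bermudan_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=131_072, n_basis=3, seed=42):
    """Price a Bermudan put via Longstaff-Schwartz least-squares Monte Carlo.

    The continuation value at each exercise date is fitted by regressing
    the realized discounted cashflows on n_basis polynomial basis
    functions of the spot, using in-the-money paths only.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    disc = np.exp(-r * dt)

    # Simulate GBM paths: shape (n_paths, n_steps + 1)
    z = rng.standard_normal((n_paths, n_steps))
    log_incr = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.cumsum(log_incr, axis=1))
    S = np.hstack([np.full((n_paths, 1), S0), S])

    # Backward induction, starting from the payoff at maturity
    cashflow = np.maximum(K - S[:, -1], 0.0)
    for t in range(n_steps - 1, 0, -1):
        cashflow *= disc  # discount one step back to time t
        itm = (K - S[:, t]) > 0.0  # regress on in-the-money paths only
        if itm.sum() > n_basis:
            x = S[itm, t] / K  # scale spot for numerical stability
            A = np.vander(x, n_basis, increasing=True)  # 1, x, x^2, ...
            coef, *_ = np.linalg.lstsq(A, cashflow[itm], rcond=None)
            continuation = A @ coef
            exercise = K - S[itm, t]
            do_ex = exercise > continuation
            idx = np.where(itm)[0][do_ex]
            cashflow[idx] = exercise[do_ex]  # exercise: replace cashflow
    return disc * cashflow.mean()  # discount from first date to time 0
```

Note that an inaccurate fit of the continuation value can only make the simulated exercise strategy sub-optimal, which is why the resulting bias is low rather than high, as the second example below illustrates.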
For the examples above I used a fairly wide regression base (131K paths) and ran the total simulation for 8 million paths, so that the remaining bias more or less reflects the number of b.f.'s used. The blue line marks the exact price as calculated by the PDE solver at high resolution. In the first case, the Bermudan vanilla put, just 3 b.f.'s are enough to bring the bias below half a basis point. The PDE reference solution is also shown so the Greeks can be compared as well. In this case (and many more, some of which are included in the other mini-presentations, like for example here) it is possible to get good accuracy quickly, or very high accuracy if you are willing to wait (and let your PC double as an electric heater if it's winter!).
The second example is a more difficult case because the early exercise value is small, i.e. the intrinsic and continuation values will in general be close, so any inaccuracy in the fit of the latter is more likely to lead to sub-optimal exercise decisions during the simulation and thus bias the valuation low. This can be seen in the third slide: using the same regression specs as before for the put now leads to roughly 6 times the error. Increasing the number of b.f.'s to 6 cuts the error somewhat, but we really need an "extreme" spec of 11 b.f.'s and 524K regression paths (which you would probably never use in practice) to bring it down to the level achieved for the put with only 3 b.f.'s.
You can reproduce the above tests, or experiment further if you wish, using the standalone pricer (free beta version for Windows PCs).