This is obviously not intended to price real-world deals. Its purpose is rather to provide a reference for the sort of performance (accuracy versus CPU time) one can expect from a carefully implemented PDE-based pricing engine. In short, 1 bp accuracy should be achievable in milliseconds on a modern CPU core, not seconds.
You can use this app to visualize the solution's discretization error, just like I did when refining the engine. First calculate a benchmark price by setting the grid resolution to something like [1000, 500] for [spot, time] and choosing the "CN++" scheme (this is practically grid-converged). Press Ctrl+R to save it as the reference solution. Then switch to a working resolution and scheme, such as [100, 250] and "CN+", price, and press Ctrl+X to visualize the discretization error along the spot axis.
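In effect, the Ctrl+R / Ctrl+X workflow above amounts to the following post-processing, sketched here with hypothetical price arrays (the app's internal data layout is not documented): interpolate the fine-grid reference onto the working spot grid and report the difference in basis points.

```python
import numpy as np

# Placeholder solutions, standing in for the app's benchmark and working runs.
spot_ref = np.linspace(50.0, 150.0, 1001)        # benchmark spot grid
price_ref = 0.01 * spot_ref                      # placeholder reference prices
spot_work = np.linspace(50.0, 150.0, 101)        # working spot grid
price_work = 0.01 * spot_work + 1e-5             # placeholder working prices

# Interpolate the benchmark onto the working grid; the difference is the
# discretization error of the working run, per unit notional, in bp.
ref_on_work = np.interp(spot_work, spot_ref, price_ref)
error_bp = (price_work - ref_on_work) * 1e4
print(f"max abs discretization error: {np.abs(error_bp).max():.3f} bp")
```

With real solutions one would plot `error_bp` against `spot_work` rather than just take the maximum, which is presumably what the Ctrl+X view shows.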
Calculating benchmark prices is very fast (still a matter of seconds), especially for the Black-Scholes (BS) model. This is very useful for testing the basic convergence properties of the implementation. Given the complex structure of the product, it is the details of the FD discretization that make the big difference in convergence, and thus performance, even for the simplest model. Achieve good BS behavior first and then switch to the target model. In most cases this demo can reach 10 significant digits of numerical accuracy when you rev it up. Yes, really; it is a well-oiled engine!
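The kind of BS convergence test meant here can be illustrated with a minimal Crank-Nicolson solver for a European call, checked against the closed form. This is emphatically not the demo's engine (which prices a path-dependent product, with its own scheme variants); it is just a self-contained sketch of the benchmark-versus-FD comparison.

```python
import numpy as np
from math import log, sqrt, exp, erf

def bs_call_analytic(s0, k, r, sigma, t):
    """Closed-form Black-Scholes call price, used as the benchmark."""
    d1 = (log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    n = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return s0 * n(d1) - k * exp(-r * t) * n(d2)

def bs_call_cn(s0, k, r, sigma, t, n_space=201, n_time=100):
    """Crank-Nicolson on a uniform log-spot grid with Dirichlet boundaries."""
    half_width = 5.0 * sigma * sqrt(t)             # grid half-width in log-spot
    x = np.linspace(log(s0) - half_width, log(s0) + half_width, n_space)
    dx, dt = x[1] - x[0], t / n_time
    v = np.maximum(np.exp(x) - k, 0.0)             # terminal payoff
    # Spatial operator of dV/dtau = a V_xx + b V_x - r V (tau = time to expiry)
    a, b = 0.5 * sigma**2, r - 0.5 * sigma**2
    L = np.zeros((n_space, n_space))
    for i in range(1, n_space - 1):
        L[i, i - 1] = a / dx**2 - b / (2 * dx)
        L[i, i] = -2 * a / dx**2 - r
        L[i, i + 1] = a / dx**2 + b / (2 * dx)
    A = np.eye(n_space) - 0.5 * dt * L             # implicit half-step
    B = np.eye(n_space) + 0.5 * dt * L             # explicit half-step
    for step in range(1, n_time + 1):
        rhs = B @ v
        tau = step * dt
        rhs[0] = 0.0                               # deep OTM boundary
        rhs[-1] = exp(x[-1]) - k * exp(-r * tau)   # deep ITM boundary
        v = np.linalg.solve(A, rhs)
    return v[n_space // 2]                         # node at log(s0), n_space odd

print(bs_call_cn(100, 100, 0.05, 0.2, 1.0))        # closed form is 10.4506...
```

Halving `dx` and `dt` together and watching the error against the analytic price fall by roughly a factor of four is the basic second-order convergence check.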
The Local Volatility Model (LVM) brings things closer to industry standards, in case someone dismisses the BS performance as irrelevant. I haven't stress-tested this; let's just say that it worked well for the few FX surfaces I found (it produced smooth LV surfaces and very tight fits to the vanilla market). Just assume that some properly tested, production-grade LV module could be used instead. The main purpose here is to establish that introducing (S, t)-dependent diffusion coefficients does not significantly affect the PDE discretization's efficiency. The FX volatility surface and zero-rate curves are simply read in from a text file; just replace its data with yours, keeping the same format.
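For reference, a standard way to build the (S, t)-dependent diffusion coefficient from an implied surface is the Dupire formula in total-variance form (Gatheral's formulation), with w(y, T) the implied total variance and y = log(K / F_T). The demo's own LV construction is not documented here and may differ in detail; this sketch takes the derivatives by finite differences.

```python
from math import sqrt

def local_variance(w, y, t, dy=1e-4, dt=1e-4):
    """Dupire local variance from an implied total-variance surface w(y, t)."""
    wt = (w(y, t + dt) - w(y, t - dt)) / (2 * dt)              # dw/dT
    wy = (w(y + dy, t) - w(y - dy, t)) / (2 * dy)              # dw/dy
    wyy = (w(y + dy, t) - 2 * w(y, t) + w(y - dy, t)) / dy**2  # d2w/dy2
    wv = w(y, t)
    denom = (1.0 - y / wv * wy
             + 0.25 * (-0.25 - 1.0 / wv + y**2 / wv**2) * wy**2
             + 0.5 * wyy)
    return wt / denom

# Sanity check: a flat 20% implied surface must give a flat 20% local vol.
flat = lambda y, t: 0.04 * t
print(sqrt(local_variance(flat, 0.1, 1.0)))  # → 0.2 (up to rounding)
```

The flat-surface check is a useful first test of any LV module: the local vol must reproduce the implied vol exactly when there is no smile or term structure.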
The LVM on its own will still be deemed inadequate in most cases for trading and risk-managing such a product, but an upgrade of the present setup to regime-switching local volatility (RSLV) would fix that while inheriting the high-performance characteristics. Assuming, for instance, 3 regimes, we would expect the computational cost to roughly triple, keeping accurate valuation times in the milliseconds on average. This is to be confirmed in an upcoming update of the demo adding RSLV pricing, once I find some time to test it.
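To see why the cost should only scale with the number of regimes, note that a generic RSLV model with regimes $i = 1, \dots, m$ couples $m$ one-dimensional pricing PDEs only through the generator matrix $Q = (q_{ij})$ of the regime chain (this is the standard textbook form, written here with FX drift; the demo's exact formulation may differ):

```latex
\frac{\partial V_i}{\partial t}
+ \frac{1}{2}\,\sigma_i^2(S,t)\,S^2 \frac{\partial^2 V_i}{\partial S^2}
+ (r_d - r_f)\,S \frac{\partial V_i}{\partial S}
- r_d\,V_i
+ \sum_{j \neq i} q_{ij}\,\bigl(V_j - V_i\bigr) = 0,
\qquad i = 1, \dots, m.
```

Each regime contributes one tridiagonal solve per time step, plus the cheap coupling terms, so with $m = 3$ the work per step roughly triples relative to plain LV.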
I've included a quasi-Monte Carlo engine as well, for peace of mind and to facilitate cross-validation.
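The mechanics of such a QMC cross-check can be sketched with SciPy's scrambled Sobol generator, pricing a European call under Black-Scholes (the demo's own QMC engine is not described here and may well differ; a TARF would need one Sobol dimension per fixing, whereas this one-step vanilla example uses a single dimension).

```python
import numpy as np
from scipy.stats import qmc, norm

s0, k, r, sigma, t = 100.0, 100.0, 0.05, 0.2, 1.0

sobol = qmc.Sobol(d=1, scramble=True, seed=42)
u = sobol.random_base2(m=16)                  # 2**16 low-discrepancy points
u = np.clip(u, 1e-12, 1.0 - 1e-12)            # guard the inverse CDF
z = norm.ppf(u[:, 0])                         # map uniforms to standard normals

# Terminal GBM spot and discounted mean payoff.
s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
price = np.exp(-r * t) * np.mean(np.maximum(s_t - k, 0.0))
print(f"QMC price: {price:.4f}")              # closed form is 10.4506
```

Agreement between such an independent QMC estimate and the PDE price, within the QMC error bars, is the cross-validation referred to above.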
I hope you find this interesting; I certainly enjoyed working on every detail that makes it tick. For more usage details, please read the README file that comes with the app before using it. Also, do not hesitate to contact me with any queries and to report any bugs or problems you may find.
Yiannis Papadopoulos, Zurich, November 2025
January 2026 EDIT: As promised, the Regime-Switching Local Volatility model (with 3 regimes) has been added in v0.9.3. Note that it currently comes without a calibration module, the main purpose again being to verify that pricing efficiency carries over from the previous models without significant penalty. Initial testing seems to confirm this: randomized valuations using a few sample (major-pair) IV surfaces show that the CN+ scheme at [100, 250] [spot, time] resolution is sufficient to keep errors below 1 bp. Single-core CPU timings, for example for 1Y TARFs with weekly fixings, are between 150 and 200 milliseconds. For higher vol levels (exotic FX, precious metals) the temporal resolution has to be increased to target the same error. Again, please check the README file for an explanation of the inputs.
This now effectively has the core specs of a production-grade pricer. It has obviously not been tested to production standards, however; that would require access to long series of market data and a calibration module for the RSLV. There will no doubt be some challenging inputs leading to larger errors, perhaps especially for precious metals. That said, the pricing engine is inherently "fast-converging", and a modest increase in resolution should be sufficient to bring errors down. Experience with actual market data would also allow customizations and fine-tuning. I may supplement this post with more specific testing results later on.