A Brief History of Volatility Models

Any option trader's first interaction with option pricing was probably quite similar to mine. Mine came while reading "Options, Futures, and Other Derivatives" (by John Hull). There it was, this option pricing formula (Black-Scholes) where I could just put a few parameters into Excel, run a few formulas, and get an actual price of an option. This was mind-blowing… I felt like I had just hit the jackpot; I was going to be a billionaire trading options. Fast-forward 14 years and I'm not a billionaire, but along the way I watched volatility modeling evolve and take shape in ways I doubt quants and practitioners imagined 20–30 years ago…

We tend to think that, like the big bang, there was nothing before Black-Scholes (1973), but the roots of modern quant finance, and of option pricing in particular, actually date back to 1900, when Louis Bachelier presented his thesis "The Theory of Speculation" (it sounds better in French). Bachelier's option pricing model, which was used mainly for pricing options on French government bonds, exhibits great similarities to what we know today as the Black-Scholes option pricing model. Both use pretty similar price dynamics and probability assumptions, input parameters, and payoffs.

So why isn't the Bachelier model as widely used and known as B&S?

Mainly because the original model was developed for a specific type of asset: its assumptions and underlying are unique in the sense that the option is written on spreads (of futures contracts) rather than on the actual price.

Unlike fashions and trends, the Bachelier model did not become obsolete, and we will later see how it remains relevant even today, more than 100 years later… (hint: think about the oil move during April 2020)

So what did Fischer Black and Myron Scholes develop that made their option pricing theory the cornerstone of modern quantitative finance (and earned Myron Scholes a Nobel Prize)?

Well, they developed an option pricing model that follows quite a similar path to Bachelier's. The difference between the models lies mostly in the choice of process dynamic: while the Bachelier model follows arithmetic Brownian motion, B&S follows geometric Brownian motion.

[Figure: Monte Carlo simulation of GBM (drift=1%, sigma=20%, T=1, M=1000, dt=1/500)]
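
For readers who want to reproduce a chart like the one above, here is a minimal Python sketch of the same Monte Carlo experiment (the parameters match the caption; the starting price of 100 and the variable names are my own assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

# Parameters from the figure caption (S0 is an assumption)
mu, sigma = 0.01, 0.20         # annual drift and volatility
T, M, dt = 1.0, 1000, 1 / 500  # horizon, number of paths, time step
S0 = 100.0
n_steps = int(T / dt)

rng = np.random.default_rng(42)
# Exact GBM update: S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
Z = rng.standard_normal((M, n_steps))
log_inc = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
paths = S0 * np.exp(np.cumsum(log_inc, axis=1))
paths = np.hstack([np.full((M, 1), S0), paths])

plt.plot(paths[:50].T, lw=0.5)  # plot a subset of the 1,000 paths
plt.xlabel("time step"); plt.ylabel("price")
plt.show()
```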

The Black & Scholes model, like Bachelier's, is based on relatively few base assumptions:

  1. European options can only be exercised at expiration.

  2. No dividends are paid out during the life of the option.

  3. Markets are frictionless (no transaction costs or taxes).

  4. The risk-free interest rate is known and constant.

  5. The underlying price follows a continuous geometric Brownian motion.

  6. Volatility is constant.

While the 4th assumption doesn't make a huge difference, the 5th and 6th assumptions are quite significant, as they have been proven empirically wrong in practice.

Why would these assumptions be so significant?

Let’s first understand what these assumptions mean…

In financial markets, asset prices are assumed to follow a stochastic (random) process: there is a certain trajectory (drift), but the variations around that trajectory are random. This process is known as Brownian motion. When B&S wrote their 1973 paper they assumed that the dynamic follows a continuous geometric Brownian motion, which means that prices cannot change sign (i.e. move from positive to negative, or vice versa). Moreover, they assumed that log-returns are normally distributed (i.e. Gaussian). Their assumption of continuity also creates issues, as it implies that one can continuously eliminate the delta (underlying price change) risk, while this is technically impossible (and even where possible to some degree, it comes with non-negligible transaction costs).

The last assumption Black and Scholes made, regarding volatility, is probably the most questionable one, and it paved the way to what we know today as "volatility modeling".

The Era of Stochastic Volatility

As practitioners adopted the Black-Scholes formula as the industry-wide standard, criticism of the B&S volatility assumption grew, as it was well observed that volatility is far from constant (it evolves and changes through time, and follows some kind of random process itself). In 1976 Latane and Rendleman published their paper "Standard Deviations of Stock Price Ratios Implied in Option Prices", which suggested (for the first time) that volatility should be derived from options traded in the market (ISD, or implied standard deviation). As practitioners liked the B&S pricing model for its simplicity and robust (closed-form) solution, they were pretty reluctant to throw it out of the window, and instead started calibrating the model to fit their markets and assumptions. This is when the "Stochastic Volatility Era" begins…
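
To make the ISD idea concrete, here is a hedged sketch of backing volatility out of a quoted call price by root-finding on Black-Scholes (the function names are mine; only standard numpy/scipy calls are used):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Invert Black-Scholes for sigma: the 'implied standard deviation'."""
    return brentq(lambda sig: bs_call(S, K, T, r, sig) - price, 1e-6, 5.0)

# Example: a 1-year ATM call quoted at 8.43 with S=100, r=1%
print(implied_vol(8.43, S=100, K=100, T=1.0, r=0.01))  # ~0.20
```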

Let's first understand what stochastic volatility means. Any stochastic volatility model assumes that volatility itself follows a random process, running in parallel with, and with some degree of correlation to, the returns of its underlying asset. So the volatility process is (partly) driven by the dynamic of the underlying it describes (hypothetically, each derivative of the previous dynamic would follow the same kind of process, but let's leave that for now…)

It wasn't until Oct 19th, 1987 (Black Monday) that option traders fully comprehended that the assumption of a flat volatility surface doesn't hold water in financial markets, as there should be an additional cost to insure ourselves against adverse moves in our stock portfolios. If we look at the Black-Scholes "theoretical" volatility surface, it should look similar to this:

[Figure: the flat "theoretical" Black-Scholes volatility surface]

In reality, we know that the usual volatility surface looks very different. This is what a typical S&P500 volatility surface looks like:

[Figure: a typical S&P500 implied volatility surface]

We can trace the roots of all modern stochastic volatility models to Heston's 1993 paper, which offered a new, closed-form approach for pricing bond options and foreign-exchange options under a stochastic volatility dynamic.
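
Heston's dynamic is also easy to simulate, which helps build intuition. A minimal Euler (full-truncation) sketch, with purely illustrative parameter values:

```python
import numpy as np

def heston_terminal(S0=100., v0=0.04, kappa=1.5, theta=0.04, xi=0.5,
                    rho=-0.7, T=1.0, n_steps=500, n_paths=10000, seed=0):
    """Simulate dS = sqrt(v) S dW1, dv = kappa (theta - v) dt + xi sqrt(v) dW2,
    with corr(dW1, dW2) = rho (zero drift for simplicity)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)  # full truncation keeps variance >= 0
        S *= np.exp(-0.5 * v_pos * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    return S

# Monte Carlo value of an ATM call under these illustrative parameters
ST = heston_terminal()
print(np.mean(np.maximum(ST - 100.0, 0.0)))
```

The negative rho is what generates the equity-style skew: volatility rises when the spot falls.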

As not all asset classes were created equal, and each has its own nuances and standards, different asset classes adopted different models to reflect their assets' dynamics:

Interest Rates

Interest rates are modeled in the form of a yield curve. An appropriate model for valuing bonds and options on interest rates (like options on interest rate swaps, or swaptions) should incorporate the dynamic of the yield curve's term structure. For years the standard models were Vasicek (1977) and Hull-White (1990), until the introduction of the SABR model (Hagan et al., 2002).

The SABR model is a two-factor stochastic model, in which the forward and its volatility (alpha) each follow a random process, and each parameter controls a different aspect of the smile dynamic (see the sketch after the parameter list):

Beta - controls the backbone of the smile, i.e. the distribution of the forward (beta=0 gives a normal dynamic, beta=1 a log-normal one).

Nu - the vol-of-vol, which controls the variance of the volatility distribution and hence the convexity (curvature) of the smile.

Rho - the correlation between the underlying forward dynamic and the volatility dynamic (i.e. it controls the slope/skew of the smile).
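
In practice the smile is usually produced from these parameters with Hagan et al.'s (2002) log-normal implied-vol approximation. Here is my own sketch of that formula (it omits some later refinements, and the parameter values in the example are illustrative):

```python
import numpy as np

def sabr_vol(F, K, T, alpha, beta, rho, nu):
    """Hagan et al. (2002) approximation of the Black implied vol under SABR."""
    bracket = lambda fk_b: 1 + ((1 - beta)**2 / 24 * alpha**2 / fk_b**2
                                + rho * beta * nu * alpha / (4 * fk_b)
                                + (2 - 3 * rho**2) / 24 * nu**2) * T
    if np.isclose(F, K):                       # ATM case
        return alpha / F**(1 - beta) * bracket(F**(1 - beta))
    logFK = np.log(F / K)
    FK_b = (F * K) ** ((1 - beta) / 2)
    z = (nu / alpha) * FK_b * logFK
    x = np.log((np.sqrt(1 - 2 * rho * z + z**2) + z - rho) / (1 - rho))
    pre = alpha / (FK_b * (1 + (1 - beta)**2 / 24 * logFK**2
                           + (1 - beta)**4 / 1920 * logFK**4))
    return pre * (z / x) * bracket(FK_b)

# Illustrative one-year smile around a 3% forward
for K in (0.02, 0.03, 0.04):
    print(K, sabr_vol(F=0.03, K=K, T=1.0, alpha=0.02, beta=0.5, rho=-0.3, nu=0.4))
```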

The interest rates market is unique in the sense that many assets nowadays trade with negative rates (or alternate between positive and negative rates), so Black-Scholes cannot be used for derivatives pricing (as it assumes a log-normal distribution of the underlying price). To accommodate that unique dynamic, practitioners tend to use the Bachelier model for pricing these assets. Earlier this year, in the aftermath of the WTI front-month contract's collapse to -$40, CME changed its pricing model to the Bachelier model to accommodate negative strikes.
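
The Bachelier call price itself is a simple closed form, and unlike Black-Scholes it stays well-defined for negative forwards and strikes. A minimal sketch (note the normal vol input is in price units, not percent):

```python
import numpy as np
from scipy.stats import norm

def bachelier_call(F, K, T, sigma_n, df=1.0):
    """Bachelier (normal model) call on a forward; sigma_n is in price units."""
    d = (F - K) / (sigma_n * np.sqrt(T))
    return df * ((F - K) * norm.cdf(d) + sigma_n * np.sqrt(T) * norm.pdf(d))

# A strike of -10 on a forward at -5: meaningless under Black-Scholes,
# perfectly fine under Bachelier
print(bachelier_call(F=-5.0, K=-10.0, T=0.25, sigma_n=20.0))
```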

Foreign Exchange

The foreign exchange option market trades mainly OTC (over the counter), and as such it adopted a model that can handle 1st and 2nd generation exotic options (as these are more bespoke in nature). The model most FX practitioners use is known as Vanna-Volga pricing (Malz 1997, Lipton and McGhee 2002, Wystup 2003). It is widely used in the FX market to price exotic options, like barrier options and digital-payout options (one-touch/no-touch, European digitals, etc.).

The essence of the Vanna-Volga pricing model is the ability to quantify the hedging cost of the 2nd order derivatives with respect to vega (Vanna = dVega/dSpot, Volga = dVega/dVol), under the assumption that the quoted option strategies (risk reversal/collar for vanna, and butterfly spread for volga) reflect the implied risk-neutral probability for the particular asset.
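
The two greeks themselves are easy to compute by bumping the Black-Scholes vega; a hedged finite-difference sketch (this shows the greeks only, not the full Vanna-Volga price adjustment, and the bump sizes are arbitrary assumptions):

```python
import numpy as np
from scipy.stats import norm

def bs_vega(S, K, T, r, sigma):
    """Black-Scholes vega of a European option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S * np.sqrt(T) * norm.pdf(d1)

def vanna(S, K, T, r, sigma, dS=0.01):
    """Vanna = dVega/dSpot, central finite difference."""
    return (bs_vega(S + dS, K, T, r, sigma)
            - bs_vega(S - dS, K, T, r, sigma)) / (2 * dS)

def volga(S, K, T, r, sigma, dv=1e-4):
    """Volga = dVega/dVol, central finite difference."""
    return (bs_vega(S, K, T, r, sigma + dv)
            - bs_vega(S, K, T, r, sigma - dv)) / (2 * dv)

print(vanna(100, 110, 0.5, 0.01, 0.10), volga(100, 110, 0.5, 0.01, 0.10))
```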

Equities/Commodities

Stocks' uniqueness lies in their structure: a stock is, in fact, a call option on the underlying company's assets (Merton, 1974). Due to that nature, stocks are driven by what is called "the leverage effect". The leverage effect was documented by Fischer Black (1976): as asset prices decline, companies become mechanically more leveraged (their debt/equity ratio rises), which raises their default probability, hence the negative correlation between stock price returns and volatility.

To accommodate this dynamic, equity market practitioners turn to the CEV (Constant Elasticity of Variance) model. While the original model was developed by John Cox in 1975, it became widely used after Linetsky and Mendoza's 2009 work, which showed various applications and extensions to accommodate different dynamics (including jumps and discontinuity), and the ability to price a wide range of assets (including credit spreads).
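
The leverage effect is easy to see inside the CEV dynamic dS = mu S dt + sigma S^beta dW: for beta < 1 the effective log-normal volatility, sigma * S^(beta-1), mechanically rises as the price falls. A minimal simulation sketch with illustrative parameters:

```python
import numpy as np

def cev_path(S0=100.0, mu=0.0, sigma=2.0, beta=0.5, T=1.0, n_steps=500, seed=1):
    """Euler scheme for the CEV model dS = mu S dt + sigma S^beta dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.empty(n_steps + 1)
    S[0] = S0
    for i in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal()
        S[i + 1] = max(S[i] + mu * S[i] * dt + sigma * S[i]**beta * dW, 1e-8)
    return S

# Effective log-normal vol sigma * S^(beta-1): higher when the price is lower
print("vol at S=100:", 2.0 * 100**-0.5)  # ~20%
print("vol at S=50: ", 2.0 * 50**-0.5)   # ~28%
```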

In addition to stochastic volatility models there are local volatility models (Derman and Kani 1994, Dupire 1994). A local volatility model, instead of fitting parameters to a chosen dynamic, extracts volatility from a set of traded options with different strikes and maturities; after the implied (observed/liquid) volatilities are calculated, an interpolation is applied to produce a smooth volatility surface. The more liquid our option chain is, the better our local volatility surface will reflect the fair implied volatility.
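
Numerically, the surface construction rests on Dupire's formula, which (with zero rates) reads sigma_loc^2(K, T) = (dC/dT) / (0.5 K^2 d2C/dK2). Here is a finite-difference sketch applied to a synthetic flat Black-Scholes price surface, where it should recover the constant 20% input vol:

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, sigma):
    """Black-Scholes call with zero rates (used to build a test price surface)."""
    d1 = (np.log(S / K) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
    return S * norm.cdf(d1) - K * norm.cdf(d1 - sigma * np.sqrt(T))

def dupire_local_vol(S, K, T, sigma, dK=0.5, dT=1e-3):
    """Dupire: sigma_loc^2 = (dC/dT) / (0.5 K^2 d2C/dK2), zero rates assumed."""
    dCdT = (bs_call(S, K, T + dT, sigma) - bs_call(S, K, T - dT, sigma)) / (2 * dT)
    d2CdK2 = (bs_call(S, K + dK, T, sigma) - 2 * bs_call(S, K, T, sigma)
              + bs_call(S, K - dK, T, sigma)) / dK**2
    return np.sqrt(dCdT / (0.5 * K**2 * d2CdK2))

print(dupire_local_vol(S=100.0, K=105.0, T=1.0, sigma=0.20))  # ~0.20
```

In practice the call prices come from the interpolated market surface rather than a model, which is exactly why the liquidity of the option chain matters.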

Looking forward into the future — Rough Volatility

As volatility modeling evolved, researchers and practitioners started raising questions about the behavior of realized volatility and the volatility dynamic.

Mainly, researchers asked themselves whether the assumption of GBM holds water in volatility analysis. To understand why this process was the subject of their research, we first need to understand the key assumption of GBM…

Geometric Brownian motion assumes that the process has no memory, in the sense that the noise in each step is independent of previous steps (a Markovian process), so theoretically the noise (or volatility) has zero autocorrelation. If that assumption holds, the dispersion of our process grows proportionally to the square root of time:

dS_t = μ S_t dt + σ S_t dW_t

As we can see, the first term of the equation is the drift part of the process, while the second term is the noise/volatility part.

So we need to ask ourselves: what if the volatility process does have memory? (or, in other words, what if the volatility increments are positively/negatively correlated?) That feature of time-series behavior cannot be modeled by GBM, which assumes zero memory, so we need to turn to another type of Brownian motion: fBM (fractional Brownian motion).

fBM is a generalization of Brownian motion, but it is unique in its behavior, as it allows increments that are not independent of each other. To account for the autocorrelation of the increments we turn to the Hurst exponent (Mandelbrot & Van Ness, 1968). This exponent is a number in the (0,1) range that describes the degree of autocorrelation (or mean reversion) of the process.

Let’s explore the different states of H:

H = 0.5 — increments are uncorrelated with each other; the special case of standard Brownian motion.

H < 0.5 — increments are negatively correlated; the time series has rougher edges; the dynamic is mean-reverting.

H > 0.5 — increments are positively correlated; the time series is smoother; the dynamic is trending.

[Figure: sample paths for different values of H]

So we see that if we know how to model (or estimate the value of) H, we can build a more accurate process to describe the dynamic.
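
A simple (if O(n^3)) way to generate fBM paths is to Cholesky-factor the fBM covariance, cov(B_H(s), B_H(t)) = 0.5 * (s^2H + t^2H - |t - s|^2H); a sketch for eyeballing rough vs. smooth paths:

```python
import numpy as np

def fbm_path(n=500, H=0.1, T=1.0, seed=0):
    """Exact fBM sample via Cholesky factorization of the fBM covariance."""
    t = np.linspace(T / n, T, n)   # skip t=0 to keep the matrix non-singular
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # tiny jitter for stability
    rng = np.random.default_rng(seed)
    return t, L @ rng.standard_normal(n)

t, rough = fbm_path(H=0.1)   # rough, mean-reverting-looking path
t, smooth = fbm_path(H=0.8)  # smooth, trending path
```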

Another very important (and interesting) feature of fBM is its fractality.

Fractality means that the shape of an object doesn't change its form at different scales. Think about snowflakes, cauliflower, and ridges: these are self-similar objects, where we can zoom in or out and they will look approximately the same… Going back to time series, this is a very important feature in analyzing volatility behavior: we can look at an intraday price pattern, zoom out to daily price action, and not lose the dynamic.

[Figure: two random fBM samples, one of 1,000 samples and one scaled x5]

As we can see, despite drawing two different processes (one of 1,000 samples and one scaled x5), the rough shape remains the same, and it looks as if we zoomed in on the circled part (while in fact these were two random samples).

Although fBM has existed for over 50 years, it wasn't until Gatheral, Jaisson, and Rosenbaum's 2014 paper ("Volatility Is Rough") that this idea made its way into mainstream volatility modeling.

In their paper, Gatheral et al. showed the first applications of this dynamic (both in pricing and in volatility forecasting), as well as their theory as to why volatility exhibits roughness.

So what has changed over the last few decades that makes practitioners think that volatility is rough?

According to those who believe that volatility is rough, the roughness lies in the market microstructure and its dynamic (order-flow distribution and behavior). The fact that in recent years most order flow is high-frequency (HFT) just makes it more pronounced.

The theory behind the market microstructure is that buy orders generate more buy orders, and sell orders generate more sell orders, BUT sell orders have more impact on the order book and the underlying spot (liquidity asymmetry, the "leverage effect"). Another feature of the market microstructure is that most orders in the market are between market players (trading firms, not end users).

The last, and probably most important, development of the past few years has been the rapid increase in computation speed (quantum computing, low-latency infrastructure, and strong numerical libraries are only some of these developments). Rough volatility models cannot be computed in closed form (unlike B&S, Heston, SABR), as they are non-Markovian (i.e. their increments are not independent), so one needs Monte Carlo simulation to compute them, and MC simulations are time-consuming (especially for practitioners who need fast results/prices).

Rough Volatility applications

Since it made its debut in 2014, rough volatility research has been growing exponentially, with papers and applications frequently being published (links to the literature in the appendix). Here are a few interesting applications of this field of volatility modeling:

  1. Estimating the volatility regime using the Hurst index — As we know by now, the Hurst index is a measure of time-series autocorrelation. Empirical studies have shown that this index stays relatively stable at a very low level most of the time (Gatheral et al. found H≈0.13 for equity indices over long horizons), which means that in normal markets volatility exhibits strong mean reversion. Historically, during periods of financial stress the index tends to rise significantly, so we can use it as a signal of a change in volatility regime (a simple estimation sketch follows the figure below).
[Figure: Hurst index over time. Credit: "Rough Volatility: An Overview", Jim Gatheral]
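
For readers who want to experiment, here is a simplified sketch of the moment-scaling estimator behind such charts: regress the log of the mean absolute increment of log-volatility on the log of the lag, and the slope is H (this is a bare-bones version of the Gatheral et al. approach, using only the first moment):

```python
import numpy as np

def estimate_hurst(log_vol, lags=range(1, 20)):
    """Estimate H via moment scaling: E|x(t+d) - x(t)| ~ d^H,
    so H is the slope in log-log space."""
    lags = np.asarray(list(lags))
    m = [np.mean(np.abs(log_vol[d:] - log_vol[:-d])) for d in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(m), 1)
    return slope

# Sanity check: plain Brownian noise should give H ~ 0.5; a realized-vol
# series of an equity index should come out much lower
bm = np.cumsum(np.random.default_rng(0).standard_normal(5000)) * 0.01
print(estimate_hurst(bm))  # ~0.5
```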

2. Modeling volatility arbitrage in stocks — As per a paper published by Glasserman and He (2018), volatility-arb strategies that were long stocks with rough volatility and short stocks with smooth volatility earned excess returns.

3. S&P500/VIX smile joint calibration — Probably the biggest achievement in the field of rough volatility research. As the S&P500/VIX dynamic is at the epicenter of the US equity market (and some would say of modern finance), it is the subject of huge interest among quant researchers and practitioners. Julien Guyon's attempt to solve the joint smile calibration of S&P500/VIX proved successful (as per his latest findings, from early 2020). This is a major breakthrough, as past attempts, using different dynamics and models, did not manage to calibrate the two volatility surfaces jointly and accurately. This research suggests that the rough volatility assumption is probably superior to the other dynamics previously assumed to describe volatility.

Obviously, as the market is a living organism, volatility and its derivatives evolve and change as well. Volatility is no longer a parameter we plug into a model to get a price; it is the asset itself (with everything else being a derivative of it). Volatility trading is the essence of modern finance: we are all volatility traders, even if we don't intend to be. It is an exciting time to be a researcher/practitioner in the field of volatility, as we are merely scratching the surface of research and applications in volatility modeling.

Feel free to share your thoughts and comments.

Twitter: Harel Jacobson

Appendix:

Rough Volatility site

Jim Gatheral Presentation on Rough Volatility
