Beware of the traps — Quantitative Trading Mistakes

Harel Jacobson
12 min read · Dec 19, 2020


The world of quantitative finance is a fascinating one. The ability to make sense of financial markets using data, math, and statistics is, in my opinion, a mind-blowing idea. Ever since I discovered the wonderland of the derivatives market, I knew that my path in the world of trading was going to be the quantitative path (rather than the discretionary one).

The history of modern quantitative finance dates back to the early 1900s, with Bachelier's option pricing model (later followed by the Black-Scholes option pricing model), but the real evolution of quantitative finance came in the mid '80s, when mathematicians and statisticians started developing quantitative models to predict (and trade) financial markets.

If we examine funds like Winton Capital, AHL(now Man AHL), Aspect Capital, and Renaissance Technologies, we can trace their roots back to the mid ’80s. These funds utilized quantitative models to detect (and trade) opportunities in financial markets, using massive datasets and data science (or at least an early version of what we know today as Data Science).

The rapid growth in quant/systematic funds came in the '90s, when funds like Millennium Partners, D.E. Shaw, LTCM, and AQR (to name a few) raised substantial capital to trade systematic strategies. The exponential growth in computational power and the growing interest of the quantitative community (mostly Ph.D./M.S. graduates from hard-science departments) turned quant funds into the hottest area investors were flocking to.

Quant trading covers a rather wide array of trading strategies (anything from big-data analysis to HFT market-making). For the purpose of this article, we will focus on quant analysis and data science, as these are widely used by different types of traders (both on the institutional and the retail side). Based on my experience with quant trading, there are four major traps when building a quantitative trading strategy:

  1. Understanding statistics/probability.
  2. Model implementation.
  3. Strategy back-test/simulation.
  4. Risk Management.

Now that we have acknowledged these traps, let's dive in to understand where we could fail…

Understanding Statistics and Probability

Statistical analysis is the foundation on which data science and quant trading are based. When we analyze data (especially time-series) we can easily fall into various traps when we don’t have a good understanding of statistics/probability (and statistical concepts).

Normal Distribution

The assumption of normal distribution is, by far, the weakest assumption we can make when it comes to modeling the dynamics of financial assets. Numerous papers have been published on the assumption of normality in financial asset time series, yet, for lack of a better choice, we use the Gaussian (normal) distribution, as it allows us to analyze data fairly easily. Knowing that the assumption of "normality" is weak, we should be equally wary of the features it ignores (skewed returns, fat tails). Assuming that the distribution of returns will fall neatly under the bell curve usually results in a huge surprise when 3+ standard deviation returns occur.

As we understand that "normality" can sometimes be a weak assumption, we can take the realized return distribution (over different time frames) and see how well it fits the normal distribution, so we can assess how "normal" the return distribution of our asset(s) actually is (see the sketch below).
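A minimal sketch of such a check, using scipy's Jarque-Bera test on a synthetic price series (in practice you would plug in your own prices):

```python
# Sketch: checking how "normal" realized returns are.
# Assumes `prices` is a pandas Series of daily closes; here we fake one
# with a random walk purely so the snippet runs end-to-end.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000))))

log_rtn = np.log(prices).diff().dropna()

print(f"skew:     {stats.skew(log_rtn):.3f}")      # 0 under normality
print(f"kurtosis: {stats.kurtosis(log_rtn):.3f}")  # excess kurtosis, 0 under normality

# Jarque-Bera: H0 = returns are normally distributed
jb_stat, p_value = stats.jarque_bera(log_rtn)
print(f"Jarque-Bera p-value: {p_value:.4f}")       # small p-value -> reject normality
```

Real return series will typically show negative skew and heavy excess kurtosis, which is exactly the warning sign we are looking for.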

Correlation

If there is one thing most quant analysts and traders love, it is correlation. Correlation is probably the most used, yet most misunderstood, concept in statistics. The correlation coefficient most of us use is the Pearson correlation (named after Karl Pearson). Correlation, in short, describes the linear relationship between two variables (X, Y) and ranges from -1 (perfect negative linear relation) to +1 (perfect positive linear relation). It all sounds so easy, so how can we get it wrong?

First, we need to understand what correlation is NOT: it is not a predictor (i.e., it doesn't indicate causation, only a linear relation). When we use the correlation function we need to make sure we don't make the following mistakes:

1. Correlating prices instead of returns (either log returns or simple returns) — When we deal with time series we usually deal with asset prices. Asset prices are "non-stationary" in nature; a non-stationary process basically means that the series exhibits a trend (a non-mean-reverting process). If we take, for example, the Gold spot price vs. the US 10yr real yield, we can clearly see the effect of using non-stationary data (see the sketch after the charts below). This is the regression of the gold/yield price levels:
Gold/US 10yr real yield price-level regression. R² = 0.81, R = -0.9

However, this is what the regression of the price changes (i.e., stationary time series) would look like:

Gold (log return)/10yr real yield ($ change) regression. R² = 0.18, R = -0.43
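To make the point concrete, here is a minimal sketch using two independent synthetic random walks (stand-ins for any two price series): the level correlation can look substantial purely because both series trend, while the correlation of changes tells the real story.

```python
# Sketch: why we correlate returns, not prices. Two *independent* random
# walks will often show a large correlation in levels simply because both
# trend, while the correlation of their changes stays near zero.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
a = pd.Series(np.cumsum(rng.normal(0, 1, 1000)))   # non-stationary level
b = pd.Series(np.cumsum(rng.normal(0, 1, 1000)))   # independent non-stationary level

print("level correlation: ", a.corr(b).round(2))            # often far from 0 (spurious)
print("change correlation:", a.diff().corr(b.diff()).round(2))  # ~0, the real relation
```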

2. Not giving enough thought to sample size/frequency — When we analyze correlation, very much like when we analyze volatility, we need to give a lot of thought to our sample size and frequency. Both have a great impact on our correlation estimate. If we use too short a window we might confuse short-term behavior with persistent correlation. The flip side of this problem is using too long a sample window (say, a 1-year correlation when we want to trade a short-term strategy). If we look at the correlation matrix heatmaps below, we can clearly see the difference between using a 20-day window and a 180-day window.

A good way to remedy the sample size issue is to sample different window sizes and different (non-overlapping) periods to test correlation persistence, as in the sketch below.
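A minimal sketch of this idea, using placeholder return series and arbitrary window choices:

```python
# Sketch: how much the correlation estimate depends on the lookback window.
# `x` and `y` are placeholder daily return series; swap in your own data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
x = pd.Series(rng.normal(0, 0.01, 750))
y = pd.Series(0.5 * x + rng.normal(0, 0.01, 750))

corr_20d = x.rolling(20).corr(y)
corr_180d = x.rolling(180).corr(y)

print("20d window:  mean %.2f, std %.2f" % (corr_20d.mean(), corr_20d.std()))
print("180d window: mean %.2f, std %.2f" % (corr_180d.mean(), corr_180d.std()))
# The short window is far noisier; the long one smooths over regime changes.

# Persistence check: correlation over non-overlapping chunks of the sample
for i, chunk in enumerate(np.array_split(np.arange(len(x)), 5)):
    print(f"chunk {i}: corr = {x.iloc[chunk].corr(y.iloc[chunk]):.2f}")
```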

3. Assuming long-term correlation persistence — One of the major flaws in correlation analysis is the perception that long-lasting correlations don't break. This assumption usually gets tossed out of the window in periods of financial stress and market downturns, as correlations tend to break and move towards the extremes. If we look at March 2020 we can clearly see the massive change in correlations.

Z-score (%ile) and mean reversion

As traders, we want to enter trades with a good risk-reward. One way to determine our risk-reward is the Z-score. The Z-score, in short, measures the distance (in standard deviation terms) of our observation from the mean of the distribution (that score can also be transformed into percentile terms quite easily). Obviously, our aim as traders is to look for the extreme occurrences, as they present the best risk-reward (assuming some kind of reversion/convergence toward the mean). What makes the Z-score tricky is that, like everything else in statistics, it is highly dependent on our sample size and frequency. A good example of the pitfall of using a Z-score as a signal is the March/April 2020 move in the Gold futures-OTC basis. If we had examined the spread on March 24th, we would have concluded that it was at extreme levels (+5.8 stdev from the mean).

This was indeed extreme relative to the 1-year lookback period, but had we looked only two weeks later we would have seen that the spread had widened even further, to +8.5 stdev from the mean (nearly twice what it was on March 24th).

So the Z-score is a good indicator of extremeness in a time series, but it cannot be our sole indicator.

A good remedy for this pitfall is to compare our current observation to different lookback windows (periods) in history, to account for different market regimes, as in the sketch below.
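A minimal sketch of scoring the latest observation against several lookback windows (the spread series and the window lengths are placeholders):

```python
# Sketch: the same observation scored against several lookback windows.
# `spread` stands in for something like a futures-OTC basis.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
spread = pd.Series(np.cumsum(rng.normal(0, 1, 1500)))

def zscore(series: pd.Series, window: int) -> float:
    """Z-score of the latest observation vs. a trailing window."""
    hist = series.iloc[-window:]
    return (series.iloc[-1] - hist.mean()) / hist.std()

for window in (60, 250, 750):          # ~3m, ~1y, ~3y of business days
    print(f"{window:>4}d lookback: z = {zscore(spread, window):+.2f}")
# If the z-score is "extreme" only on the shortest window, the signal is
# likely a regime artifact rather than a genuine dislocation.
```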

Model Implementation

After we understand the possible traps in statistical analysis, we move on to the core of our trading — our model. While there are many ways we can fail when implementing our model/strategy, we will focus on the most common (and most crucial) mistakes we can make. Avoiding these traps will go a long way in ensuring our model’s adaptiveness and robustness.

Overfitting/Underfitting

Model fitting is an art as much as it is science. When fitting our model we need to walk the fine line between model overfitting and model underfitting. Both biases are most likely to cause poor performance of our model.

Overfitting — overfitting occurs when our model captures the historical dynamic too precisely, fitting noise as if it were signal. An overfitted model usually involves a relatively large number of explanatory variables.

Underfitting — underfitting is the mirror problem of overfitting. It occurs when the model is too simple (has too few variables), which makes it too inflexible to capture the dynamic.

USDJPY 1-week RVol vs. USDJPY 1-month RVol regression.

When fitting the model, our aim is to use the minimum number of variables that still delivers the greatest predictive power. The idea is to calibrate our model to the bare minimum while still producing robust results. The more variables we add, the more calibration we need, and the less able the model is to cope quickly with changing markets. The sketch below illustrates the trade-off.
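A minimal sketch of the trade-off on toy data (the polynomial degrees and the synthetic relation are purely illustrative):

```python
# Sketch: over- vs. underfitting with a toy polynomial fit. In-sample error
# keeps falling as complexity grows, while out-of-sample error does not.
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 80))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 80)   # noisy "true" relation

x_train, y_train = x[:60], y[:60]                    # in-sample
x_test, y_test = x[60:], y[60:]                      # out-of-sample

for degree in (1, 3, 9):                             # under-fit, reasonable, over-fit
    coefs = np.polyfit(x_train, y_train, degree)
    rmse = lambda xs, ys: np.sqrt(np.mean((np.polyval(coefs, xs) - ys) ** 2))
    print(f"degree {degree:>2}: train RMSE {rmse(x_train, y_train):.2f}, "
          f"test RMSE {rmse(x_test, y_test):.2f}")
```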

In-sample/Out-of-sample data

Differentiating between in-sample and out-of-sample data is a crucial point in any model development, as any overlap between our "training set" and "test set" will make our results inaccurate. I can't tell you how many times I have been pitched models that looked extremely successful and profitable, only to find out that they were tested partly on the training set (which explains how they could perform so well…). When we build our sample data we need to make sure we split it into a "training set" and a "test set" (and make sure they are not mixed together). To understand the importance of the in-sample/out-of-sample split, let's look at the following example: we want to predict USDJPY 1-month realized volatility using its own lagged 1-month realized volatility…

If we plot the 1-month volatility against its previous-day value (i.e., with a 1-day lag), we will get a close-to-perfect regression. Why? Because roughly 90% of our data overlaps (we added 1 new observation and dropped 1, but ~18 observations are the same between the two series).

We might mistakenly think that the current 1-month realized vol is a good predictor of the future, but if we regress the current 1-month vol against a greater lag, we get a totally different result.

As the portion of overlapping data decreases, the regression fit worsens, as the sketch below shows.
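A minimal sketch of the overlap effect, using i.i.d. placeholder returns so that any short-lag "predictability" is purely an artifact of the shared window:

```python
# Sketch: overlapping rolling windows create an artificial fit. We correlate
# 21-day realized vol with its own lagged value for different lags; the fit
# collapses once the windows stop sharing observations.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
returns = pd.Series(rng.normal(0, 0.006, 2000))          # placeholder daily returns
rvol = returns.rolling(21).std() * np.sqrt(252)          # 1-month realized vol

for lag in (1, 5, 21):                                   # 1 day, 1 week, full month
    pair = pd.concat([rvol, rvol.shift(lag)], axis=1).dropna()
    r = pair.corr().iloc[0, 1]
    print(f"lag {lag:>2}d: R^2 = {r**2:.2f}")
# With iid returns, the "predictability" at short lags is pure window overlap.
```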

Outlier handling

Outlier observations are part of our data series whether we like it or not. We cannot ignore these outliers; rather, we need to know how to handle them so our model is not biased by extreme observations. While we might be tempted to ignore (or delete) outliers, we should resist this urge, as live trading will probably introduce outliers to our model every now and then. Obviously, we need to differentiate between types of outliers: if an observation is clearly false (a data error) we can delete it; however, if it is a valid observation we should accept it and let our model handle it. A robust flagging scheme, like the sketch below, helps separate the two cases.
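A minimal sketch of such a flagging step, using a robust median/MAD score (the threshold is a judgment call, not a rule):

```python
# Sketch: flag (don't delete) outliers with a robust MAD-based score, so
# suspected data errors can be reviewed while genuine extreme moves stay in.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
returns = pd.Series(rng.normal(0, 0.01, 500))
returns.iloc[100] = 0.35                      # plant a suspicious print

median = returns.median()
mad = (returns - median).abs().median()
robust_z = 0.6745 * (returns - median) / mad  # ~N(0,1) scale under normality

flagged = returns[robust_z.abs() > 10]        # threshold is an assumption
print(flagged)
# Review flagged points: delete only confirmed bad data; keep real tail events.
```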

Model Simulation

Now that we have a sound model, based on robust statistical/data analysis, we want to back-test (or simulate) it on historical (or generated) data. This is a crucial part of our model development, as this is the juncture where we can see (and analyze) how our model behaves in a controlled environment. Although there are fewer ways to make mistakes at this stage (compared to previous stages), these mistakes can be very costly, as we will fail to spot weaknesses (or problems) with our model.

Testing different market regimes

When we build a model, we want it to perform well 100% of the time. That, unfortunately, is nearly impossible, as different strategies perform well in different market regimes (think of trend-following strategies in a choppy market). While we can't build a bulletproof model, we can identify the points where the model underperforms. In order to identify these weak spots, we should test our strategy under different market regimes (a regime-switching model is a good way to identify these regimes); the sketch below shows a crude version of this breakdown.
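A minimal sketch of a regime breakdown, using a crude rolling-volatility split and random placeholder P&L (a proper regime-switching model would replace this split):

```python
# Sketch: splitting backtest performance by a simple volatility regime.
# The "strategy" P&L here is random noise; plug in your own daily P&L
# and your own regime definition.
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
market_rtn = pd.Series(rng.normal(0, 0.01, 1500))
strategy_pnl = pd.Series(rng.normal(0.0002, 0.005, 1500))

rolling_vol = market_rtn.rolling(60).std()
regime = np.where(rolling_vol > rolling_vol.median(), "high vol", "low vol")
# (the first 59 days have no vol estimate and land in "low vol"; fine for a sketch)

summary = strategy_pnl.groupby(regime).agg(["mean", "std"])
summary["sharpe"] = summary["mean"] / summary["std"] * np.sqrt(252)
print(summary)
```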

Accounting for Transaction Costs

When we back-test/simulate our model we usually either take historical datasets or simulate them. When doing so, we tend to ignore transaction costs (as they complicate our analysis). Ignoring TC will produce unrealistic results (a relatively higher P&L than we should expect in live trading). When analyzing TC we need to account for the asset's liquidity, bid-ask spread, slippage, etc. Taking all TC into consideration will bring our tested results much closer to live results and allow us to assess the true profitability of the model/strategy, as in the sketch below.
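A minimal sketch of applying a simple cost model to a toy backtest (the half-spread and slippage figures are assumptions to be replaced with the asset's actual numbers):

```python
# Sketch: deducting simple transaction costs (half-spread + slippage,
# charged on turnover) from a toy backtest.
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)
asset_rtn = pd.Series(rng.normal(0, 0.01, 1000))
position = pd.Series(np.sign(rng.normal(0, 1, 1000)))      # toy +1/-1 signal

gross_pnl = position.shift(1) * asset_rtn                  # trade on the next bar
turnover = position.diff().abs().fillna(0)                 # units traded each day

half_spread = 0.0002                                       # 2 bps (assumption)
slippage = 0.0001                                          # 1 bp  (assumption)
costs = turnover * (half_spread + slippage)

net_pnl = gross_pnl - costs
print(f"gross: {gross_pnl.sum():.3f}  costs: {costs.sum():.3f}  net: {net_pnl.sum():.3f}")
```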

Risk Management

The risk management process is exogenous to our model; however, it is the process that ensures the survival of our trading activity. Prudent risk management ensures that even if our model loses money, our capital will not be wiped out completely. Good risk management covers three areas: position sizing, risk limits, and exit points (S/L or T/P).

Position Sizing

Position sizing is mostly overlooked by traders; we tend to put very little thought into it. Proper position sizing takes many variables into consideration, including account size, desired level of risk, and the margin required to maintain the position. Furthermore, when we size our position we should take into account the underlying variance, as a more volatile asset will increase our VaR, which means that our sizing is partly a function of the distribution assumptions we make about the underlying.

Ignoring these variables will backfire once the market becomes volatile. Some traders tend to double their positions once a position goes against them (trying to average their entry level), but this may expose them to further losses if the market keeps moving against them (which will eventually force them to stop out). A good remedy is to define, at inception, the total position size and the levels at which we would increase the position (taking all of the above variables into account). A simple volatility-based sizing sketch follows below.
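One common approach is volatility targeting; here is a minimal sketch (the capital, target volatility, and estimation window are all assumptions):

```python
# Sketch: volatility-targeted position sizing — one sizing approach, not the only one.
import numpy as np
import pandas as pd

rng = np.random.default_rng(10)
asset_rtn = pd.Series(rng.normal(0, 0.012, 500))       # placeholder daily returns

capital = 1_000_000                                    # assumption
target_annual_vol = 0.10                               # risk budget for this position
realized_vol = asset_rtn.rolling(60).std().iloc[-1] * np.sqrt(252)

position_notional = capital * target_annual_vol / realized_vol
print(f"realized vol: {realized_vol:.1%} -> notional: {position_notional:,.0f}")
# The more volatile the asset, the smaller the notional, keeping VaR roughly stable.
```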

Risk Limits

Defining our risk limits is an important part of our risk management process, and should be independent of our trading model/strategy. Risk limits should draw clear boundaries around our exposures and the maximum losses we allow ourselves given our risk parameters (account balance, margins, risk tolerance, etc.). If our risk limits are vague, we are more likely to incur significant losses when a position turns against us, so we must be disciplined enough not to breach them while running our strategy. Different strategies will have different types of risk limits. For example, an option book's risk will be defined as a function of the greeks (delta, gamma, vega, theta, etc.), while a trend-following strategy's risk will probably be defined as a drawdown (or max-loss) limit, as in the sketch below.
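A minimal sketch of a drawdown-based limit check on placeholder P&L (the 10% bound is an arbitrary example of a pre-defined, model-independent limit):

```python
# Sketch: a drawdown-based risk limit for a trend-following style book.
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
daily_pnl = pd.Series(rng.normal(0.0003, 0.01, 750))

equity = (1 + daily_pnl).cumprod()
drawdown = equity / equity.cummax() - 1          # distance from the running peak

max_dd_limit = -0.10                             # assumption: 10% max drawdown
if drawdown.iloc[-1] <= max_dd_limit:
    print("risk limit breached: cut exposure / stop trading")
print(f"current drawdown: {drawdown.iloc[-1]:.1%}, worst: {drawdown.min():.1%}")
```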

Exit Points

The exit points of our trade are going to determine our risk-reward. Each trade should have a clear set of exit points (both stop-loss and take-profit) to reflect both our view of the potential profit and the level at which we would exit the trade and admit our signal/view was wrong. Failing to set these points will result in low risk-adjusted returns (as our P&L variance will be high and our returns will probably not compensate for it). A minimal sketch of this check follows below.
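A minimal sketch of checking the implied risk-reward from pre-defined exit levels (all prices are arbitrary examples):

```python
# Sketch: define exit points up front and check the implied risk-reward.
entry, stop_loss, take_profit = 100.0, 97.0, 106.0   # arbitrary example levels

risk = entry - stop_loss
reward = take_profit - entry
print(f"risk-reward ratio: {reward / risk:.1f} : 1")  # here 2:1
# If the ratio doesn't clear our threshold, the trade isn't taken at all.
```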

As we can see, there are many creative ways to fall into quant analysis/trading traps. The few examples shown above are merely a handful of the ways we can turn a sound model/strategy into a losing one. My experience over the years has taught me that a profitable strategy is much more than throwing data into an ML library (PyTorch/TensorFlow/scikit-learn) and waiting for it to produce a model. It involves a lot of preparation, understanding of quant concepts, correct modeling, and prudent risk management. Acknowledging the pitfalls and possible traps is the first step in building a sound quant trading operation.

Feel free to share your thoughts and comments.

Harel.

Twitter: Harel Jacobson
