I'm working through Van Tharp's book "Definitive Guide to Position Sizing Strategies." In this book he discusses identifying the market type. He highlights six market types based on trend and volatility:

* Up Volatile
* Up Quiet
* Sideways Volatile
* Sideways Quiet
* Down Volatile
* Down Quiet

He suggests using a 20 day ATR% indicator for measuring volatility.

The ATR% is just the 20-day ATR divided by the current day's close, times 100. This gives us volatility as a percentage of price.

ATR% = 100 * ATR(20)/Close[0]

The nice thing about this is that it normalizes volatility across time. For example, if 20 years ago the price was $100 and the volatility was 1%, we'd typically see +/- $1 fluctuations in price. Say the price is now $1000 and volatility is still about 1%, with price fluctuations of about $10. There's a 10-fold difference in dollar volatility, but the percent volatility is the same, so we want to work with percent volatility to avoid being misled by the fact that the price levels are significantly different.
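To make the definition concrete, here is a minimal sketch of the 20-day ATR% in plain Python. The bar data is hypothetical, and for simplicity the ATR here is a simple average of true range rather than Wilder's smoothing:

```python
# Minimal ATR% sketch. `highs`, `lows`, `closes` are hypothetical daily bars;
# ATR is approximated as a simple moving average of the true range.

def true_range(high, low, prev_close):
    # True range: the largest of high-low, |high-prev close|, |low-prev close|.
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def atr_percent(highs, lows, closes, period=20):
    """ATR% = 100 * ATR(period) / latest close."""
    trs = [true_range(h, l, pc)
           for h, l, pc in zip(highs[1:], lows[1:], closes[:-1])]
    atr = sum(trs[-period:]) / min(period, len(trs))
    return 100.0 * atr / closes[-1]
```

Note that a $100 instrument with $1 ranges and a $1000 instrument with $10 ranges both come out at the same ATR%, which is exactly the normalization described above.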

This works great for equities, which are typically split adjusted, but it's not so straightforward with futures.

We typically work with backadjusted continuous contracts that use the offset ("panama") method to stitch contracts together. But due to backwardation and contango effects, the real price a few years ago may differ significantly from the backadjusted price, yielding an invalid ATR%.

The right way to do this, I think, is to use ratio/percentage-adjusted continuous contracts, but as far as I can tell most data providers, including NinjaTrader and Kinetick/IQFeed, do not offer ratio-adjusted continuous contracts.

Does anyone have suggestions for how to get a valid ATR% for backadjusted futures or perhaps another comparable approach that would give a measure of volatility that could be properly compared across the years?

Hm, I use TradeNavigator and have all this data (back-adjusted and not back-adjusted), but if you don't have this data, maybe you can use the indices instead.

kevindog, how would you get ATR%? The non-backadjusted data (spliced) could still have some gaps. Or would you scrap ATR% and just use ATR? Is that still valid across time?

UPDATE:

So I measured this and it appears that calculating ATR% on the non-backadjusted series will work fine even though there are some gaps at rollover points. With a 20 day ATR, any gaps are in the noise it seems.

I tested CL from 1/1/2009 to today (11/13/2018) and compared non backadjusted to backadjusted (panama method). The difference in ATR% is largest at the start, about a 2.5x difference, which is proportional to the price differences in the two series.

Clearly, we don't want to use the backadjusted data for ATR%. A 2.5x difference is significant so, yes, this confirms the importance of using the unadjusted series.

But also, a visual inspection shows us that the gaps in the unadjusted series don't appear to create any significant issues.

Sorry, I was not clear. Using non-backadjusted data would be the best way to get the true price in the denominator to calculate a %ATR. But it will jump on gap days (imagine the ATR stays the same day to day, but the price gaps because of a contract change: it will look like the %ATR has changed, when really it has not). Plus, to do this, you'd have to know when the roll dates were, to account for those gap days.
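That gap effect is easy to see with a toy example (hypothetical numbers): hold the ATR constant and let the unadjusted price gap down at a roll, and the ATR% appears to jump even though volatility hasn't changed:

```python
# Toy illustration of the roll-gap problem: the ATR is identical before and
# after the roll, but the unadjusted close gaps down at the contract change,
# so ATR% appears to rise spuriously.

atr = 2.0                  # constant ATR in points
close_before_roll = 100.0  # last close of the expiring contract
close_after_roll = 95.0    # first close of the new contract (roll gap)

atr_pct_before = 100.0 * atr / close_before_roll  # 2.0%
atr_pct_after = 100.0 * atr / close_after_roll    # ~2.105%
```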

If you use simple backadjustment (adding or subtracting an offset to align prior contract months with the current month), this will not have any impact on average range or average true range. Absolute prices will differ from their historical values, but since high, low and close are all shifted by the same amount, ATR will not be affected by the change in absolute prices.

Therefore all ATR values remain valid throughout the backadjusted chart.

However, ATR will be invalid for

- a merged non-adjusted chart (which exposes the rollover gaps)
- a ratio-backadjusted chart
- a perpetual future obtained by interpolating values from different contract months

Just stay with simple backadjustment, and all ATR values will be good.
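The invariance claim above is easy to verify with a small sketch (hypothetical bars): adding a constant offset to high, low, and close leaves every true range unchanged, while ATR% (which divides by the shifted close) does change:

```python
# Verify that a constant panama offset leaves the true range unchanged,
# while a percentage-of-price measure like ATR% does not survive the shift.

def true_range(high, low, prev_close):
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

bars = [(101.0, 99.0, 100.0),   # (high, low, close) - hypothetical bars
        (103.0, 100.0, 102.0),
        (104.0, 101.0, 103.0)]
offset = 25.0  # hypothetical backadjustment offset applied at a roll
shifted = [(h + offset, l + offset, c + offset) for h, l, c in bars]

tr_orig = [true_range(h, l, prev[2])
           for (h, l, c), prev in zip(bars[1:], bars[:-1])]
tr_shift = [true_range(h, l, prev[2])
            for (h, l, c), prev in zip(shifted[1:], shifted[:-1])]
# tr_orig == tr_shift, but dividing by the shifted close gives a different %.
```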

Thanks, Fat Tails! Yes, I agree if we're only talking about ATR. ATR% is a little more complicated, though, because it's ATR divided by price and we want that price to be the unadjusted price, and unfortunately we have gaps in the unadjusted data.

As Kevin mentioned, we'd have to adjust for those price gaps. This is where a ratio-backadjusted series, I believe, would actually give correct values for ATR%: even though all prices are ratio adjusted, they are adjusted in the same proportions. The ATR alone from a ratio-adjusted series would not be valid, but ATR% is. This is just like split-adjusted charts for equities, where ATR% is valid.
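The "same proportions" point can be checked directly: multiplying high, low, and close by a common factor scales the true range and the close equally, so their ratio is unchanged. A sketch with hypothetical bars and an assumed ratio factor:

```python
# Show that ATR% is invariant under ratio adjustment: scaling all prices by
# the same factor k scales ATR and close alike, so ATR% is unchanged
# (just like split-adjusted equity data).

def true_range(high, low, prev_close):
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def atr_pct_last(bars):
    # ATR% at the last bar, using a simple average of true range.
    trs = [true_range(h, l, prev[2])
           for (h, l, c), prev in zip(bars[1:], bars[:-1])]
    return 100.0 * (sum(trs) / len(trs)) / bars[-1][2]

bars = [(101.0, 99.0, 100.0), (103.0, 100.0, 102.0), (104.0, 101.0, 103.0)]
k = 0.8  # hypothetical ratio-adjustment factor
scaled = [(h * k, l * k, c * k) for h, l, c in bars]
# atr_pct_last(bars) == atr_pct_last(scaled), even though raw ATR differs.
```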

I thought I might be able to derive a valid ATR% from just the regular backadjusted and unadjusted charts. I'm still thinking about that. It's more than I really want to get into right now.

The eventual goal is to generate a distribution of volatility for a long history of each market in order to quantify the historical mean volatility. Then we can know if a market is quiet or volatile compared to its own history. But if we're using just ATR as our measure of volatility and ATR today is not directly comparable to ATR 10 years ago, then our distribution wouldn't be valid. ATR% is a normalized form of volatility which can be directly compared regardless of price differences.

You are correct, I was tired yesterday, and did not read the definition for ATR%.

Anyhow, ATR% is not an obvious measure of relative volatility. The problem here is that different instruments have different betas, so a value X could mean high volatility for a low-beta stock while it means low volatility for a high-beta stock.

In fact you will need a different scale for determining relative volatility for each stock! It is a bit like the scale of the MACD. Try to find an overbought or oversold level for a MACD. It is possible, but you will need a different scale for each instrument, bar type and timeframe.

A mathematically coherent approach to measuring relative volatility would apply a mathematical norm to the average true range, using a lookback period of 252 (the number of trading days in a year, as typically used for calculating annualized volatility). The scale of this measure is universal across instruments.
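For reference, the conventional annualized volatility calculation that the 252-day figure alludes to looks like this. Note this is the standard close-to-close measure on daily log returns, not an ATR-based norm; it is shown only as the usual benchmark:

```python
# Conventional annualized volatility: stddev of daily log returns scaled by
# sqrt(252). Because it is built on returns, it is already a
# percentage-of-price measure on a common scale.
import math

def annualized_volatility(closes):
    rets = [math.log(c2 / c1) for c1, c2 in zip(closes[:-1], closes[1:])]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)
    return math.sqrt(var) * math.sqrt(252)  # annualize the daily stddev
```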

And you may use standard mergebackadjusted futures without any problems.


Thanks for your response. Yes, I was using the term _market_ too loosely. I meant instrument. I understand that each instrument needs its own scale. I'm not actually interested (at this point) in comparing volatility between two or more instruments. I'm really only interested in being able to compare volatility on a single instrument to itself. I'm just wanting to make sure that historical comparisons are valid so that I can say, this instrument is in a quiet period or this instrument is in an extremely volatile period--compared to itself, its own history.

Absolute ATR seems problematic because volatility is, to some degree, a function of price. And if price was $100 ten years ago and $1000 today, then saying that today's ATR of 80 is more volatile than an ATR of 20 ten years ago might not really be correct. The volatility 10 years ago was 20% and today 8%. In absolute terms, volatility is greater today. In percentage terms it was greater 10 years ago.

As far as PnL per trade in futures we only care about the ticks of movement so ATR might be a reasonable thing to compare. But if we're concerned about the earning potential of our account or total risk on our trade, then a percentage of price would be an important measure.

ATR% (or any approach that is a percentage of price) is the only thing I can think of that normalizes the volatility for differences in an instrument's price across time.

So I remain confused by the idea that one could use just ATR, an _absolute_ measure, rather than some percentage of price to support comparisons of volatility across time.

However, it seems you're saying that so long as you annualize ATR, comparisons are valid? Maybe that's only in the context of measuring relative volatility between instruments?

Formulas for annualizing volatility like StdDev_daily * SQRT(252) still generally assume, I believe, that the stddev is expressed as a percentage of price (i.e., computed on returns). Maybe it doesn't have to be, but this seems like a fundamental part of making comparisons within a single instrument's history.

If I'm reading your charts right, your volatility distribution is based on one year of data, so you only know whether you're more or less volatile than some time during the last year. If you've been in, say, a quiet market, that whole time, then you have a limited understanding of the instrument's volatility. This is still valuable information, of course. I just don't think it applies quite to what I'm thinking about.

I feel like there's a lot I don't know on this subject and I don't want to prolong this thread because of my lack of understanding.

Do you know of any good books or articles on this subject that might be a good jumping off point?

Alright, I'd like to offer an update here on what I've learned.

First, yes, when comparing today's volatility to volatility in the same instrument some time in the past, it IS advisable to use a volatility metric that is a percentage of price. By and large, volatility is a function of price.

Let's look at an example to understand why:

In the previous image we compare two histograms. The top one is a histogram of the 20 day ATR for ES. The bottom histogram is the 20 day ATR% for ES. ATR% is just the 20 day ATR divided by the price times 100 to make it a percentage.

What we see is that ATR produces a much more washed-out distribution: it's wider. The ATR% distribution, by contrast, is narrower with fewer anomalies.

Let's look at another one. Here's wheat:

Again the top histogram (ATR) has some ugly spikes and really doesn't look like a well-behaved distribution at all.

The bottom histogram is a much better-formed distribution. This gives us some confidence that our comparisons of volatility across different underlying prices have some merit. By dividing by price we are effectively normalizing the volatility.

If you think about it, this makes sense. Just comparing ATR, you'd expect a washed out distribution as prices go up and down and ATR increases and decreases over time.

With the much more well-behaved ATR% distributions I can now define some volatility zones based on the distribution. For example,

* Quiet: Minimum to Mode (peak)
* Normal: Mode to Mean + 0.5 standard deviations
* Volatile: Mean + 0.5 SD to Mean + 3 SD
* Very Volatile: above Mean + 3 SD

That is somewhat arbitrary and one could experiment with those zones, but it's loosely based on Tharp's approach.
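As a sketch, the zones above could be coded like this, given summary statistics (mode, mean, stddev) measured from the instrument's own historical ATR% distribution. I'm reading "3 standard deviations" as mean + 3 SD:

```python
# Classify a bar's ATR% into the volatility zones listed above. The boundary
# statistics are assumed to come from the instrument's own ATR% history.

def classify_volatility(atr_pct, mode, mean, sd):
    if atr_pct <= mode:
        return "quiet"          # minimum up to the mode (peak)
    elif atr_pct <= mean + 0.5 * sd:
        return "normal"
    elif atr_pct <= mean + 3.0 * sd:
        return "volatile"
    else:
        return "very volatile"  # beyond mean + 3 SD
```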

Calculating ATR% for Futures:

Alright, here's the hard part and the question I was trying to answer with my original post:

The problem has been articulated earlier in this thread.

The solution was not trivial.

I was correct that you'll need a ratio adjusted continuous contract in order to calculate ATR%.

In NinjaTrader I was able to derive the ratio adjusted continuous contract from the normal panama back-adjusted contract (ES ##-##) and the unadjusted contract (ES 12-18).

Now, NinjaTrader does not make it easy to deal with the adjusted and unadjusted series at the same time. I had to use BarRequest to pull ES ##-## and ES 12-18 separately. Then I could loop through the bars, comparing adjacent price differences to determine where rollover days occurred and how large the offsets were. This gave me a list of ratio multiples from which I could produce the ratio-adjusted contract.

If you have questions about this, message me. I don't have time right now to go into more detail.

I calculated the 20-day ATR from the unadjusted series, multiplied it by the ratio multiple obtained when creating the ratio-adjusted contract, divided by the price in the ratio-adjusted contract, and finally multiplied by 100 to make it a percentage.
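The ratio-multiple derivation can be sketched in plain Python (this is not the actual NinjaScript, and it assumes both close series are aligned bar-for-bar and that any disagreement in day-to-day change occurs only at a roll). At a roll the panama series is continuous while the unadjusted series gaps, so the difference between the two day-to-day changes recovers the offset, and hence the ratio, between the old and new contracts:

```python
# Derive per-bar ratio multiples from a panama-backadjusted close series and
# the corresponding unadjusted close series. Earlier bars accumulate each
# roll's ratio; multiplying an unadjusted price by its multiple restates it
# on the current contract's scale.

def ratio_multiples(backadj_closes, unadj_closes):
    n = len(unadj_closes)
    mult = [1.0] * n
    for i in range(n - 1, 0, -1):
        gap_unadj = unadj_closes[i] - unadj_closes[i - 1]
        gap_back = backadj_closes[i] - backadj_closes[i - 1]
        offset = gap_unadj - gap_back        # nonzero only at a roll
        mult[i - 1] = mult[i]
        if abs(offset) > 1e-9:
            old_close = unadj_closes[i - 1]
            new_equiv = old_close + offset   # old close in new-contract terms
            mult[i - 1] *= new_equiv / old_close
    return mult
```

The ratio-adjusted close for each bar is then `unadj_closes[i] * mult[i]`, which is the denominator used in the ATR% calculation above.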

Then I created an indicator to analyze a long history of an instrument and capture its distribution and the basic metrics of that distribution which I stored in a file for that instrument. Here it is for ES:

Then I created an ATRPercent indicator. It plots the ATR%, but also reads the metrics from the file and classifies each bar's volatility based on the historical range of volatility for the instrument. Here is ATR Percent for ES.

Red is volatile. Blue is quiet. Gray is normal.

I hope this offers a roadmap for anyone else who might want to do this.
