
We can certainly do a backtest if you would like, but we are going to have to match trading strategies, platforms and data. Again, I am trying to understand why you don't believe that RAD contracts perform as intended. They preserve the ratio between contracts, as opposed to the point difference between contracts. Do you not believe this? If this property of the contracts is not correct, then Pinnacle and even CSI have some explaining to do.

If you believe this then I can calculate the percent gain/loss of each trade. If I can calculate the percent gain/loss of each trade I can calculate the average gain/loss for a sequence of trades.

@whipsaw: This is not an easy subject, but let me put forward a few ideas....

% Gain/Loss

The concept of a % gain/loss should not be applied to futures, but only to investments where the principal is fully paid, such as stocks or bonds. Futures trades are always leveraged, and there is no mathematical reason to compare the outcome of a trade to the value of the underlying commodity or the futures contract. Leverage is a function of volatility and not of absolute prices.

The logic for trading futures looks more like this:

(1) Amount of capital available -> (2) determine winning percentage and win/loss ratio (after slippage and commissions) -> (3) select an acceptable risk of ruin -> determine position size by taking (1), (2) and (3) into account. Position size will grow with the amount of capital available if you engage in fixed-fractional betting, but will remain unchanged if you do not adjust the position size over the life of the strategy. Both fixed-fractional and constant position sizing can be optimized with a Monte-Carlo simulation.
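The sizing logic above can be sketched with a small Monte-Carlo simulation. This is a minimal illustration, not anyone's actual tool; the win rate, payoff ratio, risk fraction and ruin threshold below are made-up parameters:

```python
import random

def p_ruin(win_rate, win_loss_ratio, risk_fraction, capital=100_000.0,
           trades=500, ruin_level=0.5, fixed_fractional=True,
           runs=1000, seed=42):
    """Monte-Carlo estimate of the probability that equity ever falls
    below ruin_level * starting capital over a sequence of trades."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(runs):
        equity = capital
        const_risk = capital * risk_fraction   # constant sizing: fixed $ risk
        for _ in range(trades):
            risk = equity * risk_fraction if fixed_fractional else const_risk
            if rng.random() < win_rate:
                equity += risk * win_loss_ratio
            else:
                equity -= risk
            if equity <= capital * ruin_level:
                ruined += 1
                break
    return ruined / runs

# Made-up system: 40% winners, 2:1 win/loss ratio, risking 2% per trade.
prob = p_ruin(win_rate=0.40, win_loss_ratio=2.0, risk_fraction=0.02)
```

Note that the only inputs are the dollar (here: fractional) results and the risk per trade; the absolute price of the commodity never appears, which is exactly the point made above.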

The absolute price of the commodity is not needed to calculate anything. The only input variables needed are

-> the exact $$$ results of the trades (including slippage, commission & rollover cost)
-> the exact $$$ risk incurred with each single trade

Appropriate data

If I use back-adjusted data, then I can perform a backtest of my strategy, but have to add slippage and commissions to get realistic results.
If I use non-adjusted data, then I can perform a backtest of my strategy, but have to add slippage, commissions and rolling costs to get realistic results.

Now, if I use ratio-adjusted data, I simply get distorted results. Let me construct a simple example.

long entry contract price (old contract) 900 points
max risk accepted: 100 points (stop loss at 800 points)
old contract on rollover day: 950 points
new contract on rollover day: 1000 points
long exit contract price (new contract) 1050 points

The long trade could have earned me 1050 - 900 - 51 (rollover cost + spread) - 1 (slippage) - 1 (spread) = 97 points. For trade evaluation I would have used the R multiple = 97 / 101 = 0.96. Of course, I can also calculate the % gain/loss in terms of the price of the futures contract. This would be 97 points divided by 900 points = 10.78%, a meaningless number, because it takes into account neither the downside nor the leverage due to the fact that I traded on margin.

The problem with the ratio-adjusted contract is that for rollover day, it will show a close of 1000 points (backadjusted to match the new contract price). To shift it to that level I need to apply a ratio of 1000/950 = 1.0526 to all data points of the old contract. Thus the ratio-adjusted entry price for my position will be shown as 900 * 1.0526 = 947.37 points. If I take these values my % gain/loss calculated from ratio-adjusted data would be 1050 - 947.37 - 2 (slippage and commissions) = 100.63 points divided by 947.37 points or 10.62%.
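The arithmetic of this example can be checked in a few lines (assuming the cost breakdown exactly as stated above: 51 points rollover cost + spread, 1 point slippage, 1 point spread on the raw trade, and 2 points slippage and commissions on the ratio-adjusted version):

```python
# Worked example from the post: raw vs ratio-adjusted % gain.
entry_old, roll_old, roll_new, exit_new = 900.0, 950.0, 1000.0, 1050.0

# Raw (non-adjusted) result: the rollover cost is the price jump
# between the old and the new contract on rollover day.
rollover_cost = roll_new - roll_old           # 50 points
costs = rollover_cost + 1 + 1 + 1             # + spread on roll, slippage, spread
raw_gain = exit_new - entry_old - costs       # 97 points
pct_raw = raw_gain / entry_old                # 10.78 %

# Ratio-adjusted entry: old prices scaled by roll_new / roll_old.
ratio = roll_new / roll_old                   # 1.0526
entry_rad = entry_old * ratio                 # 947.37 points
rad_gain = exit_new - entry_rad - 2           # 100.63 points
pct_rad = rad_gain / entry_rad                # 10.62 %
```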

If I look at the difference between the (correct) result of 10.78% and the slightly lower 10.62% calculated from ratio-adjusted data, then I can see that

-> the gain calculated from ratio-adjusted data was overstated (100.63 versus 97 points)
-> the value of the principal was also overstated (947.37 versus 900 points)

The return or ratio of the two higher values comes close to the correct value, but is not identical. It is true that you cannot calculate that return directly from standard back-adjusted data. But you can calculate the exact value from non-adjusted data, if you add rollover gains/losses to your trades.

Absolute values versus volatility

The main problem lies elsewhere. Volatility can be used as a proxy for risk. Volatility determines the downside of a trade and therefore is quintessential to calculate position sizing and return. However, the absolute value of a commodity cannot be used as a proxy for risk.

As an example, look at the ES and GC contracts. ES absolute value = 1825, annualized volatility = 11.2%. Compare this to GC with an absolute value of 1202, but an annualized volatility of 22.38%. The risk of trading GC for an equal amount of points is much higher than the risk of trading ES, although the absolute value of the contract is lower.
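A rough way to see this is to convert annualized % volatility into a typical one-day move in points. This is a sketch using the figures quoted above; big point values and margins are deliberately ignored:

```python
import math

def daily_point_move(price, annualized_vol, trading_days=252):
    """Typical one-day move in points implied by annualized % volatility."""
    return price * annualized_vol / math.sqrt(trading_days)

es_move = daily_point_move(1825, 0.112)    # ES: ~12.9 points per day
gc_move = daily_point_move(1202, 0.2238)   # GC: ~16.9 points per day
```

So although GC trades at a lower absolute price, it swings through more points per day than ES; the absolute contract value tells you nothing about that risk.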

Therefore I do not use %gain/loss and I do not need ratio-adjusted data.

Please correct me, if I am wrong ....

Last edited by Fat Tails; January 7th, 2014 at 07:01 AM.


You seem to agree that RAD contracts accurately (or closely enough) represent the ratio of prices for a price series, so I will proceed with that assumption. In case you don't agree, take a look at the percent drop in prices from the October 2, 1987 high to the October 20, 1987 low. At the time the drop was on the order of 45.5%. Most point-based back-adjusted contracts show the drop at 25% or lower. In other words, the importance of the decline is being minimized by successive rolls with the point-based back-adjusted contracts.
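A toy illustration of why point-based back-adjustment shrinks historical percentage drops while ratio adjustment preserves them. The prices, the cumulative roll offset and the cumulative roll ratio below are made up for the sketch, not actual 1987 S&P data:

```python
high, low = 330.0, 180.0
true_drop = (high - low) / high               # ~45.5 %

# Point-based back-adjustment: each roll ADDS a constant point offset
# to all earlier prices (illustrative cumulative offset of +400 points).
offset = 400.0
pb_drop = ((high + offset) - (low + offset)) / (high + offset)   # ~20.5 %

# Ratio-based back-adjustment: each roll MULTIPLIES earlier prices,
# so percent changes are preserved (illustrative cumulative ratio 3.2).
ratio = 3.2
rad_drop = (high * ratio - low * ratio) / (high * ratio)         # still ~45.5 %
```

The point difference is unchanged by an additive offset, but the percent drop shrinks as the offset grows; a multiplicative adjustment leaves the percent drop exactly intact.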

I think we may have different purposes for back-adjusted data. I am purely a system developer and am interested in determining whether a trading system is worth trading.

One of my key metrics is the average % gain/loss throughout the backtest period. I focus on this because it tells me how much money I would make in today's dollars: simply multiply the average percent gain by the market value and the big point value. This cannot accurately be done with point-based back-adjusted contracts. Any average gain/loss in % terms from a point-based back-adjusted contract is meaningless. Hence you cannot tell how much you would make or lose on average from your backtest. This cannot be overstated, and it is a major reason backtested performance deviates so much from live trading.
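The metric described above can be sketched as follows. The average % gain, price level and big point value are illustrative numbers, not results from any real backtest:

```python
def expectancy_todays_dollars(avg_pct_gain, current_price, big_point_value):
    """Average trade result restated at today's price level:
    average % gain * current market value * big point value."""
    return avg_pct_gain * current_price * big_point_value

# Illustrative: a system averaging +0.35% per trade, applied to an
# S&P 500 futures contract at 1825 with a $50 big point value.
avg_dollars = expectancy_todays_dollars(0.0035, 1825, 50)
```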

Additionally, the profit factor, % winners/losers, and other key ratio-based measures can be computed from RAD contracts. If you are not familiar with Thomas Stridsman's work on RAD contracts and backtesting, I strongly recommend "Trading Systems That Work".

Please understand that I use point-adjusted contracts when I need to know the dollar value of trades once the system goes live. However, I spend the majority of my time developing systems, and hence the majority of my time using RAD contracts. That's why, during system development, I couldn't care less how much money was made during a backtest. That sequence of trades will never occur again.

I am quite interested in researching, backtesting and automated trading. I focus on the longer-term picture (a la trend following); data quality and research software (your recommendations are welcome) are honestly my main headaches, because I want to be able to trust my research in the future.

With your own example of a long trade, but without costs (rollover, commission, slippage or otherwise, which I know isn't realistic but doesn't matter in this context):

                 Entry     Rollover   Exit
TimeLine:        T1        T2         T3

Point Values:
Old Contract:    900       950        Expired
New Contract:    NA        1000       1050
RAD Contract:    947.37    1000       1050

% Change:
Old Contract:    NA        5.56%      Expired
New Contract:    NA        NA         5.00%
RAD Contract:    NA        5.56%      5.00%

Calculation Method 2: RAD rollover, buy old contract @ RAD value.
@T1 Buy 947.37, @T3 Sell 1050. Result: (1050 - 947.37) / 947.37 = 10.83%

So the two calculation methods actually give the same result, as intended.
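That the two methods must agree can be verified directly: compounding the per-contract simple returns is algebraically identical to buying at the ratio-adjusted price, since (roll_old/entry) * (exit/roll_new) equals exit / (entry * roll_new/roll_old):

```python
entry_old, roll_old, roll_new, exit_new = 900.0, 950.0, 1000.0, 1050.0

# Per-contract simple returns over the life of the trade:
r_old = roll_old / entry_old - 1      # 5.56 % on the old contract
r_new = exit_new / roll_new - 1       # 5.00 % on the new contract
compounded = (1 + r_old) * (1 + r_new) - 1          # 10.83 %

# Method 2: buy the old contract at its ratio-adjusted value.
entry_rad = entry_old * roll_new / roll_old         # 947.37
rad_return = exit_new / entry_rad - 1               # 10.83 %
```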

And let me put forth an example to consider. Say you are testing a system trading 1 S&P 500 futures contract, and you want to see how a dollar-based stop loss would have done in the past, say a $1000 stop. Such a stop wouldn't make any sense in a historical context, since it is not going to be hit as often with the S&P 500 at 700 as it is with the S&P 500 at 1700. Only percentage-based stops, targets or triggers make sense historically as well as going forward.

Take for instance point-based volatility versus percentage-based. If one looks at historical point-based volatility, one would conclude that volatility has increased over the decades; however, this is not true. Annualized percentage volatility doesn't change that much, or at least does not drift perpetually. Percentage-based calculations are, so to speak, not affected by the price level, and are therefore the only volatility measure that makes sense for calculating position sizing.
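A minimal sketch of a percentage-based (log-return) volatility measure. Scaling every price by a constant leaves it unchanged, which is exactly the price-level independence described above (the price list is made up for illustration):

```python
import math

def annualized_vol(closes, trading_days=252):
    """Annualized volatility from daily log returns (price-level independent)."""
    rets = [math.log(b / a) for a, b in zip(closes, closes[1:])]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)
    return math.sqrt(var) * math.sqrt(trading_days)

prices = [700, 707, 699, 712, 705, 710]
# Multiplying every price by 2.5 does not change the measure:
assert abs(annualized_vol(prices) - annualized_vol([p * 2.5 for p in prices])) < 1e-12
```

A point-based measure (e.g. average absolute point change) would instead grow 2.5-fold under the same rescaling.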

Anyway, I think it is a very relevant discussion, because one has to have faith in one's backtesting results.

L. Black,


Total Result = (1 + 5.56%) * (1 + 5.0%) - 1 = 10.84 % (without commissions and slippage)

This calculation supposes that the amount invested has increased from 950 to 1000 after rollover. In fact this is not the case, as the margin requirement did not increase. The problem with futures is that they are traded on margin and not fully paid. Therefore the result 10.84% has no meaning.

Why would I want to know something which is meaningless?

My return on investment should be calculated based on the margin for the futures trade. For the sake of simplification, it is assumed that the margin is constant throughout the trade. By margin I do not mean the margin required by the broker, but the capital set aside to cover the risk of the trade (including drawdowns).

Because the margin requirement does not change when a contract is rolled, one should actually simply add up the results and divide them by the margin. That would give a return of 5.56% + 5.0% = 10.56%, which needs to be multiplied by the initial leverage factor (ILF).

You are right, if you consider that you had the equivalent of 947.37 points invested.

But you never invested this amount; your investment was the initial margin, which is not affected by our rollover arithmetic. Moreover, when using 947.37 as the reference price, you would find a higher leverage factor compared to the first case, and then again a higher return on investment. Therefore the return would be

which is quite different from the 10.83% that was achieved from the trade.

The main problem is not the calculation. The main problem is that we try to use a tool - return on investment - which cannot be easily used that way with futures. The principal is not paid here, and therefore any returns generated need to be evaluated relative to volatility (downside risk) and the amount actually paid as margin.

When rolling a contract, neither the margin nor the underlying risk changes. The gain obtained prior to rollover is not reinvested, and there is no reason to increase the denominator to account for the higher nominal price of the new front month when calculating a ROI.
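The margin-based view can be sketched as follows. The 300-point margin is an arbitrary assumption for illustration; the point is that results are summed (not compounded) and evaluated against the margin, not against the contract price:

```python
# Point results of the rolled long trade from the earlier example.
gain_old = 950.0 - 900.0        # 50 points earned on the old contract
gain_new = 1050.0 - 1000.0      # 50 points earned on the new contract

# Capital set aside to cover the risk of the trade (assumed, in points).
margin_points = 300.0

# Results are simply added (gains are not reinvested at rollover)
# and divided by the margin:
roi_on_margin = (gain_old + gain_new) / margin_points   # ~33.3 %

# Equivalent formulation from the post: price-based returns summed and
# multiplied by the initial leverage factor (notional / margin).
ilf = 900.0 / margin_points                             # initial leverage = 3.0
roi_via_ilf = (50.0 / 900.0 + 50.0 / 1000.0) * ilf      # ~31.7 %
```

The two figures differ slightly because the ILF is fixed at the entry notional of 900 while the second leg trades at a notional of 1000; under the constant-margin assumption, the direct points-over-margin number is the exact one.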