Currently I create the code, backtest on 6 months' worth of data, optimize over that 6-month period, and then see whether those optimized settings hold up in a backtest on earlier historical data for the same instrument. I know some of you guys will say 'read the fucking literature' and I hear you loud and clear, I will, but I'm keen to hear what people think of that simple method. Am I curve-fitting to the initial 6-month test period and therefore destined to fail on other historical data? Just want to make sure I'm not barking up the wrong tree. Peace.
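For anyone who wants to try that workflow concretely, here is a minimal Python sketch of the same idea: optimize a toy strategy's parameters on the recent window, then check the chosen settings on the earlier data. Everything here is hypothetical and not from the OP — the synthetic random-walk prices, the `backtest` helper, the moving-average crossover, and the parameter grid are all stand-ins for whatever your real code does.

```python
import numpy as np

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))  # synthetic daily closes

def backtest(prices, fast, slow):
    """Toy P&L of a moving-average crossover: long when fast MA > slow MA."""
    fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    n = min(len(fast_ma), len(slow_ma))
    signal = np.where(fast_ma[-n:] > slow_ma[-n:], 1, -1)
    returns = np.diff(prices[-n:])
    return float(np.sum(signal[:-1] * returns))

# Split: optimize on the recent half, then test on the earlier period.
train, test = prices[250:], prices[:250]
grid = [(f, s) for f in (5, 10, 20) for s in (30, 50, 100)]
best = max(grid, key=lambda p: backtest(train, *p))
print("best params on train:", best)
print("P&L on earlier (out-of-sample) data:", backtest(test, *best))
```

The out-of-sample number is the one that matters; if it collapses relative to the optimized in-sample result, that's the curve-fitting signal the OP is worried about.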
Some people argue there's no regime shift between the two periods, so it doesn't matter if you optimize on recent data (e.g. 2018) and test on earlier data (e.g. 2017). In my experience, though, working strategies tend to show decaying convexity, i.e. make less money over time, so I prefer to go the other way, among other reasons.
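That "make less money over time" effect can be illustrated with synthetic data. This sketch is purely illustrative — the decay rate, noise level, and P&L numbers are all made up — but it shows why, for a decaying strategy, the older half of the record looks better than the recent half, which is what makes the direction of the train/test split matter:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic daily strategy P&L whose edge shrinks linearly over two years.
days = np.arange(500)
edge = 0.2 - 0.0004 * days          # alpha decays toward zero
daily_pnl = edge + rng.normal(0, 0.2, 500)

older, recent = daily_pnl[:250], daily_pnl[250:]
print(f"mean daily P&L, older half : {older.mean():.3f}")
print(f"mean daily P&L, recent half: {recent.mean():.3f}")
```

If the edge decays, optimizing on recent data and testing on older data gives the more optimistic-looking ordering, which is one reason to prefer the reverse.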
It depends on what the market conditions were during that 6-month window. If those 6 months were volatile or a bear market, and the system made a good amount of profit, I would start to worry about what happens when conditions return to normal.
Philosophically, it's just a byproduct of no-free-lunch: if a strategy makes money, chances are more people will find the same edge and get better at competing away your alpha over time, conditioned on the strategy staying fixed.
But quantitatively, I've seen this manifest in many different ways. In some cases the intuition is pretty direct: the number of mispriced orders goes down, so you explicitly see your volume decay over time. I actually see this so often that I have plots like that all over my desktop, where a strategy starts out trading maybe 80-150k contracts per day and drops down to 40k.
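A quick way to quantify that kind of volume decay — using synthetic numbers here, roughly matching the 120k-to-40k shape the poster describes — is to compare early versus late average daily volume, and optionally fit a log-linear decay to estimate a half-life:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic daily contract volume: starts near 120k/day, decays toward ~40k.
days = np.arange(750)
volume = 40_000 + 80_000 * np.exp(-days / 200) + rng.normal(0, 3_000, 750)

first_month = volume[:21].mean()   # ~21 trading days
last_month = volume[-21:].mean()
print(f"avg daily volume, first month: {first_month:,.0f}")
print(f"avg daily volume, last month:  {last_month:,.0f}")

# Rough half-life estimate: log-linear fit of the excess over the late floor.
excess = np.clip(volume - last_month, 1.0, None)
slope, _ = np.polyfit(days, np.log(excess), 1)
print(f"fitted decay half-life: {np.log(2) / -slope:.0f} days")
```

On real fill data you'd use actual daily contract counts in place of the synthetic series; the half-life gives you a rough clock on how long the edge has left.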
Thanks. I agree on the competition aspect. Another thing I see is that many people expect a backtest to continue at the same pace. Since there is some "good luck" built into every backtest, if that random good luck turns into bad luck, the strategy will level off. The same end result occurs due to the biases we bake into every backtest (and try like heck to avoid adding!). Profit due to those biases tends to go away on future unseen data...
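One way to put a number on how much "good luck" a single backtest can contain is to bootstrap the trade-level P&L. All figures below are synthetic — a made-up series of trades with a small real edge and a lot of noise — but the spread of the resampled totals shows how far chance alone could move the headline number:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic per-trade P&L from a backtest: small real edge, lots of noise.
trades = rng.normal(0.05, 1.0, 400)

# Resample the trades with replacement many times; the spread of the
# resampled totals shows how much of the result chance could account for.
totals = np.array([rng.choice(trades, size=len(trades), replace=True).sum()
                   for _ in range(2000)])
lo, hi = np.percentile(totals, [5, 95])
print(f"backtest total P&L: {trades.sum():.1f}")
print(f"90% bootstrap range: [{lo:.1f}, {hi:.1f}]")
```

If the low end of that range is near zero or negative, a fair chunk of the backtest's profit is consistent with luck, which is exactly the leveling-off risk described above.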
Hm, to avoid confusion: all of my statements above apply only to the out-of-sample set on the first try, so there's no additional overfitting baked in over time. But certainly, depending on the information leakage, that can also cause error when generalizing into the future.
To fix this I use non-correlated conditions combined with a linear operator. That keeps any one condition from bending out of usefulness and also usually keeps the strategy from decaying as fast over time. I think of it as wrapping the market around my strategy rather than wrapping the strategy around the market. It also takes a little more advanced math, and most traders won't take their strategies that far, which buys more time before decay sets in.
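The poster doesn't spell out his math, but one common reading of "non-correlated conditions with a linear operator" is an ordinary least-squares combination of several roughly uncorrelated signals. This sketch is an assumption on my part, not his actual method — the signals, returns, and weights are all synthetic — but it shows the idea of spreading the load so no single condition dominates:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
# Three synthetic, roughly uncorrelated conditions (entry signals).
signals = rng.normal(size=(n, 3))
# Synthetic next-bar returns: a linear mix of the conditions plus noise.
true_weights = np.array([0.5, 0.3, 0.2])
returns = signals @ true_weights + rng.normal(0, 1.0, n)

# The "linear operator": least-squares weights across the conditions, so no
# single condition dominates and a drifting condition degrades gracefully.
weights, *_ = np.linalg.lstsq(signals, returns, rcond=None)
combined = signals @ weights
corr = float(np.corrcoef(combined, returns)[0, 1])
print("fitted weights:", np.round(weights, 2))
print(f"corr(combined signal, returns): {corr:.2f}")
```

Because the conditions are uncorrelated, one of them drifting out of usefulness only shrinks its own weight's contribution instead of breaking the whole combination — which matches the slower-decay claim above.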