NexusFi: Find Your Edge


During development, when do you decide to quit or keep coding a Strategy? See Equity


Discussion in Traders Hideout





 

  #1 (permalink)
Dvdkite
Trieste Italy
 
Posts: 162 since Feb 2018
Thanks Given: 131
Thanks Received: 25

Hello guys,

Once again I would like to hear some opinions about the strategy development process.

I started with an idea and chose the years 2016-2017 to code and test the first rough strategy for the FDAX in MultiCharts, as usual. I left out 2018 because I want untouched data in case I go on to perform a walk-forward optimization and need out-of-sample data.
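For reference, a walk-forward optimization just rolls paired in-sample/out-of-sample windows through the history. Here is a minimal sketch of generating such windows in Python (the window sizes and the function name are illustrative assumptions, nothing more):

Code:
import pandas as pd

def walk_forward_windows(index, train_years=2, test_years=1):
    """Yield ((train_start, train_end), (test_start, test_end)) pairs
    that roll forward through the data. `index` is assumed to be a
    sorted pd.DatetimeIndex of bar timestamps."""
    start, end = index.min(), index.max()
    while True:
        train_end = start + pd.DateOffset(years=train_years)
        test_end = train_end + pd.DateOffset(years=test_years)
        if train_end >= end:
            break
        yield (start, train_end), (train_end, min(test_end, end))
        # Slide the whole window forward by one out-of-sample period.
        start = start + pd.DateOffset(years=test_years)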

OK, the idea was built around Bollinger Bands using an unusual 90-minute timeframe on a regular bar chart. In 2016/17 it performed very well in the backtest, so I tried adding more years going backwards. I added 2014/2015 and the system was still "good enough" to keep developing. Then I tried, on the fly, adding another three years, 2011-2013, and in those years the results were horrible.

So here is the equity line:

[attached image: equity curve of the backtest]
Please keep in mind that I've just started developing this strategy; after a day and a half of coding and testing I came up with this "mid-development" result. There's no way to fix those three horrible years just by optimizing the strategy, but from 2014 to the end of 2017 it looks promising.

I'm sure that most of you (automated system traders) have seen a lot of situations like this. I'm curious to know when you reach the moment where you say, "OK, I'll abandon this idea."
Or, in case you really like that 2014-2017 period, would you spend more time coding and RISK overfitting the system so it looks good across all the years?

I'm interested in hearing workflow suggestions from experts for CASES LIKE THIS.

Thanks in advance,

David

  #3 (permalink)
 
TradingOgre
Evans GA/USA
Legendary Market Wizard
 
Experience: Intermediate
Platform: NinjaTrader
Broker: NinjaTrader Brokerage - Philip Capital
Trading: NQ,ES,6E,CL
Posts: 556 since Jul 2018
Thanks Given: 908
Thanks Received: 1,672


If you can, view the data from 2011-2013 on a chart. It's possible that the data is just incomplete. Sometimes the further you go back the less reliable the historical data becomes. You may find periods where multiple candles are missing.
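A quick way to scan for that kind of incompleteness (a sketch, assuming the bars sit in a pandas DataFrame indexed by timestamp; the function name and session hours are illustrative):

Code:
import pandas as pd

def find_bar_gaps(bars, freq="90min", session_hours=(8, 22)):
    """Flag bars that arrive later after their predecessor than expected.
    Assumes `bars` is indexed by a sorted pd.DatetimeIndex."""
    expected = pd.Timedelta(freq)
    deltas = bars.index.to_series().diff()
    gaps = deltas[deltas > expected]
    # Ignore gaps starting outside regular session hours, so normal
    # overnight and weekend breaks are not flagged as missing data.
    hours = gaps.index.to_series().dt.hour
    return gaps[hours.between(*session_hours).values]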

Most systems will go through periods where they just don't perform well. Markets change but the system doesn't; it is an unfortunate fact of automated systems. You can either attempt to ride through the rough patches, or figure out a way to recognize when it is happening and then either switch strategies or just switch the system off and trade manually. Although I think it is possible to have a strategy that is flexible enough to adjust to changing times, I also think that getting there is a long and tough journey. I have yet to create one that is adaptive enough to handle the dynamics of trading. That doesn't mean I am giving up.

I only stop working with a strategy when I can see that it has a high frequency of bad trades.

  #4 (permalink)
 
xplorer
London UK
Site Moderator
 
Experience: Beginner
Platform: CQG
Broker: S5
Trading: Futures
Posts: 5,978 since Sep 2015
Thanks Given: 15,502
Thanks Received: 15,403

I'm sure that FIO's resident guru on the subject, @kevinkdog can share his insight...

  #5 (permalink)
kevinkdog is a Vendor
 
Posts: 3,666 since Jul 2012
Thanks Given: 1,892
Thanks Received: 7,360


Originally Posted by xplorer:
I'm sure that FIO's resident guru on the subject, @kevinkdog can share his insight...

The way I do things is different from what the OP described, but basically it is almost never a good idea to take a system tested on all the data and then try to make it better, or try to engineer out the bad trades/days/months/years. You'll end up with better backtest results, for sure, but that does not (usually) translate to better live results.

Personally, if I had a walk-forward equity curve that looked like that (I believe some/most of yours is optimized, so we are talking apples and oranges here), I would evaluate the curve as is (no rule changes) and, if it passed my criteria, go to my next step. If it failed because of that drawdown period, I'd move on to the next strategy idea.

Hope this helps!

  #6 (permalink)
 tpredictor 
North Carolina
 
Experience: Beginner
Platform: NinjaTrader, Tradestation
Trading: es
Posts: 644 since Nov 2011

Think more and think deeper.

There are three basic ways to approach a question like this: (1) a test, i.e. rejecting a null hypothesis; (2) data discovery, i.e. studying the data to gain insight; and (3) model driven, i.e. building a causal model. I might recommend looking into Judea Pearl's "The Book of Why", or perhaps watching this video of a lecture he gave, as a way of starting to think about these sorts of questions. There is no simple answer to these sorts of problems.

However, a more practical set of thoughts might be to consider why certain developers follow certain methodologies, their underlying assumptions, and when they might or might not work:
  1. Backtest over a long history. Why? Because if a system works over a long period of time, it suggests you may have found a persistent or stable edge.
  2. Validate with walk-forward optimization. Why? Because if your system can adapt to changing dynamics, it suggests the general principle behind the system may be valid, a form of generalization, even if the specifics change.
  3. Backtest over the most recent data only. Why? Because it suggests you may be able to take advantage of current dynamics or unfolding uncertainties. The objective is to exploit current market dynamics. You are not seeking a stable or persistent edge but taking advantage of a dynamically unfolding situation. Ask what changed.
  4. Use a 3-split approach. A reasonable compromise is a 3-split data set (see the sketch after this list). One part you are allowed to optimize on and test against. The second split is your active validation: you evaluate on it but do not optimize over it. The third split is your final verification, or hold-out, which you do not look at.
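A minimal sketch of such a 3-way chronological split (the proportions are arbitrary placeholders, not a recommendation):

Code:
def three_way_split(bars, train=0.5, test=0.3):
    """Split bar data chronologically into train / test / hold-out sets.
    Optimize only on the first, evaluate (never optimize) on the second,
    and touch the third exactly once, as the final verification.
    `bars` is assumed to be a pandas DataFrame sorted by time."""
    n = len(bars)
    i, j = int(n * train), int(n * (train + test))
    return bars.iloc[:i], bars.iloc[i:j], bars.iloc[j:]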

I honestly think your general approach is natural but flawed. It is a natural thing to do, and I am sure I have done such things myself when working on systems. But unless you set up some requirements in advance, or unless you are developing in order to learn from the data, it looks more like you might be playing mental games than actually testing an idea.

As an aside, I do not always agree with the "data blind" approach. I think the data-mining approach has value when approached with the proper orientation. Basically, you have to understand the orientation or methodology you are following, and why or how it makes sense or not. If you are trying to understand characteristics of the data that will help you discover an edge, this exploratory approach makes more sense. On the other hand, if you are trying to test an idea in the form of a null-hypothesis rejection, then you need to set up some requirements up front. And that is what I mean by playing games: you could go back another few years, and if the system did well you might change your mind again, while if it did poorly you might reject it; either way, the decision criteria were never fixed in advance.

  #7 (permalink)
kspreier
Twin Falls ID
 
Posts: 7 since Sep 2018
Thanks Given: 7
Thanks Received: 3


Originally Posted by tpredictor:
...The third split is your final verification, or hold-out, which you do not look at.

Why wouldn't you evaluate the third split, especially the losses? I don't see the value of ignoring potential bugs.

  #8 (permalink)
 Koepisch 
@ Germany
 
Experience: Beginner
Platform: NinjaTrader
Broker: Mirus Futures/Zen-Fire
Trading: FDAX
Posts: 569 since Nov 2011
Thanks Given: 440
Thanks Received: 518

Hi,

The equity curve isn't as bad as you think. Use an equity-curve filter (a simple SMA of the equity curve) and see if you can filter out most of the drawdowns; there's a minimal sketch of the idea below.

Regards,
Koepisch
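A minimal sketch of that filter (the lookback and names are illustrative; the hypothetical equity curve is always updated with every system trade, but real trades are only taken while it sits above its SMA):

Code:
import pandas as pd

def equity_curve_filter(trade_pnl, lookback=20):
    """Return 1/0 per trade: 1 = take the next trade, 0 = skip it.
    `trade_pnl` is assumed to be a pd.Series of per-trade profits in
    chronological order."""
    equity = trade_pnl.cumsum()
    sma = equity.rolling(lookback).mean()
    # Shift by one so the decision for trade N only uses information
    # available after trade N-1 has closed.
    take = (equity > sma).shift(1).fillna(False)
    return take.astype(int)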

  #9 (permalink)
 tpredictor 
North Carolina
 
Experience: Beginner
Platform: NinjaTrader, Tradestation
Trading: es
Posts: 644 since Nov 2011

Originally Posted by kspreier:
Why wouldn't you evaluate the third split, especially the losses? I don't see the value of ignoring potential bugs.

Well, the third split is your validation split. Right, the 3-split approach (train, test, validate) is useful because it allows you to learn from the data while still holding out a split to minimize data leakage. So it is a compromise approach that lets you learn from the data, which I think has a lot of value, while still keeping an out-of-sample "ground truth" set.

For example, you build your strategy and optimize it over your training set (1). You then verify it against your test set (2). It looks pretty good, but you learn that you had steep losses on Mondays, so you add a rule to prevent trading on Mondays. Now everything looks good, but there is no ground truth as to whether your results generalize or will hold. So your final "ground truth" check is your final verification set (3); see the sketch below.

But keep in mind that all of these are forms of games, which probably hinge as much on the quality of the original idea (or on the quality of the insights gained from data exploration, if using that approach); the actual value of any method is in the results.
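To make the Monday example concrete, a sketch (all names are hypothetical; `holdout_pnl` stands for the per-trade results of the final split):

Code:
def drop_monday_trades(trade_pnl):
    """Drop trades entered on Mondays (dayofweek == 0). `trade_pnl` is
    assumed to be per-trade PnL indexed by a pandas DatetimeIndex of
    entry timestamps."""
    return trade_pnl[trade_pnl.index.dayofweek != 0]

def summarize(trade_pnl):
    """A few summary stats for comparing splits."""
    equity = trade_pnl.cumsum()
    drawdown = equity - equity.cummax()
    return {"net": equity.iloc[-1],
            "max_dd": drawdown.min(),
            "trades": len(trade_pnl)}

# Tune on split 1, discover and add the rule on split 2, then run the
# final rule set exactly once on split 3:
#     summarize(drop_monday_trades(holdout_pnl))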


  #10 (permalink)
kspreier
Twin Falls ID
 
Posts: 7 since Sep 2018
Thanks Given: 7
Thanks Received: 3


Thanks for clarifying. That makes sense.

To answer the original question, I use backtesting mostly for debugging code and for some optimization. I'm a proponent of out-of-sample tests, but the optimizations have to be logical as well.

I put a lot of emphasis on small-volume live testing, and will discard a losing robot at that point. It costs me money, but that's the only way I know how to estimate slippage. Of course, if optimizations won't yield paper profits, I wouldn't venture a live test.
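One simple way to turn those small live trades into a slippage number (a sketch; the record format is made up):

Code:
def average_slippage(fills):
    """Estimate average per-side slippage in points from live fills.
    `fills` is assumed to be a list of dicts such as
    {"side": +1, "intended": 12034.5, "filled": 12035.0}, where
    `intended` is the price the backtest assumed and `side` is +1 for
    buys, -1 for sells. A positive result means the real fills are
    worse than the backtest modeled."""
    costs = [(f["filled"] - f["intended"]) * f["side"] for f in fills]
    return sum(costs) / len(costs)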





Last Updated on October 4, 2018

