
# Taking a Trading System Live


kevinkdog

Posts: 3,004 since Jul 2012

swz168
I've done a Monte Carlo simulation with the data Kevin provided me. I'm currently building up my own Monte Carlo risk model for trading, so for now I can only provide the metrics below. As already mentioned, I use the Excel add-on @risk.

The first two simulations have the following inputs:

Data: 614 trades
Distribution: discrete (I will do a simulation with a fitted distribution another time)
Iterations: 10,000
Initial capital: 10,000

Let's have a look at the standard risk scenario (Sim 1). It tells us: the probability that the equity will fall below 10,990 is 5% (after 100 trades done). Or: the probability that the equity will exceed 10,990 is 95%.

The last two lines show the unexpected risk: 95% is the standard scenario risk, and 99% the stress scenario risk. I have named it Value at Risk "Drawdown" (VaR-DD) because I compare it to the real drawdown. This figure gives an orientation for when to stop a strategy in real trading (depending on your own risk appetite). Note: the calculation is totally different from a real drawdown calculation!

VaR-DD has a weakness: the more trades, the smaller the VaR-DD (if the strategy shows a positive expectancy). So I have to compare simulations with a smaller number of trades against simulations with a higher number of trades (Sim 1 and Sim 2). Simulations with a smaller number of trades are absolutely necessary; they give you more detail about the risk, because if you have a positive expectancy, the end values of a Monte Carlo simulation will naturally be better with a higher number of trades.

Interpretation example for VaR-DD: the probability that the drawdown will fall below 39% is 5%.

The most important figure for me is the VaR-DD. All the other metrics are more or less playing with numbers.

@Kevin: I've first done a discrete distribution simulation so that we can directly compare our figures. Could you do the same simulation with your method and the above inputs, so we can compare?

Thanks for the analysis! I can't see your chart, though.

I ran my simulator to evaluate these statements:

"The probability that the equity will fall below 10,990 is 5% (after 100 trades done)."

I assume you mean end equity, not equity at any point during the 100 trades. If so, I got 126 cases with equity below \$10,990, out of 2500 runs, which is 5.04%. So, I'd say we agree on that point!

"The probability that the drawdown will fall below 39% is 5%."

I assume you mean the max drawdown will be less than 39%. I get 104 cases with a max drawdown of 39% or larger, which is 4.2%. That is close to, but smaller than, your 5% number. For my purposes at least (realizing that this is not a perfect science), I'd say that is a match.
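For readers who want to reproduce this kind of check, here is a minimal stdlib-only sketch of a discrete-resampling Monte Carlo. The trade list is invented for illustration (the thread's actual 614-trade P/L data is not reproduced here); only the resampling mechanics match the discussion.

```python
import random

# Hypothetical trade P/L list standing in for the real 614-trade history.
trades = [120, -50, 80, -885, 60, 200, -30, 45, -110, 75,
          90, 130, 55, 160, -40, 70, 85, 110, -65, 95]

def simulate_runs(trades, n_trades=100, n_runs=2500,
                  start_equity=10_000, seed=42):
    """Resample trades with replacement (the 'discrete distribution')
    and return end equities and max drawdowns (as a fraction of peak)."""
    rng = random.Random(seed)
    end_equities, max_drawdowns = [], []
    for _ in range(n_runs):
        equity = peak = float(start_equity)
        max_dd = 0.0
        for _ in range(n_trades):
            equity += rng.choice(trades)
            peak = max(peak, equity)
            max_dd = max(max_dd, (peak - equity) / peak)
        end_equities.append(equity)
        max_drawdowns.append(max_dd)
    return end_equities, max_drawdowns

ends, dds = simulate_runs(trades)
p_below = sum(e < 10_990 for e in ends) / len(ends)   # the end-equity check
p_dd_39 = sum(dd >= 0.39 for dd in dds) / len(dds)    # the VaR-DD check
```

With the real trade file loaded into `trades`, `p_below` and `p_dd_39` are the two probabilities being compared in this exchange.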

THANKS AGAIN!!


kevinkdog

Posts: 3,004 since Jul 2012

Just to clarify, when I say Monte Carlo is "not a perfect science" I don't mean to disparage it or the results. I just know that with all the assumptions that go into it, I can't expect the answers to be exact. So I put a tolerance band around any results (at least in my head). If Monte Carlo says something has a 50% chance of happening, I assume the "real" number could be anywhere from, say, 40 to 60%, but it likely won't be 10% or 90%. My point is that I rely on the answers provided, but I do not expect perfection from them.

swz168
Nuremberg, Germany

Experience: None
Platform: MultiCharts

Posts: 49 since Jun 2010

kevinkdog
 "The probability that the drawdown will fall below 39% is 5%."

Your assumptions are correct. I've corrected my text to: The probability that the drawdown is 39% or larger is 5%.

Since my simulation used a discrete distribution, the results shouldn't differ much from yours (otherwise the random function in Excel would be a total failure). Good to see that this is the case.

When I have more time, I will do a fitted distribution simulation. The results will depend on how you fit the data. This method is good if your strategy doesn't make many trades, where a discrete distribution simulation often doesn't give you useful results.


swz168
Nuremberg, Germany

Experience: None
Platform: MultiCharts

Posts: 49 since Jun 2010

kevinkdog
 Just to clarify, when I say Monte Carlo is "not a perfect science" I don't mean to disparage it or the results. I just know that with all the assumptions that go into it, I can't expect the answers to be exact. So I put a tolerance band around any results (at least in my head). If Monte Carlo says something has a 50% chance of happening, I assume the "real" number could be anywhere from, say, 40 to 60%, but it likely won't be 10% or 90%. My point is that I rely on the answers provided, but I do not expect perfection from them.

Totally agree. Especially in trading, prediction is very hard. Besides that, it is hard to know how curve-fitted one's strategy is, so the end values (equity) of simulations can be total garbage. That is why I mainly focus on risk metrics when I do Monte Carlo simulations.


kevinkdog

Posts: 3,004 since Jul 2012

swz168
 Your assumptions are correct. I've corrected my text to: The probability that the drawdown is 39% or larger is 5%. Since my simulation used a discrete distribution, the results shouldn't differ much from yours (otherwise the random function in Excel would be a total failure). Good to see that this is the case. When I have more time, I will do a fitted distribution simulation. The results will depend on how you fit the data. This method is good if your strategy doesn't make many trades, where a discrete distribution simulation often doesn't give you useful results.

When you do this, could you explain for everyone what exactly the difference is between a "fitted distribution" and a "discrete distribution," and maybe the general process for determining the fitted model?


swz168
Nuremberg, Germany

Experience: None
Platform: MultiCharts

Posts: 49 since Jun 2010

kevinkdog
 When you do this, could you explain for everyone what exactly the difference is between a "fitted distribution" and a "discrete distribution," and maybe the general process for determining the fitted model?

A discrete distribution contains a finite number of trades showing the profits or losses. Each trade can be drawn with equal probability. If you have no trade with, for example, 50 USD profit, then a trade with 50 USD profit will never occur in your simulation. (This is what you currently do, and what my simulation above does.)

A fitted distribution is derived from the discrete distribution. You search for a distribution model that best fits your data. In other words, you make assumptions about what your distribution probably looks like: you overlay a distribution model on your data. Based on that model, a 50 USD profit trade can also be drawn with a certain probability, which wasn't possible before. So all profits/losses in between the trades from the discrete data, and even bigger wins or losses, can be drawn, as in reality. If your distribution assumptions are good, then your simulation quality will be good, even if you only have a few trades of data.

I will make some screenshots, then it will be clearer.
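As a sketch of the idea with the Python standard library: fit a logistic model to the trade P/L and draw new trades from its inverse CDF. The trade numbers are invented, and the simple method-of-moments fit stands in for @risk's fitting routines.

```python
import math
import random
import statistics

# Hypothetical P/L sample; the thread's actual trade data isn't shown here.
trades = [120, -50, 80, -885, 60, 200, -30, 45, -110, 75]

# Method-of-moments logistic fit:
#   mean = mu,  variance = s^2 * pi^2 / 3  =>  s = stdev * sqrt(3) / pi
mu = statistics.mean(trades)
s = statistics.stdev(trades) * math.sqrt(3) / math.pi

def draw_trade(rng):
    """Sample one P/L from the fitted model via the logistic inverse CDF."""
    u = rng.random()
    return mu + s * math.log(u / (1.0 - u))

rng = random.Random(1)
simulated = [draw_trade(rng) for _ in range(1000)]
# Unlike discrete resampling, these draws can produce values (say, a
# 50 USD profit, or a loss beyond the historical worst) that never
# occurred in the original data.
```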


kevinkdog

Posts: 3,004 since Jul 2012

swz168
 A discrete distribution contains a finite number of trades showing the profits or losses. Each trade can be drawn with equal probability. If you have no trade with, for example, 50 USD profit, then a trade with 50 USD profit will never occur in your simulation. (This is what you currently do, and what my simulation above does.) A fitted distribution is derived from the discrete distribution. You search for a distribution model that best fits your data. In other words, you make assumptions about what your distribution probably looks like: you overlay a distribution model on your data. Based on that model, a 50 USD profit trade can also be drawn with a certain probability, which wasn't possible before. So all profits/losses in between the trades from the discrete data, and even bigger wins or losses, can be drawn, as in reality. If your distribution assumptions are good, then your simulation quality will be good, even if you only have a few trades of data. I will make some screenshots, then it will be clearer.

Thanks. As you mentioned, the way I do it is simpler than the fitted distribution. It will be very interesting to see the "cost" of that simplicity (i.e., less reliable results, etc.).

Nicolas11
near Paris, France

Experience: Beginner
Platform: -

Posts: 1,071 since Aug 2011

Just a word to thank @kevinkdog for having initiated this thread, and the other futures.io (formerly BMT) fellow members for their high-quality contributions. I find this thread of the utmost interest; it really shows a professional view of some key aspects of trading (within the retail world). Thanks again!

Nicolas

swz168
Nuremberg, Germany

Experience: None
Platform: MultiCharts

Posts: 49 since Jun 2010

kevinkdog
 I can't see your chart, though.

Strange, at home I could see the image with the numbers without any problem.
I will try to upload it again later.

Edit: Updated Post# 99. Hope you can see the table now.


swz168
Nuremberg, Germany

Experience: None
Platform: MultiCharts

Posts: 49 since Jun 2010

kevinkdog
 Thanks. As you mentioned, the way I do it is simpler than the fitted distribution. It will be very interesting to see the "cost" of that simplicity (i.e., less reliable results, etc.).

Since your data includes 614 trades, I don't see any problem with using the "simpler" simulation. Besides, we have already talked about all the uncertainty and assumptions, so I would say that a simple simulation is good enough for trading purposes if the data set is big enough, as in your case.

It would be different if we had only 50 historical trades. Then I wouldn't do a simple simulation, because the results could be questioned a lot.

The fitting of data to a distribution model can all be done in @risk with minimum effort. To see how well the data fits a model, there are several statistical tests (such as the Kolmogorov–Smirnov test, the Chi-squared test, or the Anderson–Darling test).

I still have to find out which distribution model and which statistical fitting tests I trust most for trading.

Below you see the distribution of the 614 trades and the fit to a logistic distribution:

And a picture showing how good the fitting quality is (the more similar the red and blue lines, the better):

Now, for comparison, I randomly drew 50 trades from your discrete distribution. This is the 50-trade distribution and its fit:

As you can see, even though I only have 50 trades, the result comes close to the simulation with 614 trades. Without the fitting we wouldn't be able to do a serious Monte Carlo simulation.
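The "how similar are the red and blue lines" check corresponds to the Kolmogorov–Smirnov statistic mentioned above: the largest vertical gap between the empirical CDF and the fitted model's CDF. A stdlib-only sketch, again with made-up trade data:

```python
import math
import statistics

# Hypothetical P/L sample standing in for the real trade list.
trades = sorted([120, -50, 80, -885, 60, 200, -30, 45, -110, 75])

mu = statistics.mean(trades)
s = statistics.stdev(trades) * math.sqrt(3) / math.pi  # moment fit

def logistic_cdf(x):
    return 1.0 / (1.0 + math.exp(-(x - mu) / s))

# KS statistic: the empirical CDF steps from i/n to (i+1)/n at each
# sorted data point, so check the gap to the model on both sides.
n = len(trades)
d_stat = max(
    max(abs((i + 1) / n - logistic_cdf(x)), abs(i / n - logistic_cdf(x)))
    for i, x in enumerate(trades)
)
```

@risk reports this kind of statistic (along with Chi-squared and Anderson–Darling) automatically; the point here is only what the number measures: a smaller D means the red and blue lines hug each other more closely.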


kevinkdog

Posts: 3,004 since Jul 2012

Very interesting, thanks for doing that. A couple of questions:

1. From your experience, is there a cutoff for the number of trades before you find it better to try a fitted model? You mention using discrete with mine is OK with 600+ trades, but if I had only 50 trades it would not be.

2. Note that due to my stop loss, my distribution of trades has a spike at (edit: -\$885) or so. How do you account for this when doing a fitted model? A fitted model would likely take away this spike, and that would be bad, since it is present (and so large) specifically because of the stop loss. Would making 2 distributions make any sense?

Your work to this point is definitely appreciated!

Kevin

swz168
Nuremberg, Germany

Experience: None
Platform: MultiCharts

Posts: 49 since Jun 2010

Quoting
 1. From your experience, is there a cutoff for the number of trades before you find it better to try a fitted model? You mention using discrete with mine is OK with 600+ trades, but if I had only 50 trades it would not be.

That's tough to answer; it was a visual feeling. Draw 50 trades from your data set and assume you don't have the whole data set, just those 50 trades. If you then plot only the discrete distribution of those 50 trades, you can see that too much information is missing to do a reliable simulation. That's why I feel the need to make assumptions about the missing part of the distribution.

Now, for example, if you trade NGEC for several years, have done, say, 2,000 trades, and plot them, you will see that the distribution chart of the 2,000 trades won't differ much from that of the 614 trades. It is just smoother; the shape is pretty much the same.

Currently I cannot provide a number that says you must have at least x trades, below which you should simulate with a fitted distribution model. I still have to do many tests on my side. I also think much will depend on the strategy itself.

Quoting
 2. Note that due to my stop loss, my distribution of trades has a spike at -\$450 or so. How do you account for this when doing a fitted model? A fitted model would likely take away this spike, and that would be bad, since it is present (and so large) specifically because of the stop loss. Would making 2 distributions make any sense?

I see in your data set that 885 is your biggest loss, so I assume that trade consists of 2 contracts. You can see that loss in the chart below, at the far left of the blue distribution.

As you can see, when fitting the data to a logistic distribution (red line), the spike is not removed. Indeed, it can occur again, and even a greater loss than that. Somehow this also reflects reality, because there are events like black swans and other reasons why you may exceed your loss despite your stop loss. But the probability of exceeding a loss of 1,000 USD is very small, as you can see in the chart.

If you feel that this approach is not right, you can use other distribution models where the maximum and minimum profit or loss of each trade is limited. Actually, there is no limit to distribution models; you can build your own, taking every detail of your strategy into account. (For example: there are only two outcomes, either a 10\$ loss or a 15\$ win.)


kevinkdog

Posts: 3,004 since Jul 2012

Thanks for correcting my error: I said \$425 max loss, but it should be \$885. It is good to know that you can tailor the distributions beyond a simple curve. I would bet that increases the accuracy a great deal, since it provides a better fit. Good stuff!

swz168
Nuremberg, Germany

Experience: None
Platform: MultiCharts

Posts: 49 since Jun 2010

swz168

As you already know, the results above came from a discrete simulation.

To complete the picture, here are the results using a fitted logistic distribution for 100 trades (with 10,000 iterations):

Minimum: -2,359
Maximum: 32,038
25% Perc.: 12,420
50% Perc.: 15,265
75% Perc.: 18,173

5% Perc.: 8,279
1% Perc.: 5,126

VaR-DD (95%): 45.9% (standard test)
VaR-DD (99%): 66.5% (stress test)

How to interpret these values, and how to compare them with results from different methods, is up to you, because you know the specifics of your strategy.

With a fitted distribution described by mathematical functions, the simulation is quite fast: the calculation above took less than 30 seconds (first-generation i7 CPU). With the discrete distribution it took about 3-5 minutes (maybe that is my fault, due to an inefficient implementation).
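For reference, here is how a percentile table like the one above is read off a finished simulation. The equity and drawdown samples below are randomly generated placeholders, not the thread's results; only the percentile mechanics are the point.

```python
import random

# Placeholder Monte Carlo output: 10,000 simulated end equities and
# max drawdowns (in practice these come from the simulated paths).
rng = random.Random(7)
end_equities = sorted(rng.gauss(15_000, 4_000) for _ in range(10_000))
max_drawdowns = sorted(rng.uniform(0.05, 0.70) for _ in range(10_000))

def percentile(sorted_data, p):
    """Nearest-rank percentile (p in 0..100) of pre-sorted data."""
    k = max(0, min(len(sorted_data) - 1,
                   round(p / 100 * len(sorted_data)) - 1))
    return sorted_data[k]

summary = {
    "Minimum": end_equities[0],
    "Maximum": end_equities[-1],
    "25% Perc.": percentile(end_equities, 25),
    "50% Perc.": percentile(end_equities, 50),
    "75% Perc.": percentile(end_equities, 75),
    "5% Perc.": percentile(end_equities, 5),
    "1% Perc.": percentile(end_equities, 1),
    # VaR-DD: the drawdown exceeded in only 5% (or 1%) of the paths
    "VaR-DD (95%)": percentile(max_drawdowns, 95),
    "VaR-DD (99%)": percentile(max_drawdowns, 99),
}
```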


kevinkdog

Posts: 3,004 since Jul 2012

swz168
 As you already know, the results above came from a discrete simulation. To complete the picture, here are the results using a fitted logistic distribution for 100 trades (with 10,000 iterations): Minimum: -2,359 Maximum: 32,038 25% Perc.: 12,420 50% Perc.: 15,265 75% Perc.: 18,173 5% Perc.: 8,279 1% Perc.: 5,126 VaR-DD (95%): 45.9% (standard test) VaR-DD (99%): 66.5% (stress test) How to interpret these values, and how to compare them with results from different methods, is up to you, because you know the specifics of your strategy. With a fitted distribution described by mathematical functions, the simulation is quite fast: the calculation above took less than 30 seconds (first-generation i7 CPU). With the discrete distribution it took about 3-5 minutes (maybe that is my fault, due to an inefficient implementation).

Just so I am clear: does your 614-trade case (column 2) have \$10K starting equity, and does it consist of 10,000 simulations of 614 trades each? Also, do you have a "quit" point, where trading ceases if the equity falls below \$X?

Thanks!

swz168
Nuremberg, Germany

Experience: None
Platform: MultiCharts

Posts: 49 since Jun 2010

kevinkdog
 Just so I am clear: does your 614-trade case (column 2) have \$10K starting equity, and does it consist of 10,000 simulations of 614 trades each? Also, do you have a "quit" point, where trading ceases if the equity falls below \$X? Thanks!

Correct. I haven't built in any constraints. For a Monte Carlo simulation, I wouldn't want to build in a quit point, because then you are saying you will stop trading if you lose, for example, 5,000 USD. That means you limit your risk, but I want to see the real theoretical risk of the strategy. With limits, you don't see it; the results are biased toward better outcomes or less risk because of the interference. If the simulation shows that the risk exceeds your "quit point risk" in many cases, then you know that you have either set your limit too low or the strategy doesn't fit your risk appetite. With limits you won't see this.

Edit: Of course, in real trading you should limit your risk. But it should be in harmony with your strategy's characteristics.


swz168
Nuremberg, Germany

Experience: None
Platform: MultiCharts

Posts: 49 since Jun 2010

@kevinkdog You are using WFO. In real trading, at which interval (monthly, half-yearly, yearly, etc.) do you change and re-optimize your input variables? How do you determine the intervals?

kevinkdog

Posts: 3,004 since Jul 2012

swz168
 @kevinkdog You are using WFO. In real trading, at which interval (monthly, half-yearly, yearly, etc.) do you change and re-optimize your input variables? How do you determine the intervals?

One thing I have noticed with WFO is that a lot of times ANY combination of IN/OUT periods will work. Most times, though, I like to keep them to simple intervals (e.g., 4 years in, 1 year out). I'll use a variety of tools to determine my IN/OUT (it is generally different for different strategies).

I generally like 3 months and up for re-optimizations, because re-optimizing every month is a chore. Some people I know re-optimize every day (I don't know how successful they are).

For those of you confused by my terminology:

IN = the amount of historical data you optimize over
OUT = the amount of time you keep the parameters, before re-optimizing

For example:

IN/OUT = 252/63 = you optimize over the last 252 days of data, and you re-optimize every 63 days.

One caution: don't optimize the IN/OUT periods themselves without checking them on an out-of-sample period. IN/OUT parameters can be over-optimized like anything else, with the same bad consequences.


record100
Toronto, CA

Experience: Intermediate
Platform: NT
Broker: IB

Posts: 105 since Jun 2009

I am still looking at how to incorporate the results of Monte Carlo simulation into my trading. To me, the simplest way of forecasting risk is to calculate the average trade and the standard deviation of trades. Using these for one of my strategies, I compared a Monte Carlo simulation with a normal distribution simulation (based on mean and stddev) and its boundaries (mean, mean + stddev, mean - stddev). They look very similar; the predicted risk is almost the same.

Attached Thumbnails


kevinkdog

Posts: 3,004 since Jul 2012

record100
 I am still looking at how to incorporate the results of Monte Carlo simulation into my trading. To me, the simplest way of forecasting risk is to calculate the average trade and the standard deviation of trades. Using these for one of my strategies, I compared a Monte Carlo simulation with a normal distribution simulation (based on mean and stddev) and its boundaries (mean, mean + stddev, mean - stddev). They look very similar; the predicted risk is almost the same.

Can you explain your chart a bit more? I don't quite understand:

1. why your average line is zero (edit: OK, it is not, it is just small)

2. what the Total Monte Carlo line represents

3. your SimTrade lines are your actual simulated results, correct?

4. for the std dev lines, I believe it is not a linear function (mean + X * stddev) where X is the number of trades. It should be the square root of X, unless you're calculating something else.

Thanks for sharing!
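Kevin's square-root point can be checked numerically. The per-trade statistics below are invented stand-ins (only the \$9 average trade is borrowed from the discussion); the claim being tested is that the 1-sigma band after n trades scales with sigma * sqrt(n), not sigma * n.

```python
import math
import random
import statistics

# Invented per-trade stats ($9 average trade is from the discussion;
# the $140 per-trade std dev is made up for illustration).
MEAN, STDEV, N_TRADES, N_RUNS = 9.0, 140.0, 20, 20_000

rng = random.Random(11)
end_pnls = [sum(rng.gauss(MEAN, STDEV) for _ in range(N_TRADES))
            for _ in range(N_RUNS)]

empirical_sd = statistics.stdev(end_pnls)
theoretical_sd = STDEV * math.sqrt(N_TRADES)     # sqrt-of-n scaling
linear_sd = STDEV * N_TRADES                     # the (wrong) linear scaling

# Lower 1-sigma boundary line after N_TRADES trades, and the fraction
# of runs finishing below it (about 16% for a one-sided 1-sigma band).
lower_band = MEAN * N_TRADES - theoretical_sd
p_below = sum(p < lower_band for p in end_pnls) / N_RUNS
```

Here `empirical_sd` lands near `theoretical_sd` and nowhere near `linear_sd`, and roughly one run in six finishes below the lower band.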

record100
Toronto, CA

Experience: Intermediate
Platform: NT
Broker: IB

Posts: 105 since Jun 2009

kevinkdog
 Can you explain your chart a bit more? I don't quite understand: 1. why your average line is zero (edit: OK, it is not, it is just small) 2. what the Total Monte Carlo line represents 3. your SimTrade lines are your actual simulated results, correct? 4. for the std dev lines, I believe it is not a linear function (mean + X * stddev) where X is the number of trades. It should be the square root of X, unless you're calculating something else. Thanks for sharing!

1. Yes, it was small, about \$9
2. Randomly selected from the list of 97 actual trades
3. For the sim trades I used a function that gives a normally distributed value for a given mean and stddev
4. I need to check a statistics book


record100
Toronto, CA

Experience: Intermediate
Platform: NT
Broker: IB

Posts: 105 since Jun 2009

Updated chart: mean + x^0.5 * stddev

Attached Thumbnails

kevinkdog

Posts: 3,004 since Jul 2012

record100
 1. Yes, it was small, about \$9 2. Randomly selected from the list of 97 actual trades 3. For the sim trades I used a function that gives a normally distributed value for a given mean and stddev 4. I need to check a statistics book

I still don't understand 2 & 3, but you do, and that is all that matters. I think tracking a strategy the way you are doing it is a very good thing to do.

record100
Toronto, CA

Experience: Intermediate
Platform: NT
Broker: IB

Posts: 105 since Jun 2009

kevinkdog
 I still don't understand 2 & 3, but you do, so that is all that matters. I think tracking a strategy the way you are doing is a very good thing to do.

I am new to Monte Carlo simulations, so please correct me if I am wrong. For the Monte Carlo simulation, the system randomly selected 20 trades from the list of 97 trades, 5,000 times. On the chart, one of those selections is displayed and called the Monte Carlo simulation. I created this chart to visualize how one particular equity curve sits between the boundaries provided by the normal distribution formula.

kevinkdog

Posts: 3,004 since Jul 2012

record100
 I am new to Monte Carlo simulations, so please correct me if I am wrong. For the Monte Carlo simulation, the system randomly selected 20 trades from the list of 97 trades, 5,000 times. On the chart, one of those selections is displayed and called the Monte Carlo simulation. I created this chart to visualize how one particular equity curve sits between the boundaries provided by the normal distribution formula.

Gotcha: you have shown 1 of the 5,000 possible equity curves from your simulation. I'm not sure how you picked that one. Does it give you any information, as opposed to picking a different one of the 5,000?

Everything else makes sense: your average line, your upper and lower bounds, and your results line. That is what I usually show on my tracking sheet (although I use the sim results instead of the std dev results).


kevinkdog

Posts: 3,004 since Jul 2012

record100
 Updated chart: mean +x^0.5*stddev

What I always find interesting when doing this exercise is the bottom std dev curve. In this case, it says that even after 20 trades, there is a significant chance (16%) that you will have a \$600 or greater loss.

Most people, I would imagine, would quit any system after 20 trades if it had lost \$600. But that is how the system COULD perform, and it is a long-term winning system (+\$9 per trade).

My point is that most people give up on a system before giving the long-term averages time to work out and make the system profitable. Most people need to see profit from the start ("I should be profitable right away, since my system is historically profitable!"). This exercise shows that is not always going to happen.

Something to think about for everyone when you start trading your next system and it starts out badly: is the system itself bad, or is randomness leading to early losses? Usually, I suspect, it is the latter.


kevinkdog

Posts: 3,004 since Jul 2012

Only a few trades for the NGEC system this week. Another boring week; I should call this the "Watching Paint Dry" system. If you catch yourself saying "this journal is awful, the system is so boring!" you are probably not alone. But ask yourself: why is boring bad? Does trading have to be exciting? My experience is that most good trading is really boring. In this week's results, I added last month's commission rebate back in, so that improves the results a bit...

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

Mike


kevinkdog

Posts: 3,004 since Jul 2012

Big Mike
 Mike

Equity Monaco is a good tool, and free!

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

kevinkdog
 Equity Monaco is a good tool, and free!

I think it's quite a bit better. But not free...

Mike

kevinkdog

Posts: 3,004 since Jul 2012

Big Mike

I think it's quite a bit better. But not free...

Mike

I have never used the product above, but Mike Bryant has some other good tools, for free (through his newsletters) and for pay. He has some very good ideas on trading system development.

rk142
Atlanta, GA

Experience: Intermediate

Posts: 260 since Dec 2011

Kevin,

In response to a question about your approach to optimization you wrote the following:

Quoting
 One thing I have noticed with WFO is that a lot of times ANY combination of IN/OUT periods will work. Most times, though, I like to keep them to simple intervals (e.g., 4 years in, 1 year out). I'll use a variety of tools to determine my IN/OUT (it is generally different for different strategies).

Could you say a bit more about the tools / decision making process you use to make decisions about the IN/OUT periods you use?

I'm afraid of optimization.

thanks,
RK

kevinkdog

Posts: 3,004 since Jul 2012

rk142
 Kevin, Excellent thread. Thank you. In response to a question about your approach to optimization you wrote the following: Could you say a bit more about the tools / decision-making process you use to choose your IN/OUT periods? I'm afraid of optimization. thanks, RK

Yes, I will answer this question. It might take a little while, because it is an involved topic and I'll need to put together some coherent thoughts...


Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

I always try to include a bull and bear market in each set, with at least 20% for the OOS set.



Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

kevinkdog
 Yes, I will answer this question. I might just take a little while, because it is an involved topic, and I'll need to create some coherent thoughts...

You may have answered this, but what tools do you use other than TradeStation and Excel?


rk142
Atlanta, GA

Experience: Intermediate

Posts: 260 since Dec 2011

Quoting
 I always try to include a bull and bear market in each set, with at least 20% for the OOS set.

Thanks Mike. Those are definitely helpful guidelines.

So right now I have a strategy which is simple, based on solid market principles, and is profitable over a large range of parameters (just one parameter in the strategy). I've exposed it to half of my data set during development, and hardly altered the strategy at all from its original conception.

My inclination is to keep it simple and just pick a parameter in the fat part of the "good region" to work with. But I'd also like to up my game and see if I can incorporate a KevinDog style WFO.

The issue is that I'm not sure how to begin making a choice of IN/OUT periods that is intelligent but doesn't involve the extra layer of potential curve fitting that Kevin mentioned. I'm thinking of picking my periods based on trade frequency (trying to get n trades in each IN period), or I could just throw a dart.

-RK

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

kevinkdog
 I have never used the product above, but Mike Bryant has some other good tools, for free (through his newsletters) and for pay. He has some very good ideas on trading system development.

I looked at his Adaptrade stuff. What turned me off is that it's for TradeStation (EasyLanguage) only, and given my falling out with MultiCharts I don't want to go back down that path.

I also could not tell what type of machine learning he is really using. Maybe @NJAMC could view his YouTube videos and make a guess. My best guess is that it is not the type of ML that @NJAMC is working on, or that Wave59 has, but instead just a genetic optimization type formula.

I was also a bit turned off by the "brute force" method of the app, basically trying all kinds of crazy indicators to find a good fit. This is clearly a concern for overfitting...

Anyway, his MSA software is another matter, and I think it is quite good for analysis.

Mike

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

rk142
 Thanks Mike. Those are definitely helpful guidelines. So right now I have a strategy which is simple, based on solid market principles, and is profitable over a large range of parameters (just one parameter in the strategy). I've exposed it to half of my data set during development, and hardly altered the strategy at all from its original conception. My inclination is to keep it simple and just pick a parameter in the fat part of the "good region" to work with. But I'd also like to up my game and see if I can incorporate a KevinDog style WFO. The issue is that I'm not sure how to begin making a choice of IN/OUT periods that is intelligent but doesn't involve that extra layer of potential curve fitting that Kevin mentioned. I'm thinking of picking my periods based on trade frequency (trying to get n-trades in each IN period) - or I could just through a dart. -RK

No doubt Kevin will have a much more detailed answer for you, he is much more organized, detailed/methodical with this than I am, so I would wait for his reply if I were you...

I would just use a Monte Carlo for starters to examine the 90th percentile figures for drawdown and profit, and other metrics important to you. Don't just pick a set of parameters based on a single backtest without the benefit of Monte Carlo.

Doing a walk forward is good, but I would save it for literally the last step, only because once you do it there is no "undoing" it; in other words, what was once out of sample is now in sample. So make sure everything else is how you want it, and avoid making changes to "tweak" the strategy after you hit the walk forward phase.

Then you want to do a live sim test (I think Kevin calls this "incubation") for a month or so, and compare those results (all metrics, not just profit) to what your MC told you the 90th percentile would be, and make certain they are all in line with expectations. Then go cash!
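As a rough sketch of that Monte Carlo step (resampling a trade list with replacement and reading off percentile drawdown and profit), assuming nothing about any particular platform; the trade P&L figures here are invented for illustration:

```python
# Bootstrap Monte Carlo on a per-trade P&L list: resample the trades many
# times, compute max drawdown and final profit for each resampled sequence,
# then read off percentiles. Trade values below are hypothetical.
import random

def monte_carlo(trades, n_iter=10_000, seed=42):
    rng = random.Random(seed)
    max_dds, profits = [], []
    for _ in range(n_iter):
        sample = [rng.choice(trades) for _ in trades]  # resample with replacement
        equity = peak = dd = 0.0
        for pnl in sample:
            equity += pnl
            peak = max(peak, equity)
            dd = max(dd, peak - equity)
        max_dds.append(dd)
        profits.append(equity)
    max_dds.sort()
    profits.sort()
    k = int(0.90 * n_iter)
    # 90th percentile drawdown (a bad case), 10th percentile profit (a bad case)
    return max_dds[k], profits[n_iter - k]

trades = [120, -80, 95, -60, 150, -110, 70, 40, -30, 85]  # hypothetical P&L
dd90, p10 = monte_carlo(trades)
```

Comparing incubation results against the bad-case percentiles, rather than the average, is what makes this a useful "should I keep trading this?" yardstick.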

Mike

 The following 2 users say Thank You to Big Mike for this post:

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

Kevin,

I have a random stupid system that I wrote today and was wondering, if I gave you the raw CSV trade file, would you work your magic on it? I'm talking about giving it the @kevinkdog run through and see what you think of it.

Personally, I think it's not any good, but hey what do I know! It's trading 100 shares of AAPL at a time, commission is factored in, and uses market orders, exit at cash close. No preset stop or target, it's always in the market.

Oh, I should mention: ignore the 1/1/2000, that is just what I typed. NT isn't smart enough to actually list the first trade date there, lol. It's 5/1/2007 to current.

Here are some screens to whet your appetite... lol

NT7 based MC runs

Here is with 10% winning outliers removed, and you can see my concern now...

I went ahead and attached the trade report. Up to you if you want to spend your time on this or not, but I thought it might be fruitful to show how fast a system that initially looks good quickly falls apart under the microscope.

Mike

Attached Files
 The following user says Thank You to Big Mike for this post:

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

Kevin, I hope you aren't mad at my numerous posts this evening. I haven't slept since Friday; apparently I have a new burst of energy tonight all of a sudden.

I thought I would attach one more report, to make things interesting.

The prior post was optimized on my custom "Mike" fitness, which is a hybrid of a perfect equity curve, a measure of avg profit-to-drawdown ratio, looks for a well balanced long/short system (not lopsided), plus other things I can't remember.

Now in this post, here is the exact same system but optimized for Net Profit, which is what I assume 99.9% of NT users select by default.

Notice the net profit itself isn't drastically different. But look at the total number of trades and the equity curve. Anyway, attached is the CSV for this bad boy as well if you want it. I would prioritize the last post first though.

Mike

Attached Files
 The following user says Thank You to Big Mike for this post:

kevinkdog

Posts: 3,004 since Jul 2012

Big Mike
 You may have answered this, but what tools do you use other than TradeStation and Excel? Sent from my LG Optimus G Pro

To help me with walkforward, I use StratOpt WFP. The new version of Tradestation has basically the same capability, but I've never taken the time to learn it (I like what I am using already).

I should point out that walkforward can be done by hand. It is laborious, but after doing it a few times, you get a good feel for the process.
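A minimal sketch of the by-hand bookkeeping, under the usual rolling-window convention (window sizes below are arbitrary; the optimize-on-IS and evaluate-on-OOS calls would be your own backtests):

```python
# Sketch of walk-forward done "by hand": roll an in-sample/out-of-sample
# window pair across the data. At each step you would optimize parameters on
# the IS slice, then record the chosen parameters' result on the OOS slice.

def walk_forward_windows(n_bars, is_len, oos_len):
    """Yield (is_start, is_end, oos_end) index triples for each step."""
    start = 0
    while start + is_len + oos_len <= n_bars:
        yield start, start + is_len, start + is_len + oos_len
        start += oos_len  # slide by OOS length so OOS segments don't overlap

# Hypothetical: 1000 bars, 400-bar IS window, 100-bar OOS window.
windows = list(walk_forward_windows(1000, 400, 100))
print(len(windows))  # 6 steps
print(windows[0])    # (0, 400, 500)
```

Stitching the OOS segments together end to end gives the walk-forward equity curve, which is the part that actually tells you something.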

kevinkdog

Posts: 3,004 since Jul 2012

Hi @Big Mike - What exactly are you looking for me to do? I guess I'm a little unclear on what I could add to it. My concerns would be 1) that it was created without walkforward, and 2) I'm not sure if there is any out of sample data there (you had mentioned you usually leave the last 20%, is that what you did here?).

Assuming no walkforward and the last 20% out of sample, I'd probably run a Monte Carlo just on the out of sample portion, and see how that does...

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

kevinkdog
 Hi @Big Mike - What exactly are you looking for me to do? I guess I'm a little unclear on what I could add to it. My concerns would be 1) that it was created without walkforward, and 2) I'm not sure if there is any out of sample data there (you had mentioned you usually leave last 20%, is that what you did here?). Assuming no walkforward and last 20% out of sample, I'd probably run a Monte Carlo just on the out of sample portion, and see how that does...

No problem Kevin, I was just looking for an opportunity to show a tear down of a strategy that looks good at first glance (well, not terrible at least)

Sent from my LG Optimus G Pro

kevinkdog

Posts: 3,004 since Jul 2012

Big Mike
 No problem Kevin, I was just looking for an opportunity to show a tear down of a strategy that looks good at first glance (well, not terrible at least) Sent from my LG Optimus G Pro

For what you did, were the last 20% of results out of sample?

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

kevinkdog
 For what you did, were the last 20% of results out of sample?

No I wasn't thinking when I exported this one for the forum. It's current unfortunately.

Sent from my LG Optimus G Pro

kevinkdog

Posts: 3,004 since Jul 2012

Big Mike
 No I wasn't thinking when I exported this one for the forum. It's current unfortunately. Sent from my LG Optimus G Pro

No problem. Was it optimized at all, or is it something you just came up with, and on the first try that is what the equity curve looked like?

If it was optimized at all, the interesting thing to do then is to "incubate" it - let it run for the next month or 2, and see how it does compare to the optimized results.

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

kevinkdog
 No problem. Was it optimized at all, or is it something you just came up with, and on the first try that is what the equity curve looked like? If it was optimized at all, the interesting thing to do then is to "incubate" it - let it run for the next month or 2, and see how it does compare to the optimized results.

No, I always optimize, based on a hybrid of a perfect equity curve, a measure of avg profit-to-drawdown ratio, looks for a well balanced long/short system (not lopsided), plus other things I can't remember.

I know the system is not viable. I guess I was just wanting people to know they can't just look at a performance report and take it at face value, and was thinking you could tear it apart for everyone to see how bad it really is

Mike

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

Kevin,

Tons and tons of different fitness functions, and they can all be tweaked to exactly what is important to you, and you can create your own formulas if the metric you want doesn't already exist.

Mike

 The following user says Thank You to Big Mike for this post:

kevinkdog

Posts: 3,004 since Jul 2012

Big Mike
 No I always optimize based on a hybrid of a perfect equity curve, a measure of avg profit-to-drawdown ratio, looks for a well balanced long/short system (not lopsided), plus other things I can't remember. I know the system is not viable. I guess I was just wanting people to know they can't just look at a performance report and take it at face value, and was thinking you could tear it apart for everyone to see how bad it really is Mike

Without originally intending to, I think I did just tear your system apart - not by analyzing your results, but by asking you how you got those results. Based on your answers (no walkforward, no out of sample, optimized over the whole data set), I would have no need to look further at your results. They might be spectacular results, but optimized results almost always are. Doesn't mean jack going forward.

Where it would get tricky is if you lied and said it wasn't optimized, or that it had x% out of sample results at end. How would I be able to tell you were lying? One way I know is the "if it looks too good to be true, it probably is" test. Another way would be watching it in real time (incubation). Maybe someone else has a better approach they would like to share.

I think this is why it is so hard for people who buy strategies or systems. How do they know if the system results are "real," or if they are just some optimized BS? Most times, a vendor who sells optimized systems would probably lie to you about how he got the results anyway. I know, because sometimes for fun I converse with vendors of these super duper systems, and 9 times out of 10 it is clear to me that
a) the results are garbage
b) the vendor doesn't know what he is talking about
c) the vendor's personality is some kind of combination of unethical, immoral, irresponsible and feeble minded.

I propose you keep tracking the system, and let's see how it does a few months from now. That should be very revealing...

 The following 2 users say Thank You to kevinkdog for this post:

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

Kevin, out of curiosity have you watched @NJAMC's webinar "An Intro to Machine Learning"?

NJAMC has been working on his project for a while now, he and @Luger were also involved in the Artificial Bee Colony thread, the predecessor to Greg's ML thread:

I tried to help where I could using my Wave59 license, but in the end those guys were just too far ahead of me. That's what I get for being a high school drop out, it sucks to be a trader that is poor at math (if only I had known...)

I bring it up because I was thinking of maybe getting together a consortium of sorts, like maybe John Ehlers, Suri Duddella, Manesh Patel, and @NJAMC, @Luger, @Fat Tails and some other fine folks who I can't think of right now, and see if maybe some sort of collaborative project could be started to further the work that Greg (NJAMC) is doing on the ML front. I figured maybe a few 1-hour conference calls or skype sessions might be enough to get some really great brainstorming done to make some real headway in this area.

Mike

 The following user says Thank You to Big Mike for this post:

kevinkdog

Posts: 3,004 since Jul 2012

Big Mike
Kevin,

Tons and tons of different fitness functions, and they can all be tweaked to exactly what is important to you, and you can create your own formulas if the metric you want doesn't already exist.

Mike

I need sleep too (I drove home in middle of night last night after watching my Michigan Wolverines beat Notre Dame - biggest crowd in NCAA football history), but I'll try to keep up with you.

I know TradeStation is trying to work on better fitness functions. Most of the standard ones are weak. I usually use Net Profit, for simplicity, and typically the good net profit cases have low drawdowns.

One other big issue is that using drawdown as part of a fitness function with walkforward can give very misleading results, if the walkforward window moves with time. I will try to explain when I answer an earlier post on walkforward.

kevinkdog

Posts: 3,004 since Jul 2012

Big Mike
Kevin, out of curiosity have you watched @NJAMC's webinar "An Intro to Machine Learning"?

NJAMC has been working on his project for a while now, he and @Luger were also involved in the Artificial Bee Colony thread, the predecessor to Greg's ML thread:

I tried to help where I could using my Wave59 license, but in the end those guys were just too far ahead of me. That's what I get for being a high school drop out, it sucks to be a trader that is poor at math (if only I had known...)

I bring it up because I was thinking of maybe getting together a consortium of sorts, like maybe John Ehlers, Suri Duddella, Manesh Patel, and @NJAMC, @Luger, @Fat Tails and some other fine folks who I can't think of right now, and see if maybe some sort of collaborative project could be started to further the work that Greg (NJAMC) is doing on the ML front. I figured maybe a few 1-hour conference calls or skype sessions might be enough to get some really great brainstorming done to make some real headway in this area.

Mike

That's OK, this is good stuff. I have the webinar on my to do list.

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

kevinkdog
 Without originally intending to, I think I did just tear your system apart - not by analyzing your results, but by asking you how you got those results. Based on your answers (no walkforward, no out of sample, optimized over all data set), I would have no need to look further at your results. They might be spectacular results, but optimized results almost always are. Doesn;t mean jack going forward. Where it would get tricky is if you lied and said it wasn't optimized, or that it had x% out of sample results at end. How would I be able to tell you were lying? One way I know is the "if it looks too good to be true, it probably is" test. Another way would be watching it in real time (incubation). Maybe someone else has a better approach they would like to share. I think this is why it is so hard for people who buy strategies or systems. How do they know if the system results are "real," or if they are just some optimized BS? Most times, a vendor who sells optimized systems would probably lie to you about how he got the results anyway. I know, because sometimes for fun I converse with vendors of these super duper systems, and 9 times out of 10 it is clear to me that a) the results are garbage b) the vendor doesn't know what he is talking about c) the vendor's personality is some kind of combination of unethical, immoral, irresponsible and feeble minded. I propose you keep tracking the system, and let's see how it does a few months from now. That should be very revealing...

You make good points and I agree completely.

I see the same stuff vendors are selling, and the worst thing is -- people are buying it. The other real problem with NinjaTrader's summary page is there is no position sizing anywhere. So you can make a system look great by just adding a few zeros to the position size, and it shows more trades, more profit. People then think it is not curve fitted because it has so many more trades.

The other big one is when people use limit orders. That's an easy way to make NT reports inaccurate.

I generally prefer systems that trade 1,000 times a year or more on 1 contract. If I start dipping below 500 trades a year, I get worried about over fitting. Obviously you take a very different approach, and this is just one of many ways we make a market together

I prefer systems that have very few input parameters, like 1 or 2 ideally, where everything else is dynamic and based on volatility and such in the market. That way I can optimize for my best score, and that golden looking equity curve we are all after, by combining that system along with many more into a portfolio and looking at how it benefits or balances the portfolio as a whole.

As for keeping track of the system, I wish I had fixed the code I was using to automatically post charts and trade results into a futures.io (formerly BMT) thread via the futures.io (formerly BMT) API. I just never had time to fix it, and certainly don't want to start now (I am preparing for vacation mode...). So I can't do any kind of live test. The closest I could do would be to shelve it and come back in two months and just run a new report to see how that out of sample data looks, but it would still be historical sim, not live.

Mike

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

kevinkdog
 Tradestation I know is trying to work on better fitness functions.

Whatever came of their acquisition of Grail Computer? I thought they would be killing it by now with all the work Grail was doing. Have they integrated it into a production product yet?

Mike

kevinkdog

Posts: 3,004 since Jul 2012

Big Mike
 What ever came of their acquisition of Grail Computer? I thought they would be killing it by now with all the work Grail was doing. Have they integrated it into a production product yet? Mike

They incorporated at least some of Grail's capabilities into version 9.0. I think they are continuously adding more features.

NJAMC
Atkinson, NH USA

Experience: Intermediate

Posts: 1,965 since Dec 2010

Big Mike
 You make good points and I agree completely. I see the same stuff vendors are selling, and the worst thing is -- people are buying it. The other real problem with NinjaTrader's summary page is there is no position sizing anywhere. So you can make a system look great by just adding a few zeros to the position size, and it shows more trades, more profit. People then think it is not curve fitted because it has so many more trades. The other big one is when people use limit orders. That's an easy way to make NT reports inaccurate. I generally prefer systems that trade 1,000 times a year or more on 1 contract. If I start dipping below 500 trades a year, I get worried about over fitting. Obviously you take a very different approach, and this is just one of many ways we make a market together Mike

@Big Mike,

Unfortunately, it is easy to over-fit even a large data-set. It is a matter of having enough degrees of freedom in your "solution", or (something I am less familiar with, but have seen nonetheless) a simple MA crossing that just gets lucky (likely with a large drawdown, MAE, or ETDs).

From the "black magic" side, all you really need to do is give an algorithm enough time and degrees of freedom (input values, products/sums of terms, squares, square roots, etc.) and you can fit right up to the limit the degrees of freedom allow (ultimately, more good trades). Sometimes it is difficult to see all the degrees of freedom, but there are likely more than you can see.

This is why Forward or Out of Sample testing is critical. The more "over-fit" a function is, the faster it will fall apart as you test values that were not used to train the system. What you are looking for is a "fit" function: one that does okay, but generalizes the solution into the future. This is the goal of my Genetic Programming investigation.
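A toy illustration of this point (not NJAMC's actual code; the "market" here is just a coin-flip sequence): a model with one free choice per training bar fits the training set perfectly, yet does no better than chance out of sample.

```python
# Extreme degrees-of-freedom demo: a lookup table that memorizes each
# training bar's direction. Perfect in sample, worthless out of sample.
import random

rng = random.Random(7)
data = [rng.choice([+1, -1]) for _ in range(200)]  # random up/down "bars"
train, test = data[:100], data[100:]

# Over-fit model: memorize the direction of every training bar.
memorized = dict(enumerate(train))
in_sample_hits = sum(memorized[i] == bar for i, bar in enumerate(train))

# Out of sample it has nothing to say, so it can only guess a constant
# (say, always long).
oos_hits = sum(bar == +1 for bar in test)

print(in_sample_hits)        # 100 -- a perfect fit in sample
print(oos_hits / len(test))  # hovers around 0.5 -- chance, out of sample
```

Real over-fitting is rarely this blatant, but every extra free parameter moves a strategy a little further toward the lookup-table end of this spectrum.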

 Nil per os -NJAMC [Generic Programmer] LOM WIKI: NT-Local-Order-Manager-LOM-Guide Artificial Bee Colony Optimization
 The following 3 users say Thank You to NJAMC for this post:

NJAMC
Atkinson, NH USA

Experience: Intermediate

Posts: 1,965 since Dec 2010

Big Mike
 I looked at his Adap Trade stuff. What turned me off is it's for TradeStation (EasyLanguage) only, and given my falling out with MultiCharts I don't want to go back down that path. I also could not tell what type of machine learning he was really using. Maybe @NJAMC could view his youtube videos and make a guess. My best guess was it is not the type of ML that @NJAMC is working on, or that Wave59 has, but instead just a Genetic Optimization type formula. I was also a bit turned off by the "brute force" method of the app, basically trying all kinds of crazy indicators to find a good fit. This clearly is a concern for overfitting.... Anyway, his MSA software is another matter, and I think is quite good for analysis. Mike

@Big Mike,

I do believe the Adaptrade Software is using Genetic Programming based upon configuration and function. I certainly cannot attest to the ability it has to solve the problem at hand.

As discussed above, it is very easy to "over-fit" a solution using any Machine Learning approach (or manually; this is trickier, but possible). I can tell you that I have many systems that will over-fit a solution to the training set. This is quite easy. The difficult part is figuring out how to get a ML system to create a generic solution without access to the Out of Sample data.

This is a bit of a paradox for me right now. I want to create a solution that fits my training set (in sample data), but the second requirement is that it needs to also fit data I have not seen yet (out of sample data). I can test the OOS data, but then it is touched and should now be considered "In Sample" data. In systems like the Adaptrade system, it is unclear whether, when it shows the OOS data on its charts, the engine has reviewed this data. It should be okay to calculate the answer (profit or EC), but that data cannot be brought back into the engine to affect the genetics of the system without it then being considered In Sample data again.

My current approach is to simply store anything that looks profitable. I then use a second, manual analysis to accept the strategy as generic. What I do is run all the "looks profitable" candidates through NT (I use the optimizer, but only to load the different strategies for analysis) and check for nice EC curves and profitability OOS.

In the future, I may add more logic to truly analyse the "looks profitable" candidates before storing them. What may not be overly obvious is that the longer you run a ML algorithm on a training set, the more likely it will specialize on (over-fit) the training dataset. So the "looks profitable" strategies need to be peeled off as soon as they start to look good, before over-fitting starts.
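One way the store-then-screen step might look in code. This is a hypothetical sketch: the thresholds and candidate scores are invented for illustration, not taken from NJAMC's system.

```python
# Hypothetical screen over stored "looks profitable" candidates: keep only
# those whose out-of-sample profit is positive and hasn't collapsed relative
# to the in-sample result (a rough over-fit check).

def screen_candidates(candidates, min_oos_profit=0.0, max_degradation=0.5):
    """candidates: list of (name, in_sample_profit, oos_profit) triples."""
    kept = []
    for name, profit_is, profit_oos in candidates:
        if profit_oos > min_oos_profit and profit_oos >= max_degradation * profit_is:
            kept.append(name)
    return kept

candidates = [("A", 1000, 700), ("B", 1200, 100), ("C", 800, -50)]
print(screen_candidates(candidates))  # ['A'] -- B degraded badly, C lost money OOS
```

The degradation ratio matters as much as the raw OOS profit: a candidate that keeps only a sliver of its in-sample edge is the signature of specialization on the training set.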

 Nil per os -NJAMC [Generic Programmer] LOM WIKI: NT-Local-Order-Manager-LOM-Guide Artificial Bee Colony Optimization
 The following 2 users say Thank You to NJAMC for this post:

treydog999
seoul, Korea

Experience: Intermediate
Platform: Multicharts
Broker: CQG, DTN IQfeed

Posts: 896 since Jul 2012

NJAMC
 @Big Mike, I do believe the Adaptrade Software is using Genetic Programming based upon configuration and function. I certainly cannot attest to the ability it has to solve the problem at hand. As discussed above, it is very easy to "over-fit" a solution using any Machine Learning approach or (manually this is trickier but possible). I can tell you that I have many systems that will over-fit a solution to the training set. This is quite easy. The difficult part is to figure out how to get a ML system to create a Generic solution without access to the Out of Sample data. This is a bit of a paradox for me right now. I want to create a solution that fits my training set (in sample data), but the second requirement is that it needs to also fit data I have not seen yet (Out of sample data). I can test the OOS data, but then it is touched and now should be considered "In Sample" data. It is unclear in systems like the Adaptrade system when it shows the OOS data on its charts, has it reviewed this data. It should be okay to calculate the answer (Profit or EC), but that data cannot be brought back into the Engine to affect the genetics of the system without it now being considered In Sample data again. My current approach is to simply store anything that looks profitable. I then use a 2nd analysis which is manual for me to accept the strategy as generic. What I do is run all the "looks profitable" through NT (I use the optimizer, but only to load the different strategies for analysis) and check for nice EC curves and profitability OOS. In the future, I may add more logic to truly analyse the "looks profitable" before storing. What may not be overly obvious is the longer you run a ML algorithm on a training set, the more likely it will specialize (over-fit) the training dataset. So the "looks profitable" strategies need to be pealed off as soon as they start to look good before over-fitting starts.

I have played with the Adaptrade software for a while. It was interesting, but not my cup of tea. In regards to contamination of the OOS data set, since it plots that in the chart: there is a thread in the Google group for them. As far as I understand it, the OOS data remains untouched, as the results are not recycled. Basically, all results used for the GP/GA optimization and the next generation are based on the in-sample data; then that goes to seed the next generation. The OOS results are shown but are not used to generate the next generation, so as a user you can see what's going on. You can check the Google group to be sure, or ask questions that are answered by the programmer/founder of Adaptrade himself.

 The following 2 users say Thank You to treydog999 for this post:

liquidcci
Austin, TX

Experience: Master

Posts: 866 since Jun 2011

kevinkdog
 I have never used the product above, but Mike Bryant has some other good tools, for free (through his newsletters) and for pay. He has some very good ideas on trading system development.

@kevinkdog I find Adaptrade's MSA an excellent piece of software. If you have a solid system on 1 contract, you can plug your test results in and really see how position sizing using different methods affects your outcome. Very powerful tool. It also has a nice portfolio feature and some other features that are good for analyzing a system. There are a few things I wish it would do better, but overall it is some of the best money I have ever spent. It is really inexpensive for what it does.

 "The day I became a winning trader was the day it became boring. Daily losses no longer bother me and daily wins no longer excited me. Took years of pain and busting a few accounts before finally got my mind right. I survived the darkness within and now just chillax and let my black box do the work."
 The following user says Thank You to liquidcci for this post:

kevinkdog

Posts: 3,004 since Jul 2012

In an earlier post, I shared with you one way I track a trading strategy. Here is another tracking tool I use...

When I used to work in aerospace (or the "real world," as I sometimes refer to it), our small company ($250 million annual sales) would have a weekly meeting called "How We Doin." Incorrect grammar aside, it was an excellent way for the managers of the company to quickly see how sales were for the month and quarter, what quality and production problems were occurring, and just get a general sense of where the company currently stood.

Now, fast forward a few years. I am trading full time, working alone. But I still want to see at a glance "how I'm doin" with my strategies and trading. Obviously, my account statements and equity curve tell the overall story, but that is not enough detail for me. Which strategies are doing well? Which are underperforming? Of the strategies I am incubating, how do they look? Should I make some changes in what I am trading? This "How I'm Doin" report can help me answer all of these questions.

I developed a little spreadsheet (sorry, I am not sharing it, but it is easy enough to do yourself) to help me with this task. It tells me at a glance how my strategies are doing, and I can easily drill down and see detail if I need to.

First, there is a summary page. I include every strategy I am trading live. I also include, in another section, the strategies I am currently incubating. This summary sheet collects all the data I am interested in (of course, if you did this yourself, you'd likely pick different metrics than I did). It gets the data from the individual sheets, which I will describe a bit later.

To keep things simple, I base everything on one contract being traded, even though that is usually not what I am actually trading. Why? My goal with this spreadsheet is to see how my strategies are doing compared to how I thought (calculated) they'd be doing. If I included position sizing, it would muddy up the view for me.

Of all the numbers on this sheet, I am primarily interested in two columns:

1. Return Efficiency - How am I doing, compared to my expectations? That is how I define return efficiency, and it is simply my actual return divided by my expected return. If my strategy is performing exactly as I had calculated, it will be 100%. Obviously, I want this to be close to or above 100%. Typically, when I take all the strategies together, I find my efficiency is somewhere between 70-100%. So if my historical testing says I should make $10 a year, I am actually making somewhere between $7-10.

2. Drawdown Efficiency - This is how I am doing with regard to drawdown. Just like with return efficiency, I calculate this as my actual drawdown divided by my expected drawdown. I then subtract the result from 1, to make 100% the ideal value. It is a bit backwards to do it this way, but it means both efficiency numbers have 100% as their ideal value. Then, the closer the efficiencies get to zero, the worse off things are.

Once a month, I go through and update each of the individual system sheets with performance data, and that automatically updates the main sheet. In the next post I'll show an individual sheet.
 The following 8 users say Thank You to kevinkdog for this post:

 (login for full post details) #161 (permalink) kevinkdog      Posts: 3,004 since Jul 2012 Thanks: 1,598 given, 5,971 received Last post I showed my Monthly Summary sheet to reveal my current performance. It gets data from the individual strategy sheets. I have one page of my spreadsheet for each of the strategies I am currently trading or incubating. The screenshot at the bottom shows the individual strategy summary sheet. It is pretty simple, but pretty effective. I can see at a quick glance how a strategy is performing, compared to my expectations (which of course are based on historical performance). When you have 30-50 strategies to keep track of, a quick summary like this is really invaluable. In the sample I show below, you can see that the NGEC system (2 strategies) are currently underperforming my expectation by a bit (return efficiency is 92%), but the trend in general is pretty good. For many strategies I trade, that is all the information I need - a quick view at performance. If something catches my eye, I can always dig deeper. On a monthly basis, the only number I have to update is in the "actual" column. This repsresents the actual profit or loss for the strategy for that particular month. It can be taken from trading statements, after adjusting for the number of contracts, or it can be taken from the strategy performance report. I typically do the latter. The expected numbers can all be obtained from your historical testing. In the sample I show below, the annual profit trading one contract of the NGEC system is \$12,264. I obtained this value from the Monte Carlo simulations I run prior to going live. This \$12,264 equated to \$1,022 monthly return. The max drawdown is obtained from the strategy report. Note that this is an intraday value, where the drawdown the spreadsheet calculates is on a monthly basis. This obviously is not totally correct, as ideally you would want to compare drawdowns over the same length of time. 
But for my purposes it is adequate. With these individual sheets for each strategy, once I fill the data in, it takes me 15 minutes or so to quickly run through each sheet, and assess its performance. It is a great way to quickly see "How I'm Doin."
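The arithmetic behind a summary sheet like this is simple enough to sketch in Python. The \$12,264 annual figure is from the post; the monthly results and drawdown numbers below are purely illustrative, and the efficiency formulas follow the definitions Kevin gives later in the thread:

```python
# Sketch of the per-strategy summary arithmetic (assumed layout, not Kevin's
# actual spreadsheet). Expected numbers come from historical testing / Monte
# Carlo; the "actual" column is filled in monthly.

expected_annual_profit = 12264.0                 # $ per year, 1 contract (from the post)
expected_monthly_profit = expected_annual_profit / 12.0   # -> $1,022 per month

actual_monthly_pnl = [800.0, -1200.0, 1500.0, 2661.44]    # hypothetical monthly results

months = len(actual_monthly_pnl)
actual_total = sum(actual_monthly_pnl)
expected_total = expected_monthly_profit * months

# Return efficiency: actual return divided by expected return (100% = on plan)
return_efficiency = actual_total / expected_total

# Drawdown efficiency: 1 - actual/expected, so 100% is also the ideal value
expected_max_drawdown = 3000.0                   # hypothetical
actual_max_drawdown = 1500.0                     # hypothetical
drawdown_efficiency = 1 - actual_max_drawdown / expected_max_drawdown

print(f"Expected monthly profit: ${expected_monthly_profit:,.0f}")
print(f"Return efficiency after {months} months: {return_efficiency:.0%}")
print(f"Drawdown efficiency: {drawdown_efficiency:.0%}")
```

With these invented monthly numbers the return efficiency comes out near 92%, inside the 70-100% band Kevin says is typical.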
 The following 12 users say Thank You to kevinkdog for this post:

Jura

Posts: 774 since Apr 2010

kevinkdog
 (...) Now, fast forward a few years. I am trading full time, working alone. But I still want to see at a glance "how I'm doin" with my strategies and trading. Obviously, my account statements and equity curve tell the overall story, but that is not enough detail for me. Which strategies are doing well? Which are underperforming? Of the strategies I am incubating, how do they look? Should I make some changes in what I am trading? This "How I'm Doin" report can help me answer all of these questions.

This is a great idea, thanks for sharing.

kevinkdog
 (...) To keep things simple, I base everything on one contract being traded, even though that is usually not what I am actually trading. Why? My goal with this spreadsheet is to see how my strategies are doing compared to how I thought (calculated) they'd be doing. If I included position sizing, it would muddy up the view for me.

Sorry for being nosy, but are those expected returns also based on a position size of one contract? (Just to get an idea of what is possible in the light of the 'how much capital is enough to get started with automated trading'-thread).

 The following user says Thank You to Jura for this post:

kevinkdog

Posts: 3,004 since Jul 2012

Jura
 This is a great idea, thanks for sharing. Sorry for being nosy, but are those expected returns also based on a position size of one contract? (Just to get an idea of what is possible in the light of the 'how much capital is enough to get started with automated trading'-thread).

Thanks for the nice words.

All the values I show are based on one contract being traded. Where it gets a bit confusing is in the percentage returns. I use "notional capital" for those calculations. I use a notional capital number that is as if I were trading the strategy by itself. It is based on a drawdown probability I feel comfortable having to endure.

So, I would not put any faith in those percentage returns, since they really are based on my personal likes/dislikes.

The dollar figures should be accurate.
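One way to turn the "notional capital from a drawdown I can endure" idea into numbers - a sketch only, not Kevin's actual method - is to take a high percentile of Monte Carlo max drawdowns and divide by the largest percentage drawdown you are willing to sit through. The trade data, percentile, and tolerance below are all hypothetical:

```python
# Hedged sketch: size "notional capital" from a drawdown one is willing to endure.
import random

random.seed(7)

# Hypothetical per-trade P&L history for one contract
trades = [random.gauss(50, 400) for _ in range(614)]

def max_drawdown(pnl_sequence):
    """Largest peak-to-trough drop of the cumulative equity curve, in dollars."""
    equity = peak = worst = 0.0
    for pnl in pnl_sequence:
        equity += pnl
        peak = max(peak, equity)
        worst = max(worst, peak - equity)
    return worst

# Monte Carlo: resample the trades many times, collect the max drawdowns
dds = sorted(
    max_drawdown(random.choices(trades, k=len(trades)))  # with replacement
    for _ in range(2000)
)

dd_95 = dds[int(0.95 * len(dds))]    # drawdown exceeded only ~5% of the time
tolerable_dd_fraction = 0.25         # willing to endure a 25% account drawdown

notional_capital = dd_95 / tolerable_dd_fraction
print(f"95th-percentile drawdown: ${dd_95:,.0f} -> notional capital: ${notional_capital:,.0f}")
```

Percentage returns computed against a notional capital chosen this way inherit the developer's personal risk tolerance, which is exactly why Kevin cautions against putting faith in them.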

 The following 3 users say Thank You to kevinkdog for this post:

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

NJAMC
 @Big Mike, Unfortunately, it is easy to over-fit even a large data-set. It is a matter of having enough degrees of freedom in your "solution" - or, what I am less familiar with but have seen nonetheless, a simple MA crossing that gets lucky (likely with large drawdown, MAE, or ETDs). From the "black magic" side, all you really need to do is give an algorithm enough time and degrees of freedom (input values, products/sums of terms, squares, square roots, etc.) and you can match to the max of the degrees of freedom (ultimately more good trades). Sometimes it is difficult to see all the degrees of freedom, but there are likely more than you can see.

Completely agree. In the early days I wrote strategies that had so many options you had to scroll the page to see them all.

Now I try to stick with very simplistic strategies that adjust based on market behavior, internally. Think of it like being ATR or volatility based, that type of stuff. I've found it to be a good compromise.

Quoting
 This is why Forward or Out of Sample testing is critical. The more "over-fit" a function is, the faster it will fall apart as you test values that were not used to train the system. What you are looking for is a "fit" function: one that does okay, but generalizes the solution into the future. This is the goal of my Genetic Programming investigation.

Completely agree again. Still, I have a special place in my heart reserved for you and your machine learning project, I guess I hold out hope that there is a "better way"

Mike

 The following 4 users say Thank You to Big Mike for this post:

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

kevinkdog
 Of all the numbers on this sheet, I am primarily interested in two columns: 1. Return Efficiency - How am I doing, compared to my expectations? That is how I define return efficiency, and it is simply my actual return divided by my expected return. If my strategy is performing exactly as I had calculated, it will be 100%. Obviously, I want this to be close to or above 100%. Typically, when I take all the strategies together, I find my efficiency is somewhere between 70-100%. So, this says that if my historical testing says I should make \$10 a year, I am actually making somewhere between \$7-10. 2. Drawdown Efficiency - This is how I am doing with regards to drawdown. Just like with return efficiency, I calculate this as my actual drawdown divided by my expected drawdown. I then subtract the result from 1, to make 100% the ideal value. It is a bit backwards to do this, but I do it that way so that both efficiency numbers have 100% as their ideal value. Then, the closer the efficiencies get to zero, the worse off things are.

Kevin,

Do you pay attention to things like MAE or MFE, average time in trade, win percentage, number of trades per day, etc. when comparing results? Naturally, if these things are awry, then it is likely to affect the drawdown and net profit that you mentioned.

Mike

 The following 4 users say Thank You to Big Mike for this post:

kevinkdog

Posts: 3,004 since Jul 2012

Big Mike
 Kevin, Do you pay attention to things like MAE or MFE, average time in trade, win percentage, number of trades per day, etc when comparing results? Naturally, if these things are awry then it is likely to effect the drawdown and net profit that you mentioned. Mike

Not at this point in the process - the monitoring phase. By this point, I have already decided I am comfortable trading the system, whatever those metrics you mentioned (and others) turn out to be.

I have never looked at using other metrics as an "early warning" system, and I believe that is what you are getting at. Maybe for example, if the number of trades per day goes up or down significantly from history, that could be a warning that the market has changed.

When I get some time, I may just go back and look at some failed systems of mine - were there warning signs that could have saved me some cash?
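An early-warning check of the kind Kevin and Mike are discussing - flagging a live period whose trade frequency drifts far from the historical norm - might look like this minimal sketch. All numbers and the 2-sigma threshold are hypothetical:

```python
# Illustrative early-warning check: compare live trades-per-day against the
# historical distribution with a rough z-score. Not Kevin's actual method.
from statistics import mean, stdev

historical_trades_per_day = [2, 3, 1, 4, 2, 3, 2, 5, 3, 2, 4, 3]  # hypothetical history
live_trades_per_day = [7, 6, 8, 7, 9]                              # hypothetical live data

hist_mean = mean(historical_trades_per_day)
hist_sd = stdev(historical_trades_per_day)
live_mean = mean(live_trades_per_day)

z = (live_mean - hist_mean) / hist_sd   # rough z-score of the live average

warning = abs(z) > 2.0                  # arbitrary 2-sigma threshold
print(f"historical {hist_mean:.1f}/day, live {live_mean:.1f}/day, z = {z:.1f}, warning = {warning}")
```

The same template works for any of the metrics Mike listed (average trade, win percentage, time in trade): compute the historical baseline, then flag live values that land well outside it.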

 The following 4 users say Thank You to kevinkdog for this post:

NJAMC
Atkinson, NH USA

Experience: Intermediate

Posts: 1,965 since Dec 2010

Big Mike
 Completely agree again. Still, I have a special place in my heart reserved for you and your machine learning project, I guess I hold out hope that there is a "better way" Mike

You and me both! I have to say I am getting closer, just need more time to focus upon my task. Some of us 9-to-5ers only have nights and weekends to work on special projects. This would be so much easier if I was retired, but if I was retired, I probably wouldn't need this system.... Another paradox....

 Nil per os -NJAMC [Generic Programmer] LOM WIKI: NT-Local-Order-Manager-LOM-Guide Artificial Bee Colony Optimization
 The following 2 users say Thank You to NJAMC for this post:

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

kevinkdog
 Not at this point in the process - the monitoring phase. So, I already assume that I am comfortable to trade the system with whatever those metrics you mentioned and others come out to be. I have never looked at using other metrics as an "early warning" system, and I believe that is what you are getting at. Maybe for example, if the number of trades per day goes up or down significantly from history, that could be a warning that the market has changed. When I get some time, I may just go back and look at some failed systems of mine - were there warning signs that could have saved me some cash?

Exactly - I mean if a system is operating outside the norm, whether it is number of trades, average per trade profit, or whatever - I want to know it because, well, that means it's not normal! And while sometimes "not normal" can be a good thing (more profit), it usually isn't free -- meaning it carries increased risk, so again, I want to know it...

Mike

 The following user says Thank You to Big Mike for this post:

kevinkdog

Posts: 3,004 since Jul 2012

Big Mike
 Exactly - I mean if a system is operating outside the norm, whether it is number of trades, average per trade profit, or whatever - I want to know it because, well, that means it's not normal! And while sometimes "not normal" can be a good thing (more profit), it usually isn't free -- meaning it carries increased risk, so again, I want to know it... Mike

Here is a great example of what you said - performance that is too good being a bad thing.

I started incubating this strategy a while back, and it took off...

It was "too good to be true" - way above its historical norm. For that reason I decided to keep incubating it. Here are the next few months...

Now the strat is in line with historical norms, but the standard deviation of monthly performance is a KILLER (look at the down month size). I looked into it further, and saw that the system was not acting normal. So I decided to keep incubating. Here is what happened...

So, this is a good example of 1) performance that is too good being a bad thing, and 2) standard deviation of results being an early warning sign that things were not quite right.

Epilogue: I am still incubating it, but did not trade it with real money.

 The following 11 users say Thank You to kevinkdog for this post:

record100
Toronto, CA

Experience: Intermediate
Platform: NT
Broker: IB

Posts: 105 since Jun 2009

Market conditions change over time. I have a feeling that the requirement of having a system that consistently shows profit over a period of 3 years is not valid. What we are all looking for are hints about when to stop trading a certain system and switch to another one.
Incubation does not serve any purpose, just confirmation that market conditions are the same, but it is not a guarantee that conditions could not change at the moment we start trading real money.

 The following 3 users say Thank You to record100 for this post:

kevinkdog

Posts: 3,004 since Jul 2012

record100
 Market conditions change over time. I have a feeling that the requirement of having a system that consistently shows profit over a period of 3 years is not valid.

OK, I understand your point of view. Let's discuss your position, since I know you are not alone in thinking this way. What timeframe of profitability do you think is appropriate, and why?
• Profitability for longer than 3 years
• Profitability for somewhere between 0-3 years
• No history at all - just look forward
• Losing history for x years (with the theory being that most systems are mean reverting, and many historically losing systems will eventually become profitable)
• Some other criteria entirely

record100
 What we all looking for is for some hints when to stop trading certain system, and switch to another one.

In my mind, this requires historical results for your system. So maybe your system does well only in bull markets. Then your criterion should be to quit when the market is in a bear market. Or maybe your system thrives on volatility. In that case, turn off the system when volatility is low.

Is that what you are thinking about? The key is to do this up front in your development, not at the end. I've seen people create a system, and then in an attempt to make it "better," create new rules. For example, they don't trade on Mondays, since Mondays are net losing days. This creates a better backtest, but may be awful going forward.
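A regime filter of the kind described - defined up front during development, not bolted on afterwards to prettify a backtest - might be sketched like this. The bars, window, and threshold are all hypothetical:

```python
# Minimal sketch of an up-front regime filter: only allow the system to trade
# when recent volatility (here, a simple N-bar average high-low range) is
# above a threshold. All parameters are illustrative.

def average_range(bars, window):
    """Average high-low range over the last `window` (high, low, close) bars."""
    recent = bars[-window:]
    return sum(h - l for h, l, _c in recent) / len(recent)

def regime_allows_trading(bars, window=5, min_range=1.0):
    """True when the market is volatile enough for a volatility-loving system."""
    return average_range(bars, window) >= min_range

# (high, low, close) bars: a quiet market, then a volatile one
quiet = [(100.4, 100.0, 100.2)] * 5
volatile = [(103.0, 100.5, 102.0)] * 5

print(regime_allows_trading(quiet))     # low volatility: system stays off
print(regime_allows_trading(volatile))  # high volatility: system may trade
```

Because the filter is stated as a market condition rather than as "skip Mondays", it encodes a hypothesis about when the edge exists instead of an after-the-fact patch on losing trades.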

 The following user says Thank You to kevinkdog for this post:

kevinkdog

Posts: 3,004 since Jul 2012

record100
 Incubation does not serve any purpose

I disagree 100%. It will quickly reveal any big flaws in your testing methodology. Try it with a system that is optimized until today, and see how it does the next few months. That is one flaw incubation would reveal.

record100
 just confirmation that market conditions are the same.

I agree, and if you historically test it over a lot of market conditions, that should give you more confidence. If market conditions change to something never seen before during incubation and your system falls apart, well you've learned something about your system and you've likely saved yourself some trading capital. If you pass incubation, you'll still run the risk of a totally different market making havoc of your system. But I think this is true of ANY system, historically tested or not.

record100
 but it is not a guarantee that conditions could not change at the moment we start trading real money.

I agree 100%. I wish it was a guarantee. But I have saved a lot of money by not trading systems I thought were good, but that failed incubation. If I had immediately started trading instead of incubating, I would have lost a lot (more).

 The following user says Thank You to kevinkdog for this post:

record100
Toronto, CA

Experience: Intermediate
Platform: NT
Broker: IB

Posts: 105 since Jun 2009

In my opinion, incubation gives some comfort, a mental readiness to go live. That is it. Bad quality in a system should not be unexpectedly revealed at this stage. I agree this is an important step before investing real money.

 The following 2 users say Thank You to record100 for this post:

kevinkdog

Posts: 3,004 since Jul 2012

record100
 In my opinion, incubation gives some comfort, a mental readiness to go live. That is it. Bad quality in a system should not be unexpectedly revealed at this stage.

Ideally, yes. And as a developer gets more experience, the unexpected happens less and less. But it does happen. It could be something subtle you do in the development, that you don't even realize you did.

Maybe your system is for T-Bonds, and one of your trading rules was coded wrong, and you unintentionally favor long trades. That probably would have tested great over the past 20 years, but if you were incubating over the first half of 2013, maybe incubation, with poor results, alerted you to a flaw.

Of course, if you incubated in mid 2012, incubation would not help you uncover the mistake. So it is not foolproof, that is for sure.

 The following user says Thank You to kevinkdog for this post:

record100
Toronto, CA

Experience: Intermediate
Platform: NT
Broker: IB

Posts: 105 since Jun 2009

On a 5 min chart, I am considering trading a system that is profitable over a period of one month. Of course, the system is tested over a much longer period of time that includes bear and bull markets and the transitions in between. You should be confident that the quality and robustness of the system are high.

 The following 2 users say Thank You to record100 for this post:

Luger
Nashville, TN

Experience: Intermediate
Platform: NinjaTrader
Broker: IB
Trading: NQ ES

Posts: 468 since Feb 2011

Incubation may have tradeoffs, but for those reading the thread and learning how to develop a system, I think it is highly valuable. Being able to see a system perform, see a system perform for a while and then break down, and see a system fail immediately is part of the experience. Doing it rationally, without monetary fear, also allows for a better learning experience. At least there will be some real-time preparation before letting the emotions out on a live system.

Without knowing your personal system-building success ratio over time, it would be hard to quantify whether incubating is good or not. So in the end, I think it is up to the system builder to determine what value is added. Though I personally want to see some live/sim action before throwing money at a system.
 The following 5 users say Thank You to Luger for this post:

kevinkdog

Posts: 3,004 since Jul 2012

record100
 On a 5 min chart, I am considering trading a system that is profitable over a period of one month. Of course, the system is tested over a much longer period of time that includes bear and bull markets and the transitions in between. You should be confident that the quality and robustness of the system are high.

Thanks for sharing those specifics. I have some probing questions, not meant as criticism, but to help everyone (including myself) understand your approach...

Is one month of profitability your criteria?

How many trades occurred in that time period, and is that at all important to you?

When you say you tested over a longer period of time, what influence (if any) does this have on your decision to go live or not? It sounds like you place much more emphasis on the last month.

 The following user says Thank You to kevinkdog for this post:

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

Given all the flaws of the trading software and backtesting engines being used today by 99% of readers of this thread, incubation is quite simply a requirement.

It took years for me to learn how to overcome "flaws" in the trading engine. Anyone can write a system in 5 minutes that makes millions of dollars, if you don't realize you are hitting on a flaw in the engine. Backtesting alone will never tell you.

Mike

 The following 4 users say Thank You to Big Mike for this post:

kevinkdog

Posts: 3,004 since Jul 2012

Big Mike
 Given all the flaws of the trading software and backtesting engines being used today by 99% of readers of this thread, incubation is quite simply a requirement. It took years for me to learn how to overcome "flaws" in the trading engine. Anyone can write a system in 5 minutes that makes millions of dollars, if you don't realize you are hitting on a flaw in the engine. Backtesting alone will never tell you. Mike

And sometimes incubation won't catch these flaws either. In those situations, trading 1 contract live usually will, though.

 The following user says Thank You to kevinkdog for this post:

Jura

Posts: 774 since Apr 2010

kevinkdog
 Ideally, yes. And as a developer gets more experience, the unexpected happens less and less. But it does happen. It could be something subtle you do in the development, that you don't even realize you did. Maybe your system is for T-Bonds, and one of your trading rules was coded wrong, and you unintentionally favor long trades. That probably would have tested great over the past 20 years, but if you were incubating over the first half of 2013, maybe incubation, with poor results, alerted you to a flaw. Of course, if you incubated in mid 2012, incubation would not help you uncover the mistake. So it is not foolproof, that is for sure.

kevinkdog
 And sometimes incubation won't catch these flaws either. In those situations, trading 1 contract live usually will, though.

Interesting points. If the primary goal of incubation is to uncover errors, wouldn't a market replay option be just as usable for this goal (perhaps with some random latency built in to account for the non-instant order execution of real-time trading)?

record100
 On a 5 min chart, I am considering trading a system that is profitable over a period of one month. Of course, the system is tested over a much longer period of time that includes bear and bull markets and the transitions in between. You should be confident that the quality and robustness of the system are high.

I think this could work if you have a lot of trades (>1000) and different types of days (volatile, non-volatile, up trending, down trending) in that month, but otherwise, why not test on a longer time period and/or different instruments?

Oh, speaking of different instruments, Kevin: do you believe that a system that works well on one instrument should also work well on similar, related instruments? Or are you more inclined to believe that every instrument is somewhat unique?

kevinkdog

Posts: 3,004 since Jul 2012

Jura
 Interesting points since if the primary goal of incubation is to uncover errors, wouldn't a market replay option be just as usable for this goal (perhaps with some random latency build in to account for the non-instant order execution of real-time trading)?

I have never tried that, but it could be a cheaper option.

Jura
 Oh, speaking of different instruments, Kevin: do you believe that a system that works well on one instrument should also work well on similar, related instruments? Or are you more inclined to believe that every instrument is somewhat unique?

I used to be in the "one strategy, all markets" camp. I used that approach when I had success a few years back in trading contests. Now I tend to be in the "one strategy, one market" camp, because I like the diversification aspect. So, I have nothing against either approach. I do think the "one strategy, all markets" version will be tougher to develop, since once it fails in one market, you theoretically have to ditch the system.

 The following 3 users say Thank You to kevinkdog for this post:

rk142
Elite Member
Atlanta, GA

Experience: Intermediate
Platform: Ninjatrader
Trading: N/A

Posts: 260 since Dec 2011

The issue of incubation is an interesting Rorschach test. Let's say you have a gut aversion to it. Forget about the intellectual reasons you would give for that aversion and consider the emotional motivations for it instead.

I am instinctively in favor of it, because I am risk averse. I don't like trading, in itself, very much, so anything that puts a control between me and risk exposure always looks good to me.

(Confession: reading this thread, and making this post even, is a diversion - I am presently fighting the desire to close a system trade early with a small profit. Get behind me, Satan!)

-RK
 The following user says Thank You to rk142 for this post:

treydog999
seoul, Korea

Experience: Intermediate
Platform: Multicharts
Broker: CQG, DTN IQfeed

Posts: 896 since Jul 2012

rk142
 The issue of incubation is an interesting Rorschach test. Let's say you have a gut aversion to it. Forget about the intellectual reasons you would give for that aversion and consider the emotional motivations for that aversion instead. I am instinctively in favor of it, because I am risk averse. I don't like trading, in itself, very much, so anything that puts a control between me and risk exposure always looks good to me. (confession: reading this thread, and making this post even, is a diversion - I am presently fighting the desire to close a system trade early with a small profit. Get behind me Satan!) -RK

I think that can be avoided if you have numbers-based, pre-set benchmarks. It only becomes a Rorschach test if you are making the judgments purely on a whim rather than on a set of pre-defined criteria. I personally use my Monte Carlo results and historical standard deviation and expectancy to determine the bounds of system performance; if it exits these bounds, then it turns off - but not until then. I think that's a very important aspect that cannot be ignored.
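A pre-set shut-off rule along those lines might be sketched as follows. The trade data and the 99th-percentile choice are hypothetical, not treydog999's actual bounds:

```python
# Sketch of a rule fixed before going live: stop trading only when the live
# drawdown exceeds a bound taken from Monte Carlo resampling of the
# historical trades. All numbers are illustrative.
import random

random.seed(1)
historical_trades = [random.gauss(40, 300) for _ in range(500)]  # hypothetical history

def max_drawdown(pnl):
    """Largest peak-to-trough drop of the cumulative equity curve."""
    equity = peak = worst = 0.0
    for x in pnl:
        equity += x
        peak = max(peak, equity)
        worst = max(worst, peak - equity)
    return worst

# Bound = 99th percentile of simulated max drawdowns, fixed in advance
sims = sorted(
    max_drawdown(random.choices(historical_trades, k=len(historical_trades)))
    for _ in range(1000)
)
shutoff_drawdown = sims[int(0.99 * len(sims))]

def system_still_on(live_trades):
    """Keep trading until the live drawdown breaches the pre-set bound."""
    return max_drawdown(live_trades) <= shutoff_drawdown

print(f"Shut off if live drawdown exceeds ${shutoff_drawdown:,.0f}")
```

The key property is that the bound is computed once, before live trading begins, so the shut-off decision is mechanical rather than a judgment made mid-drawdown.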

 The following 2 users say Thank You to treydog999 for this post:


kevinkdog

Posts: 3,004 since Jul 2012

@Big Mike was kind enough to run analysis with MSA software. He ran both strategies of the NGEC system separately, and then together as a portfolio. Below are the portfolio results.

The biggest concern I see from this analysis is that the results are very dependent on outliers - big winning trades. Without those big winners, the system is not so good. This is consistent with my experience in the Combine, where I waited and waited for a big winner that never came until after the Combine.

A BIG thanks to Big Mike for doing this!!!

100k starting, s + c included, combine #1 and #2
Correlation
Equity Monaco, after having combined the systems into a portfolio
Eliminate top 10% outliers:
 The following 3 users say Thank You to kevinkdog for this post:

kevinkdog

Posts: 3,004 since Jul 2012

As the MSA analysis performed by @Big Mike showed, eliminating the outlier trades from the history makes a huge difference in the results. In the history, there are 614 trading days represented. There are 20 days of profits greater than \$1000. In 1 year of trading, I'd expect to see 5 of those "big" days. If they don't come, the system on average becomes only slightly profitable. Below are some graphs that show the impact of eliminating the top X winning days.

My conclusion is that I am in deep trouble without those big winning days. The question is: is there a reason why I should not expect these kinds of days in the future? Maybe my system rules and variables were basically curve fit to find these big trades. With 10-20 big trades, I suppose that is a distinct possibility. On the other hand, it is not as though these trades are due to a data anomaly or some backtest issue. Strategy #2 was specifically set up to let profits run, not to cap them. If I saw only a handful of large profit trades, I might suspect some sort of data or backtest issue.

One other interesting question: since I am relying on these "outliers" to generate most of the profit, how likely is it that I will even see many of them in a given year? I'll look at that in the next post.
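The "eliminate the top X winning days" exercise is easy to reproduce. Here is a sketch with invented daily P&L shaped like the post describes (about 600 ordinary days plus 20 big ones; the real NGEC numbers are not public):

```python
# Strip the N largest daily profits and see what total profit remains.
# The daily P&L list is hypothetical, shaped like the post's description.
import random

random.seed(3)
daily_pnl = [random.gauss(10, 150) for _ in range(594)] + [1500.0] * 20  # 20 "big" days

def profit_without_top(pnl, n):
    """Total profit after removing the n largest days."""
    return sum(sorted(pnl)[: len(pnl) - n]) if n else sum(pnl)

for n in (0, 5, 10, 20):
    print(f"drop top {n:2d} days -> total ${profit_without_top(daily_pnl, n):,.0f}")
```

With a distribution like this, removing the 20 big days strips \$30,000 of profit, which is the whole point: the system's edge lives almost entirely in the outliers.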
 The following 4 users say Thank You to kevinkdog for this post:

kevinkdog

Posts: 3,004 since Jul 2012

Since I know that the performance of my NGEC system is going to be driven by large winning trades (outliers), I thought it would be interesting to see how many of these I could expect in a trading year. The results are shown in the table and chart below. Here's what I found:

1. In a year's time, I am likely to see 4-6 large winning trades. That is only 1 large trade every other month!
2. There is less than a 10% (actually, 6.6%) chance of seeing 8 or more large winning trades in a year.
3. There is a 13.6% chance that I will have only 0, 1 or 2 big winners in a year's time.

This analysis is a bit sobering, and it makes one thing crystal clear: if I am to succeed with this system, I have to take every single trade, because the one I miss just may be the big winner that only comes around once in a while.

Based on this data, my expectation for the system is a lot of flat to slightly up or slightly down periods, punctuated by a large winner every once in a while. Why is this important to know? Having proper expectations is crucial to long-term success. With the NGEC system, I can't get discouraged or lose confidence in the system when I am not immediately making money. Knowing what to expect will help me a great deal as I watch very little happening day to day.
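These odds can be roughly cross-checked with a simple binomial model: 20 big days out of 614 historical days, over an assumed 150-day trading year. The post's 6.6% and 13.6% figures come from its own simulation, so the simplified percentages here will differ somewhat:

```python
# Binomial cross-check of "how many big days per year". The 150-day trading
# year is an assumption; 20 big days out of 614 is from the post.
from math import comb

p_big = 20 / 614          # chance any given trading day is a "big" day
days_per_year = 150       # assumed trading days per year for this system

def prob_exactly(k, n=days_per_year, p=p_big):
    """Binomial probability of exactly k big days in n trading days."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

expected = days_per_year * p_big
p_two_or_fewer = sum(prob_exactly(k) for k in range(3))
p_eight_or_more = 1 - sum(prob_exactly(k) for k in range(8))

print(f"Expected big days per year: {expected:.1f}")
print(f"P(0-2 big days): {p_two_or_fewer:.1%}")
print(f"P(8+ big days):  {p_eight_or_more:.1%}")
```

The expected count lands near 5 per year, matching the post's "4-6 large winning trades" headline even before any simulation.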
 The following 5 users say Thank You to kevinkdog for this post:

Jura

Posts: 774 since Apr 2010

Thanks Kevin, as a host, and the others, as visitors, for this thread and your posts; very insightful.

kevinkdog

Hopefully not too much off-topic, but how should I read this chart? It shows the confidence interval widening as the number of trades grows (as should be expected, since the larger the sample size, the wider the range of possible end values). But how come the confidence interval shrinks near the end of the chart? Or does the confidence interval plot something different than ending equity?

kevinkdog

Posts: 3,004 since Jul 2012

Jura
 Thanks Kevin, as a host, and the others, as visitors, for this thread and your posts; very insightful. Hopefully not too much off-topic, but how should I read this chart? It shows the confidence interval widening as the number of trades grows (as should be expected, since the larger the sample size, the wider the range of possible end values). But how come the confidence interval shrinks near the end of the chart? Or does the confidence interval plot something different than ending equity?

Thanks for the kind words. I hope it is giving people some ideas and tools. I know I am getting quite a bit out of it.

As far as the curves converging to the equity curve at the last point, perhaps someone familiar with MSA can explain.

My explanation is that it looks like he is doing a Monte Carlo type simulation without replacement. So the equity path can vary along the way (as shown by the upper and lower curve bounds), but in the end it will converge to the same final equity value, since each trade is picked exactly once.
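That explanation is easy to verify in a few lines: shuffling a fixed trade list (sampling without replacement) changes the equity path but never the destination, because every trade is used exactly once. The trade data here is invented:

```python
# Demonstrate that permuting a fixed trade list leaves ending equity unchanged.
import random

random.seed(42)
trades = [random.gauss(25, 200) for _ in range(100)]  # hypothetical trade P&Ls

endings = []
for _ in range(50):
    shuffled = trades[:]             # copy, then permute the order
    random.shuffle(shuffled)
    endings.append(round(sum(shuffled), 6))  # ending equity of this path

# Every permutation ends at the same equity value
print(len(set(endings)))  # 1
```

(The rounding only guards against the last few bits of floating-point summation order; mathematically the sums are identical.)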

 The following user says Thank You to kevinkdog for this post:

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

Correct, MSA Monte Carlo randomizes the order but the start and end point equity are the same.

That is why I also posted Equity Monaco's version. Kevin, that image is missing from your post because you used URL bbcode instead of IMG bbcode for the opening tag before the URL.

Mike

 The following 2 users say Thank You to Big Mike for this post:

record100
Toronto, CA

Experience: Intermediate
Platform: NT
Broker: IB

Posts: 105 since Jun 2009

Is it OK to trade this system in the coming months? I am trying to argue in favor of a yes answer, but beware of possible drawdowns and changes in market conditions. Using the approach discussed in this thread, it needs to be rejected; at the least, the incubation period should not give you a warm and fuzzy feeling, even if it shows good performance.

record100
Toronto, CA

Experience: Intermediate
Platform: NT
Broker: IB

Posts: 105 since Jun 2009

record100
 Is it OK to trade this system in the coming months? I am trying to argue in favor of a yes answer, but beware of possible drawdowns and changes in market conditions. Using the approach discussed in this thread, it needs to be rejected; at the least, the incubation period should not give you a warm and fuzzy feeling, even if it shows good performance.

Also, here is the frequency distribution of the profit. Note the spike at the location of the stop loss.

Attached Thumbnails
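Spotting such a spike does not require charting software; binning the per-trade profits shows it directly. The trade data and bucket width below are invented:

```python
# Bin trade profits to surface a cluster at the stop-loss level.
from collections import Counter

stop_loss = -200.0   # hypothetical stop-loss amount
profits = [150.0, -80.0, 320.0, -200.0, 40.0, -200.0, 510.0, -200.0, -60.0, 90.0]

def bucket(p, width=100):
    """Round a profit down to the nearest bucket of `width` dollars."""
    return (p // width) * width

histogram = Counter(bucket(p) for p in profits)
for level in sorted(histogram):
    print(f"{level:>7.0f}: {'#' * histogram[level]}")
```

A tall bar sitting exactly at the stop-loss level, as in record100's chart, simply reflects how often the stop was hit; whether that is acceptable depends on the rest of the distribution.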

kevinkdog

Posts: 3,004 since Jul 2012

record100
 Also, frequency distribution for the profit. See spike, at the location of the stoploss.

When did your incubation period start?

record100
Toronto, CA

Experience: Intermediate
Platform: NT
Broker: IB

Posts: 105 since Jun 2009

kevinkdog
 When did your incubation period start?

August 1

Big Mike
Data Scientist & DevOps

Platform: Custom solution

Posts: 50,094 since Jun 2009

Jura
 Thanks Kevin, as a host, and the others, as visitors, for this thread and your posts; very insightful. Hopefully not too much off-topic, but how should I read this chart? It shows the confidence interval widening as the number as trades grows (as should be expected since the larger the sample size, the wider the range of possible end values). But how come the confidence interval shrinks near the end of the chart? Or does the confidence interval plot something different than ending equity?

BTW, in a recent conversation with Mike Bryant, the author of MSA, he said:

Quoting
 If you’re not using position sizing, you’ll get the same ending equity because MSA uses selection without replacement for the standard Monte Carlo analysis. That means it’s effectively randomizing the series of trades without changing the trades themselves – same trades and number, just in a different order. However, once you add position sizing to your market system, you’ll get different ending equity values in general.

One strength of MSA is its very in-depth position sizing options. It is not something I have played around with much only because of the complexity of implementing them into live strategies. Probably something I should spend more time on.

Mike

 The following 2 users say Thank You to Big Mike for this post:
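Mike Bryant's point above - that resampling without replacement merely reorders the same trades, so ending equity only changes once position sizing enters - can be sketched as follows. The trade P&Ls, starting capital, and fixed-fractional rule are all hypothetical, not MSA's actual implementation:

```python
import random

# Hypothetical backtest trade P&Ls, in dollars
trades = [120, -80, 200, -150, 60, 90, -40, 30, -200, 170]

def ending_equity_fixed(order, start=10_000):
    # One contract per trade: reordering cannot change the sum
    return start + sum(order)

def ending_equity_sized(order, start=10_000, risk=0.10, risk_per_contract=200):
    # Fixed-fractional sizing: contract count depends on current equity,
    # so the *order* of the same trades now matters
    eq = start
    for pnl in order:
        contracts = max(1, int(eq * risk / risk_per_contract))
        eq += contracts * pnl
    return eq

random.seed(1)
shuffled = random.sample(trades, len(trades))  # resample without replacement

print(ending_equity_fixed(trades), ending_equity_fixed(shuffled))  # always equal
print(ending_equity_sized(trades), ending_equity_sized(shuffled))  # order-dependent in general
```

The first line of output is always a pair of identical numbers, which is exactly why a no-sizing Monte Carlo run in MSA produces the same ending equity every time; only the drawdown path between start and end varies.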

kevinkdog

Posts: 3,004 since Jul 2012

Big Mike
 BTW, in a recent conversation with Mike Bryant, the author of MSA, he said: One strength of MSA is its very in-depth position sizing options. It is not something I have played around with much only because of the complexity of implementing them into live strategies. Probably something I should spend more time on. Mike

The position sizing alternatives are definitely nice to examine and think about. Of course, these can be optimized too, just like strategies, so caution is important here, too.

 The following user says Thank You to kevinkdog for this post:

kevinkdog

Posts: 3,004 since Jul 2012

record100
 August 1

Does the strategy meet your goals and objectives for profit, and for drawdown?

Do you feel satisfied with the amount of history you have tested, and what that history looks like?

Are you happy with the incubation performance, and the time of incubation?

Those are all questions you need to answer before going live.

 The following user says Thank You to kevinkdog for this post:

kevinkdog

Posts: 3,004 since Jul 2012

Week 4 of trading the NGEC system with actual money is now complete. Every 4 weeks or so, I will review the current performance of the system and answer some standard questions. This information may be useful should the performance of the system become erratic - maybe there was something I could have seen earlier, or something I just plain missed.

Summary: Well, after 4 weeks of trading this system live, I am right where I started - breakeven.

Am I surprised at this result? Absolutely not. It is well within expectations.

Am I disappointed in the results so far? Yes. Anytime I start a new strategy, I want to make money at the beginning.

Are results in line with expectations? Yes. The current profit is below the average I expect, but it is above the lower 10% line. So, while the system is underperforming currently, I see no reason for alarm. Also, I have had 2 winning weeks and 2 losing weeks. Over time, I expect about 60% of my weeks to be profitable, so the performance is just as I expect.

Are fills and trades live comparable to the Tradestation strategy report? Yes; in fact, in most cases my fills are better than what I had anticipated. Slippage is usually less than I had expected.

Do I see any reason to stop trading this system? No.

Do I see any reason to change my position sizing plan, i.e. reduce or increase my risk? No.

Pretty boring so far!
 The following 8 users say Thank You to kevinkdog for this post:
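The "2 winning weeks out of 4" observation in the post above can be sanity-checked with a quick binomial calculation, assuming the stated 60% weekly win rate and independent weeks (both simplifying assumptions):

```python
from math import comb

p, n = 0.60, 4  # assumed 60% weekly win rate, 4 weeks observed

def binom_pmf(k, n, p):
    # Probability of exactly k winning weeks out of n
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of seeing 2 or fewer winning weeks out of 4
prob = sum(binom_pmf(k, n, p) for k in range(3))
print(f"P(<= 2 winning weeks of 4) = {prob:.4f}")  # about 0.52
```

Roughly half of all 4-week stretches would look this mediocre or worse even when the strategy is working as designed, which supports the "no reason for alarm" conclusion.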

Yoyo4830
Elite Member
St. Louis Missouri

Experience: Intermediate
Platform: TradeStation, MultiCharts, TWS
Trading: Oil

Posts: 13 since May 2013

Thank you Kevin. This has probably been one of the most helpful threads I have read yet. I now realize that I made numerous errors in developing the system that I am currently trading live. I am not sure what to do at this point because, although the system is profitable (+38% in 15 weeks), I corrupted my data by using my entire data set to backtest and optimize. This is a rookie mistake, but now I am trying to think through how to determine whether I should continue to trade this system.

On the one hand: the system is performing in line with the (now corrupted) historical backtesting results, has had drawdowns that are smaller than the largest historical drawdowns, has been performing within my risk tolerance levels, and has what I consider to be a very respectable profit.

On the other hand: since the system was optimized using 5 years of data and no walk-forward testing was done, my live trading essentially serves as the walk-forward - and that may reveal the system's instability once it's already too late.

Any advice or pointers?
 The following user says Thank You to Yoyo4830 for this post:
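For anyone in Yoyo4830's position, walk-forward structure is straightforward to retrofit onto an existing data set. A minimal sketch of rolling in-sample/out-of-sample windows follows; the window lengths are illustrative, not a recommendation:

```python
# A minimal walk-forward window generator over a bar-indexed data set.
def walk_forward_windows(n_bars, in_sample, out_sample):
    """Yield (train, test) index pairs, each a (start, stop) half-open
    range, stepping forward by the out-of-sample length each iteration."""
    start = 0
    while start + in_sample + out_sample <= n_bars:
        train = (start, start + in_sample)
        test = (start + in_sample, start + in_sample + out_sample)
        yield train, test
        start += out_sample

# Example: ~5 years of daily bars (1260), optimize on 2 years (504),
# then trade the next 6 months (126) out-of-sample
windows = list(walk_forward_windows(1260, 504, 126))
for train, test in windows:
    print(f"optimize on bars {train}, evaluate on bars {test}")
```

Stitching the out-of-sample segments together gives an equity curve that was never touched by the optimizer, which is the evidence missing from an all-in-sample backtest.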

kevinkdog

Posts: 3,004 since Jul 2012

Yoyo4830
 Thank you Kevin. This has probably been one of the most helpful threads I have read yet. I now realize that I made numerous errors in developing the system that I am currently trading live. I am not sure what to do at this point because, although the system is profitable (+38% in 15 weeks), I corrupted my data by using my entire data set to backtest and optimize. This is a rookie mistake, but now I am trying to think through how to determine whether I should continue to trade this system. On the one hand: the system is performing in line with the (now corrupted) historical backtesting results, has had drawdowns smaller than the largest historical drawdowns, has been performing within my risk tolerance levels, and has what I consider to be a very respectable profit. On the other hand: since the system was optimized using 5 years of data and no walk-forward testing was done, my live trading essentially serves as the walk-forward - and that may reveal the system's instability once it's already too late. Any advice or pointers?

At least so far, the market has been telling you that your system is good. Who am I to argue? The process I use, which you state you didn't follow, isn't the only way to skin a cat. Heck, it might not even be the right way, or the best way - I have strategies that pass my process, but fail in the real world.

I'd say keep a close eye on the performance, and if and when it starts to degrade, be ready to pull the plug and stop trading it. Hopefully that will never happen, but if it does, just be prepared to take action.

 The following 3 users say Thank You to kevinkdog for this post:


Last Updated on January 6, 2016

 Copyright © 2021 by futures io, s.a., Av Ricardo J. Alfaro, Century Tower, Panama, Ph: +507 833-9432 (Panama and Intl), +1 888-312-3001 (USA and Canada), info@futures.io All information is for educational use only and is not investment advice.There is a substantial risk of loss in trading commodity futures, stocks, options and foreign exchange products. Past performance is not indicative of future results.