I wonder --- what kind of info do you need to dump into matlab to produce these heat maps? Is the exported ninja csv from strat analyzer sufficient? If so, would you be willing to take a look at one of mine and give me back the results?
No, the heat map is generated from system analysis within matlab; in other words, the strategy must be coded in matlab (or you could have matlab call your ninja strategy, but that is another can of worms).
As I mentioned, the example I posted is from a matlab webinar about algorithmic trading; if you are interested, you can download the data and code to reproduce it from the mathworks site (Aly Kassam is the author).
I am currently in the process of converting my strategies and indicators to matlab to facilitate more advanced analysis such as this (as well as to make more use of CUDA).
I will still use ninjatrader as an interface to brokers but ninja strategies will just be a shell which calls out to matlab for all the dirty work.
Slightly OT, but... have any of you played with R instead of Matlab? The Revolution R distribution coupled with the Rmetrics packages seems pretty cool... for free. Is it worth the extra $ to go down the Matlab path vs. the R path? I'm looking at basic time series analysis and possibly factor models for a project later this year.
I use both; they each have their strengths and weaknesses. Obviously the primary strength of R is that it's free, and its repository system (CRAN), which offers automatic fetching of packages, is very cool. (Actually, matlab just implemented an easy interface to its online repository as well in R2009b, which was released a few days ago.)
The best feature of matlab, IMO, is its inline help system and very thorough documentation, something sorely lacking in most areas of R. For instance, the documentation for matlab's toolbox add-ons includes not only an API reference but also examples of how you can use the toolbox on real-world problems, and they often include GUI demonstrations you can play with. Very helpful.
Also, while R is very widely used in the statistics community, Matlab has a much broader audience spanning the various engineering disciplines, econometrics, and other forms of applied mathematics, so there is more published research code available for matlab. (Mathematica is another option, generally favored by theoretical mathematicians for its symbolic manipulation capabilities.)
Overall, while matlab is definitely not cheap, I think it is worth the money. They offer a 90-day demo, so you can try it out and get a good feel for it before you commit.
I read (most of) a book entitled Trading Systems That Work: Building and Evaluating Effective Trading Systems by Thomas Stridsman. Chapter 7 gives an example of creating one of those heat maps using Excel. May be worth checking out.
Indeed, to illustrate the point that you can use MAs to generate profits on any data, here are the results for a system of two SMAs applied to ENTIRELY RANDOM PRICE MOVEMENTS. I generated bars for a hypothetical instrument with a fictional price such that the bar-to-bar changes in value were random. Each bar has an OHLC value generated randomly; by random, I mean that the probability of each successive close being higher or lower than the previous close is equal (i.e. p = 0.5 for a higher close, p = 0.5 for a lower close).
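The thread's original work was done in matlab, but for anyone who wants to try this themselves, here is a minimal Python sketch of the kind of random OHLC generation described above. The tick size, wick sizes, and seed are my own made-up assumptions; only the fair-coin close direction (p = 0.5 each way) comes from the post.

```python
import random

def random_bars(n, start=1000.0, tick=0.25, seed=42):
    """Generate n synthetic OHLC bars whose close-to-close direction
    is a fair coin flip (p = 0.5 up, p = 0.5 down)."""
    rng = random.Random(seed)
    bars = []
    close = start
    for _ in range(n):
        o = close
        # direction of this bar's close vs. the previous close is random
        move = tick * rng.randint(1, 4) * (1 if rng.random() < 0.5 else -1)
        c = o + move
        # high/low bracket the open/close with a small random wick
        h = max(o, c) + tick * rng.randint(0, 2)
        l = min(o, c) - tick * rng.randint(0, 2)
        bars.append((o, h, l, c))
        close = c
    return bars

bars = random_bars(100)
```

Plotting the closes from a longer run of this generator reproduces the "trends in random data" effect discussed below.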
The first chart is the (random) price data. The inset is a higher-magnification view of a sample of the data in the full series of price values. It is interesting to note how "trends" develop even in this random data -- a pretty graphic illustration of a random walk.
The second chart is the equity curve for trading a simple moving average crossover (not optimized), assuming the individual bars (OHLC -- remember, these are random!) represent 1-min bars. Note the positive return over time. Note also that this was a WALK-FORWARD analysis covering the one-month period of data -- i.e. the MAs were not curve-fitted to produce the positive ROI.
The third pic is a summary of the trading stats for this system.
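To reproduce the flavor of this experiment without matlab, here is a hedged Python sketch of a plain SMA-crossover backtest on a random walk. The lengths (10/50), tick size, and function names are my own placeholders, not the poster's actual system; the point is only the mechanics of marking a crossover position to market bar by bar.

```python
import random

def sma(xs, n):
    """Simple moving average; None until enough data has accumulated."""
    return [None if i < n - 1 else sum(xs[i - n + 1:i + 1]) / n
            for i in range(len(xs))]

def crossover_equity(closes, fast=10, slow=50):
    """Long one unit when the fast SMA is above the slow, short when
    below; P&L is marked close to close. Returns the equity curve."""
    f, s = sma(closes, fast), sma(closes, slow)
    equity, eq, pos = [], 0.0, 0
    for i in range(1, len(closes)):
        eq += pos * (closes[i] - closes[i - 1])   # mark the open position
        if f[i] is not None and s[i] is not None:
            pos = 1 if f[i] > s[i] else -1        # flip on crossover
        equity.append(eq)
    return equity

# random-walk closes: p = 0.5 up, p = 0.5 down, fixed tick
rng = random.Random(1)
closes = [1000.0]
for _ in range(2000):
    closes.append(closes[-1] + (0.25 if rng.random() < 0.5 else -0.25))
curve = crossover_equity(closes)
```

Any single seed may show a gain or a loss; the post's point is that individual random runs can look deceptively profitable.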
My question is this: I'm now working on combining these 3 into one signal (short, flat, long). I've tried two different approaches:
1 - I used the optimal parameters that I determined for each indicator individually.
2 - I re-optimized all parameters together.
#1 seems more realistic, with the acknowledgment that live performance will not match the backtests, since each subsystem's performance will differ. This I know. So the final results will probably not be as good, but the approach seems robust.
#2 seems more optimal, with an even stronger acknowledgment that the results will not be as good as the backtest. However, there is a greater risk of curve fitting due to the increased rules and degrees of freedom. In defense of the optimization, I will say that lots of attempts produced unacceptable results, so I believe that if optimization finds something good, say PF > 3.0, then it is very likely to be positive in forward testing, even though the PF will most likely be lower.
I would lean heavily against #2, since you have geometrically multiplied the optimization/curve-fitting quotient: with 3 variables optimized jointly and interacting with each other, the search space multiplies rather than adds (3 x 3 = 9 instead of 3). Way too many.
If it is a true trend-following system - which most MA-type systems almost by definition are - then bull/bear shouldn't make all that much difference. In practice it will, though, because many markets - especially stocks/equity indexes - behave differently in each: stock markets tend to accumulate steadily for some time before a climax, if any, whereas sell-offs tend to lurch throughout, with far more rapid declines/climaxes.
That said, you mentioned something I found interesting: the volatility change. My general approach with optimization is to optimize only across different types of algorithm.
For example, the TS code I posted a while back soliciting translation to Ninja (no takers) featured:
a fast MA
a slow MA
a band around the slow MA
an ATR multiplier
an R-R ratio factor
along with several different signals and the ability to change profit targets in order to recover from a losing streak.
In optimizing (one year of one-minute charts) I did not touch the band or the MAs. The band was based on logic meant to make the MA value slightly responsive to volatility, rather than a single price above/below which entries/exits might be triggered; i.e., on a 200-tick chart the band around the MA would vary between about 2-4 ticks, so upside penetration would be the slow MA + 2 ticks rather than a simple upside price cross of the MA. That logic was clear, so there was no need to fiddle with it.
Same with the MAs. The fast was about 5 times faster than the slow, which was 21 bars, the only length I ever tried. Simple.
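To make the band idea concrete, here is a tiny Python sketch of "price must clear the slow MA plus a volatility band" as opposed to a plain MA cross. This is my own illustration: the ATR-to-ticks scaling is a made-up placeholder; only the clamp to roughly 2-4 ticks comes from the description above.

```python
def band_width(atr_value, tick=0.25, lo_ticks=2, hi_ticks=4):
    """Volatility-responsive band, clamped to the roughly 2-4 tick
    range described for a 200-tick chart. The ATR-to-ticks scaling
    below is a placeholder, not the poster's actual formula."""
    ticks = max(lo_ticks, min(hi_ticks, round(atr_value / (2 * tick))))
    return ticks * tick

def long_entry(price, slow_ma, atr_value, tick=0.25):
    """Upside penetration = price above the slow MA *plus* the band,
    versus a plain price-over-MA cross."""
    return price > slow_ma + band_width(atr_value, tick)
```

With an ATR of 1.5 points and a 0.25 tick, a close one tick above the MA does not fire, while a close a full point above does, which is exactly the filtering the band is meant to provide.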
So parameter-wise, the only things I optimized were the ATR ratio and the risk-reward ratio, because they are two different things.
The first not only adjusts the system to market volatility, but also gives it a generally loose or tight character, which is especially relevant for stop placement relative to the underlying market.
The profit-to-loss ratio is money management. Once the stop value is set on the basis of volatility, the profit target is optimized against it, giving a whole other level of information. And again, my approach here was not to optimize both at once - though I am sure I did for fun - but rather to first optimize the ATR multiplier with a basic R-R ratio (say 1:1.618) so as to get a clear read on the ATR alone, looking for the clusters mentioned earlier in this thread rather than stellar outliers. In fact, if the method doesn't produce a general bell-curve profile, or at least several waves of bell curves with more than one or two results in each wave, I regard the method as useless.
(I also did not optimize the underlying ATR period, just sticking with around 100 if I remember correctly, i.e. just to capture the basic typical volatility of whatever timeframe/chart was being studied, rather than turning the ATR into yet another variable parameter to fine-tune. The ratio was what was tested, i.e. placing stops at 1x ATR, 1.5x ATR, etc.)
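The stop/target arithmetic described here can be sketched in a few lines of Python. This is my own illustration of the two separate knobs being discussed; the 1.5 multiplier and the 1:1.618 ratio are just example values taken from the ranges mentioned above, not recommended settings.

```python
def stop_and_target(entry, atr_value, atr_mult=1.5, rr=1.618, is_long=True):
    """Stop placed atr_mult * ATR away from entry (the volatility knob);
    profit target placed rr times the stop distance on the other side
    (the money-management knob). Returns (stop, target)."""
    risk = atr_mult * atr_value
    if is_long:
        return entry - risk, entry + rr * risk
    return entry + risk, entry - rr * risk

# long entry at 1000.0 with a 2-point ATR: risk 3 points, reward 4.854
stop, target = stop_and_target(1000.0, 2.0)
```

Keeping these as two independent parameters is what lets you optimize the ATR multiplier first, with the R-R ratio held fixed, as described above.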
I believe that most robust systems using optimization will end up relying mainly on these two types of algorithm (volatility and money management) to fine-tune what must otherwise be basically good rules - rules based more on logic/common sense than on mining for magic variable quotients. There can be very sophisticated rules in a system based on no end of different logics (S/R, trend-following, oscillators, random entry, time of day, wave counts, etc.), but ideally those rules should explain themselves as such and not need to be optimized. Optimization helps in finding the best ways to adapt to changing market conditions, which is why I suspect ATR is such a critical parameter for most systems, unless that adaptation is built into the rules in some other fashion.
Finally, when working with more than one or two parameters, it is best that they have different referents: not both based on the close, not both MAs or oscillators, etc. Ideally they should serve two entirely different purposes (such as my ATR multiplier and R-R ratio example above); otherwise the tendency to over-curve-fit is greatly encouraged, to the point of being unavoidable.
I have always wanted to get into walk-forward testing but have never had a platform that can do it. I gather Ninja can, but I have not managed to work with strategies in Ninja, and I'm not sure I want to try: their outputs are clunky, lumping everything together in a central text field the way they do. In any case, I can't code anything intelligent into Ninja from original ideas, so for me that's a barrier I haven't overcome yet.
However, walk-forward or no, one has to put a lot of thought into the nature of the algorithms being tweaked. I believe the key is to use algorithms that have different logics, use different referents, and therefore have minimal inter-relationship with each other.