Can someone please point me to research of some sort that discusses the following:
Suppose you are optimizing a strategy. You have two optimization variables:
variable 1 = an indicator value (any indicator)
variable 2 = the data series / timeframe (10 min, 20 min, 1 hour, etc.)
You run the optimizer and the program spits out the top 10 results:
result 1)
variable 1 value = 0.0001
variable 2 value = 1 min
result 2)
variable 1 value = 0.001
variable 2 value = 1 hour
and so on...
Then you drill into the actual trades and notice that the entries from result 1 and result 2 were executed at essentially the same times (say, within 5 minutes or so of each other). So what happened here? Basically, the best entry really did occur at those times, and the optimizer found indicator values that captured it in two different runs (1 min and 1 hour).
In other words, the optimizer's goal was to find the best trades to take, and it effectively equated the two timeframes, finding indicator values for each that produce the same entries.
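To make the effect concrete, here is a toy sketch I put together (made-up pandas data, not from any particular platform; the thresholds just mirror the numbers above) showing how two different (threshold, timeframe) pairs can fire on the same underlying move:

```python
import pandas as pd

# Toy data: a single price jump at 10:00. Both the 1-min and the
# 1-hour series "see" the same move, so a return threshold tuned on
# either timeframe fires at the same wall-clock time.
idx = pd.date_range("2024-01-02 09:00", periods=121, freq="min")
price = pd.Series(100.0, index=idx)
price[idx >= "2024-01-02 10:00"] = 101.0            # the one underlying move

ret_1m = price.pct_change()                         # 1-min returns
ret_1h = price.resample("1h").last().pct_change()   # 1-hour returns

entry_1m = ret_1m[ret_1m > 0.0001].index.min()      # "variable 1 = 0.0001"
entry_1h = ret_1h[ret_1h > 0.001].index.min()       # "variable 1 = 0.001"
print(entry_1m, entry_1h)                           # both land on the 10:00 move
```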
I want to learn more about this, because I feel it adds to the noise: results 1 and 2 are basically the same output. How can I treat the two as equivalent so that I can avoid such duplicate results?
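For what it's worth, the kind of de-duplication I am imagining looks roughly like this (the OptResult structure, the 5-minute tolerance, and the 80% overlap threshold are all just illustrative assumptions on my part):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class OptResult:
    indicator_value: float        # variable 1
    timeframe: str                # variable 2, e.g. "1 min" or "1 hour"
    entry_times: List[datetime]   # entry timestamps from the backtest

def overlap_fraction(a: OptResult, b: OptResult,
                     tol: timedelta = timedelta(minutes=5)) -> float:
    """Fraction of a's entries that match some entry of b within tol."""
    if not a.entry_times:
        return 0.0
    matched = sum(
        1 for t in a.entry_times
        if any(abs(t - u) <= tol for u in b.entry_times)
    )
    return matched / len(a.entry_times)

def dedupe(results: List[OptResult], threshold: float = 0.8) -> List[OptResult]:
    """Walk the ranked results and keep one representative per 'trade
    fingerprint': drop any result whose entries mostly duplicate an
    already-kept result's entries."""
    kept: List[OptResult] = []
    for r in results:
        if all(overlap_fraction(r, k) < threshold for k in kept):
            kept.append(r)
    return kept

# The two results from my example above would collapse to one:
r1 = OptResult(0.0001, "1 min",  [datetime(2024, 1, 2, 10, 0)])
r2 = OptResult(0.001,  "1 hour", [datetime(2024, 1, 2, 10, 2)])
print(len(dedupe([r1, r2])))  # -> 1
```

Is there an established name for this kind of post-processing, or research on comparing optimization results by their trade fingerprints rather than their parameter values?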