STF discretionary spot Forex system development journal


Discussion in Trading Journals





 

  #31 (permalink)
 
bnichols's Avatar
 bnichols 
Dartmouth NS
 
Experience: Intermediate
Platform: MC, MC.Net, NT, TWS
Broker: IB / IQFeed / Kids
Trading: Forex, stocks
Posts: 637 since Feb 2010
Thanks Given: 64
Thanks Received: 460

Since the last post I've been trading less and working on the system more.

Briefly, the dictionary-with-bit-interleaved-key approach to storing the probability density works well enough and so far seems a highly efficient way to find a particular data point among (literally) millions, but I'm becoming a victim of my project management approach ("Let's just make a start and see what happens") and undisciplined programming style--spaghetti & unreachable code technique. The upshot is that data reduction--an automated process whereby raw parameter data output dumped by the NT backtest strategy is converted to density maps--is time consuming, perhaps up to 18 hours (running in debug from Visual Studio IDE; standalone release version runs about 4x faster) to produce the STF (short time frame fractal) data set. The application could exploit multithreading better than it does but this should have been worked out at the planning stage (what planning stage?) and would now mean unraveling the Gordian Knot of the core process.

The last few days have been mainly focused on administrative tasks--data structures and code entry points needed to interface with NinjaTrader. Among other things this requires finalizing the data object that represents the observed behaviour of parameters ("learned behaviour" is probably too strong a phrase at this stage). At the moment this "object" comprises 2 dictionaries--the Density dictionary (essentially a database of raw 2D pseudo-probability density functions) and the Parameters dictionary, the latter a database containing information about each 2D function in the Density dictionary.
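For concreteness, a bare-bones sketch of what such a two-dictionary object might look like is below. The class and field names are my shorthand for this journal, not necessarily what ends up in the code.

Code
    // Hypothetical sketch of the knowledge object: a Density dictionary holding raw 2D
    // pseudo-probability density values keyed by Morton-style keys, plus a Parameters
    // dictionary describing each 2D plane. Names and fields are placeholders only.
    using System.Collections.Generic;
    using System.Numerics;

    public class ParameterPlaneInfo
    {
        public int FirstParameterIndex;   // parameter forming the X axis of the plane
        public int SecondParameterIndex;  // parameter forming the Y axis of the plane
        public int ProfitTargetPips;      // ZigZag profit target the plane was filtered for
        public int SampleCount;           // raw samples contributing to the plane
    }

    public class KnowledgeBase
    {
        // Density dictionary: one entry per non-zero cell of a 2D density plane.
        public Dictionary<BigInteger, double> Density = new Dictionary<BigInteger, double>();

        // Parameters dictionary: metadata about each 2D plane, keyed by a plane id.
        public Dictionary<int, ParameterPlaneInfo> Parameters = new Dictionary<int, ParameterPlaneInfo>();
    }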

Before that I spent some time looking at the shape of the functions and studying the way test parameter vectors migrate between buy & sell clusters, still looking for confirmation that price behaviour is predictable in some sense between highs and lows as a sanity check, but (hoping for the best) also as a way to describe variance (retraces between entry and target) for trailing stop control. Today, while watching test vectors migrate between buy/sell clusters and watching EUR/USD oscillate around 1.3200 for a while before jumping to oscillate around 1.3250, it occurred to me the math of strange attractors might provide a way to quantify the behaviour around "magic numbers" like 00 and 50 (in particular their numerical approximation) in density space as well as in price-time space (i.e., on the chart), since the Morton number dictionary key probably lends itself to that, but I decided life is too short--that will have to wait until the strategy is making money.

Hope to update this post with details & some illustrations ASAP, but in the meantime here is an N-dimensional bit deinterleaving algorithm implemented in C#--it seems to work 100% of the time, but as usual no guarantees. It's been handy as a debugging tool, since so far in practice keys need only be computed, not decoded. The fiddling with BigIntegers is because a conventional unsigned long overflows during shifting at these bit lengths. As before, the array a will contain the nb parameter vector coordinate values after decoding; the global variable bKey is the Morton number to be deconstructed.

 
Code
        // Decode the N-dimensional Morton number in the global bKey back into its nb
        // 8-bit coordinates, stored in a[0..nb-1]. Bit i of coordinate k lives at bit
        // position (k + i * nb) of the key. Requires "using System.Numerics;".
        private void deinterleaveN(byte[] a, int nb)
        {
            BigInteger bT = bKey;
            BigInteger bit = new BigInteger(0);
            BigInteger u1 = new BigInteger(1);
            for (int k = 0; k < nb; k++)
            {
                a[k] = 0;
                for (int i = 0; i < 8; i++)
                {
                    // Isolate bit (k + i * nb) of the key and shift it down to position i.
                    bit = (bT & (u1 << (k + i * nb))) >> (k + i * (nb - 1));
                    a[k] |= (byte)bit;
                }
            }
        }
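For completeness, the obvious forward (encoding) counterpart is sketched below--my reconstruction of the inverse operation for reference, not necessarily the routine actually used to build the keys.

Code
        // Sketch of the forward operation: interleave nb 8-bit coordinates into a single
        // Morton number, using the same bit layout deinterleaveN expects (bit i of
        // coordinate k lands at position k + i * nb). Requires "using System.Numerics;".
        private BigInteger interleaveN(byte[] a, int nb)
        {
            BigInteger key = new BigInteger(0);
            BigInteger u1 = new BigInteger(1);
            for (int k = 0; k < nb; k++)
                for (int i = 0; i < 8; i++)
                    if ((a[k] & (1 << i)) != 0)
                        key |= u1 << (k + i * nb);
            return key;
        }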


  #32 (permalink)
 
bnichols's Avatar
 bnichols 
Dartmouth NS
 
Experience: Intermediate
Platform: MC, MC.Net, NT, TWS
Broker: IB / IQFeed / Kids
Trading: Forex, stocks
Posts: 637 since Feb 2010
Thanks Given: 64
Thanks Received: 460

Regarding visualization, which consumes more coding effort right now than writing data reduction algorithms, shown below are 4 images of a 2D probability density plot (left hand side) next to a snapshot of an 1800 tick chart for EUR/USD between 01:52 AM and 11:02 AM AST, November 15 2011 (right hand side), corresponding to the Long Time Frame fractal.

The first image shows on the left side the projection of a buy condition parameter vector on the 2D density for StochasticsK (vertical axis) vs MACD (horizontal) filtered for a 16 pip profit target. The cursor in the chart on the right hand side marks the corresponding relative low price. Green lines connect the fields in the GUI with both the selected 2D Density (StochsK vs MACD) plot and the selected test vector with the corresponding projection onto the density plot (green square) and the event in the chart (marked by the cursor in the chart).



The second image shows on the left side the projection of a sell condition parameter vector on the 2D density for StochasticsK (vertical axis) vs MACD (horizontal) filtered for a 16 pip profit target. The cursor in the chart on the right hand side marks the corresponding relative high price. Green lines connect the fields in the GUI with both the selected 2d Density plot (StochsK vs MACD) and the selected test vector with the corresponding projection onto the density plot (green square) and the event in the chart (marked by the cursor in the chart).



The third image shows the projection of a buy condition parameter vector on the 2D density for the difference between 15EMA and Close price (vertical axis) vs MACD (horizontal) filtered for a 16 pip profit target on the left hand side of the image. The cursor in chart on the right hand side marks the corresponding relative low price. Green lines connect the fields in the GUI with both the selected 2D Density (15EMADiff vs MACD) plot and the selected test vector with the corresponding projection onto the density plot (green square) and the event in the chart (marked by the cursor in the chart).



The fourth image shows the projection of a sell condition parameter vector on the 2D density for the difference between 15EMA and Close price (vertical axis) vs MACD (horizontal) filtered for a 16 pip profit target on the left hand side of the image. The cursor in the chart on the right hand side marks the corresponding relative high price. Green lines connect the fields in the GUI with both the selected 2D Density (15EMADiff vs MACD) plot and the selected test vector with the corresponding projection onto the density plot (green square) and the event in the chart (marked by the cursor in the chart).



Notes

In the Density GUI the "Profit Target" textbox contents apparently no longer refer to anything (shows 32--should be 16). "Button1" redraws the Density map minus the centroids (yellow squares) or test vector projections (green squares).

We've used a radius of "50" in scaled probability space, which translates into a smoothing function approximately 1/10th the size of the Density plot and accounts for the circular appearance of outliers. We should base the radius on estimated density (number of samples in the space divided by the volume of the space).

Disks (notably white disks) with chunks bitten out of them in the density plot are an artifact of the method used to combine buy and sell clusters--e.g. of outlier sell cluster data interfering with outlier buy cluster data. They typically have a low probability of occurrence, so they're not significant and are ignored at the moment. They remind me of Pac Man.

The centroids of the buy (blue to pink color scheme) & sell (white/red/black color scheme) clusters in the density plots are indicated by yellow squares.

The projection of a single test vector onto both density planes during buy and sell events is shown as a green square (circles are more interesting to draw from scratch on a bitmap than I ought to make time for right now). In both cases the test vector projections (green squares) happen to fall squarely in the "strong buy" or "strong sell" area of the density plot.

The good news is that, from the EUR/USD chart, these events happen to coincide with 2 points at which one could have made money entering a long or short trade respectively, if one exited at the right spot. Exits are a work in progress, bound up with trailing stop control.

Note however that these 2 events were chosen from the same dataset used to compute the density functions, so this is more of a sanity check on the implementation than proof that the system "works" in any sense.

In particular this example is for illustration only; the fact that 2 cherry-picked test vectors projected onto one or 2 density functions happen to meet expectations proves nothing at all. There are (or will be) 325 x 4 x 3 = 3900 such density functions in the Density dictionary (the number of combinations of 23 parameters taken 2 at a time, times the selected number of profit targets comprising the ZigZag filter, times the number of time frame fractals) and countless "test vectors".

It should perhaps be reiterated at this point that the time and effort invested in this first attempt to code the system won't be required to keep the "knowledge base" of density functions updated, or to encode a new knowledge base with different parameters (so in theory the approach is extensible without much effort to any system). It's expected however that the "glue" (the fuzzy propositions to come, which generate buy/sell signals from the combined outputs of bar-by-bar convolutions of parameter vectors appropriate to the system with the knowledge base) will require input from a trader familiar with the particular system. Not sure yet what form that will take.

Concluding Remarks
That said, the numbers generated by convolving a test vector with (what is essentially) an N-dimensional probability function will make sense to a computer (eventually), and this is the best way I can think of to expose a computer to trading probabilities without resorting to conventional AI techniques. Having inspected a lot of these projections, it's possible to say performance so far meets expectations.

In other words--it's not perfect. I can find instances where test vectors in a suite corresponding to a price turning point that is broad relative to the time frame (or fractal)--low level chop--would appear to give successive contradictory buy/sell signals if it were left up to them. That is to say, problems that may plague trailing stop control may also affect entry, but we won't be sure until we attempt to combine outputs of the convolution of an instantaneous parameter vector with the knowledge base (i.e., until we interface with NinjaTrader). This may be evidence of the worst-case scenario mentioned earlier (at least in the first instance, that the parameter transition from entry to exit is not well behaved), but IMO it's just more proof overall that one does not trade a single probability function any more than one trades a single indicator, or trades chop, and that the proof is always in the pudding. Some kind of low lag filter on these instantaneous parameter vectors will likely be required, and I'm assuming fuzzy logic will perform that function (make sense of the outputs). At least that's the hypothesis I'm sticking to at the moment.

Next work

A data structure to represent the output of a test vector (the "instantaneous parameter vector" generated when each bar closes in NinjaTrader) convolved with the Density dictionary (the knowledge base) is required, along with code to visualize it. This will initially take the form of a 3 fractal by 4 profit target by 325 probability value structure (one probability value corresponding to each pair of parameters projected onto the buy/sell probability plane of every combination of 23 parameters considered 2 at a time). It's not clear at the moment if the structure requires (or would permit) further reduction to make manipulation by fuzzy propositions viable. I'm hopeful, because the system so far is nothing more than a somewhat queer shape function mapping measurements to fuzzy input values.
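As a placeholder, the structure could be as simple as a 3 x 4 x 325 array of probabilities; the sketch below uses the counts quoted above and names of my own invention.

Code
    // Hypothetical container for the per-bar convolution output: one probability per
    // parameter pair, per profit target, per fractal. Sizes follow the counts above;
    // names are placeholders only.
    public class ConvolutionResult
    {
        public const int Fractals = 3;
        public const int ProfitTargets = 4;
        public const int ParameterPairs = 325;

        // Probability[fractal, target, pair] = probability that this plane sees a
        // buy/sell condition for the current instantaneous parameter vector.
        public double[,,] Probability = new double[Fractals, ProfitTargets, ParameterPairs];
    }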

The density object part of the design still doesn't account for S/R levels, probably because while they vary they tend to be discrete (they don't have 1st and 2nd differences). I'm still wondering to what extent the other 23 defined parameters account for price behaviour in the vicinity of S/R levels, and in fact how much redundancy is already in the selected parameters. In any event S/R levels, including magic numbers, make natural profit targets and can buttress entries.

There may be evidence that some parameters (typically differences between SMA/EMA and close price) routinely map to the opposite (wrong) buy/sell cluster, which makes no sense to me at the moment. I have to determine whether it's a bug or, if genuine, resign myself to using negative correlations in the fuzzy propositions that combine function outputs.

Time is included in the parameter vector but no attempt has been made to incorporate it the way some systems do, e.g. using time of day to cluster (in principle) the same traders, and presumably their styles, in a given session. There is some evidence of behaviours at certain hours of the day, in particular the possible tendency of various markets to make a beeline for the floor pivot.

Finally, rather than weight function outputs prior to synthesizing a buy/sell signal from combined probabilities, I will likely simply remove parameters that result in weak separation between buy/sell probability clusters. Overall, IMO, rules--including how much attention one pays to a given indicator depending on circumstances (e.g. stochastics during a breakout)--ought to be embodied in fuzzy propositions, which is where the system's actual trading expertise resides. This is seen to be a particular problem for small profit targets (e.g., scalping during a period of chop, or range trading a channel), where any method that generates random entries or range trades the price histogram can profit if sound money management principles are followed, given that some will argue "any currency trade is a range trade".

  #33 (permalink)
 
bnichols's Avatar
 bnichols 
Dartmouth NS
 
Experience: Intermediate
Platform: MC, MC.Net, NT, TWS
Broker: IB / IQFeed / Kids
Trading: Forex, stocks
Posts: 637 since Feb 2010
Thanks Given: 64
Thanks Received: 460


For a little comic relief, 2 images below illustrate the one trade I made so far today (while busy with other things :-/)

What interested me about the trade once it started to go south was that it provided an opportunity to study what I hypothesize to be "impulse response" price action (that a lot of people describe in Fib terms) after a relatively strong move across a significant S/R (in this case the floor pivot).

The first image shows the state of the trade just as price is about to hit an adjusted profit target, after price unexpectedly ran back over me (twice) and a second entry was needed to haul the trade out of the fire. The stop loss was moved a couple of times and then out of the way altogether.

While I should not have been trading at all (distracted, trying to do 3 or 4 things at the same time), or expecting price to cross a significant support level to reach my (single) target, or trading from the short time frame chart (200 ticks), the decision to make a 2nd entry took place only after I'd managed to focus and consulted longer time frame charts. Generally however this tactic (averaging) is a good way to score a full boat loss.



And as usual, what price inevitably did ...




[Edited to add] 2nd (spite?) trade, underway as we speak. Trading for pocket change at this time of the day for practice, since there is little or no energy in the market. Aside from spite (to get a piece of what I missed) this trade was entered because there is a decent resistance level fairly close by above, trend (50SMA) on 200 and 600 tick charts is down, momentum is negative and 200 and 600 tick stochs were peaking at the time of entry.





OOPS........darn again..........200 tick chart at the close....gambling pennies missed the close (penny wise, pound foolish as they say). All orders are GTC so the thing will wake from the dead in 15 minutes. Will follow up once it concludes.

Note this is an example of how the NT/IB combo can go haywire if a trade is active at the close (apparent target has jumped in this case, and very often the apparent entry does as well )

  #34 (permalink)
 
bnichols's Avatar
 bnichols 
Dartmouth NS
 
Experience: Intermediate
Platform: MC, MC.Net, NT, TWS
Broker: IB / IQFeed / Kids
Trading: Forex, stocks
Posts: 637 since Feb 2010
Thanks Given: 64
Thanks Received: 460

Preamble

Since I last posted I've done some trading mostly on paper, continuing to gain confidence with the method, in particular how indicators behave when a trend turns into consolidation and vice versa (unfortunately at the moment not enough hours in the day to post these and discuss). In general end of week results are break even to respectably profitable on hundreds of trades.

Otherwise have been spending every waking hour, and some sleeping hours it seems, working on bot implementation details, where (as they say) the devil resides.

This entry will be at first a written summary of notes from my pen and paper journal, scribblings on backs of envelopes and the rough diagrams the bot code is rising from, but I intend to supplement it with illustrations later.

State of development

The main stumbling block has been how to cope with the volume of data comprising the knowledge base (the probability densities) that needs to be preprocessed, reduced, interpreted and made available as an indicator or to the bot during operation--a volume proportional to the number of parameters being considered. While a minimum number of parameters derived from indicators particular to the method (i.e., trend via 50MA slope, momentum, cycle, S/R and fractal) must be included, ideally the bot should also embody the sometimes hard to define nuances of price and indicator action that optimize entries and exits to minimize MAE and maximize ETD.

After puzzling too long about what these nuances might be, to get things moving I decided the top priority at this stage is to develop data management and interpretation tools and to write a basic API (application program interface) in order to get the system off the ground. Once NT has been interfaced and is producing results then performance & hence final parameter selection will become the main focus.

Therefore the parameter vector is restricted to 16 variables at the moment (including first differences as a proxy for "nuance" but still excluding consideration of S/R levels), which generates a knowledge base large enough to guide development but not so large as to be unmanageable. Characteristics of the implementation at present are as follows:

1. 3 fractals corresponding to 200-, 600- and 1800 tick bars
2. 4 profit targets (8, 16, 32 & 64 pips)
3. 16 parameters (indicator outputs and their first derivatives)
4. 120 2D probability density functions (256 x 256 = 65536 density values each) per fractal, per profit target, for a total of 1440 density functions
5. Total non-zero density values distributed among the density functions: approximately 54 million, based on a 3 month price data sample over 3 time/tick frames.

Implementation details

At first the idea was to store the knowledge structures (the density dictionary & its support structures) in a database (MS SQL 2008 Express) and have the bot interact with them as required via calls to a DLL, which in turn would either interface directly with SQL Server or do so via a Windows Service. It soon became apparent however that a simple SQL Server installation is not equipped to deliver the necessary volume of data at adequate rates (i.e., 96 minutes simply to initialize the data structure in memory reading from the database). Therefore preprocessing now includes a step to serialize the density data structure as a binary file, reducing load time from an hour and a half to the few seconds required to deserialize a 90 megabyte binary file on disk into a structure in memory.
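The serialize-to-binary step amounts to nothing more exotic than the sketch below (a generic illustration with assumed key/value types, not the production code).

Code
    // Generic sketch: dump a density dictionary (string key -> double value) to a flat
    // binary file and read it back. Key/value types here are assumptions for illustration.
    using System.Collections.Generic;
    using System.IO;

    static void SaveDensity(string path, Dictionary<string, double> density)
    {
        using (var w = new BinaryWriter(File.Create(path)))
        {
            w.Write(density.Count);
            foreach (var kv in density)
            {
                w.Write(kv.Key);
                w.Write(kv.Value);
            }
        }
    }

    static Dictionary<string, double> LoadDensity(string path)
    {
        var density = new Dictionary<string, double>();
        using (var r = new BinaryReader(File.OpenRead(path)))
        {
            int count = r.ReadInt32();
            for (int i = 0; i < count; i++)
            {
                string key = r.ReadString();
                density[key] = r.ReadDouble();
            }
        }
        return density;
    }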

Also in the interest of performance--necessity, not elegance, being the mother of invention--I scrapped the too-clever interleaved density dictionary keys in favour of ad hoc keys constructed by concatenating the pertinent variables. That is, the density dictionary is still a list of projections of points comprising a (potentially) 60 dimensional probability object onto planes defined by parameter axes taken 2 at a time, but the dictionary key is now of the form

ftttXXYYMMMNNN

where
f = 1 digit identifying the fractal (0, 1, 2) corresponding to 200-, 600- and 1800-tick data respectively
ttt = profit target derived from the ZigZag indicator used to filter the raw data into buy/sell clusters underlying the 60-D probability "object"
XX = index into 60 item array of potential parameters, hence associated with (what I imagine as) an axis of the 60-dimensional space and corresponding to the first of 2 parameter axes defining the plane upon which a point in 60-D space is projected
YY = index into the same 60 item array of potential parameters, also associated with (what I imagine as) an axis of the 60-dimensional space and corresponding to the second of 2 parameter axes defining the plane upon which a point in 60-D space is projected
MMM = coordinate of the 1st parameter (identified by XX) along the XX axis
NNN = coordinate of the 2nd parameter (identified by YY) along the YY axis.
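A sketch of how such a key might be assembled is below; the digit widths follow the layout above, but the exact formatting in the real code is an assumption on my part.

Code
    // Build a key of the form ftttXXYYMMMNNN from its components, zero-padded to the
    // widths described above. Formatting details are an assumption for illustration.
    static string MakeDensityKey(int fractal, int targetPips, int paramX, int paramY,
                                 int coordX, int coordY)
    {
        return fractal.ToString("D1")
             + targetPips.ToString("D3")
             + paramX.ToString("D2")
             + paramY.ToString("D2")
             + coordX.ToString("D3")
             + coordY.ToString("D3");
    }

    // e.g. MakeDensityKey(1, 16, 4, 17, 128, 200) returns "10160417128200"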

Testing and the prelude to interface design

Once the system was able to project a point (selected by a parameter test vector) in the 60-D probability object onto selected planes the ability to read & synchronize test vectors from disk 3 fractals at a time was added, so that it became possible to play historical data from disk through the system as a precursor to data being delivered by NT in real time.

At the same time a means to visualize system activity was conceived as an array of 12 (3 fractals x 4 targets) 60x60 pixel bitmaps, the color of a single pixel corresponding to the probability value associated with the projection of a test vector onto a given 2D probability density function. In AI terms we might humour ourselves by interpreting the changing pattern of colors of the bitmap assembly as each test vector is processed to be a view of sensory neuron activity as our proto-bot reacts to price action. But mainly (as mentioned before), visualization aids debugging, gives a better idea of what the system is doing and helps decide how to proceed next.
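The probability-to-pixel mapping itself is trivial; something along the lines of the sketch below (the actual palette in the GUI differs, and this blue/red blend is only illustrative).

Code
    // Illustrative mapping from a buy probability (0..1, 0.5 = neutral) to a pixel colour:
    // blend from red (sell) through black (neutral) to blue (buy). Sketch only.
    using System.Drawing;

    static Color ProbabilityToColor(double pBuy)
    {
        double x = (pBuy - 0.5) * 2.0;                    // -1 (sell) .. +1 (buy)
        int blue = (int)(255 * System.Math.Max(0.0, x));
        int red = (int)(255 * System.Math.Max(0.0, -x));
        return Color.FromArgb(red, 0, blue);
    }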

After visualizing system response to corresponding chart price action for a while 2 things became apparent; namely,

1. there was an encouraging observable correlation overall of buy/sell probability and price extrema, with exciting across-the-board consensus when a long term trend was setting up
2. there was sometimes discouraging randomness in between, including apparent occasional inexplicable breakdowns in correlation.

Following the logic that progress is made only by explaining anomalies, and regarding Item 2 (the discouraging part): as useful as it may be, the bitmap assembly represents 1440 variables whose values are constantly changing, and it is hard to read during chop. In at least one case I caught myself comparing a system snapshot with the wrong part of the charts. In addition, each pixel color was finally & deliberately pinned to "hard buy" (blue) or "hard sell" (red) no matter what the actual probability of a buy or sell condition, to make them easier to see and so aid debugging.

The importance of Item 2 is that it suggested the next step; namely, to give the 1440 individual probability values represented by pixels in the bitmap assembly voting rights--making whether to advise the fuzzy logic master to enter a trade, how long to stay in a trade, trade direction, as well as position size a matter, by some means, of the will of the majority. Rather than suffer a democracy however (one node--one vote) it seemed more appropriate to assign each the same rights as a newbie trader in a prop shop, and discipline them according to performance.

Such a node's activity is constrained by rules and consists in entering a paper trade when the probability of profit exceeds a threshold, maintaining position size commensurate with the evolving probability that the direction of the trade (long or short) will be profitable after commission, accumulating profits and losses as the position is liquidated accordingly, and voting according to its "beliefs" in "elections" that influence real-money buy/sell/hold decisions made by the fuzzy logic "expert system".

As a consequence the following structure & set of rules was devised to aggregate system response in a form accessible to the fuzzy logic system--the real-money buy/sell/hold decision-making component--as well as to permit adaptation & learning:

1. The construct handling each of the 1440 probability values (each microcosmic assessment of price action, whether it represents a buy, sell or hold condition within some scope or context) was abstracted as a node in a larger decision engine and assigned an activation function & system of weights, similar to neuron function, bias & weighting in a neural net (or constraints on a trader in a prop shop)
2. Each node was further assigned membership in 3 peer groups: fractal (its time frame trading peers), target (its profit target peers), and "global" (the system)
3. Each node is free initially to enter or exit a "paper" trade, or modify its position; any entry, exit or adjustment intended to maintain a position consistent with the evolving probability that the action will generate profits or preserve capital.
4. Each node maintains a cumulative P/L record of these paper trades.
5. Each node adds its voice to consensus (votes for buy, sell, hold) by making its state available during a poll at each bar close, including position status & cumulative profit
6. Each node reports results immediately after each private paper trade (as explained below), and accepts the consequences of its actions via modifications to its decision making process, including its assumptions (i.e., the way the underlying probability is interpreted)
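A bare-bones sketch of such a node follows; field and method names are mine, and the real construct is certainly richer, but it conveys the intent.

Code
    // Hypothetical node in the decision engine: one parameter pair, one fractal, one
    // profit target. Keeps its own paper P/L and casts a weighted vote at each bar close.
    public class DecisionNode
    {
        public int Fractal;            // peer group: time frame fractal
        public int ProfitTargetPips;   // peer group: profit target
        public double Weight = 1.0;    // voting weight, adjusted according to performance
        public double Bias = 0.0;      // shifts how the underlying probability is read
        public double PaperProfit;     // cumulative P/L of the node's private paper trades
        public int Position;           // current paper position (+long / -short / 0 flat)

        // Activation: interpret the raw plane probability through this node's
        // weight/bias, clamped to [0, 1].
        public double Activate(double rawProbability)
        {
            double p = Weight * rawProbability + Bias;
            return p < 0.0 ? 0.0 : (p > 1.0 ? 1.0 : p);
        }

        // Vote at bar close: positive favours a buy, negative a sell; magnitude = conviction.
        public double Vote(double rawProbability)
        {
            return (Activate(rawProbability) - 0.5) * 2.0;
        }
    }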

The node-level "paper trail" concept was adopted both as a means to weigh a node's input into the real-money trading process and to track the contribution of a given parameter pair to the performance of the system for structural audit purposes.

Thus while each node operates autonomously, the scope of its activity and its influence on others must be regulated as a consequence of its actions, subject as well to the opinions of its peers and of its "boss" (the overall system). Behaviour is regulated by a simple weighting mechanism intended to reward decisions that increase profits and to discourage or reverse habitual decisions that decrease profits. In general

1. nodes that habitually do well relative to other nodes in their peer groups tend to carry bigger position sizes, and their vote carries more weight. Successful nodes are able to discount the opinions of less successful nodes and even of an idiot boss (a system that exhibits overall performance inferior to the node). Ideally, by rewarding success, system behaviour ought to come to mimic successful node behaviour.
2. nodes that do poorly,
2.1 first, to reduce negative impact on profitability:
2.1.1 develop poor confidence (reduce position sizes & trading frequency)
2.1.2 have the weight of their vote reduced (ability to influence others and real-money decisions)
2.1.3 must pay more attention to the opinions of their peers regarding entry decisions and position sizes
2.2 second, to correct behaviour:
2.2.1 have their assumptions examined (i.e., the construct generating the probability underlying their decision making is modified, even to the extent of reversing the buy-sell dependence on the underlying probability)

Finally, at the "atomic" level, each of the 65536 values comprising a 2D probability density function (for active parameters selected 2 at a time) is assigned both a weight and a bias, modifiable under program control. This was done to provide a way for the system to mutate, so to speak, beyond simple adaptation. Depending on the algorithm, the possibility exists for the way the density function is interpreted (what the system "hears") to differ significantly from the original composition (what the probability function "says").

Notes

As mentioned I'll try to add illustrations and details of critical algorithms (for e.g., weighted entry thresholds and position sizing) as they become available.

The stage of the system described here tries to blend basic trading principles with simple neural net design & training practice. If it amounts to that, it's both unavoidable and convenient: the concepts are trivial to implement in code, and yet the approach avoids the frustration of generic trial & error network design.

Given the number of degrees of freedom implicit in the weighting system I'm curious whether the system will stabilize in a desirable (profit-generating) state. IMO if the underlying trading method has a positive expectation (unlike e.g. gambling) then it should.

Theoretically in this design nodes (representing pairs of parameter choices) can become inactive, whereupon a "structural audit" will be indicated (pruning, or substituting other parameters).

Despite continuing encouraging signs there is still no reason to believe the system will work. The first indication should come once the algorithms (paper trading, voting, adaptation) described above are finished, debugged and tested. My wildest desire is it turns out better than flipping a coin :-/ If nothing else I've learned there's a big difference between sitting behind a desk wearing pretty clothes and telling software architects & engineers what to do (my previous lot in life), and doing it one's self.

Edited to add: In the cold light of day after a few hours' sleep, it now seems it may make no sense from a real world trading perspective to keep records ("paper trade") at the node level, since that would amount more or less to trading 2 indicators--a bare minimum for this (TopDog) approach--and could introduce a negative bias that unfairly penalizes the node. It may be that trading has to occur in the subset of nodes defined by the intersection of fractal and target peer groups--a team of 120 nodes--which might suggest a simple neural net and the use of backpropagation to "discipline" individual nodes. Not much of a compromise, since the net architecture would be predefined--perhaps no messing around to find an optimum structure. In any event the concept needs more thought.

  #35 (permalink)
 
bnichols's Avatar
 bnichols 
Dartmouth NS
 
Experience: Intermediate
Platform: MC, MC.Net, NT, TWS
Broker: IB / IQFeed / Kids
Trading: Forex, stocks
Posts: 637 since Feb 2010
Thanks Given: 64
Thanks Received: 460

This week's trades so far

It's easiest to comment on one's trading activities when things are going relatively well, and noticing I'm sitting at 95% profitable trades for the 1st 2 days of the week (21 trades), I will use this opportunity (taking a break from system development) to do so.

I've been using a small account (typically $6,000-$10,000) and trading 80,000 unit lots, usually no more than 2 at a time (odd lots are possible with Interactive Brokers as long as the post-leverage USD amount is at least $30,000), because something about this account size seems right--perhaps short time frame, small pip targets = small account. Mainly I've adopted a per-trade risk of 2% or less, and because of an understanding with my significant other a 2% loss will cost us a bottle of scotch rather than several months' groceries. Leverage is approximately 29:1. This may pale compared to the 400:1 available from some Forex brokers who cater to small accounts, but with a background in stocks it seems more than adequate to me.

As mentioned previously I've traded EUR/USD spot currency exclusively for a couple of years now so as not to be distracted by issues with other currencies, in the hope that its price action would become second nature--hypothesizing first it's possible to learn price action instinctively and second it is experience alone that allows us to master a currency, that it's better to master one currency than be a "jack of all currencies, master of none". IMO short time frame spot currency is an excellent test of the hypothesis since initially at least the market can seem frustratingly unpredictable.

I'm using Barry Burns' (TopDog) chart trading method, its claim to fame strictly that it happens to agree with my temperament, not because I think it's superior to any other method, and because it's cheap (I invested approximately $700 in materials, not all of which were necessary). Some will argue any money spent on information that is freely available and essentially rehashes what we already know (or should know) is wasted--like paying for a diet plan--the teacher vs. self-taught debate. I think we teach ourselves in any case, and any guru should be the first to admit that--but IMO a method pays for itself if it speeds up the learning process, gives what we know a little form and structure, puts 1000 facts into some sort of context, and especially lets us understand exceptions to the rules (e.g., when to trust Stochastics and when not), thereby lessening the number of costly mistakes during apprenticeship.

The underlying principle is the same as any other method--enter high probability trades or be prepared to suffer the consequences--and the characteristics of high probability trades are simple:

1. trade with the trend, (longer term trend when trading ranges, which IMO together with short term price action encapsulates sentiment)
2. trade with momentum
3. trade with the cycle (e.g., enter on retraces)
4. enter & exit at or near S/R levels
5. for perspective bracket your medium time frame chart with longer and shorter time frame charts (BB likes factors of 3 for reasons that may sound vaguely new age but that's what I do)

In addition to knowing what the rules are, to trade the method successfully one must

1. follow the rules (easier said than done, since it means mastering our impulses and overcoming bad habits)
2. trade as much as possible to learn what the rules actually mean in light of price action
3. try to avoid thinking we "already know all this stuff" until we are consistently profitable
4. never quit, keeping in mind that when we fail it's our fault, not the system, and that we can overcome our faults with experience.

It's not necessary to use indicators to trade the method manually if you have "the gift", either by birth or from extensive experience. I use them because bots require them, and I find it helps to think like a bot when writing bots. Edited to add: as a retired executive with a Type A personality, I began trading believing the market would "implement the memo" I sent, meaning make me profitable. It turns out the markets don't get memos. Instead the market reduces all of us; we start out all over again in the mail room. Therefore how long it takes to become profitable depends on how long it takes to accept that, and how hard one is prepared to work to begin a new career. In my case, extra detention to unlearn everything I thought I knew before learning a bunch of new stuff.

To summarize, the 2 images below show the chart setup (last few trades) and the NinjaTrader Account Performance Summary (excluding commission of $2.48/order) for this week's trading so far. MAE in the Account Performance summary is nothing to brag about :-/ The buy (blue) and sell arrows on the charts more or less correlate with the indicators at the bottom of each chart and with S/R levels, as might be expected (buy when MACD is positive and/or rising and Stochastics are bottoming, sell when MACD is negative and/or falling and Stochastics are peaking, backstop with S/R), and can be deduced as entries and exits from the simple fact the one unsuccessful trade is not shown (I wandered away from the computer while a trade was active--bad, bad idea).

I depend on magic numbers (00, 50, 90 and any others that materialize out of thin air) and Murrey Math for near term S/R levels, have come to respect Ultimate Support/Resistance when it appears at the same price on 200-, 600- and 1800- tick charts. Longer term I keep an eye on Prior Day OHLC and pivots (PP, S1, S2, R1, R2), which more often than not seem to become magnets at certain times of the day.

I expect corresponding price action when longs and shorts bail at the end of a session (e.g., Europe closing during the US market hours).

Other than the charts I keep an eye on real time DOW Industrials for departures from correlation when the US market is open, since differences in behaviour tend to be more informative than agreement, with the assumption these days the DOW & EUR/USD are in sync at the open (i.e., the open is built into EUR/USD at that moment) and tend to track each other for the first 30-60 minutes. For example, in this time if the DOW decides to close any gap up or down EUR/USD tends to follow. I also keep a TV tuned to CNBC during US market hours for news that might explain (or rarely, forewarn of) significant price movements but ignore the commentary almost 100%.





System Development

Just a note--for the time being I've decided to go with the neural net approach to fuzzifying the probabilities generated by mapping parameter vectors onto each of the 1440 probability density planes. The structure of the net derived from the planes comprises several layers, the first (input) layer being 12 groups of 120 input nodes (neurons), each accepting a raw probability (0 - 1) that a buy or sell condition with a specified profit expectation is present for a given fractal, each group of 120 nodes defined by a common fractal and profit expectation. I may adopt e.g. a sigmoidal activation function for these inputs and weight the outputs of these input layer nodes as usual, but IMO before being summed for input to nodes in subsequent layers they must be combined according to the rule for combining independent probability estimates that a given event will occur; i.e.:

 
Code
SUM = (P1 * P2 * P3 * ... * Pn) / (P1 * P2 * P3 * ... * Pn + (1-P1) * (1-P2) * (1-P3) * ... * (1-Pn))

where the Pi (i = 1, ..., 120) are the input probabilities and SUM is the consensus probability that the buy/sell condition is in fact present.
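A direct C# rendering of that combination rule might look like the sketch below; in practice with 120 inputs one would probably work in log-odds to avoid underflow, but the idea is the same.

Code
    // Combine independent probability estimates P1..Pn that the same buy/sell condition
    // is present into a single consensus probability, per the formula above.
    static double CombineProbabilities(double[] p)
    {
        double num = 1.0, den = 1.0;
        foreach (double pi in p)
        {
            num *= pi;         // P1 * P2 * ... * Pn
            den *= 1.0 - pi;   // (1-P1) * (1-P2) * ... * (1-Pn)
        }
        return num / (num + den);
    }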

Edited to add: Forgot to mention the one major achievement reflected by the last couple of days' earnings: keep profit expectations small. Double the investment and place the first profit target inside the closest S/R level. Listen to your inner voice when it comes to distributing the remaining targets at S/R levels further removed, and be aggressive about preserving capital.

  #36 (permalink)
 
Adamus's Avatar
 Adamus 
London, UK
 
Experience: Beginner
Platform: NinjaTrader, home-grown Java
Broker: IB/IQFeed
Trading: EUR/USD
Posts: 1,085 since Dec 2010
Thanks Given: 471
Thanks Received: 789


bnichols View Post
I expect corresponding price action when longs and shorts bail at the end of a session (e.g., Europe closing during the US market hours).

Pardon my ignorance but what would that corresponding price action be?

I look at the Europe session start and wonder what is going on there a lot - I think it's a well known phenomenon that the market often goes to either the Asian high or the Asian low and then reverses the other way for the rest of the session. I sometimes speculate why and I think maybe it's the Japanese closing their positions out.

Is the same sort of idea you are thinking about for the European close? What time would that be? 17:00 London time?

  #37 (permalink)
 
bnichols's Avatar
 bnichols 
Dartmouth NS
 
Experience: Intermediate
Platform: MC, MC.Net, NT, TWS
Broker: IB / IQFeed / Kids
Trading: Forex, stocks
Posts: 637 since Feb 2010
Thanks Given: 64
Thanks Received: 460


Adamus View Post
Pardon my ignorance but what would that corresponding price action be?

I look at the Europe session start and wonder what is going on there a lot - I think it's a well known phenomenon that the market often goes to either the Asian high or the Asian low and then reverses the other way for the rest of the session. I sometimes speculate why and I think maybe it's the Japanese closing their positions out.

Is the same sort of idea you are thinking about for the European close? What time would that be? 17:00 London time?

Hi Adamus! Thanks for calling me out on this one. So far, for me, the issue of what explains market movement is still a matter of debate, and I appreciate your observations about the European market open--will look for that.

For me European market close at present is 11:30 EST (12:30 AST--my time, or 16:30 GMT)

I crave explanations of price action as much as anyone but have come to think of explanation as a crutch--or at least a bad habit--and I should try to avoid contributing to the problem. I (perhaps all of us) interpret explanations as predictive of market bias or sentiment under certain circumstances even though experience has taught me otherwise (i.e., I've lost a lot of money predicting what price will do before it does it). These days my only edge is recognizing when price action has passed some point of no return, or momentum & stochs some point of capitulation at a price extremum, and capitalizing on it before it's too late. All I should have said was "I'm wary of the European market close".

In this case, what I meant is that when any price hiccup associated with the last 10-20 minutes of the European market close is in the opposite direction of the general trend, I suppose it's a consequence of day speculators exiting positions consistent with the trend (e.g., shorts exiting a downtrend by buying the Euro and selling the US dollar), which assumes speculators are present who en masse or otherwise have positions significant enough to move the market. If so, the movement would be more pronounced on Fridays, when the effect is combined with weekly position holders exiting positions. In fact any such move could be due to any number of reasons, including my obsession with movements of 10 pips or more about that time, and at the end of the day (literally) the reason matters not--either the market moves and we're in it on the right side of the trade, or we're not (or it doesn't move). This prejudice of mine may be inherited from my stock trading days. The bottom line is--as you already know--be prepared for price movement when markets open or close, and brace for disappointment when they don't.

  #38 (permalink)
 
Adamus's Avatar
 Adamus 
London, UK
 
Experience: Beginner
Platform: NinjaTrader, home-grown Java
Broker: IB/IQFeed
Trading: EUR/USD
Posts: 1,085 since Dec 2010
Thanks Given: 471
Thanks Received: 789

Wasn't trying to catch you out, just asking what you meant. I have never even looked for repeated PA patterns at the European close, although I will do now!

  #39 (permalink)
 
bnichols's Avatar
 bnichols 
Dartmouth NS
 
Experience: Intermediate
Platform: MC, MC.Net, NT, TWS
Broker: IB / IQFeed / Kids
Trading: Forex, stocks
Posts: 637 since Feb 2010
Thanks Given: 64
Thanks Received: 460

Sorry -- "calling me out" was probably a poor choice of words. Suspect my communication skills are getting rusty from talking only to my computer screen day after day.

Interestingly, after all that, I got slammed today by exactly this--the rise in EUR/USD in the 45 minutes prior to the close, when price made a ~50 pip beeline for the floor pivot after trading downward most of the day. I was short at the time, not an issue usually since I know how to recover from such a situation--just have to be quick. This time I was groggy (party with a family friend last night :-/) and simply did not react, made every rookie mistake in the book, and immediately quit trading for the day.

  #40 (permalink)
 
bnichols's Avatar
 bnichols 
Dartmouth NS
 
Experience: Intermediate
Platform: MC, MC.Net, NT, TWS
Broker: IB / IQFeed / Kids
Trading: Forex, stocks
Posts: 637 since Feb 2010
Thanks Given: 64
Thanks Received: 460


Quick update regarding system development.

Having given more thought to the notion of using a neural network to condition ("fuzzify") inputs to the bot's fuzzy inference engine, an issue arose that has awoken a bigger one.

Briefly, the neural net approach came up because I decided it would be a good idea to automate the process of updating the system's "knowledge base"--the collection of probability density functions derived from relatively current price action--and if it can update its knowledge base, then why not also update the way it interprets that knowledge? There are well documented ways of doing this in the Artificial Intelligence literature, neural nets being one simple and time honoured means.

Training a net e.g. by backpropagation requires adjusting internal weights to distribute the difference between desired outcome and outcome produced by the network in such a way as to minimize the difference. Implicit in this is that we know what the desired outcome is.

In the real world--trading--there is a big difference unfortunately between desired outcome and actual outcome.

Regarding the desired outcome, this project started out naively, supposing we could combine a number of probability estimates derived from a snapshot of price action (more precisely, a snapshot is a measurement of a number of parameters derived from certain indicators at a moment in time) to generate profitable order signals. Beyond concerns about whether the selected parameters are necessary and sufficient, it also bothered me that by definition a snapshot may not necessarily embody price dynamics in the moments approaching that magical "point of no return" (commitment to a direction with some amount of momentum, corresponding to an entry or exit), even if the approach may capture the context in which price action is occurring.

In other words, time is not a factor in a snapshot, and the "desired outcome" was conceived to be simply some combination of probabilities that conditions indicating a buy or sell, with an associated profit expectation, were present (inherited from filtering with the ZigZag filter). This is why first differences are present in the parameter vector (as the inventors of calculus might agree, they are a convenient way to take a snapshot of velocity--the rate of change of price--and hopefully a measure of the "nuance" referred to in a previous post).

In the beginning I hadn't given any thought to how such a system might learn and adapt--it didn't seem relevant since the learning was embodied in the probability density "knowledge base".

The problem now with asking such a system to learn is that it takes time for price to move--to reveal the "actual outcome"--either wiggling its way toward a profit target (success) or a stop loss (failure). Over time, as traders, through trial and error we want to reinforce decisions (hence actions) that lead to success and discourage decisions (hence actions) that lead to failure, and we should expect nothing less of the bot. Otherwise it will repeat the same action based on its probabilistic input over and over, expecting the same outcome, which may bear no resemblance to the actual outcome--one definition of insanity, and perhaps why traditionally we're wary of robots.

This last description is characteristic of a class of AI devices known as automata, that have little or no memory of the past and have limited ability to learn.

The problem we face here is that if we're going to assign credit or blame to any data point in the knowledge base, or any decision or action that led to a given position (long or short some number of units), we have to associate the point at which the prediction proved successful or failed with the data, decision and action(s) in question, no matter how long that takes. As they say in politics--we need a paper trail that allows us to finger-point.

On that topic, having given over my ability to earn a living by accepted means to my ability to predict price many years ago, I'd acknowledge the contribution of taxpayer funded academics who still maintain it's impossible, that price is random. Put your money where your mouth is--I'll take the wager. At least I die honourably

A somewhat more thoughtful approach to the machine implementation takes into account decisions and actions in the past that can be associated with a present successful or unsuccessful outcome. Marvin Minsky and others pioneered work on this in the 1950s & '60s with the notion of reinforcement learning, although developments have ebbed and flowed since then with the availability of funding. More recently Richard Sutton (PhD 1984, "Temporal Credit Assignment in Reinforcement Learning") put more effort into it with a series of papers, including "Learning to Predict by the Methods of Temporal Differences" (1988) and "TD Models: Modeling the World at a Mixture of Time Scales" (1995), whose results I may adapt. Still reading.
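For reference, the core of Sutton's temporal-difference prediction (TD(0)) is tiny--the textbook tabular form is sketched below, not anything wired into the system yet.

Code
    // Textbook TD(0) update for a tabular value estimate V over discrete states: nudge
    // V[s] toward the observed reward plus the discounted estimate of the next state.
    static void TdUpdate(double[] V, int s, int sNext, double reward,
                         double alpha = 0.1, double gamma = 0.95)
    {
        double tdError = reward + gamma * V[sNext] - V[s];
        V[s] += alpha * tdError;
    }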

The bottom line is straightforward neural nets fill the bill--can be used for prediction of future events--if we're prepared to finesse them, but I've finessed enough neural nets to want something different this time, and for once have the time to look at alternatives.

In other news, I will likely adopt a softmax activation function for the nodes rather than the sigmoid function, since upon inspection the softmax function is more likely to preserve inputs as posterior probabilities, and we're dealing with probabilities--assuming we can treat buy/sell/hold conditions as categories in AI parlance.
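For the record, the softmax under consideration is just the sketch below: outputs are positive and sum to 1, so they can be read as posterior probabilities over buy/sell/hold categories.

Code
    // Softmax over a vector of node inputs; the max is subtracted first for numerical
    // stability. Sketch only.
    static double[] Softmax(double[] z)
    {
        double max = double.NegativeInfinity;
        foreach (double v in z) if (v > max) max = v;

        double sum = 0.0;
        double[] e = new double[z.Length];
        for (int i = 0; i < z.Length; i++) { e[i] = System.Math.Exp(z[i] - max); sum += e[i]; }
        for (int i = 0; i < z.Length; i++) e[i] /= sum;
        return e;
    }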

In still other news, I stumbled across the Mahalanobis distance as a better alternative to the pseudo-Euclidean radius function I've been using to create the probability density function. Amazing to me that if you can conceive of such a thing, someone's already worked out the details. This means I have to go back and recompute the density functions--no big deal, but wishing I still had an assistant.
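For anyone following along, the Mahalanobis distance is just the sketch below: the distance of a point from the cluster mean, rescaled by the inverse covariance of the cluster so that spread along each parameter axis is accounted for.

Code
    // Mahalanobis distance of point x from mean mu given the inverse covariance matrix
    // invCov: d^2 = (x - mu)^T * invCov * (x - mu). For a 2D density plane invCov is 2x2.
    static double MahalanobisDistance(double[] x, double[] mu, double[,] invCov)
    {
        int n = x.Length;
        double[] d = new double[n];
        for (int i = 0; i < n; i++) d[i] = x[i] - mu[i];

        double sum = 0.0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                sum += d[i] * invCov[i, j] * d[j];

        return System.Math.Sqrt(sum);
    }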





Last Updated on May 7, 2013

