FOREX ATS - Development and Deployment


Discussion in Trading Journals


  #11 (permalink)
ajespy
Perth Australia
 
Posts: 15 since Nov 2012
Thanks Given: 1
Thanks Received: 4

Since writing my last post I have done further research into big-data issues. It turns out variable reduction is a common problem, and one that is quite easily solved. I was on the right track reducing my variables based on the correlation matrix, but my methods were crude. A technique known as variable clustering provides a much more powerful means of getting the job done.

I have since clustered my variables based on the 4 different correlation metrics and again settled on 15 variables selected under Hoeffding's D. In a few hours I was able to complete what I had been struggling to do for the last two weeks!
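
For anyone curious, a minimal sketch of the variable clustering idea, assuming a pandas DataFrame of indicator columns. Note scipy has no built-in Hoeffding's D, so Spearman rank correlation stands in for the correlation metric here:

```python
# Variable clustering sketch: hierarchical clustering on 1 - |rank correlation|,
# keeping one representative variable per cluster. Spearman stands in for
# Hoeffding's D, which scipy does not provide out of the box.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_select(X: pd.DataFrame, n_clusters: int = 15):
    corr = X.corr(method="spearman").abs()               # rank-correlation matrix
    dist = squareform(1.0 - corr.values, checks=False)   # condensed distances
    labels = fcluster(linkage(dist, method="average"),
                      n_clusters, criterion="maxclust")
    keep = []
    for k in np.unique(labels):
        members = corr.columns[labels == k]
        # representative = variable most correlated with its cluster on average
        keep.append(corr.loc[members, members].mean().idxmax())
    return keep

# selected = cluster_select(indicators_df, n_clusters=15)
```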

Now that I have the vars selected I can move on to quotients with price, investigating custom vars by modelling the series itself, calculating and smoothing WoEs (weights of evidence), and preparing a datamart.
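
And a hedged sketch of the WoE step, assuming a binary outcome column (1 = good trade, 0 = bad) and a variable already binned with qcut; the smoothing constant is my own illustrative choice:

```python
import numpy as np
import pandas as pd

def woe(binned: pd.Series, y: pd.Series, smooth: float = 0.5) -> pd.Series:
    """Weight of evidence per bin: ln(%events / %non-events), lightly
    smoothed so empty cells don't blow up the log."""
    tab = pd.crosstab(binned, y).reindex(columns=[0, 1], fill_value=0)
    good = tab[1] + smooth
    bad = tab[0] + smooth
    return np.log((good / good.sum()) / (bad / bad.sum()))

# Example: WoE of a quintile-binned indicator against the trade outcome
# woe_map = woe(pd.qcut(df["rvi_2h"], 5), df["outcome1"])
```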


  #12 (permalink)
ajespy
Perth Australia
 
Posts: 15 since Nov 2012
Thanks Given: 1
Thanks Received: 4

Wow, I've been busy and have neglected my blog. After shortlisting the 15 vars I joined them to my outcome data and ran a preliminary model estimation using the raw vars. Instead of a combination of log-reg models I opted for a single NN-type model, and the results were very promising.

Following this I began constructing the full datamart. I opted to follow 4 different forex pairs and downloaded the data and indicators for each. I have modified my outcome code and just finished joining it all together.

Whilst reviewing outcomes I decided to check a less sophisticated outcome based on a fixed duration. With a strategy that was 100% accurate I would trade more often and make more money under this alternative outcome, but on a per-trade basis it performs worse than outcome1. When I come to test my model deployment I may revisit this second outcome. I now have to put together the full datamart with outcome combinations.
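
To make the fixed-duration outcome concrete, a minimal sketch assuming a DataFrame of 5-minute bars with a close column; the 2hr horizon and 10-pip threshold are illustrative, not my tuned values:

```python
import numpy as np
import pandas as pd

def fixed_duration_outcome(close: pd.Series, horizon_bars: int = 24,
                           threshold: float = 0.0010) -> pd.Series:
    """Label each bar by the return over a fixed horizon (24 x 5min = 2hrs):
    1 = long wins, -1 = short wins, 0 = neither clears the threshold."""
    fwd_ret = close.shift(-horizon_bars) / close - 1.0
    return pd.Series(np.select([fwd_ret > threshold, fwd_ret < -threshold],
                               [1, -1], default=0), index=close.index)

# outcome2 = fixed_duration_outcome(bars_5min["close"])
```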

  #13 (permalink)
ajespy
Perth Australia
 
Posts: 15 since Nov 2012
Thanks Given: 1
Thanks Received: 4


So at this point I have a basic model which I could begin to deploy. The model consists of 5 vars:
- Envelope on 2hr chart
- OBV on 2hr chart
- ADXW +signal on 2hr chart
- RVI on 2hr chart
- last closing price on 5min chart

I have had limited success optimising a SAS version of the model, so I'm not convinced of its robustness or profitability. I will, however, look to optimise a simple NN using these vars for EURUSD if I can't derive a more robust system. I guess this is my fallback model.
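
As a sketch of the sort of fallback NN I have in mind, a small scikit-learn MLP on the five vars; the column names are my own placeholders, not the real datamart names:

```python
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder names for the five shortlisted vars
FEATURES = ["env_2h", "obv_2h", "adxw_plus_2h", "rvi_2h", "close_5m"]

def fit_fallback(df: pd.DataFrame, outcome: str = "outcome1"):
    """Small single-hidden-layer NN on the five shortlisted vars."""
    model = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(8,),
                                        max_iter=2000, random_state=42))
    model.fit(df[FEATURES], df[outcome])
    return model

# fallback = fit_fallback(eurusd_datamart)
```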

Currently I am continuing to refine my trend analysis, which I hope will allow me to select a better set of 'classes'. Once I have a few potentially profitable classes (especially ones historically profitable over the Oct-Jan period) I will begin to look at some further pattern analysis with the aim of creating some custom indicators.

For me, modelling is an iterative process. Every point of analysis brings ever more clarity to the ideas.

  #14 (permalink)
ajespy
Perth Australia
 
Posts: 15 since Nov 2012
Thanks Given: 1
Thanks Received: 4

I've been doing a lot of research on different classification models and I keep coming back to log-reg.
During my outcome analysis work I have settled on a very promising concept. I will apply a filter to the data that traders would call a zig-zag but engineers call a bandpass filter. By varying the bandwidth of the bandpass filter I can remove varying degrees of noise from the price signal. I then build the empirical distribution of the duration of the resulting trends and settle on the bandwidth that gives me reasonable trend lengths; I will settle on something like 2-4hrs.
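
A hedged sketch of that filtering step, assuming a 5-minute close series; the Butterworth bandpass is just one way to realise the idea, and the cutoff periods are illustrative:

```python
import numpy as np
import pandas as pd
from scipy.signal import butter, filtfilt

def trend_durations(close: pd.Series, fs_per_hour: float = 12.0,
                    min_period_hrs: float = 2.0, max_period_hrs: float = 48.0):
    """Bandpass the price, take the sign of the filtered signal as the trend,
    and return the empirical distribution of trend durations (in bars)."""
    nyq = fs_per_hour / 2.0                   # Nyquist in cycles/hour (12 bars/hr)
    lo, hi = 1.0 / max_period_hrs, 1.0 / min_period_hrs
    b, a = butter(2, [lo / nyq, hi / nyq], btype="bandpass")
    trend = np.sign(filtfilt(b, a, close.values))
    # run lengths of constant sign = trend durations
    change = np.flatnonzero(np.diff(trend) != 0)
    durations = np.diff(np.concatenate([[0], change + 1, [len(trend)]]))
    return trend, durations

# trend, durations = trend_durations(bars_5min["close"])
# pd.Series(durations).describe()  # tune bandwidth until trends run ~2-4hrs
```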

I can then class each data observation as belonging to either a long or short trend (I will look to extend this to include sideways at a later date).

Once I have the trends marked out I can take a reasonable-sized window around the points where the trends change. I will use 30-60mins on each side of the inflection points. I will class the observations within the windows as LS, SL or N for Long->Short, Short->Long and Neutral respectively.
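
Continuing the sketch, the window labelling from the trend array above; the 6-bar half-window is the illustrative 30mins on a 5min series:

```python
import numpy as np

def label_reversals(trend: np.ndarray, half_window: int = 6) -> np.ndarray:
    """Mark observations near trend changes as LS (long->short) or SL
    (short->long); everything else is N (neutral)."""
    labels = np.full(len(trend), "N", dtype=object)
    for i in np.flatnonzero(np.diff(trend) != 0):
        tag = "LS" if trend[i] > trend[i + 1] else "SL"
        labels[max(0, i - half_window + 1): i + half_window + 1] = tag
    return labels

# y = label_reversals(trend)   # multinomial target for the log-reg
```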

I will then model these classifications with a log-reg!

Hopefully I will be more confident with this resulting model than the last one...

  #15 (permalink)
ajespy
Perth Australia
 
Posts: 15 since Nov 2012
Thanks Given: 1
Thanks Received: 4

As I have refined the outcomes I will be looking to predict, I need to go back to variable selection. I am starting with a basic list of 200 different indicators spanning the 1, 5, 10, 15, 30, 60 & 120 minute time horizons. I have then combined these variables across the alternate time horizons within the same variable classes to create 2000 possible indicators. An example is:

5min MA / 1min MA = new var determining if the faster MA is diverging/converging relative to the slower one.
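
A minimal sketch of the combination step, assuming indicator columns named like ma_1m, ma_5m and so on (my own naming convention for illustration):

```python
import itertools
import pandas as pd

HORIZONS = [1, 5, 10, 15, 30, 60, 120]  # minutes

def cross_horizon_ratios(df: pd.DataFrame, var_class: str = "ma") -> pd.DataFrame:
    """For one variable class, form the ratio of every slower-horizon column
    to every faster-horizon column, e.g. ma_5m / ma_1m."""
    out = {}
    for fast, slow in itertools.combinations(HORIZONS, 2):
        f, s = f"{var_class}_{fast}m", f"{var_class}_{slow}m"
        if f in df and s in df:
            out[f"{s}_over_{f}"] = df[s] / df[f]
    return pd.DataFrame(out, index=df.index)

# ratios = pd.concat([cross_horizon_ratios(indicators, c)
#                     for c in ("ma", "rsi", "obv")], axis=1)
```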

I will be ranking the value of these vars by multinomial concordance and maximum log-likelihood. This will allow me to focus on a subset of vars to transform from continuous to discrete. I should have the shortlist by the end of the weekend.
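
And one hedged way to do the log-likelihood ranking: a univariate multinomial log-reg per candidate var, scored by its mean log-likelihood (the helper below is illustrative, not my actual SAS routine):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def loglik_rank(X: pd.DataFrame, y: pd.Series) -> pd.Series:
    """Fit a one-variable log-reg per column and rank the columns by the
    fitted model's mean log-likelihood on the training data."""
    scores = {}
    for col in X.columns:
        m = LogisticRegression(max_iter=1000).fit(X[[col]], y)
        p = m.predict_proba(X[[col]])
        idx = pd.Categorical(y, categories=m.classes_).codes
        scores[col] = np.log(p[np.arange(len(y)), idx]).mean()
    return pd.Series(scores).sort_values(ascending=False)

# shortlist = loglik_rank(ratios.dropna(), outcomes).head(50).index
```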

  #16 (permalink)
Strato
Houston, TX/USA
 
Posts: 11 since Jul 2013
Thanks Given: 11
Thanks Received: 3

I can follow along with your approach so far and I think it looks interesting. One thing you might consider, as an alternate or additional step, is K-means clustering, which is an unsupervised learning technique. Unsupervised learning can be a cheaper alternative to a supervised logistic regression, and it can uncover important data relationships automatically. Plus you could take the results of K-means, classify them, and then feed that into your logistic regression, perhaps getting a better model, or getting there quicker than classifying your data by hand.
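
A minimal sketch of what I mean, assuming a feature matrix X and an outcome y; the cluster count is illustrative:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

def kmeans_into_logreg(X: pd.DataFrame, y: pd.Series, k: int = 8):
    """Cluster observations with K-means (unsupervised), then feed the
    one-hot encoded cluster ids into a logistic regression (supervised)."""
    Xs = StandardScaler().fit_transform(X)
    clusters = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(Xs)
    Z = pd.get_dummies(pd.Series(clusters, index=X.index), prefix="cluster")
    return LogisticRegression(max_iter=1000).fit(Z, y)
```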

  #17 (permalink)
ajespy
Perth Australia
 
Posts: 15 since Nov 2012
Thanks Given: 1
Thanks Received: 4

Hey Strato,
That is an interesting idea. Let's see if I have it right.
1 - classify each possible event using an NN, something like a k-nearest-neighbour.
2 - feed that classification into the log-reg?

My work using ordinal log-reg has been very promising so far. I wanted to use a multinomial log-reg straight off the mark but have an issue with calculating the Gini coefficient for it, so I settled on the much simpler ordinal. I am currently reducing degrees of freedom by collapsing var buckets, checking that coeffs are ordered correctly, and outright removing vars to maximise Gini. This is a very time-consuming process.
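
For what it's worth, one hedged workaround for the multinomial Gini problem is the Gini = 2*AUC - 1 identity, averaging one-vs-rest AUCs over the classes; this assumes a fitted model with predict_proba and is an approximation, not the textbook definition:

```python
from sklearn.metrics import roc_auc_score

def multiclass_gini(y_true, proba, labels) -> float:
    """Approximate multinomial Gini as 2*AUC - 1, with AUC taken as the
    unweighted mean of one-vs-rest AUCs across classes."""
    auc = roc_auc_score(y_true, proba, multi_class="ovr",
                        average="macro", labels=labels)
    return 2.0 * auc - 1.0

# gini = multiclass_gini(y_test, model.predict_proba(X_test), model.classes_)
```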





Last Updated on August 11, 2013

