Correct, they are yours forever; you can even download them, which I did in case Udemy goes belly up.
"The great Traders have always been humbled by the market early on in their careers creating a deep respect for the market. Until one has this respect indelibly engraved in their makeup, the concept of money management and discipline will never be treated seriously."
Just be careful when using random forests or any decision-tree method. Like NN approaches, over-fitting the training data is quite easy to achieve. RF models tend to minimize this somewhat by spreading the error across the many "trees" in the forest, but that doesn't mean the result is the most accurate.
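To make the trade-off concrete, here's a minimal sketch (not from the post; the synthetic dataset and parameters are my own choices) comparing a single fully-grown decision tree to a random forest on held-out data:

```python
# Sketch: a lone decision tree vs. a random forest on synthetic data.
# Dataset and hyperparameters are illustrative, not from the post.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# An unpruned tree memorizes the training set (score 1.0); averaging the
# votes of many bootstrapped trees spreads that error around, which is
# the mitigation described above.
tree_train = tree.score(X_train, y_train)
tree_test = tree.score(X_test, y_test)
forest_test = forest.score(X_test, y_test)
```

The point is not that the forest is "most accurate", only that averaging across trees damps the memorization a single tree exhibits.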
Also, in your machine learning quest, you're most likely to come across ARMA/ARIMA as well as GARCH models for time series data. While these do a pretty good job in non-financial time series applications, random effects models tend to out-perform these time series models when we're trying to model ts data that have a lot of "noise" inside them (aka, any equity price series). We can convert the price series to (log) returns, but this (surprisingly) doesn't benefit the ARIMA/GARCH models a lot.
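For readers who haven't done the price-to-returns conversion mentioned above, a minimal sketch (the price values are made-up numbers for illustration):

```python
# Sketch: converting a price series to log returns, r_t = ln(P_t / P_{t-1}).
# The prices below are invented purely for illustration.
import numpy as np

prices = np.array([100.0, 101.5, 99.8, 102.3, 102.0])
log_returns = np.diff(np.log(prices))

# Log returns are additive: summing them recovers the log of the
# total price ratio over the whole window.
total = log_returns.sum()
```

One reason log returns are preferred over raw prices for ARIMA/GARCH-style modeling is that they are closer to stationary, even if, as noted above, that alone doesn't rescue these models on noisy equity series.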
You'll be able to build JUST as powerful a model if you focus on simple linear regression models (at first) and spend time understanding "what drives your predictions". Sometimes we call this "feature selection" or "feature engineering". You'll likely find that you get more mileage out of providing a decently capable model with excellent predictive input rather than trying to create a more complex model.
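A small sketch of the "simple model, inspectable drivers" idea: a plain linear regression whose coefficients directly show which inputs drive the prediction. The data here is synthetic and the coefficient values are my own assumptions:

```python
# Sketch: with a linear model, "what drives your predictions" is just
# the fitted coefficients. Synthetic data; true weights are 2.0, -1.0, 0.0.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = LinearRegression().fit(X, y)
coefs = model.coef_
# coefs ~ [2.0, -1.0, 0.0]: the third feature contributes nothing,
# which is exactly the kind of insight that guides feature selection.
```

With a more complex model (NN, RF) you'd need auxiliary tools like feature importances to recover the same insight.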
Just a few notes from the field.
Best of luck,
I've actually found that increasing the minimum leaf size, while decreasing the accuracy of the in-sample prediction, actually increases the accuracy on the out-of-sample data (it reduces over-fitting, I suspect). At least in the Kaggle/Titanic case.
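A quick sketch of that leaf-size observation (synthetic data with deliberately noisy labels; the specific leaf sizes are illustrative, not from the post):

```python
# Sketch: raising min_samples_leaf constrains each tree, trading
# in-sample accuracy for (hopefully) out-of-sample accuracy.
# Synthetic data with 20% flipped labels to mimic noise.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, flip_y=0.2,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

small = RandomForestClassifier(min_samples_leaf=1, random_state=1).fit(X_tr, y_tr)
large = RandomForestClassifier(min_samples_leaf=25, random_state=1).fit(X_tr, y_tr)

# Tiny leaves memorize the training set (including the flipped labels);
# larger leaves fit it less closely, which is the reduced in-sample
# accuracy described above.
in_small = small.score(X_tr, y_tr)
in_large = large.score(X_tr, y_tr)
```

Whether the out-of-sample score actually improves depends on how noisy the labels are, which matches the "at least in the Titanic case" caveat.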
I'm familiar with Garch Volatility models, but not with theory behind them. Thanks for the heads up.
One of the reasons I started the Kaggle challenge was to get some real-world experience in feature engineering, something I've enjoyed and learnt from. Your comments regarding linear models and understanding what is driving the predictions are interesting. In the coursera.org course, Andrew Ng talked about making additional features by combining and/or transforming current features. I have tried this a couple of times, and not surprisingly the more features I add, the more accurate the prediction becomes. The problem is this seems to reek of over-fitting. I can see how at times a log transformation etc. can help, but I don't see how generalizing 'y ~ a + b + c' to 'y ~ a^3 + a^2b + a^2c + ab^2 + ac^2 + abc + ... etc' makes sound logical sense.
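To put a number on why that expansion "reeks of over-fitting": generating the degree-3 terms mechanically (a sketch using scikit-learn's PolynomialFeatures; the count itself is the point) shows how fast the feature space blows up:

```python
# Sketch: expanding three base features (a, b, c) into all polynomial
# terms up to degree 3, as in the y ~ a^3 + a^2b + ... example above.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.ones((1, 3))  # one row, three base features: a, b, c
poly = PolynomialFeatures(degree=3, include_bias=False)
n_expanded = poly.fit_transform(X).shape[1]
# 3 base features become 19 terms: a, b, c, a^2, ab, ..., a^3, ..., abc.
# Each extra term is another coefficient free to chase noise in the
# training data, which is why in-sample accuracy keeps climbing.
```

Cross-validated scores, rather than training accuracy, are the usual way to tell whether these extra terms are capturing signal or noise.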