Better to let someone who uses NinjaTrader answer.
Flat text-file-based storage with a sensible file-naming convention, and maybe a directory hierarchy to split contracts by market and type (e.g. futures from stocks from options, etc.), scales better than a traditional DB for this job. Most of the time when you work with historical data (time series) you don't need to query, update, or delete records; you just load it up as a sequence of data points, and reading files line by line works great. Most languages support streaming, so you don't need to load all the data into memory at once.
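To make that concrete, here is a minimal Python sketch of the idea. The directory layout and the column names (`timestamp`, `close`) are just example choices I made up for illustration, not any standard:

```python
import csv
from pathlib import Path

# Hypothetical layout: data/<type>/<market-or-symbol>/<symbol>_<YYYY-MM>.csv
# e.g. data/futures/ES/ES_2024-03.csv

def stream_bars(path):
    """Yield one bar (row) at a time -- never loads the whole file into memory."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield row

# Usage: walk one month of (assumed) ES data as a plain sequence of data points.
for bar in stream_bars(Path("data/futures/ES/ES_2024-03.csv")):
    print(bar["timestamp"], bar["close"])  # column names are assumptions
```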
So there is often no need for a SQL (or even NoSQL) database. Market price data, once obtained and cleaned, is static. You can zip it to save space, and it's easy to archive. No DB also means one less system to set up, maintain, back up, and worry about failing over, and flat files don't consume CPU or memory until they are actually used, unlike a DB engine. You can also easily load comma- or tab-separated files into Excel or any other tool of your choice.
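And because the files are static, you can keep them compressed and still stream them. A small sketch, assuming gzip-compressed CSVs (only one of several reasonable formats):

```python
import csv
import gzip

def stream_bars_gz(path):
    """Stream rows from a gzip-compressed CSV without decompressing it to disk."""
    with gzip.open(path, mode="rt", newline="") as f:  # "rt" = open as text
        for row in csv.DictReader(f):
            yield row

# Compressed archives stay directly usable -- no DB restore step needed.
for bar in stream_bars_gz("data/futures/ES/ES_2024-03.csv.gz"):
    print(bar["close"])
```

Most tools handle this transparently too; pandas, for example, reads a `.csv.gz` directly with `pd.read_csv`, inferring the compression from the extension.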
By the way, I found a really great tool for getting stock/futures/options/forex historical data as CSV text files from Interactive Brokers; highly recommend it. You can forget about pacing violations or request/response size limits, since it splits large data requests into many smaller ones automatically behind the scenes:
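I don't know how the tool implements this internally, but the general chunking idea is easy to sketch: split one big date range into windows small enough that each request stays under the provider's limits. `fetch_history` below is a hypothetical placeholder for whatever download call you actually use:

```python
from datetime import date, timedelta

def date_chunks(start, end, days_per_request=30):
    """Split [start, end] into consecutive windows of at most days_per_request days."""
    cur = start
    while cur <= end:
        nxt = min(cur + timedelta(days=days_per_request - 1), end)
        yield cur, nxt
        cur = nxt + timedelta(days=1)

# Usage sketch: fetch_history() is hypothetical -- a stand-in for your real API call.
for lo, hi in date_chunks(date(2023, 1, 1), date(2023, 12, 31)):
    # fetch_history(symbol="ES", start=lo, end=hi)
    # ...then sleep briefly between requests to stay under pacing limits
    pass
```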