How Much Money for Faster Optimization Backtesting? Willing to Pay.
So that's the thing: the Strategy Builder in NT generates the code for a strategy that can be applied to a chart or even run in the Strategy Analyzer. Unfortunately, it is precisely the same code in both cases.
I thought that some years ago, and splashed a chunk of change on a 44C/88T dual-Xeon workstation with 1 TB of DDR4-2400 ECC RAM and a 4 TB PCIe NVMe RAID array, for my personal use with NT8. Sadly I found that no matter what I did (I'm not a coder, just a technically aware trader who works with a team of pro C/C# coders), NT8 still only drives about 6 or 7 of those 44 cores at anywhere near capacity (typically about 20% of available total CPU time). So chucking cores at it doesn't really work well.

I know the issues with using large amounts (e.g. >5 yrs) of tick data across multiple symbols. It's not easy to cut down the run time, and it's even harder if you have to use NT8 as your backtest/optimisation platform. My firm built its own platform at a cost of several £MM, largely to address these issues. So IMHO, unless you're working towards a large, scalable automated system with its own OMS/EMS platform, you're better off developing an NT8 algo that has to be re-tuned every few weeks/months, rather than seeking to build a self-tuning algo using years of tick data, "chunking" it, running each chunk in parallel, and then stitching the results back together.

For NT8 I suggest a box with high single-core benchmark performance, e.g. an i9-13900HX with a generous helping of the fastest DDR5 RAM, or similar. You do still benefit from multiple cores, but NT8's multi-threading in backtest/optimisation work is poor, partly due to its architecture and partly due to the "tyranny of numbers" issue when working with time series. At least this is my experience with it over the last 6 years or so.
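To make the "chunking" idea concrete, here's a minimal Python sketch (purely illustrative, not NT8 code; the strategy and function names are hypothetical): split a tick series into chunks, backtest each chunk in a separate process, then stitch the per-chunk results back together. The naive stitch at the end also hints at why this is hard in practice: anything spanning a chunk boundary is lost.

```python
# Hypothetical sketch of chunk-and-stitch parallel backtesting.
# Not NT8 code -- just illustrates the pattern described above.
from multiprocessing import Pool

def backtest_chunk(ticks):
    """Toy 'backtest': count tick-to-tick price rises in one chunk."""
    return sum(1 for a, b in zip(ticks, ticks[1:]) if b > a)

def parallel_backtest(ticks, n_chunks=4):
    size = max(1, len(ticks) // n_chunks)
    chunks = [ticks[i:i + size] for i in range(0, len(ticks), size)]
    with Pool() as pool:
        results = pool.map(backtest_chunk, chunks)
    # Naive stitching: summing per-chunk results ignores any signal
    # that spans a chunk boundary -- exactly the hard part in a real
    # stateful strategy (open positions, indicators warming up, etc.).
    return sum(results)
```

Each chunk runs in its own process, so this scales across cores in a way a single-process optimizer can't; the cost is that per-chunk state (open positions, indicator history) has to be reconciled at every boundary.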