I got my MicroCenter paper in the mail today (you know, that thing that people used before email), and saw they had the new Intel Haswell processors available. @Big Mike, are you a taker on this one?
I will be looking at the i7-4770K. MicroCenter has it on sale in-store for $280.
Specifically, this CPU has much better integrated graphics, with native support for 3 monitors, so it's possible I won't need a discrete GPU at all. I read a review where they overclocked it to 4.7GHz, which would be a very nice upgrade over my current i5 at 2.67GHz.
Thanks Mike -- aside from cost, can you see a reason to go with an Ivy Bridge i7-3770K versus the new Haswell i7-4770K? The Haswell uses a different socket, which in theory should make it more upgradeable down the road. However, I did read that the Haswells run a little hotter, and that an Ivy Bridge can OC about 200MHz higher because of it.
Asked another way: if you were building a brand new PC from scratch, would you go with a top-of-the-line Haswell at $280, or a top-of-the-line Ivy Bridge at $250? Or something else?
If I were building brand new today, I would probably try not to build brand new today. Wait just a bit. Better motherboards will come out, and better revisions of the CPUs will come out once they refine the manufacturing process -- especially important if you really want to push the limits on an overclock.
Check out overclock.net and read up or ask questions there. I simply don't spend the time on it that I used to, so I don't keep up as much.
I would also ask why are you overclocking? Are you playing games on this PC? Are you running any type of video encoding? Are you backtesting?
I overclocked mine for the backtesting and because it's a hobby; 99% of the time the overclock serves no useful purpose. I also undervolted mine so that it runs at a lower voltage when idle.
Since you have an i5, any of these -- an i7 like mine, an Ivy Bridge, or a Haswell -- is going to be a nice boost for you. But I would make certain you've got a recent SSD as well (within 12 months old; the tech has come a long way), because an SSD probably makes a lot more difference than the CPU in most things people do (aside from backtesting). I would also make sure you've already got at least 8GB of memory, if not 16GB. My 32GB is overkill, but it's what I do. Memory is dirt cheap, so I would say get 16GB if you can; most of it will be used as file system cache.
I don't play games, so the on-board GPU is the first thing I disable just to get it out of the way. The fact that the new boards can run 3x DisplayPort is pretty cool for the mainstream user, though it wouldn't be of value in my particular case. If you have three monitors, then I would say it's a pretty good deal.
Long story short -- are you itching to upgrade? Is your current config really that slow? In what way? Make sure you've upgraded the components around the CPU first; I would do mainboard+CPU last, to try to buy yourself some time if possible. Buying right when new tech comes out should be avoided, both for price and for quality control (BIOS revisions, etc.).
If it were me, I would consider going with more monitors instead of the new CPU right now; I would find only three monitors pretty constraining. Naturally I don't know about you, and many people trade with tiny screens on a notebook. I was reminded of this just today, because Al Brooks is one of them -- his screen resolution is so low that I have to spend hours in video post-production trying to zoom in and make it legible for those of us in the 24" 1920x1200 world. So to each his own. But my point is: is the CPU really your limiting factor right now?
BTW, I really never consider prices when building computer-related stuff. I tend to get the best that makes sense for my situation, but again, I am not a normal user, even among traders -- I am a tech guy, a geek, a hardware guy, so my "needs" (wants) are usually just my own guilty pleasures: 32GB of RAM on a workstation, 2x SSD RAID 0 on a workstation, etc.
I do have an i5-3470 @ 3.6GHz for my media server, with 10x 3TB drives (30TB raw) in RAID 6, and 32GB of memory. I would add more memory to that system if I could (I can't -- 32GB is the limit for this class of CPU) before I would upgrade to a faster CPU...
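As a quick sanity check on those numbers: RAID 6 sets aside two drives' worth of capacity for parity, so the usable space works out like this (a minimal sketch, using the drive counts from above):

```python
def raid6_usable_tb(drives: int, drive_tb: float) -> float:
    # RAID 6 reserves two drives' worth of capacity for dual parity
    return (drives - 2) * drive_tb

# 10 x 3TB in RAID 6, as in the media server above:
print(raid6_usable_tb(10, 3))  # -> 24 TB usable out of 30 TB raw
```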
First example: when I change a chart in SC that is tick-based, with about 200 days of data. Currently it takes 36 seconds to load 200 (calendar) days of tick data (about 143 trading days) for ES. I have a year of tick data right now, and I like having this much. Since this data is already resident in memory, and since I have an SSD even if there were disk access, I think it's safe to say the CPU is the limiting factor here. CPU usage goes up to about 30% in SC, as reported by Task Manager, compared to roughly 3% idle and 20-25% during high incoming data. It may be a "duh," but I don't know exactly how CPU usage is reported in Windows these days, and the fact that it's 30% may mean that a faster CPU won't do much.
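One thing worth knowing before reading too much into that 30% figure: Task Manager's headline number is roughly the average across all cores, so on a quad-core i5 a single fully pegged core shows up as only ~25-30% overall -- which would mean the load is single-threaded and a faster clock (not more cores) is what helps. A minimal sketch of the arithmetic (the per-core numbers are hypothetical):

```python
def overall_cpu_percent(per_core):
    # Task Manager's headline figure is (approximately) the mean across cores
    return sum(per_core) / len(per_core)

# Hypothetical quad-core i5: one thread pegged, the rest near idle.
print(overall_cpu_percent([100, 5, 5, 5]))  # -> 28.75, i.e. "about 30%"
```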
Second example: fast market updates. Yes, data feed latency is an issue, as are network latency and bandwidth. But mine are both decent, and I still see occasional slowdowns when I have lots of charts open -- specifically in the prints in the DOM (enable the cumulative last traded size column). With only a DOM open it's quite fast, whereas having several charts open makes overall responsiveness much worse. One solution is to distribute the workload by running 2 copies of SC. But I still want all copies running as fast as possible, and the lags I'm seeing are clearly not lags in the data stream, nor network lags -- they are lags from SC having to update 20 charts and spreadsheets every 40ms. You can really only see this with a DOM in fast market conditions, and it is still much faster than other programs, but I'm trying to streamline it.
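A rough back-of-envelope on why 20 charts every 40ms gets tight: if the updates run sequentially on one thread, each chart only gets a couple of milliseconds. A sketch using the numbers from the post (assuming sequential, single-threaded updates -- how SC actually schedules this internally is not something the post confirms):

```python
UPDATE_INTERVAL_MS = 40   # chart update interval mentioned above
CHARTS = 20               # charts/spreadsheets being refreshed

passes_per_second = 1000 / UPDATE_INTERVAL_MS     # full refresh passes per second
budget_per_chart_ms = UPDATE_INTERVAL_MS / CHARTS # time each chart may take, if sequential

print(passes_per_second)    # -> 25.0 passes per second
print(budget_per_chart_ms)  # -> 2.0 ms per chart
```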
I got an SSD a couple of months ago. It's very good for overall system responsiveness, but for composing a chart from 3GB of data already in RAM, or updating 20 charts with probably 50 studies total across them (plus shading, which is more on the GPU), the SSD is not a factor. I have 8GB right now and think that is sufficient. Clearly the SSD is not being used for swap, as I'm not using even half of the 8GB, so I think I'm okay there.
My ATI Radeon 5750 is not the fastest; when I watch full-screen 1080p video there is visible jitter and lag. And though it may seem minor, the transparency code in SC is very GPU-intensive. For example, if you load 200 days of tick data, make the chart full screen, and draw daily profiles with transparency turned on, there is tremendous lag, and turning transparency off immediately makes it fast again. I would think the GPU is pretty important here, but I'm not very up to date in this category.
In a nutshell, based on the above, I think the CPU is the limiting factor. That said, I certainly would like to add another 24" monitor to have 4, as I would love the screen real estate for my S&P sector charts intraday. But when it comes to keeping my pulse on the market, it is very hard for me to concentrate when I do not have an ultra-responsive setup.
I have a ton of charts open in Sierra and never notice any slowdown at all. But I also want to point out that more CPU will not necessarily solve your issue.
When I first load Sierra, which I only do once a month or so when I reboot, all 8 cores (HT) are at 100% loading charts. So Sierra is obviously good at making use of this. I really never load new charts during the day so can't comment on speed here.
Not to rag on another platform, but to make a point: Ninja never uses 100% CPU, so more CPU isn't really going to do a whole lot there. Clock for clock you will get more work done on a faster CPU even if it isn't taxed at 100%, but if you look at the speed difference between your i5 and a new i7, I can't imagine it being more than 25-30% faster or so (a guess; you can Google benchmark numbers). Is that really going to be a night-and-day difference for you?
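To put that guess in perspective against the 36-second chart load mentioned earlier: even in the best case, where the work is entirely CPU-bound and single-threaded, ideal scaling only buys you this much (a sketch; the 30% figure is the guess above, not a benchmark):

```python
def scaled_time(seconds, speedup):
    # Ideal single-thread scaling: fixed work, throughput scales linearly
    return seconds / speedup

# A 36-second chart load on a CPU ~30% faster per clock, best case:
print(round(scaled_time(36.0, 1.3), 1))  # -> 27.7 seconds, i.e. ~8 s saved
```

In practice memory bandwidth and I/O keep you from even that, so "night and day" is unlikely.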
When I first load Sierra, my memory consumption is about 10GB in Windows. Once charts are done loading, it drops back down to only 1GB. I reported this to Sierra a couple of years ago, and long story short, they think it is either not possible or the fault of Windows, not Sierra. My point is, there are definitely times when memory usage in Sierra can be very high, like at first startup (loading new charts), which might be what you are seeing. Your 8GB might be the limiting factor.
The way to find out for sure is to use a simple widget, Task Manager, Resource Monitor, etc., and pay close attention while the charts load.
Keep in mind, Sierra stores data on the file system using Windows file compression, which means that to access the data, Windows must first decompress it. My @ES#C file is 8GB in size but compresses down to under 2GB, and other instruments are also multiple GB, so you can imagine that loading 50 charts on startup and decompressing all that data takes a lot of memory and CPU. If the entire file needs to be read, it has to be decompressed first, and that is a lot of time spent right there. This is almost certainly why memory usage is so high for me when I first start Sierra.

I pointed this exact thing out to them a couple of years ago and, like I said, the response was not favorable. I requested they make the forced file compression something you could turn off, because I also run Sierra on a VPS with less memory, and startup sees memory usage balloon. In the end I don't think they ever did it, and I was tired of arguing with them. Sierra is great in almost every way, but if they think they are right and you are wrong, there is usually nothing you can do.
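The general cost is easy to demonstrate. NTFS compression uses its own algorithm (not zlib), so this is only an illustration of the principle: reading a compressed file pays for a full-size decompressed copy in memory plus CPU time, every time. The data here is synthetic repetitive "tick rows," which compress well, loosely like the 8GB-to-under-2GB ratio above:

```python
import time
import zlib

# Synthetic stand-in for a tick-data file: repetitive rows compress well.
raw = b"4512.25,1;4512.50,3;4512.25,2;" * 200_000   # ~6 MB of fake ticks

compressed = zlib.compress(raw, level=6)
print(f"raw {len(raw)/1e6:.1f} MB -> compressed {len(compressed)/1e6:.2f} MB")

# Reading the data back requires decompressing the whole thing first:
start = time.perf_counter()
restored = zlib.decompress(compressed)
print(f"decompressed in {(time.perf_counter() - start) * 1000:.1f} ms")
assert restored == raw
```

Scale that up to dozens of multi-gigabyte files at startup and the memory and CPU spike Big Mike describes follows naturally.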
Anyway, I went off on a tangent there. Long story short: just be sure you really know what your limiting factor is first. At the end of the day, if you are itching to build a new system, you certainly don't need my permission. Just do it and have fun!