Another good point. I'm sure well-written code in most languages would outperform badly written code, even badly written C++.
Does the vast amount of RAM offset the coding? I would imagine that the code runs at X speed regardless of how much can be computed at a given time. So if the same program were written in both R and C++, on machines with 128GB and 64GB of RAM respectively, I would still expect the C++ version to execute faster.
Good points both, and there's the rub: the I/O and memory performance bottlenecks are often overlooked completely. Even when those are well scoped, many modern language and architecture choices are CPU pipeline stallers and multi-level cache destroyers par excellence, to such an extent that good-looking code may not actually be achieving anything like the available oomph in the box.
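To make the cache point concrete, here's a minimal sketch of my own (the function names and sizes are illustrative, not from anyone's real code): two loops that do identical arithmetic over the same flat matrix, where only the traversal order differs. The second one strides through memory and throws away most of every cache line it fetches.

```cpp
#include <cstddef>
#include <vector>

// Sum a rows x cols matrix stored row-major in a flat vector.
// Walking row by row touches memory sequentially, so each cache
// line fetched is fully used before it is evicted.
double sum_row_major(const std::vector<double>& m,
                     std::size_t rows, std::size_t cols) {
    double total = 0.0;
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c)
            total += m[r * cols + c];
    return total;
}

// Same arithmetic, but walking column by column jumps
// cols * sizeof(double) bytes between consecutive reads,
// wasting most of each fetched cache line.
double sum_col_major(const std::vector<double>& m,
                     std::size_t rows, std::size_t cols) {
    double total = 0.0;
    for (std::size_t c = 0; c < cols; ++c)
        for (std::size_t r = 0; r < rows; ++r)
            total += m[r * cols + c];
    return total;
}
```

Both return the same answer; on a matrix too big for cache, the column-major version can be several times slower even though a profiler will show the "same" work being done.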
A long time ago I used to write software opcode processing modules, because it was far faster to process sequential chunks of in-cache data than it was to dispatch function calls and object accesses. Modern system architectures have made these sorts of trade-offs even more complex, though usually they are ignored because of the high level of functionality required, which is hard enough to get right anyway.
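The trade-off above can be sketched roughly like this (a toy of mine, not anyone's actual module): a flat array of opcodes dispatched with a switch streams through the cache sequentially, where the obvious object-oriented alternative (a vector of heap-allocated polymorphic instruction objects) pays an indirect call and a likely cache miss per element.

```cpp
#include <cstdint>
#include <vector>

// Toy opcode stream: each element is one instruction.
enum class Op : std::uint8_t { Add, Sub, Mul };

struct Insn {
    Op  op;       // which operation to apply
    int operand;  // its immediate argument
};

// Dispatch via a switch over a contiguous array: the instructions
// are read sequentially from cache and there are no virtual calls
// for the CPU's branch predictor to fight with.
int run(const std::vector<Insn>& program, int acc = 0) {
    for (const Insn& i : program) {
        switch (i.op) {
            case Op::Add: acc += i.operand; break;
            case Op::Sub: acc -= i.operand; break;
            case Op::Mul: acc *= i.operand; break;
        }
    }
    return acc;
}
```

A `std::vector<Instruction*>` of virtual `execute()` objects would compute the same thing, but each step would chase a pointer to wherever the heap put that object, which is exactly the pattern that stalls the pipeline.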
Once you have some idea of what you want to do, and of the sort and size of data you'll be doing it to, you can't beat writing a few harnesses and tests, and it's good fun too.
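A harness needn't be anything grand. Here's a minimal sketch of the sort of thing I mean (my own throwaway shape, assuming C++11's `<chrono>`): time a callable over many iterations and report the average per call.

```cpp
#include <chrono>

// Minimal timing harness: run fn() iters times and return the
// average wall-clock time per call, in nanoseconds. steady_clock
// is used because it never jumps backwards mid-measurement.
template <typename Fn>
double time_per_call_ns(Fn fn, int iters) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i)
        fn();
    auto stop = std::chrono::steady_clock::now();
    std::chrono::duration<double, std::nano> elapsed = stop - start;
    return elapsed.count() / iters;
}
```

Call it with a lambda wrapping the code under test, e.g. `time_per_call_ns([&]{ sum += work(data); }, 1000)`. One caveat: make the lambda produce a visible side effect (write to a `volatile` or accumulate into a variable you print), otherwise the optimiser may delete the work you think you're timing.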
Last edited by ratfink; May 29th, 2014 at 04:01 PM.