So you went from "salesmen" (your words) to complete oppositionist?
Yes, of course. I change my mind if I have to. I'm a fan of everything I use, but when I discover something newer and better I recognize it, recommend it instead of the old thing, and change my mind; in this case it's Corona vs. GPU renderers.
I don't see where your facts come from. Both CPUs and GPUs have had periods when their architectures were power-hungry and periods when they were less so. nVidia's current Maxwell is extremely efficient: overclocked versions of their top range (980, Titan X, etc.) can reach a 300W draw at full power (up from the default 165W, which isn't much more than the 140W of a top i7). That is a lot, but it's identical to a 5960X i7, for example.
So you don't think 140W vs. 300W is a huge difference? And remember you have to run your GPU plus a CPU. My facts come from my personal experience and the company's energy bills: whenever I did GPU rendering the bill shot up, while with CPU rendering I barely noticed it. Why? I'm not sure, because I don't know how to measure a computer's real power consumption in real time to do the calculations, but if you know how, please tell me :)
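A minimal sketch of how you could log that in real time, assuming a Linux box with an nVidia card (nvidia-smi on the PATH) and an Intel CPU that exposes RAPL counters under /sys/class/powercap; the paths and sampling interval are illustrative:

```python
# Rough real-time power logger: GPU via nvidia-smi, CPU via Intel RAPL.
# Assumes Linux, nvidia-smi on the PATH, and RAPL support; illustrative only.
import subprocess, time

RAPL = "/sys/class/powercap/intel-rapl/intel-rapl:0/energy_uj"

def gpu_watts():
    # nvidia-smi prints one power reading per GPU; take the first card
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"])
    return float(out.decode().split()[0])

def cpu_energy_uj():
    # cumulative CPU package energy in microjoules (ignores counter wrap)
    with open(RAPL) as f:
        return int(f.read())

INTERVAL = 1.0  # seconds between samples
e0 = cpu_energy_uj()
for _ in range(60):  # log for one minute
    time.sleep(INTERVAL)
    e1 = cpu_energy_uj()
    cpu_w = (e1 - e0) / 1e6 / INTERVAL  # microjoules -> watts
    e0 = e1
    print(f"CPU {cpu_w:6.1f} W | GPU {gpu_watts():6.1f} W")
```

From the logged watts the bill is easy to estimate: 300W running 24/7 for a month is 0.3 kW × 720 h ≈ 216 kWh, versus roughly 100 kWh for a 140W CPU.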
Regarding RAM: GPU renderers, even those that can't cycle memory or go out-of-core, require less memory than their CPU counterparts, as they're mostly built with memory conservation as a crucial feature. 12GB in Vray easily corresponds to 18GB in a conventional CPU raytracer, and with memory cycling (as in Redshift) this point becomes even more moot. In that case 12GB is a lot, given that the framebuffer and textures don't need to reside in it either. From what I've read, proxies aren't an issue anymore (in Redshift).
That holds only if you don't use things like motion blur, which can multiply your memory requirements considerably, and in the end you are limited to that memory: no matter how much they optimize it, those engines slow down when you use features such as proxies. Plus, Redshift is a completely biased render engine, a different approach that is valid for some people but not for me. Nothing personal, it's just that I don't like that kind of render engine anymore; I'm talking about unbiased or nearly unbiased render engines (iRay, Octane or Arion).
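To make the motion-blur point concrete, here's a back-of-the-envelope check; all the numbers are made-up illustrative assumptions, the one real mechanism being that deformation blur typically stores a copy of the geometry per time sample:

```python
# Back-of-the-envelope VRAM check for deformation motion blur.
# Every number below is an illustrative assumption, not an engine spec.
GEOMETRY_GB = 5.0   # vertex/triangle data for the scene
TEXTURES_GB = 3.0   # textures (some engines can page these out-of-core)
TIME_SAMPLES = 3    # deformation blur keeps geometry once per time sample
VRAM_GB = 12.0      # e.g. a Titan X

needed = GEOMETRY_GB * TIME_SAMPLES + TEXTURES_GB
print(f"Approx. memory needed: {needed:.1f} GB of {VRAM_GB:.1f} GB VRAM")
if needed > VRAM_GB:
    print("Over budget: the renderer must page out or fail on this scene.")
```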
Redshift can be pretty fast, but I don't like its workflow or its results. Bear in mind that I'm not imposing my conclusions on anyone; I'm just stating what I decided and why. There may well be people who love Redshift, Vray (which I never liked, even though I know it's an industry standard and probably the most powerful engine) or mental ray, but I don't like that kind of workflow anymore. I'd rather spend machine time than human time, and that is what unbiased renderers give me. Maxwell was out of the equation because its render times are ridiculously long, and unbiased on the CPU was out of the question until Corona arrived.
Why would a Titan X be worth it only if you acquire 4-5 of them? Each has 3072 stream (CUDA) cores, as opposed to the 2048 of a 980, which in turn has roughly 3-4 times the single-precision compute power of the GTX 580, the best workhorse of the past. That makes the Titan X at least 5 times more powerful, speaking in dirty, approximated numbers.
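Those dirty numbers can be sanity-checked from the published reference specs (theoretical FP32 throughput is cores × clock × 2 FLOPs per cycle); on paper the ratio lands nearer 4x than 5x, with boost clocks and Maxwell's better per-core efficiency making up some of the rest:

```python
# Theoretical FP32 throughput: cores * clock(GHz) * 2 FLOPs per cycle (FMA).
# Reference clocks; real render speed also depends on memory, architecture etc.
cards = {
    "GTX 580": (512,  1.544),  # Fermi, shader clock
    "GTX 980": (2048, 1.126),  # Maxwell, base clock
    "Titan X": (3072, 1.000),  # Maxwell, base clock
}
base = cards["GTX 580"][0] * cards["GTX 580"][1] * 2
for name, (cores, ghz) in cards.items():
    gflops = cores * ghz * 2
    print(f"{name}: {gflops:7.0f} GFLOPS ({gflops / base:.1f}x GTX 580)")
```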
That is for me, and it's because with that number of GPUs it could make a difference against what I have in CPUs right now. Anyway, the investment is pretty high for something dedicated specifically to rendering, and for something that I need to fit inside a system; the CPUs can be used for many more things, simulation for example. I repeat, I'm talking about my situation; maybe you don't need to do anything with your farm beyond rendering. Not my case :)
I don't know what awesome performance you are getting with your 5820K, but I am not getting anything boast-worthy with my 200+ cores. So there is always big room for improvement.
Something is wrong with your 200+ cores if you are not getting great performance XD
What does the phrase "great performance" mean to you?
I'm currently not a fan of any GPU renderer. Octane is moving at the speed of... no speed; all the attention is going to the cloud, Brigade and the other funny gimmicks they are doing. Redshift has the strangest philosophy possible and took on the worst Vray traits ever. It's exactly what happens when great engineers fail to understand their public. The UI, the options, their materials... lol-worthy. So for now the card can be used as an excellent workhorse for Unreal Engine visualizations. But the tide is getting equal again.
Unreal Engine still works flawlessly on my GTX 580, no need for a Titan X or Z for that, hehehe.
And about those things you said about GPU renderers: why is their evolution so slow? If you ask the iRay developers, they are always saying "no can do on the GPU because <insert your preferred GPU limitation here>". They are slow because they can't do anything unless the hardware changes: they can't communicate with the system and practically make the GPUs work like additional CPUs, or, to be more exact, they can't make the system work as fast as the GPUs do internally, because the slowdown in those renderers comes from communicating with the system outside the GPU.
IMHO GPU rendering could be the holy grail of rendering, as much as CPU rendering could be in the future. Intel built the Embree technology to demonstrate that the CPU can be as efficient as the GPU. We are all here on the Corona forum, CPU-based; why is that, when the first Titan came out quite some time ago? hehehe
Intel can make changes to its architecture just as much as nVidia can for next year's GPU family :)
You and I have different visions of things: you prefer to invest in one 5960X while I prefer to invest in two 5820Ks for the same money. For me the extra performance of an additional CPU is better than the increased performance of a single super-powerful machine. IMHO they are different POVs, both valid for each of us; it's just a matter of what kind of business each of us runs :)
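For what it's worth, here is the rough arithmetic behind that choice, using approximate 2014 launch prices and base clocks, and the simplifying assumption that render throughput scales with cores × clock:

```python
# Rough price/performance comparison; approximate launch prices (USD) and
# base clocks, assuming render throughput ~ cores * clock (a simplification).
chips = {
    "1x i7-5960X": (1, 8, 3.0, 999),  # (units, cores, base GHz, price each)
    "2x i7-5820K": (2, 6, 3.3, 389),
}
for name, (units, cores, ghz, price) in chips.items():
    throughput = units * cores * ghz  # arbitrary "core-GHz" units
    cost = units * price
    print(f"{name}: {throughput:.1f} core-GHz for ${cost} "
          f"({throughput / cost * 1000:.1f} core-GHz per $1000)")
```

On the chips alone the two-box route buys noticeably more core-GHz per dollar, though the second motherboard, RAM kit and PSU eat into that margin.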
Cheers!