Author Topic: Corona GPU rendering  (Read 84418 times)

2015-03-08, 15:06:06
Reply #15

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
We are open to all kinds of stuff.

funny trivia: I am programming a GPU renderer side project right now ;)
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2015-03-08, 15:29:04
Reply #16

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
I'll shut up then :)

The thing is, the big cost of GPU rendering is not just the GPU (you also need a computer around it) but the energy cost, which is much higher than with CPU rendering, and that is a major drawback for me. It's great for rendering a few stills, but it gets pretty expensive for a long animation, 12 GB is filled in no time, and there are slowdowns when it comes to using proxies, so still a lot of problems IMHO :)

BTW Ondra, more information about that?

Cheers!

2015-03-08, 16:18:33
Reply #17

romullus

  • Global Moderator
  • Active Users
  • ****
  • Posts: 8779
  • Let's move this topic, shall we?
    • View Profile
    • My Models
funny trivia: I am programming a GPU renderer side project right now ;)

Furry...? ;]
I'm not a Corona Team member. Everything I say is my personal opinion only.
My Models | My Videos | My Pictures

2015-03-09, 03:04:20
Reply #18

philippelamoureux

  • Active Users
  • **
  • Posts: 218
    • View Profile
I wish I could say something like... I'm coding a GPU renderer in my spare time. hehe

Funny thing... the postman delivered my GTX 980 during the reveal of the Titan X on stream... I was like... meh.

2015-03-09, 04:16:44
Reply #19

FrostKiwi

  • Active Users
  • **
  • Posts: 686
    • View Profile
    • YouTube
Funny thing... the postman delivered my GTX 980 during the reveal of the Titan X on stream... I was like... meh.
The whole Titan lineup seems so out of place.
Quote
Double precision
Extra large VRAM
$1000+
It's a gaming card.
wat?
I'm 🐥 not 🥝, pls don't eat me ( ;  ;   )

2015-03-09, 10:34:07
Reply #20

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
What's out of place about it?
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2015-03-09, 11:42:35
Reply #21

tomislavn

  • Active Users
  • **
  • Posts: 706
  • Lightbringer
    • View Profile
    • My 3docean Portfolio
What's out of place about it?

Probably gaming and double precision in the same sentence :)

But then again, they can't market it as a workstation card because of the Quadro line - which is a good thing price-wise, since a Quadro of that power would cost 10 times more.
My 3d stock portfolio - http://3docean.net/user/tomislavn

2015-03-09, 12:33:10
Reply #22

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
Everyone kept begging for a card with the capacity of Quadros/Teslas (high VRAM) and unlocked precision if needed, but at a mainstream price tag. And they delivered it, for the second time, in an extremely well done package.
What's there to complain about? They have to market it as gaming, otherwise they would severely limit its audience to niche professionals. Labelling it as a part-gaming card opens it up to the much bigger audience of enthusiast builders, making it profitable.
Everyone benefits from the deal.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2015-03-09, 13:11:51
Reply #23

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
I still don't see it as great value. I'm reaching the 24 GB limit with ease, and that's while using proxies, so GPU rendering has become unavailable to me at a good price. Also, I would like to see a comparison between the GTX 580 (pretty old, but a good card, limited to 3 GB, I know) and the Titan X. I'm sure the 580 is several times slower, but I would like to see by how much, to make some calculations; maybe the Titan X is worth the money if you acquire 4 or 5 of them, but that would be for very specific, RAM-limited projects, and I would have to think about the electricity invoice, which has historically been high with GPU rendering :P

I think the day GPUs can exchange data with system RAM, and the day GPU render engines can work with proxies without significant performance reduction, will be the day GPU rendering makes sense for me again. As of today, I still prefer Corona + 5820K render nodes: awesome value, 790€ + VAT per full node with 16 GB of RAM, and awesome performance :) (and up to 64 GB and 4 PCI-E slots, just in case the GPU comes back in the future hehehe)

Cheers!

2015-03-09, 13:59:02
Reply #24

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
funny trivia: I am programming a GPU renderer side project right now ;)

Furry...? ;]

stop with the bad jokes ;)
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2015-03-09, 14:26:39
Reply #25

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
I still don't see it as great value. I'm reaching the 24 GB limit with ease, and that's while using proxies, so GPU rendering has become unavailable to me at a good price. Also, I would like to see a comparison between the GTX 580 (pretty old, but a good card, limited to 3 GB, I know) and the Titan X. I'm sure the 580 is several times slower, but I would like to see by how much, to make some calculations; maybe the Titan X is worth the money if you acquire 4 or 5 of them, but that would be for very specific, RAM-limited projects, and I would have to think about the electricity invoice, which has historically been high with GPU rendering :P

I think the day GPUs can exchange data with system RAM, and the day GPU render engines can work with proxies without significant performance reduction, will be the day GPU rendering makes sense for me again. As of today, I still prefer Corona + 5820K render nodes: awesome value, 790€ + VAT per full node with 16 GB of RAM, and awesome performance :) (and up to 64 GB and 4 PCI-E slots, just in case the GPU comes back in the future hehehe)

Cheers!

So you went from "salesmen" (your words) to complete oppositionist?

I don't see where your facts come from. Both CPUs and GPUs have had periods when their architectures were power-hungry and periods when they were less so. nVidia's current Maxwell is extremely efficient: overclocked versions of their top range (980, Titan X, etc.) reach around 300 W at full load (from a default 165 W, which isn't much more than the 140 W of a top i7), which is a lot, but it's the same as a 5960X i7, for example.
Regarding RAM: GPU renderers, even those that can't cycle memory or go out of core, require less memory than their CPU counterparts, as they're mostly built with memory conservation as a crucial feature. 12 GB in V-Ray easily corresponds to up to 18 GB in a conventional CPU raytracer, and with memory cycling (as in Redshift) this point becomes even more moot. In that case, 12 GB is a lot, given that the framebuffer and textures don't need to reside in it either. From what I've read, proxies aren't an issue anymore (Redshift).

Why would the Titan X be worth it only if you acquire 4-5 of them? Each has 3072 stream (CUDA) cores, as opposed to the 2048 of the 980, which already has single-precision compute power roughly 3-4 times higher than the GTX 580, the best workhorse of the past. That makes the Titan X at least 5 times more powerful, speaking in dirty, approximate numbers.
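(A rough way to sanity-check those dirty numbers is to compare theoretical single-precision throughput, i.e. core count x clock x 2 FLOPs per clock. The 3072 and 2048 core counts are from the post above; the GTX 580 figure and all the clocks are approximate assumptions from memory, and theoretical TFLOPS do not translate 1:1 into render speed. A minimal sketch:)

```python
# Back-of-envelope single-precision throughput comparison.
# Core counts for the 980 / Titan X come from the discussion above; the GTX 580
# figure and the clock speeds are approximate assumptions, and theoretical
# TFLOPS are only a crude proxy for real render performance.

cards = {
    "GTX 580": {"cores": 512,  "clock_ghz": 1.54},  # Fermi hot-clocked shaders
    "GTX 980": {"cores": 2048, "clock_ghz": 1.13},
    "Titan X": {"cores": 3072, "clock_ghz": 1.00},
}

def sp_tflops(cores, clock_ghz):
    # 2 FLOPs per CUDA core per clock (one fused multiply-add)
    return 2 * cores * clock_ghz / 1000.0

baseline = sp_tflops(**cards["GTX 580"])
for name, card in cards.items():
    t = sp_tflops(**card)
    print(f"{name}: ~{t:.1f} TFLOPS (~{t / baseline:.1f}x a GTX 580)")
```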

I don't know what awesome performance you are getting with your 5820K, but I am not getting anything boast-worthy with my 200+ cores. So there is always big room for improvement.

I currently have no favourite among GPU renderers. Octane is moving at the speed of... no speed; all attention is going to the cloud, Brigade, or whatever other funny gimmicks they are doing. Redshift has the strangest philosophy possible and took over the worst V-Ray traits ever. It's exactly what happens when great engineers fail to understand their public. The UI, the options, their material... lol-worthy. So for now, the card can be used as an excellent workhorse for Unreal Engine visualizations. But the tide is evening out again.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2015-03-09, 14:56:18
Reply #26

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
So you went from "salesmen" (your words) to complete oppositionist?
Yes, of course. I change my mind if I have to. I am a fan of everything I use, but when I discover something newer and better I recognize it, recommend it instead of the old thing, and change my mind; in this case it's Corona vs GPU renderers.
Quote
I don't see where your facts come from. Both CPUs and GPUs have had periods when their architectures were power-hungry and periods when they were less so. nVidia's current Maxwell is extremely efficient: overclocked versions of their top range (980, Titan X, etc.) reach around 300 W at full load (from a default 165 W, which isn't much more than the 140 W of a top i7), which is a lot, but it's the same as a 5960X i7, for example.

So you don't think 140 W vs 300 W is a huge difference? And remember you have to run your GPU plus a CPU. My facts come from my personal experience and the company's energy invoices: whenever I did GPU rendering the invoice went through the roof, while with CPU rendering I barely noticed it. Why? I'm not sure, because I don't know how to measure a computer's real power consumption in real time to make calculations, but if you know how, tell me please :)
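(For what it's worth, a plug-in power meter at the wall socket gives an average draw reading, and from there the cost is simple arithmetic. A minimal sketch; the wattages, render time and price per kWh below are made-up placeholders, not measurements:)

```python
# Rough electricity-cost estimate for a render job.
# All input numbers are illustrative placeholders, not measurements.

def render_cost_eur(avg_watts, hours, eur_per_kwh):
    """Cost = average wall draw (kW) x render time (h) x price per kWh."""
    return (avg_watts / 1000.0) * hours * eur_per_kwh

# Example: a 100-hour animation at an assumed 0.20 EUR/kWh
cpu_node = render_cost_eur(avg_watts=300, hours=100, eur_per_kwh=0.20)  # CPU-only box
gpu_node = render_cost_eur(avg_watts=600, hours=100, eur_per_kwh=0.20)  # CPU + two GPUs

print(f"CPU node: {cpu_node:.2f} EUR  |  GPU node: {gpu_node:.2f} EUR")
```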

Quote
Regarding RAM: GPU renderers, even those that can't cycle memory or go out of core, require less memory than their CPU counterparts, as they're mostly built with memory conservation as a crucial feature. 12 GB in V-Ray easily corresponds to up to 18 GB in a conventional CPU raytracer, and with memory cycling (as in Redshift) this point becomes even more moot. In that case, 12 GB is a lot, given that the framebuffer and textures don't need to reside in it either. From what I've read, proxies aren't an issue anymore (Redshift).

That is if you don't use things like motion blur, which can multiply your memory requirements quite a lot, and in the end you are still limited to that memory; no matter how much they optimize it, they will slow down if you use features such as proxies. Plus, Redshift is a completely biased render engine, a different approach: valid for some people, not valid for me. Nothing personal, it's just that I don't like that kind of render engine anymore; I'm talking about unbiased or nearly unbiased render engines (iRay, Octane or Arion).
Redshift can be pretty fast, but I don't like its workflow or results. Bear in mind that I'm not imposing my conclusions on anyone, I'm just stating what I decided and why, but there are people who love Redshift, V-Ray (I never liked it, even though I know it's an industry standard and probably the most powerful engine) or mental ray. I don't like that kind of workflow anymore; I'd rather spend machine time than human time, and that is what unbiased renderers give me. Maxwell was out of the equation because its render times are ridiculously long, and unbiased on the CPU was out of the question until Corona arrived.

Quote
Why would the Titan X be worth it only if you acquire 4-5 of them? Each has 3072 stream (CUDA) cores, as opposed to the 2048 of the 980, which already has single-precision compute power roughly 3-4 times higher than the GTX 580, the best workhorse of the past. That makes the Titan X at least 5 times more powerful, speaking in dirty, approximate numbers.

That is for me, and that is because only with that many GPUs would it make a difference against what I already have in CPUs. Anyway, the investment is pretty high for something dedicated specifically to rendering, and for something I still need to put inside a system; the CPUs can be used for many more things, like simulation for example. I repeat, I'm talking about my situation; maybe you don't need to do anything more than rendering with your farm, but that's not my case :)

Quote
I don't know what awesome performance you are getting with your 5820K, but I am not getting anything boast-worthy with my 200+ cores. So there is always big room for improvement.

Something is wrong with your 200+ cores if you are not getting great performance XD
What does the phrase "great performance" mean to you?

Quote
I currently have no favourite among GPU renderers. Octane is moving at the speed of... no speed; all attention is going to the cloud, Brigade, or whatever other funny gimmicks they are doing. Redshift has the strangest philosophy possible and took over the worst V-Ray traits ever. It's exactly what happens when great engineers fail to understand their public. The UI, the options, their material... lol-worthy. So for now, the card can be used as an excellent workhorse for Unreal Engine visualizations. But the tide is evening out again.

Unreal Engine still works flawlessly on my GTX 580, no need for a Titan X or Z for that hehehe

And about those things you said about GPU renderers: why is their evolution so slow? If you ask the iRay developers, they are always saying "no can do on the GPU because <add your preferred GPU limitation here>". They are slow because they can't do much if the hardware doesn't change, if they can't communicate with the rest of the system and practically make the GPUs work like additional CPUs, or to be more exact, if they can't make the system work as fast as the GPUs do internally, because the slowdown in those renderers comes from communicating with the system outside the GPU.

IMHO GPU rendering could be the holy grail of rendering, but so could the CPU in the future. Intel built the Embree technology to demonstrate that the CPU can be as efficient as GPUs. We are all here on the Corona forum, CPU based; why is that, even though the first Titan came out some time ago? hehehe

Intel can make changes to its architecture just as much as Nvidia can for next year's new GPU family :)

You and I have different visions of things: you prefer to invest in one 5960X while I prefer to invest in two 5820Ks for the same money. For me, the performance of an additional CPU is worth more than the increased performance of a single super-powerful machine. IMHO they are different POVs, both valid for each of us; it's just a matter of what kind of business each of us has to run :)
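(As a back-of-envelope check on that trade-off: the core counts are the real ones for those CPUs, but the base clocks are assumptions from memory, and aggregate core-GHz is only a crude proxy for render throughput that ignores RAM, Turbo, platform costs and power draw.)

```python
# Crude render-node comparison: one 5960X box vs two 5820K boxes.
# Base clocks are assumptions; core-GHz is only a rough throughput proxy.

options = {
    "1x i7-5960X": {"units": 1, "cores": 8, "base_ghz": 3.0},
    "2x i7-5820K": {"units": 2, "cores": 6, "base_ghz": 3.3},
}

for name, o in options.items():
    core_ghz = o["units"] * o["cores"] * o["base_ghz"]
    print(f"{name}: ~{core_ghz:.1f} aggregate core-GHz")
```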

Cheers!
« Last Edit: 2015-03-09, 15:06:50 by juang3d »

2015-03-09, 15:09:39
Reply #27

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
Quote
So you don't think 140w vs 300w is a huge difference?

That's not what I wrote. The default draw is 165 W for the 980 and 140 W for the 5960X. Both reach around 300 W at full (overclocked, 100% usage) draw. The current generation of nVidia cards (only nVidia) is not power-hungry, quite the opposite.

Regarding the existing limitations, Redshift was able to bypass all of them pretty elegantly. I don't like that engine either, and I don't plan to use it. But what iRay or Octane do is irrelevant if Redshift can.

Quote
and that is because with that amount of GPU's it can be a difference against what I have in CPU right now

OK, but that is not a relevant comparison either. You're basically comparing what you have already accumulated to what you would have to buy to outweigh it.
That makes it an economically inefficient solution in your personal case, but it can't be used as an actual economic comparison. So why even write that? We're not discussing personal situations.

Quote
Unreal Engine still works flawlessly on my GTX 580, no need for a Titan X or Z for that hehehe

It also works perfectly fine on my laptop with a GM750... But you will not fit a full-fledged architectural scene into your 3 GB of memory and run it at 60 fps.

Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2015-03-09, 15:20:52
Reply #28

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
OK, but that is not a relevant comparison either. You're basically comparing what you have already accumulated to what you would have to buy to outweigh it.
That makes it an economically inefficient solution in your personal case, but it can't be used as an actual economic comparison. So why even write that? We're not discussing personal situations.

You may not be discussing personal situations, but I am, and have been since the first post I wrote; I always say that what I say comes from my personal POV and situation. You want the data to be unbiased and as isolated from subjective opinion as possible; I'm fine with that, but I'm not speaking in those terms. To do that I would have to run a massive amount of tests and have access to all that hardware, which I don't, so I speak from my personal situation, and I always have.

And about the "economically inefficient" part, elaborate a bit more, because otherwise it could be tagged the same way you tag my words, as not relevant, since you don't add any objective data here either.

It also works perfectly fine on my laptop with a GM750... But you will not fit a full-fledged architectural scene into your 3 GB of memory and run it at 60 fps.

Give me a real demo and I'll tell you its performance.


Cheers!

2015-03-09, 15:29:23
Reply #29

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
I am not really trying to make an actual comparison of where the Titan X or current GPU rendering stands. I don't use it or own any of these cards, I can only extrapolate from others, and I don't like to do that.
I simply wanted to counter the dismissive statements that these cards are expensive, add up to an economically unviable solution, and that GP-GPU rendering is still in limited infancy.

That really isn't the case anymore. Perpetuating these notions can lead to a cult mentality on forums, as a lot of people just parrot these opinions without really researching them themselves.
I argued against GP-GPU (I wrote a huge, mostly negative response on Redshift on cgarchitect, and debunked false claims numerous times) when it was clearly the far worse solution, but I will also defend it when it starts becoming a viable alternative.
And as a matter of fact, that is slowly starting to happen now.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!