Author Topic: Corona render speed  (Read 43563 times)

2014-09-19, 17:05:05
Reply #45

zzubnik

  • Active Users
  • Posts: 124
... and iRay development seems to have stalled at the moment.

2014-09-19, 22:56:59
Reply #46

juang3d

  • Active Users
  • Posts: 636
In the last release they added some render elements, but they are a bit different from the standard ones. It's true that, in terms of performance and advanced rendering features, it hasn't evolved much.

Sometimes I feel that companies like Nvidia deliberately slow development down so they can keep leveraging our wallets for as long as they can :P

Cheers.

2014-09-20, 12:49:03
Reply #47

juang3d

  • Active Users
  • Posts: 636
Just to illustrate my point about why GPU rendering has lost its opportunity, at least for the moment: this is the improvement (benchmark-based) of the Maxwell series vs. the previous tech :P

Cheers.

2014-09-20, 12:55:56
Reply #48

boumay

  • Active Users
  • Posts: 96
Wow! This is a miserable improvement. Marketing at its best, just like the cosmetic upgrades of 3ds Max.

2014-09-20, 12:56:10
Reply #49

Ondra

  • Administrator
  • Active Users
  • Posts: 9048
  • Turning coffee to features since 2009
GPU performance stalling (if it is a real thing, I don't know) would not mean the end of GPU renderers. They will just have to depend entirely on their own optimizations (like the CPU folk), because simply waiting a year for faster GPUs will no longer be an option.

This would probably mean the end for bad renderers that do not put in any effort, but instead just use OptiX and expect it to be fast because it's on the GPU (oh hello hairy sphere!) ;)
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2014-09-20, 13:14:19
Reply #50

juang3d

  • Active Users
  • Posts: 636
I don't mean that it's the end of GPU render engines. What I mean is that Nvidia had an opportunity to swallow up a market if they had pushed performance up as expected. If you search Google for "fermi maxwell graphic" you'll see their own projections. Of course, they quote GFLOPS per watt, and I'm not sure they are as efficient as they claim, but performance-wise... Nvidia has become weak; they aren't evolving at all.

The problem is that the evolution of GPU render engines is SLOW, and the reason is the GPU's limitations: the moment it has to work with system RAM instead of GPU RAM, the GPU render suddenly becomes SLOOOOOW.

Maybe this will be resuscitated at some point, but for the time being I don't see Octane or iRay evolving dramatically in rendering speed. The latest Octane video was an exterior; why do they always show off the render engine with an exterior scene? That's not a real challenge... Why not show a really complex interior scene, which is the most common scenario if you are outside the product-viz market?

And here Corona is astonishing in its speed/quality ratio. Of course I would like to see Corona even faster hehehe, but I will always want Corona to be faster no matter how fast it already is, while maintaining quality of course; that's an occupational hazard hahaha.

Cheers!

2014-09-20, 13:25:36
Reply #51

Captain Obvious

  • Active Users
  • Posts: 167
Quote
Captain Obvious, what do you mean by "results aren't great"? Are you referring to render time or to the final result?
Both, I guess? iray can produce really good results, but it suffers from some pretty severe workflow limitations. Limited texturing, it's always limited by GPU memory (unlike Redshift, and Octane is also going in that direction), and it really is quite slow. For iray to be really fast you need a big cluster of GPU machines, and preferably Quadro-cards to get additional memory (which you will need), so the whole thing is going to be massively expensive. If you spent the same amount of money and just rendered in Corona or Maxwell, or hell even V-Ray, you'd probably get better results in less time.

One of the biggest strengths of Corona is that it's extremely well-integrated into 3ds Max -- more so than iray -- which means you don't need to adapt your workflow. It's not limited by GPU memory, it supports all (more or less) native textures, material blending, render elements, etc etc etc.



Quote
Just to illustrate my point about why GPU rendering has lost its opportunity, at least for the moment: this is the improvement (benchmark-based) of the Maxwell series vs. the previous tech :P
I respectfully but strongly disagree. If you look at the new generation of Nvidia cards, the GTX 970 basically gives you Titan-level performance (albeit with 1/3rd less memory) for a significantly lower cost, at a significantly lower power level. The Titan was released about a year and a half ago, cost $999 and consumed 250 watts. Now, the GTX 970 gives you about the same performance for $329 and about 150 watts. In terms of performance per TCO* -- which is what really matters for a farm -- the 900-series is a huge improvement over the previous generation. Performance isn't vastly improved, but the power and price reductions mean you can buy more of them.

Additionally, the fact that they're able to get such great performance out of a card using a relatively low amount of power, despite being manufactured on the same 28 nm process, means that they have plenty of headroom to grow.



Honestly, in terms of hardware performance evolution, GPU rendering has never looked better. The new Haswell-E chips are a good improvement as well, but mostly for cost reasons. They're not much faster than the previous generation, but they are much cheaper.

* Total Cost of Ownership
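
To make the performance-per-TCO point concrete, here is a minimal back-of-the-envelope sketch in Python. The card prices and wattages are the ones quoted above; the three-year 24/7 duty cycle, the electricity price, and the assumption that the two cards render equally fast are illustrative assumptions only.

Code:
# Back-of-the-envelope performance per total cost of ownership (illustrative).
# Assumptions: 3 years of 24/7 rendering at 0.15 $/kWh, and the GTX 970 taken
# as roughly equal in rendering performance to the original Titan (as argued above).

HOURS = 3 * 365 * 24      # three years of continuous rendering
KWH_PRICE = 0.15          # assumed electricity cost in $/kWh

def tco(card_price_usd, watts, hours=HOURS, kwh_price=KWH_PRICE):
    """Total cost of ownership = purchase price + electricity over the period."""
    return card_price_usd + (watts / 1000.0) * hours * kwh_price

titan = tco(999, 250)     # original GTX Titan
gtx970 = tco(329, 150)    # GTX 970

print(f"Titan   TCO ~ ${titan:,.0f}")
print(f"GTX 970 TCO ~ ${gtx970:,.0f}")
# With roughly equal performance assumed, performance per TCO dollar:
print(f"GTX 970 gives ~{titan / gtx970:.1f}x the performance per dollar of TCO")

With these made-up electricity numbers the 970 comes out at roughly twice the performance per dollar of TCO, which is the sense in which the 900-series is a big step forward even if raw speed is similar.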

Quote
The problem is that the evolution of GPU render engines is SLOW, and the reason is the GPU's limitations: the moment it has to work with system RAM instead of GPU RAM, the GPU render suddenly becomes SLOOOOOW.
You really need to have a look at Redshift :-) It's not a perfect render engine, but it definitely shows that fast development and a plethora of features are possible on the GPU as well.

2014-09-20, 14:09:48
Reply #52

juang3d

  • Active Users
  • Posts: 636
Quote
Both, I guess? iray can produce really good results, but it suffers from some pretty severe workflow limitations. Limited texturing, it's always limited by GPU memory (unlike Redshift, and Octane is also going in that direction), and it really is quite slow. For iray to be really fast you need a big cluster of GPU machines, and preferably Quadro-cards to get additional memory (which you will need), so the whole thing is going to be massively expensive. If you spent the same amount of money and just rendered in Corona or Maxwell, or hell even V-Ray, you'd probably get better results in less time.

I never had problems with textures in memory with iRay, but I had A LOT of problems with geometry. You may be missing some material types, but the main problem is in Nvidia's hands, which is what I said before.

Quote
One of the biggest strengths of Corona is that it's extremely well-integrated into 3ds Max -- more so than iray -- which means you don't need to adapt your workflow. It's not limited by GPU memory, it supports all (more or less) native textures, material blending, render elements, etc etc etc.
Agreed, especially about the memory limitation.

Quote
I respectfully but strongly disagree. If you look at the new generation of Nvidia cards, the GTX 970 basically gives you Titan-level performance (albeit with 1/3rd less memory) for a significantly lower cost, at a significantly lower power level. The Titan was released about a year and a half ago, cost $999 and consumed 250 watts. Now, the GTX 970 gives you about the same performance for $329 and about 150 watts. In terms of performance per TCO* -- which is what really matters for a farm -- the 900-series is a huge improvement over the previous generation. Performance isn't vastly improved, but the power and price reductions mean you can buy more of them.
The picture I showed you shows a 980 against a 780 Ti. Where did you see those benchmarks that put the 970 at the same performance level as a Titan? I'm really interested, and I'm also interested in a 980 vs. Titan benchmark. I wasn't able to find anything that isn't related to real-time gaming performance; the only GPGPU benchmark I found is that one.

Quote
Additionally, the fact that they're able to get such great performance out of a card using a relatively low amount of power, despite being manufactured on the same 28 nm process, means that they have plenty of headroom to grow.

Honestly, in terms of hardware performance evolution, GPU rendering has never looked better. The new Haswell-E chips are a good improvement as well, but mostly for cost reasons. They're not much faster than the previous generation, but they are much cheaper.

* Total Cost of Ownership
You really need to have a look at Redshift :-) It's not a perfect render engine, but it definitely shows that fast development and a plethora of features are possible on the GPU as well.

You are right, and here is where the greed shows. There used to be two kinds of GPUs, just as there are two kinds of CPUs: the ones built for low power consumption and the ones built for raw performance. Now they take the low-power-consumption series and market it as the best-performing series, and that's their flagship. If they still maintained a high-performance series (which does not exist, at least publicly), we could be talking about an astonishing difference in rendering performance.

Also, they keep the GPU memory at 4 GB... so good luck fitting big scenes in there, especially when you don't have instancing. And no matter what the Octane people say, if you use instancing on the GPU you lower its speed by a large factor, depending on how much you use it, and iRay doesn't have instancing at all... Well, 4 GB is nothing, so there's no spectacular opportunity for GPU rendering with those limitations.

Regarding Redshift, I don't like mental ray / V-Ray-style render engines anymore, and I won't invest in a GPU farm to run a biased render engine when Corona has demonstrated that a biased render engine can be optimized to deliver incredible speed; it just has to be worked on. For me that level of bias is a big NO nowadays, and that level of configuration complexity is also a big NO; that's why Redshift has never been an option for me. It's also been said on this forum before that Redshift has great speed but that it comes at a quality cost. I'm sure Corona would be able to deliver the same speed with a similar quality cost, but IMHO that's just not the target it pursues.
Of course, don't take my thoughts about Redshift as TRUE in capital letters, especially because I have not tried it personally, but I really don't have time to deal with another complex-to-configure render engine. I prefer to spend my time on creativity, modelling, texturing, animation, etc. :) and that is what Corona gives me.

Cheers!

2014-09-20, 14:35:17
Reply #53

Captain Obvious

  • Active Users
  • Posts: 167
My point about Redshift was mostly that it demonstrates that GPU memory isn't necessarily a major impediment. I see no reason why it wouldn't be possible to implement exactly the same rendering algorithms that Corona uses, but with Redshift's GPU engine. I don't really like their approach, frankly. Aiming to replicate V-Ray or mental ray seems like a step backwards. It does, however, show that GPU rendering is a valid approach for high-end production work, and it really is massively fast. Even with my lame-ass Quadro 2000M, Redshift is significantly faster than V-Ray, iray, Octane and even Corona. I still prefer Corona because it gives better quality and has a better workflow, but in terms of performance... No, there is no way Corona could compete with Redshift on speed, if you had a machine with a GTX 980/970 or two. Not even on big heavy scenes. Especially not on big heavy scenes, in fact!

As I said: I still prefer Corona. But if the Redshift team manages to match Corona's rendering methodology, then I'm not so sure any more.


Edit: anyway, it's a moot point at this stage. Redshift is what it is. It's a great choice for some people, but for me Corona is a better choice. The only reason I brought it up is because it shatters a lot of myths about GPU rendering (memory limitations, limited to naive path tracing, lousy integration with the host software, etc).
« Last Edit: 2014-09-20, 14:39:31 by Captain Obvious »

2014-09-20, 15:26:02
Reply #54

juang3d

  • Active Users
  • Posts: 636
The thing is that Redshift, as far as I know, doesn't store the whole scene in GPU memory, because parts of the calculation are managed and done by the CPU in system memory. That is why it doesn't have the GPU memory limit, and I think algorithms like the ones used in iRay, Octane or even Corona cannot be applied in that scheme.

Anyway, I don't use Redshift, so I can't say much about it, but it's a complex render engine that can suffer from things like flickering and other problems, just like V-Ray or mental ray, and that's a con for me. That's why I loved iRay: when I launched an animation it was rendered once and only once, because there were no render glitches in it. With Corona there can be a bit of flickering, but if you configure the HD cache settings (three settings!) with a simple number, you know you will avoid flicker, and it still remains nearly unbiased. That's why I love Corona, even though the animation side is not yet as developed as it will be in the future. I like to render my projects once, and I don't like having to re-render because the render engine introduced a glitch somewhere after I set it slightly lower to gain some speed.

Yes, the GPU can be used for rendering in biased render engines, of course, but what I think is that biased render engines as we know them are past their time, and Corona's biased technique is the winner, at least in my opinion of course :)

Cheers.

2014-09-20, 16:43:10
Reply #55

Stan_But

  • Active Users
  • Posts: 526
    • https://www.behance.net/archdizs
Maybe we could compare all the render engines in this topic? :)
The "Corona bench scene" could be used as the test scene.

2014-09-20, 17:22:03
Reply #56

Captain Obvious

  • Active Users
  • Posts: 167
Quote
The thing is that Redshift, as far as I know, doesn't store the whole scene in GPU memory, because parts of the calculation are managed and done by the CPU in system memory. That is why it doesn't have the GPU memory limit, and I think algorithms like the ones used in iRay, Octane or even Corona cannot be applied in that scheme.
Not sure what you mean here. Redshift has memory buffers on the GPU. It stores the entire scene representation in GPU memory, and then it has buffers for things like triangles and textures. If the geometry or the textures exceed the buffers, it will dynamically offload stuff, but if the scene description itself can't fit, it'll fail to render.
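
To make the "fixed buffers with dynamic offloading" idea easier to picture, here is a purely conceptual Python sketch of out-of-core texture paging with a fixed GPU budget and least-recently-used eviction. It illustrates the general technique only, not Redshift's actual implementation or API; the class, budget and texture names are made up.

Code:
from collections import OrderedDict

class GpuTextureCache:
    """Conceptual out-of-core paging: fixed GPU budget, LRU eviction.
    Illustrates the general idea described above; not code from any real renderer."""

    def __init__(self, budget_mb):
        self.budget_mb = budget_mb
        self.resident = OrderedDict()   # texture name -> size in MB, kept in LRU order

    def request(self, name, size_mb):
        """Ensure a texture is resident on the GPU, evicting old ones if needed."""
        if name in self.resident:
            self.resident.move_to_end(name)          # mark as most recently used
            return
        # Evict least-recently-used textures until the new one fits the budget.
        while self.resident and sum(self.resident.values()) + size_mb > self.budget_mb:
            evicted, _ = self.resident.popitem(last=False)
            print(f"offloading {evicted} back to system RAM")
        self.resident[name] = size_mb                # "upload" to GPU memory

cache = GpuTextureCache(budget_mb=2048)
cache.request("wood_diffuse", 512)
cache.request("concrete_normal", 1024)
cache.request("hdri_env", 1024)   # exceeds the budget, so wood_diffuse gets offloaded

Geometry and textures can be paged like this; the point above is that the core scene description has no such fallback, which is why only that part can make a render fail.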

Also, Redshift supports temporal interpolation for both the irradiance point cache (HD cache equivalent) and the irradiance cache, meaning you can blend values from several nearby frames. That stops flickering quite efficiently. V-Ray supports the same thing. It does mean you have to pre-render the pre-passes, but it's easy enough to automate if you have a render farm and easier still if you're rendering locally.
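
As a rough illustration of what temporal interpolation of a cached GI solution means (a conceptual sketch of the general idea, not any particular renderer's algorithm; the weights and window size are arbitrary), blending a cache point's value with its neighbouring frames smooths frame-to-frame noise and therefore flicker:

Code:
# Conceptual temporal blending of cached GI values across neighbouring frames.
# The 1/(1+|offset|) weights and the window size are arbitrary illustrative choices.

def blend_irradiance(cache_frames, frame, window=2):
    """Weighted average of the cached irradiance at `frame` and nearby frames.

    cache_frames: dict mapping frame number -> cached irradiance for one cache point
    window:       number of frames on each side to blend in
    """
    total, weight_sum = 0.0, 0.0
    for offset in range(-window, window + 1):
        neighbour = frame + offset
        if neighbour in cache_frames:
            weight = 1.0 / (1 + abs(offset))   # nearer frames count more
            total += weight * cache_frames[neighbour]
            weight_sum += weight
    return total / weight_sum

# Pre-pass values for one cache point over five frames (made-up numbers):
prepass = {10: 0.82, 11: 0.91, 12: 0.79, 13: 0.88, 14: 0.84}
print(blend_irradiance(prepass, frame=12))   # smoothed value used at frame 12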

2014-09-20, 20:47:48
Reply #57

juang3d

  • Active Users
  • Posts: 636
I was referring to this from their FAQ:

"Since Redshift is a GPU renderer, it mostly depends on GPU performance. There are, however, certain processing stages that happen during rendering which are CPU or disk dependent. These include extracting mesh data from Softimage/Maya, loading textures from disk and preparing the scene data for use by the GPU. Depending on scene complexity, these processing stages can take a considerable amount of time and, therefore, a lower-end CPU can 'bottleneck' the overall rendering performance. While Redshift doesn't need the latest and greatest CPU, we recommend using at least a mid-range quad-core CPU such as the Intel Core i5."

Since they do swapping when the scene does not fit in GPU memory, the CPU has an important role when you don't have enough memory; anyway, I may have understood this wrong. In any case, I'm not interested in GPU rendering anymore for the time being hehehe. It has a huge cost vs. Corona: I can get more CPU nodes than GPU nodes for the same price, because in the end, to drive 2 to 4 GPUs you need a proper motherboard and a specific PSU, so you need a proper node.

With Corona I have an outstanding node for 800€; with any GPU render engine that price is impossible. I have some GPU nodes in my farm, but I prefer the CPU nodes, and I can assure you that I used to be the strongest defender of GPU rendering... until I tried Corona XD

Cheers!

2014-09-20, 20:49:00
Reply #58

juang3d

  • Active Users
  • Posts: 636
headoff, to make a plausible comparison we would have to invest a lot of time properly matching materials between render engines... this takes time. I'm all for doing it, but I don't have the time right now :)

Cheers!

2014-09-21, 18:23:01
Reply #59

Captain Obvious

  • Active Users
  • Posts: 167
Quote
The picture I showed you shows a 980 against a 780 Ti. Where did you see those benchmarks that put the 970 at the same performance level as a Titan? I'm really interested, and I'm also interested in a 980 vs. Titan benchmark. I wasn't able to find anything that isn't related to real-time gaming performance; the only GPGPU benchmark I found is that one.
Ask and ye shall receive. It's for a 980 rather than a 970 but it should be relatively easy to extrapolate performance based on clock speeds and core counts. The 980 is consistently around 50 % faster than the 780 Ti, which itself is roughly comparable to a Titan in terms of performance. The 970 should give roughly 75 % of the 980's performance, putting it firmly ahead of the Titan for single-precision computing. Double-precision is a different matter, but ray tracing is all single-precision.
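
To show how that extrapolation can be done, here is a quick sketch using the publicly listed CUDA core counts and base clocks of the two cards, under the simplifying assumption that single-precision throughput scales roughly with cores × clock (ignoring boost behaviour, memory bandwidth and scheduling differences):

Code:
# Rough single-precision throughput estimate: CUDA cores x base clock (MHz).
# Published specs: GTX 980 = 2048 cores @ 1126 MHz, GTX 970 = 1664 cores @ 1050 MHz.
# This deliberately ignores boost clocks, memory bandwidth and architectural details.

def relative_throughput(cores, base_clock_mhz):
    return cores * base_clock_mhz

gtx980 = relative_throughput(2048, 1126)
gtx970 = relative_throughput(1664, 1050)

print(f"GTX 970 / GTX 980 ~ {gtx970 / gtx980:.2f}")   # about 0.76, i.e. roughly 75 %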