Author Topic: Corona GPU  (Read 56296 times)

2015-06-21, 21:06:24
Reply #30

fobus

  • Active Users
  • **
  • Posts: 388
Here are the scenes, done in 3ds Max 2015. I don't own V-Ray, so the demo version was used, as you can see from the pictures, which means the default settings should be saved. I made no changes between opening and rendering, so you can simply open a scene and render with the saved defaults (Light Cache as the secondary GI, I think).

Update:
I had to check the files to upload the right versions.

Update 2:
Uploaded the right versions of the max scenes. Also uploaded a V-Ray RT image rendered from the V-Ray RT scene in under 5 minutes on 2x GTX 580.

Update 3:
Uploaded an image rendered from the attached scene in Corona 1.0 on an i7-3930K @ 4.1 GHz in 15 minutes.
« Last Edit: 2015-06-21, 21:34:51 by fobus »

2015-06-21, 21:46:12
Reply #31

daniel.reutersward

  • Active Users
  • **
  • Posts: 310
I tried your scenes, just for fun. :)

System:
2x Xeon E5-2697 v3
64 GB RAM
1x GeForce GTX 980

With V-Ray I set Max paths/pixel to the same value you had: 20007.
With Corona I set the same number of passes you had: 3962.

On my system, the V-Ray RT GPU version was not that much faster: Corona took 6 min 58 s, V-Ray RT GPU 6 min 14 s.
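For what it's worth, those two timings work out to only about a 12% difference. Here is the same arithmetic as a quick sketch (the times and sample budgets are the ones quoted above; the helper and variable names are just for illustration):

```python
# Sketch of the comparison above; only the wall-clock times and sample
# budgets come from the post, the rest is illustrative.

def seconds(minutes, secs):
    return minutes * 60 + secs

corona_time = seconds(6, 58)    # Corona CPU, stopped at 3962 passes
vray_gpu_time = seconds(6, 14)  # V-Ray RT GPU, stopped at 20007 max paths/pixel

# A Corona pass and a V-Ray path/pixel are different units, so only
# wall-clock time at each engine's own budget is compared here.
ratio = corona_time / vray_gpu_time
print(f"V-Ray RT GPU finished about {(ratio - 1) * 100:.0f}% sooner")  # ~12%
```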

2015-06-22, 00:10:08
Reply #32

dfcorona

  • Active Users
  • **
  • Posts: 292
This is a great comparison, and when I say great I mean not for V-Ray. Corona has a more efficient GI system, so it gets a great time here; this is exactly the type of test Corona excels at. But remember we are comparing Corona CPU to V-Ray GPU, two totally different renderers. Now just imagine what the time would be if it were Corona GPU.

1st image) A straight V-Ray comparison; you had two options checked that slow down the final render and are only useful for fast interactive previews. - 4 min 32 s

2nd image) The same scene with those settings turned off. - 4 min 6 s

3rd image) The same scene with one more quick two-second tweak. - 3 min 29 s

All on a single GTX Titan X; now imagine adding one more card.

If anyone has a larger, more detailed scene to try, that would be great.
« Last Edit: 2015-06-22, 00:14:23 by dfcorona »

2015-06-22, 05:33:02
Reply #33

fobus

  • Active Users
  • **
  • Posts: 388
System:
2x Xeon E5-2697 v3
64 GB RAM
1x GeForce GTX 980

On my system, the V-Ray RT GPU version was not that much faster.

Look at the config and you'll realize you're comparing a $500 GPU with $5000 worth of CPUs. You can easily fit 4x 980 Ti in one PC for about $3000, raising GPU speed to nearly 5x your GTX 980, so it is really possible to reach the speed of roughly 10x Xeon E5-2697 v3 in a single PC for less than the price of 2x Xeon E5-2697 v3. Of course it would only have 6 GB of GPU RAM, but we are talking about the future, and in the near future there will be much more RAM on GPUs (8 GB on a mid-range ATI card is on its way, http://www.techpowerup.com/reviews/MSI/R9_390X_Gaming/, and, as was mentioned before, NVIDIA has said Pascal GPUs will support up to 32 GB).
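Put as arithmetic, the claim looks roughly like this (a sketch only; the prices and the "4x 980 Ti is about 5x one GTX 980" scaling are the rough estimates above, not measurements, and the variable names are made up for illustration):

```python
# Rough price/performance arithmetic behind the argument above. All figures
# are the post's own mid-2015 estimates, not measured results.

dual_xeon_price = 5000      # 2x Xeon E5-2697 v3 used in the benchmark
quad_980ti_price = 3000     # one PC with 4x GTX 980 Ti (vs $500 for one GTX 980)
quad_980ti_speed = 5.0      # claimed speed relative to a single GTX 980

# In the benchmark above one GTX 980 roughly kept pace with the dual-Xeon box,
# so measure throughput in "dual-Xeon boxes' worth":
gpu_box_throughput = quad_980ti_speed                        # ~5 units, i.e. ~10 Xeons
cost_per_unit_cpu = dual_xeon_price                          # $5000 per unit
cost_per_unit_gpu = quad_980ti_price / gpu_box_throughput    # ~$600 per unit

print(f"CPU: ${cost_per_unit_cpu:.0f}/unit, GPU: ${cost_per_unit_gpu:.0f}/unit")
```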

2015-06-22, 05:47:54
Reply #34

dfcorona

  • Active Users
  • **
  • Posts: 292
Look at the config and you'll realize you're comparing a $500 GPU with $5000 worth of CPUs. You can easily fit 4x 980 Ti in one PC for about $3000, raising GPU speed to nearly 5x your GTX 980 ...

You have to remember this is also a scene that really favors Corona's superior GI engine. Take better-lit scenes, or even the everyday scenes I work on, and the GPU renders much faster. A scene I'm working on for a client right now is a kitchen; just for fun I tried both CPU and GPU. The CPU took 12 min 35 s, the GPU 2 min 9 s, and there was no comparison: the CPU result was much noisier, so it would have taken even longer to clean up.

2015-06-22, 06:27:05
Reply #35

fobus

  • Active Users
  • **
  • Posts: 388
If anyone has a larger, more detailed scene to try, that would be great.

I've got a scene done in Corona Alpha 7.1 and slightly optimized for 1.0, plus the same scene converted to V-Ray. But I don't have V-Ray 3.2 to render it (the demo has so many limitations that it's impossible to test with).

Corona
https://cloud.mail.ru/public/443AVAermhxe/CONF_Test_Corona_01.rar

VRay
https://cloud.mail.ru/public/2GspyBg1jPb9/CONF_Test_Vray_04.rar

2015-06-22, 08:33:29
Reply #36

juang3d

  • Active Users
  • **
  • Posts: 636
I would like to see Juraj join this conversation about GPU/CPU :)
He usually has some facts, in one direction or the other, that can lead to some interesting data on this subject.

Cheers!

2015-06-22, 09:38:46
Reply #37

fobus

  • Active Users
  • **
  • Posts: 388
GPUs have been growing in compute power much faster than CPUs over the last few years, and they scale far more efficiently. The two major problems I see at the moment: the first, for us users, is the small amount of RAM available for rendering huge scenes; the second, for developers, is that programming for GPUs is much more complicated, plus all the 3ds Max maps (at least the most-used ones) would need to be completely redone to be GPU-compatible. The first one should be gone in 1-2 years, as GPU RAM is growing fast, so I hope the second one will not be a barrier to getting much more compute power for much less money.

2015-06-22, 10:50:44
Reply #38

Juraj

  • Active Users
  • **
  • Posts: 4761
I would like to see Juraj join this conversation about GPU/CPU :)
He usually has some facts, in one direction or the other, that can lead to some interesting data on this subject.

Cheers!

Do I smell irony :- ) ?

I honestly think GPU is the future too; I was never of the opposite opinion. But I was always tired of claims that it is 10-100x faster, when it obviously never was (I could tell, having owned Octane), and of all the limits GPU engines had (it's much better today, but it took Octane five years, and V-Ray RT GPU is still in puberty, though at least past infancy).

Now I honestly don't care when or how that gets implemented, given the speed-up in actual scenes is maybe 3-4x and my render times would still be a few hours (as I know they are for the few GPU studios out there, like the DeltaTracing guys).
I would still need render farms like Rebus, as I do now, because they give me 5-minute renders for a few bucks :- ) I surely ain't buying a quad Titan X for everyone in our office when even that won't suffice, just like dual Xeons don't suffice when I need to finish five 8k finals in a single day.
For now my Titan X is purely an Unreal Engine beast, but I may give the competition (Octane/Redshift/etc.) another brief go out of interest when time permits, just to see where it all stands.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2015-06-22, 11:20:36
Reply #39

fobus

  • Active Users
  • **
  • Posts: 388
A little story :)

As I noted before, we're facing a big problem with rendering an upcoming large project. It contains 2 minutes of exterior shots and 3000+ spherical panoramas of interiors. Our little render farm is capable of rendering all the exterior shots in time, but it will be 100% loaded. When we started counting the time needed to render 3000+ images at 5k resolution, we realized that with our beloved Corona it would take approximately 120 days (four times more than we have). Of course it would be possible to cut render times to a quarter by decreasing fidelity, but that is not our way.

So we started researching other ways to cut render times. V-Ray with brute force + Light Cache seemed like a real solution, until our tests showed it was about 1.5x slower than Corona PT+UHD Cache. IrMap+LC was roughly the same speed as Corona, but with poor quality. So V-Ray RT was the last option, and it rocks: even on an old GTX 580 it was much faster than any of our render nodes or workstations. Of course it lacks some features of CPU V-Ray, but the render time was great.

Since our farm will be busy the whole time, we started comparing the options: buying new regular PCs for Corona-based rendering, buying GPUs, or rendering on an external render farm. Rough numbers: Rebus render farm - $20,000 (400 hours on 100 PCs); regular PCs - $45,000 (35 PCs for 30 days!); GPU PCs - $9,000 (2 PCs with 4x 980 Ti each). As our little farm contains only 16 PCs, adding 35 more would kill our power cabling and air conditioning, and administering that many machines is terrible too. 400 hours of rendering versus 30 days is delicious, but it is $20,000 wasted.
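Laid out side by side, the three options look roughly like this (a sketch only; all figures are the rough numbers quoted above taken at face value, and no turnaround was quoted for the GPU option):

```python
# The three options above, side by side. All figures are the rough numbers
# quoted in the post, taken at face value.

options = {
    # name                      cost_usd   turnaround_days
    "Rebus render farm":        (20_000,   400 / 24),   # 400 h is ~16.7 days
    "35 extra CPU nodes":       (45_000,   30.0),
    "2 quad-980Ti GPU boxes":   ( 9_000,   None),       # turnaround not quoted
}

for name, (cost, days) in options.items():
    eta = f"{days:.1f} days" if days is not None else "n/a"
    print(f"{name:24s} ${cost:>6,}  {eta}")
```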

Cheers

2015-06-22, 11:25:27
Reply #40

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
Now just imagine what the time would be if it were Corona GPU.

Corona GPU would not be better than Corona CPU. It is that simple. It may be faster in synthetic tests, but not in the real world, where you have huge scenes, vastly different settings, and limited development resources.
Rendering is magic. | How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2015-06-22, 11:31:31
Reply #41

Juraj

  • Active Users
  • **
  • Posts: 4761
A little story :)

As I noted before, we're facing a big problem with rendering an upcoming large project. ... Rough numbers: Rebus render farm - $20,000 (400 hours on 100 PCs); regular PCs - $45,000 (35 PCs for 30 days!); GPU PCs - $9,000 (2 PCs with 4x 980 Ti each). ... 400 hours of rendering versus 30 days is delicious, but it is $20,000 wasted.


Paying for a render farm is only wasted money if you think of it as an investment. It isn't; it's something you should tell your client about and bill directly. I've done that for the past year and I couldn't be happier: I no longer maintain a farm, I repurposed the dual Xeons into 'quick-preview' workstations, and everything goes into the cloud.
400 hours seems excessive, as they can give you far more than 100 PCs simultaneously. It also sounds like a mistake, because 400 hours is about 17 days, hardly much better than 30.

2015-06-22, 11:33:35
Reply #42

fobus

  • Active Users
  • **
  • Posts: 388
Now just imagine what the time would be if it were Corona GPU.

Corona GPU would not be better than Corona CPU. It is that simple. It may be faster in synthetic tests, but not in the real world, where you have huge scenes, vastly different settings, and limited development resources.

But GPU processing power grows much faster, and scaling is much easier than with CPUs. So Corona GPU could reach greater heights faster just by riding on NVIDIA's and ATI's progress.

400 hours seems excessive, as they can give you far more than 100 PCs simultaneously. It also sounds like a mistake, because 400 hours is about 17 days, hardly much better than 30.

Of course they can, but neither we nor the client can pay that much, given that we do have the 30 days.

2015-06-22, 11:49:43
Reply #43

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
But GPU processing power grows much faster, and scaling is much easier than with CPUs. So Corona GPU could reach greater heights faster just by riding on NVIDIA's and ATI's progress.

THIS - "you don't have to put in effort, you can just wait for GPUs to become faster" - is exactly why we have so many useless GPU renderers. It is just not true. You have to put in effort, much more than on CPU, and you have to deal with all the limitations and problems. I would rather develop a new adaptive sampler to speed things up on the CPU than rewrite half of my application every time a new GPU architecture comes out.
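For anyone wondering what an adaptive sampler does: the general idea is to estimate per-pixel noise from the samples already taken and spend the remaining budget where the noise is highest. A toy sketch of that idea (not Corona's actual algorithm; the function name and accumulator layout are made up for illustration):

```python
# Toy illustration of adaptive sampling: spend the next pass's sample budget
# where the per-pixel noise estimate is highest. Not Corona's algorithm,
# just the general idea.

import numpy as np

def plan_next_pass(m2, count, budget, noise_floor=1e-4):
    """Distribute `budget` extra samples across pixels by estimated noise.

    m2, count : running per-pixel sum of squared deviations and sample count
                (Welford-style accumulators)
    budget    : total number of extra samples available for this pass
    """
    variance = m2 / np.maximum(count - 1, 1)
    noise = np.sqrt(variance / count)        # std. error of each pixel's mean
    weights = np.maximum(noise - noise_floor, 0.0)
    if weights.sum() == 0.0:                 # everything already converged
        return np.zeros_like(count, dtype=int)
    # Smooth regions (walls, ceilings) get almost nothing; noisy regions
    # (DoF, glossy highlights) soak up the budget. When the whole frame is
    # high-frequency (grass, hair), the weights flatten out and the gain shrinks.
    return np.floor(budget * weights / weights.sum()).astype(int)
```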

2015-06-22, 12:12:56
Reply #44

fobus

  • Active Users
  • **
  • Posts: 388
An adaptive sampler is really what we need to get clean DoF and motion blur; even V-Ray RT GPU can't deliver a clean picture from my test AA scene.
But on detail-rich images the overall speed won't improve as much as it does with raw GPU speed, since adaptivity can't boost the calculations everywhere (I hope I'm wrong).

From a former Octane developer: adaptive sampling does nothing for detailed scenes with grass, hair and displacement. It works well only on clean areas like painted walls, ceilings and so on.
« Last Edit: 2015-06-22, 16:26:43 by fobus »