Author Topic: Corona GPU  (Read 56006 times)

2015-06-18, 06:47:48

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
You guys are doing some amazing stuff with Corona. The speed is really incredible. I am really looking forward to seeing a GPU version. With GPUs getting so fast now, the capability of running multiple cards, and easy, cheap upgrades, it truly will be a great package. Right now I'm using V-Ray RT on one Titan X, and the interactivity and render speed are incredible. Corona already has such great speed that I would love to see its capability on GPU. I read some time back that Corona has partnered with AMD for FireRender. What is the news on this? It's especially interesting since they just released their Fury X2 GPU.

2015-06-18, 07:32:04
Reply #1

Lucutus

  • Active Users
  • **
  • Posts: 103
    • View Profile
    • a+m
As far as I know, Corona will never be GPU-based.

Greetz

Lucutus

2015-06-18, 07:54:31
Reply #2

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
As far as I know, Corona will never be GPU-based.

Greetz

Lucutus
I read about it on their own blog - a collaboration between Corona and AMD. I hope they are creating a GPU version; it will be years before we see a good increase in CPU power. GPU performance increases year over year, and is already way faster than CPU, and cheaper.

2015-06-18, 08:41:56
Reply #3

Lucutus

  • Active Users
  • **
  • Posts: 103
    • View Profile
    • a+m
https://forum.corona-renderer.com/index.php/topic,69.msg282.html#msg282

This statement from Ondra sounds quite clear to me... but I don't know if it is maybe "outdated".

Greetz

Lucutus

2015-06-18, 09:16:36
Reply #4

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
What you heard about was a different render engine the team is working on in conjunction with AMD (so OpenCL, I presume), but it's not Corona. I hope we never see GPUs involved with Corona, but rather CPUs kicking the GPUs' asses hahaha

Cheers.

2015-06-18, 09:46:16
Reply #5

maru

  • Corona Team
  • Active Users
  • ****
  • Posts: 12711
  • Marcin
    • View Profile
GPU answer: https://corona-renderer.com/features/proudly-cpu-based/

As far as I know, Corona will never be GPU-based.

Greetz

Lucutus
I read about it on their own blog - a collaboration between Corona and AMD. I hope they are creating a GPU version; it will be years before we see a good increase in CPU power. GPU performance increases year over year, and is already way faster than CPU, and cheaper.
Dfcorona, you clearly misunderstood the blog entry, although it was written very clearly to prevent speculation like this.
See: https://corona-renderer.com/blog/render-legion-and-amd-announce-cooperation/

1.:
Quote
Is FireRender a “Corona GPU” or “Corona RT”?

No. Those are two separate render engines, developed by two separate companies, where FireRender shares some of Corona's technology. For example, FireRender is able to directly render Corona materials and lights.

2.:
Quote
Do you plan Corona GPU?

No, not any time soon. But we want to stay in touch with the latest developments. At the moment, the CPU is still the preferred option for us.
Marcin Miodek | chaos-corona.com
3D Support Team Lead - Corona | contact us

2015-06-18, 11:55:05
Reply #6

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
We are observing GPU renderer development, but we are not developing a GPU renderer. And we do not plan to unless some game-changing GPU architecture appears.
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2015-06-18, 15:41:55
Reply #7

agentdark45

  • Active Users
  • **
  • Posts: 579
    • View Profile
We are observing GPU renderer development, but we are not developing a GPU renderer. And we do not plan to unless some game-changing GPU architecture appears.

What about Nvidia's Pascal line of GPUs, which promise to be orders of magnitude faster than today's GPUs? DirectX 12 also promises to allow multiple GPUs to pool their memory. GPU power advances are happening a lot quicker than with CPUs, which will remain relatively stagnant for the foreseeable future (at least at the consumer level, unless Intel decides to give us a 4 GHz-capable 12+ core consumer chip).

I've got two 12 GB Titan Xs in my rendering machine; with DX12 that will mean 24 GB of GPU memory will be available. It seems a waste not to have the GPUs contributing to my renders in some sort of GPU+CPU hybrid rendering mode. I think most users here would appreciate any rendering speed bump they could get!
« Last Edit: 2015-06-18, 15:46:37 by agentdark45 »
Vray who?

2015-06-18, 16:12:39
Reply #8

kurantransfer

  • Active Users
  • **
  • Posts: 20
  • great software
    • View Profile
Before using Corona, I was using Arion and Octane Render to some extent. They both produce good results, but when it comes to efficiency in terms of heat and power consumption, my machine seemed ready to burn when it pushed the boundaries of my Titan GPU. I was afraid to leave it on a render job overnight. But the main problem with these renderers, in my opinion, is their 3ds Max integration. As far as I remember, they did not support more than one UVW channel, for example, or Max's procedural maps (although they had their own kind). Corona's biggest plus is its complete Max integration. And again, in my opinion, I felt guilty emptying my wallet to Nvidia every two years for a display card with more memory (not a faster GPU).
But again, I believe that having some kind of GPU integration would be fine, even if it's optional.

2015-06-18, 17:40:46
Reply #9

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
We don't need faster, we need more flexible/easier to develop for ;)
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2015-06-18, 17:45:38
Reply #10

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
Not sure what heat issues you were having; that sounds like a bad hardware setup. GPU rendering has come a long way. I know some complain that GPU rendering doesn't support all features, but neither did CPU rendering when it first came out. I remember using some of the first Corona betas; they were missing quite a lot. GPU rendering is no longer in its infancy; we can do full production work on GPU now. Some GPU renderers, like Moskito, even claim to support all features of Max, including all shaders. I haven't used that one, but using Octane + V-Ray RT I can tell you I am not running into any issues on large scenes. There is a reason why all rendering engines are turning to GPU, and I would hate to see a great renderer like Corona miss its chance. CPUs are gaining no ground in terms of performance; there is a tiny increase each year for tons of money. Now that GPUs have huge amounts of RAM and are increasing in speed at incredible rates, there is no competition in terms of which platform is really moving forward. I have tested GPU renderers against Corona's CPU rendering; Corona does impress with how fast it is on CPU, but it doesn't hold a candle to GPU. I see some saying they hope Corona is never made for GPU - I cannot see any logic in that thinking. Why not? Why not just have the option?

2015-06-18, 17:47:06
Reply #11

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
We don't need faster, we need more flexible/easier to develop for ;)

Tell that to the people who are sick of waiting for renders, and sick of paying tons to render farms.

2015-06-19, 01:42:42
Reply #12

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
I would prefer to see Corona improved on the CPU side of things; I'll just say this.

Cheers!

2015-06-19, 02:00:02
Reply #13

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
Why not both? They're a growing company.

2015-06-19, 11:52:01
Reply #14

agentdark45

  • Active Users
  • **
  • Posts: 579
    • View Profile
Not sure what heat issues you were having; that sounds like a bad hardware setup. GPU rendering has come a long way. I know some complain that GPU rendering doesn't support all features, but neither did CPU rendering when it first came out. I remember using some of the first Corona betas; they were missing quite a lot. GPU rendering is no longer in its infancy; we can do full production work on GPU now. Some GPU renderers, like Moskito, even claim to support all features of Max, including all shaders. I haven't used that one, but using Octane + V-Ray RT I can tell you I am not running into any issues on large scenes. There is a reason why all rendering engines are turning to GPU, and I would hate to see a great renderer like Corona miss its chance. CPUs are gaining no ground in terms of performance; there is a tiny increase each year for tons of money. Now that GPUs have huge amounts of RAM and are increasing in speed at incredible rates, there is no competition in terms of which platform is really moving forward. I have tested GPU renderers against Corona's CPU rendering; Corona does impress with how fast it is on CPU, but it doesn't hold a candle to GPU. I see some saying they hope Corona is never made for GPU - I cannot see any logic in that thinking. Why not? Why not just have the option?

This.

I'm not fully in agreement that a fully GPU renderer would be hands-down better than a CPU renderer, but we should have the option to utilise a computer's full performance. It seems a waste of resources not to. Out-of-core rendering also gets around GPU RAM limits.

I'm pretty sure my SLI Titan Xs could seriously cut some render time. And yes, speed is a huge factor for me. Why wait 8 hours for a render when I could get it done in 4? This can mean the difference between meeting a deadline and not.

There seems to be a large number of paying Corona users (myself included) that want GPU + CPU rendering to happen. It would be interesting to see some performance numbers on GPU vs CPU rendering times for various setups.

Interesting fact: my Titan Xs have a combined 14 teraflops of single-precision compute power going to waste, whilst a 5960X only has around 700 gigaflops single precision ;)
« Last Edit: 2015-06-19, 12:06:29 by agentdark45 »
Vray who?

2015-06-19, 13:40:26
Reply #15

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
There seems to be a large number of paying Corona users (myself included) that want GPU + CPU rendering to happen. It would be interesting to see some performance numbers on GPU vs CPU rendering times for various setups.
https://embree.github.io/papers/2014-Siggraph-Embree.pdf :D
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2015-06-19, 14:23:26
Reply #16

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
I've seen you post this before, Ondra, when this topic arises, but what is it supposed to prove? In real-world results, CPU Embree cannot hold a candle to the speed of GPU.

Xeon E5-2699 v3 octadeca-core (18 cores), $4,764.19 = around 775 GFLOPS

Nvidia GTX Titan X, $999 = over 7 TFLOPS

Real-world test scene I've tried, Titan X vs 3930K overclocked:

V-Ray CPU: 11 min 32 sec - without DOF

V-Ray GPU: 1 min 25 sec - with DOF

Octane, full path tracing on an interior scene: 7 min.

We can try real-world tests, and not just papers.

I think you've done a great job with Corona, Ondra. I am not trying to take anything away from you, but look at the present and future of GPU vs CPU. I cannot see not implementing GPU like everyone else. What will CPUs be like in 2016 - maybe a 10% increase in speed if we are lucky? The Pascal video card in 2016 is supposed to be 10 TIMES faster than a Titan X.
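
For what it's worth, those numbers can be turned into ratios with a quick back-of-the-envelope script (a sketch; the FLOPS figures, prices, and render times are the ones quoted above, not independent benchmarks):

```python
# Rough price/performance comparison using the figures quoted above.
# All inputs are forum-quoted claims, not measured benchmarks.

cpu_gflops, cpu_price = 775, 4764.19   # Xeon E5-2699 v3 (claimed)
gpu_gflops, gpu_price = 7000, 999.00   # GTX Titan X, ~7 TFLOPS (claimed)

print(f"Raw compute ratio (GPU/CPU): {gpu_gflops / cpu_gflops:.1f}x")
print(f"GFLOPS per dollar, CPU: {cpu_gflops / cpu_price:.3f}")
print(f"GFLOPS per dollar, GPU: {gpu_gflops / gpu_price:.3f}")

# Observed render times from the V-Ray test above:
cpu_seconds = 11 * 60 + 32   # 11 min 32 sec
gpu_seconds = 1 * 60 + 25    # 1 min 25 sec
print(f"Observed render speedup: {cpu_seconds / gpu_seconds:.1f}x")
```

The observed ~8x speedup is in the same ballpark as the ~9x raw-FLOPS ratio, though raw FLOPS alone rarely predict renderer performance.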

« Last Edit: 2015-06-19, 14:44:53 by dfcorona »

2015-06-19, 18:40:31
Reply #17

DanGrover

  • Active Users
  • **
  • Posts: 21
    • View Profile
I think you've done a great job with Corona, Ondra. I am not trying to take anything away from you, but look at the present and future of GPU vs CPU. I cannot see not implementing GPU like everyone else. What will CPUs be like in 2016 - maybe a 10% increase in speed if we are lucky? The Pascal video card in 2016 is supposed to be 10 TIMES faster than a Titan X.

I don't mean to sound like a meany, but do you really think all this has passed Ondra by? I mean, I assume he has a lot more knowledge about the under-the-hood specifics of render engines than any of us, and he's said he's constantly observing changes in the GPU industry (so it's not a blanket "no"). And if there are engines out there that offer massive speed increases - as you seem to suggest there are - then why write a new render engine? Apparently the market is already providing you with one, no? If V-Ray RT is 10x faster than the CPU version (and I love Corona, but it's obviously not 10x faster than V-Ray generally) then... don't you already have your wish granted?

2015-06-19, 19:42:30
Reply #18

agentdark45

  • Active Users
  • **
  • Posts: 579
    • View Profile

Real-world test scene I've tried, Titan X vs 3930K overclocked:

V-Ray CPU: 11 min 32 sec - without DOF

V-Ray GPU: 1 min 25 sec - with DOF

Holy crap, is this true?? A ten-times render speed increase is not something to ignore! Can you post up some images please? This is making me reconsider V-Ray...

I don't mean to sound like a meany, but do you really think all this has passed Ondra by? I mean, I assume he has a lot more knowledge about the under-the-hood specifics of render engines than any of us, and he's said he's constantly observing changes in the GPU industry (so it's not a blanket "no"). And if there are engines out there that offer massive speed increases - as you seem to suggest there are - then why write a new render engine? Apparently the market is already providing you with one, no? If V-Ray RT is 10x faster than the CPU version (and I love Corona, but it's obviously not 10x faster than V-Ray generally) then... don't you already have your wish granted?

I think this is a moot point. We are all in search of the best overall rendering software; why not try to make Corona as good as it can possibly be? Most of us are here and have decided to part with our money because we find Corona better / more desirable than other rendering software as a whole. However, the speed of a renderer is definitely a HUGE part of what makes it "good". Are you saying you wouldn't care if Corona could be 10x as fast as it is currently? 10 hours vs 1 hour - there is simply no arguing with that. Remember the early Maxwell days - quality, but day-long renders! This was a big turn-off for most people.
Vray who?

2015-06-19, 20:30:05
Reply #19

burnin

  • Active Users
  • **
  • Posts: 1532
    • View Profile
It would be nice to go the hybrid way with what you already have plus OpenCL... it's coming along nicely. Tests show it working in Lux, Indigo & Thea. Also, V-Ray started with OpenCL but switched to CUDA later on... hmmhm.
Have you seen the new AMD Fury? http://www.forbes.com/sites/jasonevangelho/2015/06/18/amd-radeon-fury-x-benchmarks-full-specs-new-fiji-graphics-card-beats-nvidias-980-ti/
It is getting interesting...

2015-06-19, 21:05:56
Reply #20

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
I think you've done a great job with Corona, Ondra. I am not trying to take anything away from you, but look at the present and future of GPU vs CPU. I cannot see not implementing GPU like everyone else. What will CPUs be like in 2016 - maybe a 10% increase in speed if we are lucky? The Pascal video card in 2016 is supposed to be 10 TIMES faster than a Titan X.

I don't mean to sound like a meany, but do you really think all this has passed Ondra by? I mean, I assume he has a lot more knowledge about the under-the-hood specifics of render engines than any of us, and he's said he's constantly observing changes in the GPU industry (so it's not a blanket "no"). And if there are engines out there that offer massive speed increases - as you seem to suggest there are - then why write a new render engine? Apparently the market is already providing you with one, no? If V-Ray RT is 10x faster than the CPU version (and I love Corona, but it's obviously not 10x faster than V-Ray generally) then... don't you already have your wish granted?

Here is the answer to your question: Corona is brand new, built from the ground up with the latest and greatest. Look at the speed and quality of its GI engine, plus its simplicity and shader system; even though it takes longer to clean up a final render, it's still very fast for what it is. Imagine all of that running on the power of GPU, or assisted by the GPU. I believe Ondra has built a fantastic CPU renderer, and I would love to see it be better. That's all... and why not?

2015-06-20, 15:59:18
Reply #21

cecofuli

  • Active Users
  • **
  • Posts: 1577
    • View Profile
    • www.francescolegrenzi.com
V-Ray CPU vs V-Ray GPU:

Test 1: 19x
Test 2: 23x
Test 3: 10x
Test 4: 11x
Test 5: 19x
Test 6: 17x

I'm sure that, sooner or later, Corona will be CPU+GPU (in some way).
Right now, it's better to concentrate on adding new features to the CPU version.
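
A single "average" of ratios like these is best taken as a geometric mean, which keeps one outlier test from dominating. A minimal sketch using the six multipliers listed above:

```python
import math

# Speedup multipliers from the six tests listed above (V-Ray CPU vs GPU).
speedups = [19, 23, 10, 11, 19, 17]

# Geometric mean: the standard way to average benchmark ratios.
geo_mean = math.exp(sum(math.log(s) for s in speedups) / len(speedups))
print(f"Geometric mean speedup: {geo_mean:.1f}x")               # ~15.8x
print(f"Arithmetic mean: {sum(speedups) / len(speedups):.1f}x")  # 16.5x
```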

2015-06-20, 18:40:40
Reply #22

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
We are all in search of the best overall rendering software; why not try to make Corona as good as it can possibly be?

This is actually what we are trying to do. But hey, I have only been programming renderers for 6 years, and those green-and-black marketing slides from hardware companies have to be at least a bit correct, right?
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2015-06-21, 01:21:30
Reply #23

bnzs

  • Active Users
  • **
  • Posts: 16
    • View Profile
To throw a little more fuel on the fire:
The CPU situation is sad. AMD hasn't given us a single good CPU in the last 3 years. Intel made good jumps with the 2600K, then the 4770K (or the 3930K for some), and now the 5820K - and that, in short, is 5 years. And they don't give us the possibility of using several CPUs on one motherboard: even Xeons (2 CPUs per board being the standard) are far more expensive, so it is easier to use DR. But with DR, for each extra CPU we must have another motherboard, another 32 GB of RAM, another SSD+HDD, another PSU.
BUT 4 very fast GPUs, for example, need only two powerful PSUs, and that's it.

A good example:
according to http://render.otoy.com/octanebench/results.php?sort_by=avg&singleGPU=1, 8x GTX 580
= 4x 980 Ti, and those can easily sit in one case in terms of PSU (2x 1000 W) and motherboard (4 PCIe slots is easy), and the 980 Ti has 6 GB of VRAM. And again - look at the speed in the Octane and V-Ray videos.
I just hope that Ondra is secretly developing Corona on GPU. Even Maxwell
is thinking about GPU :)

And some more videos as a vote for GPU+Corona (I hope Ondra has already seen them all) - render time per frame is in the descriptions.
And if they give users even 75% of what they promise here
http://home.otoy.com/otoy-unveils-octanerender-3-worlds-best-gpu-renderer/ it's strange that Ondra isn't interested in GPU, and again, I just hope that Ondra is secretly developing Corona on GPU.

2015-06-21, 02:00:35
Reply #24

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
We are all in search of the best overall rendering software; why not try to make Corona as good as it can possibly be?

This is actually what we are trying to do. But hey, I have only been programming renderers for 6 years, and those green-and-black marketing slides from hardware companies have to be at least a bit correct, right?

Ondra, why are you getting defensive? You are taking this the wrong way; we think you're a brilliant programmer for what you have done with Corona. Why do you keep bringing up marketing slides or papers on the subject to dissuade us, when we are telling you, with real-world tests, that the GPU is much faster - from interactively setting up a scene through to final rendering? In every scene I have tested on both CPU and GPU, there is no question that GPU is several times faster. What will happen to this great renderer you created when Pascal comes out next year and all the other renderers have full GPU support for all features? No matter how great Corona is, people will move on if render times go from 4 hours to just a few minutes. And that's before even considering cost: GPUs are cheap compared to equivalent CPUs.

Here is what is great about GPU cost:

1 Titan X is several times faster than the best CPU. Now, for just $999 more, you can add another Titan X, cutting render times in half again. And we all know most systems will house 4 GPUs; with PCI risers you can fit up to 8.

Compare that to the cost of one system that can barely compete with one Titan X, then multiply by the cost of each new complete system you have to buy.

2015-06-21, 08:12:35
Reply #25

fobus

  • Active Users
  • **
  • Posts: 388
    • View Profile
We've been using Corona in production for 2+ years, and in all that time we've had no doubt it was the right choice. Render times were a bit higher than V-Ray's, but the simplicity of using Corona completely outweighs that. But now we have a big project with a huge number of images to render at high resolution, and render times have become a bottleneck in the pipeline. From our tests, Corona and even IrMap-based V-Ray couldn't cope with it. But our tests revealed a solution in V-Ray RT GPU, as it is about 3x quicker for the same money (42x i7-4790K is roughly the same speed as 8x GTX 980 Ti in two PCs). And it's much more compact and energy-efficient too (40+ PCs versus only 2).

Of course, completely new code is needed for GPU - new shaders and so on. I know that even all the standard maps have to be completely redone to render on GPU. But it has been done in V-Ray, and partially in Octane. GPU is really fast and has a bright future, I think. Even one or two years ago we couldn't imagine that a graphics card under $1000 could have 12 GB of memory. Since the Titan X, and with the introduction of Pascal with up to 32 GB of memory, it will be possible to do really big projects rendering on GPU without a severe memory limitation (which was the main reason to avoid GPUs).

2015-06-21, 14:02:50
Reply #26

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website

Real-world test scene I've tried, Titan X vs 3930K overclocked:

V-Ray CPU: 11 min 32 sec - without DOF

V-Ray GPU: 1 min 25 sec - with DOF

Holy crap, is this true?? A ten-times render speed increase is not something to ignore! Can you post up some images please? This is making me reconsider V-Ray...


I would like to see the comparison, because the old one cecofuli posted is deceiving. It's not V-Ray vs V-Ray RT GPU, but V-Ray RT (CPU) vs V-Ray RT (GPU), where the engine is pure BF/BF in both cases.
RT only recently received the LC algorithm as well, making even V-Ray RT (CPU) fast, but it's not as fast as the regular V-Ray algorithms.

But given that I now have a Titan X as well, I can give it a try.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2015-06-21, 20:04:37
Reply #27

fobus

  • Active Users
  • **
  • Posts: 388
    • View Profile
Just a small test to see GI speed. Lighting from sun and sky only. No portals.

V-Ray RT GPU (GTX 580):
Brute force + brute force = 32 min
Brute force + light cache = 10 min

Corona 1.0 (i7-3930K @ 4.1 GHz):
PT+UHDC = 27 min

As you can see, V-Ray RT GPU BF+LC is already clean enough at 10 minutes.
Of course it's not a complex test, but it's one data point.

2015-06-21, 20:13:37
Reply #28

fobus

  • Active Users
  • **
  • Posts: 388
    • View Profile
One more test, on a scene from the V-Ray forum converted to Corona correctly. (Of course you can see some differences, but I do not think they have a huge impact on overall speed.)

This test was made to find out how much time it would take to render the same kind of scene we need for our project. We see that a GTX 580 in V-Ray RT GPU did it in a third of the time needed to render the same scene in Corona on an i7-3930K and an i7-2600K.

Render times are:
V-Ray RT - 12 min
Corona - 12 and 29 min

2015-06-21, 20:19:01
Reply #29

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
Fobus, thanks for the comparisons. Can you post the teapots scene? I have the same processor as you, but I have a Titan X, which is much faster than a 580. I'll run it and post my times for comparison.

2015-06-21, 21:06:24
Reply #30

fobus

  • Active Users
  • **
  • Posts: 388
    • View Profile
Here are the scenes, done in 3ds Max 2015. I haven't got V-Ray, so the demo version was used, as you can see from the pictures. So default settings are saved, I think. But I made no changes between opening and rendering, so you can simply open it and render with the saved default settings (light cache as secondary GI is saved, I think).

Update:
I had to check files to upload right versions.

Update 2:
Uploaded right versions of max scenes. Uploaded VrayRT image just rendered in less than 5 min at 2xGTX580 from VrayRT scene.

Update 3:
Uploaded image just rendered from attached scene in Corona 1.0 on i7-3930k@4.1GHz in 15 min
« Last Edit: 2015-06-21, 21:34:51 by fobus »

2015-06-21, 21:46:12
Reply #31

daniel.reutersward

  • Active Users
  • **
  • Posts: 310
    • View Profile
I tried your scenes just for fun.. :)

System:
2x Xeon E5-2697 v3
64gb ram
1 Geforce GTX 980

With V-Ray I set the Max paths/pixel to the same amount you had: 20007
With Corona I set the same amount of passes you had: 3962

With my system the V-Ray RT GPU version was not that much faster:
Corona took 6 min 58 s, and V-Ray RT GPU took 6 min 14 s.

2015-06-22, 00:10:08
Reply #32

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
This is a great comparison - and when I say great, I mean not for V-Ray. Corona has a more efficient and better GI system; it gets a great time here, and this is the type of test Corona will excel at most. But remember, we are comparing Corona CPU to V-Ray GPU - two totally different renderers. Now just imagine what the time would be if it were Corona GPU.

1st image) First is the straight V-Ray comparison; you had two options checked that slow down the final render and are only good for fast interactive previews. - 4 min 32 sec

2nd image) This is with those incorrect settings turned off - 4 min 6 sec

3rd image) This is with just a little 2-second tweak - 3 min 29 sec

1x GTX Titan X - now imagine just adding one more card.

If anyone has a larger, more detailed scene to try, that would be great.
« Last Edit: 2015-06-22, 00:14:23 by dfcorona »

2015-06-22, 05:33:02
Reply #33

fobus

  • Active Users
  • **
  • Posts: 388
    • View Profile
System:
2x Xeon E5-2697 v3
64gb ram
1 Geforce GTX 980

With my system the V-Ray RT GPU version was not that much faster.

Look at the config and you'll realize that you're comparing a $500 GPU with $5000 CPUs, and you can easily set up 4x 980 Ti in one PC, increasing GPU speed nearly 5x over your GTX 980, for just $3000. So it is really possible to reach the speed of 10x Xeon E5-2697 v3 in just one PC, cheaper than 2x Xeon E5-2697 v3. Of course, it would be with only 6 GB of GPU RAM, but we are talking about the future, and in the near future there will be much more RAM on GPUs (8 GB on mid-range ATI cards is on its way - http://www.techpowerup.com/reviews/MSI/R9_390X_Gaming/ - and, as was written before, Nvidia says the Pascal GPU supports up to 32 GB of RAM).

2015-06-22, 05:47:54
Reply #34

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
System:
2x Xeon E5-2697 v3
64gb ram
1 Geforce GTX 980

With my system the V-Ray RT GPU version was not that much faster.

Look at the config and you'll realize that you're comparing a $500 GPU with $5000 CPUs, and you can easily set up 4x 980 Ti in one PC, increasing GPU speed nearly 5x over your GTX 980, for just $3000. So it is really possible to reach the speed of 10x Xeon E5-2697 v3 in just one PC, cheaper than 2x Xeon E5-2697 v3. Of course, it would be with only 6 GB of GPU RAM, but we are talking about the future, and in the near future there will be much more RAM on GPUs (8 GB on mid-range ATI cards is on its way - http://www.techpowerup.com/reviews/MSI/R9_390X_Gaming/ - and, as was written before, Nvidia says the Pascal GPU supports up to 32 GB of RAM).

You have to remember this is also a scene that really favors Corona's superior GI engine. Take more well-lit scenes, or even the everyday scenes I work on, and GPU renders much faster. A scene I'm working on for a client right now is a kitchen scene; just for fun I tried both CPU and GPU. CPU was 12 min 35 sec, GPU was 2 min 9 sec, and there was no comparison - the CPU result was much noisier, so it would have taken even longer to clean up.

2015-06-22, 06:27:05
Reply #35

fobus

  • Active Users
  • **
  • Posts: 388
    • View Profile
If anyone has a larger, more detailed scene to try, that would be great.

I've got a scene done in Corona Alpha 7.1, optimized a bit for 1.0, plus the same scene converted to V-Ray. But I don't have V-Ray 3.2 to render it (the demo has so many limitations that it makes testing impossible).

Corona
https://cloud.mail.ru/public/443AVAermhxe/CONF_Test_Corona_01.rar

VRay
https://cloud.mail.ru/public/2GspyBg1jPb9/CONF_Test_Vray_04.rar

2015-06-22, 08:33:29
Reply #36

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
I would like to see Juraj join this conversation about GPU/CPU :)
He usually has some facts, in one direction or the other, that can lead to some interesting data in this regard.

Cheers!

2015-06-22, 09:38:46
Reply #37

fobus

  • Active Users
  • **
  • Posts: 388
    • View Profile
GPUs have been growing in compute power much more quickly than CPUs over the last few years, and they scale much more efficiently. The two major troubles I see at this moment: the first, for us users, is the small amount of RAM available for rendering huge scenes; the second is for developers - programming for GPUs is much more complicated, plus all the 3ds Max maps (at least the most used) need to be completely redone to be GPU-compatible. The first one will be gone in 1-2 years, as we can see RAM amounts growing fast. So I hope the second one will not be a barrier to getting much more compute power for much less money.

2015-06-22, 10:50:44
Reply #38

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
I would like to see Juraj join this conversation about GPU/CPU :)
He usually has some facts, in one direction or the other, that can lead to some interesting data in this regard.

Cheers!

Do I smell irony :- ) ?

I honestly think GPU is also the future - I was never of the opposite opinion - but I was always tired of claims that it's 10-100x faster, when it obviously never was (I could tell, given I had Octane). And of all the limits GPU engines had (it's much better today, but it took Octane 5 years, and V-Ray RT GPU is still in puberty - better than infancy, though).

Now, I honestly don't care when and how it gets implemented, given that the speed-up in actual scenes can be around 3-4x and my render times would still be a few hours (as I know they are for the few GPU studios out there, like the DeltaTracing guys).
I would still need render farms like Rebus, as I do now, because that gives me 5-minute renders for a few bucks :- ) I surely ain't buying a quad Titan X for everyone in our office when even that won't suffice, just like dual Xeons don't suffice when I need to finish five 8k finals in a single day.
For now my Titan X is purely an Unreal Engine beast, but I may give the competition (Octane/Redshift/etc.) a brief go again out of interest when time permits, to see where it all is now.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2015-06-22, 11:20:36
Reply #39

fobus

  • Active Users
  • **
  • Posts: 388
    • View Profile
A little story :)

As I noted before, we are facing a big problem rendering an upcoming big project. It contains 2 minutes of exterior shots and 3000+ spherical panoramas of interiors. Our little render farm is capable of rendering all the exterior shots in time, but it will be 100% loaded. When we started to count the time needed to render 3000+ images at 5k resolution, we realized that with our beloved Corona it would take approximately 120 days (4 times more than we have). Of course it is possible to cut render times to a quarter by decreasing fidelity, but this is not our way.

So we started researching how to cut render times another way. V-Ray with brute force + light cache seemed to be a real solution, until we did some tests and noticed that it is about 1.5x slower than Corona PT+UHDC. IrMap+LC was roughly the same speed as Corona, but with poor quality. So V-Ray RT was the last option. And it rocks: even on an old GTX 580 it was much faster than on any of our render nodes or workstations. Of course it lacks some features of CPU V-Ray, but the render time was great.

Since our farm will be busy all the time, we started calculating the options: buying new regular PCs for Corona-based rendering, buying GPUs, or rendering on an external render farm. Rough numbers: Rebus render farm - $20,000 (400 hrs on 100 PCs); regular PCs - $45,000 (30 days on 35 PCs!!!); and $9,000 for GPU PCs (2 PCs with 4x 980 Ti each). As our little farm contains only 16 PCs, adding 35 PCs would kill our power cabling and air conditioning, and administering that number of PCs is terrible too. 400 hours of rendering versus 30 days is delicious, but it is $20,000 wasted.
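
To make that trade-off explicit, here is a rough cost/time comparison script (a sketch; the dollar figures and durations are the ones from this post, and hardware cost is treated as a one-off with no electricity or admin overhead):

```python
# Cost vs wall-clock time for the three options described above.
options = [
    # (name, cost in USD, wall-clock days)
    ("Rebus render farm (100 PCs)",  20_000, 400 / 24),  # 400 h, ~16.7 days
    ("35 new CPU nodes",             45_000, 30),
    ("2 GPU nodes (4x 980 Ti each)",  9_000, 30),        # per the tests above
]

for name, cost, days in options:
    print(f"{name:30s} ${cost:>6,}  ~{days:4.1f} days")
```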

Cheers

2015-06-22, 11:25:27
Reply #40

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
Now just imagine what the time would be if it were Corona GPU.

Corona GPU would not be better than Corona CPU. It is that simple. It may be faster in synthetic tests, but not in the real world, where you have huge scenes, vastly different settings, and limited development resources.
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2015-06-22, 11:31:31
Reply #41

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
A little story :)

As I noted before, we are facing a big problem rendering an upcoming big project. It contains 2 minutes of exterior shots and 3000+ spherical panoramas of interiors. Our little render farm is capable of rendering all the exterior shots in time, but it will be 100% loaded. When we started to count the time needed to render 3000+ images at 5k resolution, we realized that with our beloved Corona it would take approximately 120 days (4 times more than we have). Of course it is possible to cut render times to a quarter by decreasing fidelity, but this is not our way. So we started researching how to cut render times another way. V-Ray with brute force + light cache seemed to be a real solution, until we did some tests and noticed that it is about 1.5x slower than Corona PT+UHDC. IrMap+LC was roughly the same speed as Corona, but with poor quality. So V-Ray RT was the last option. And it rocks: even on an old GTX 580 it was much faster than on any of our render nodes or workstations. Of course it lacks some features of CPU V-Ray, but the render time was great. Since our farm will be busy all the time, we started calculating the options: buying new regular PCs for Corona-based rendering, buying GPUs, or rendering on an external render farm. Rough numbers: Rebus render farm - $20,000 (400 hrs on 100 PCs); regular PCs - $45,000 (30 days on 35 PCs!!!); and $9,000 for GPU PCs (2 PCs with 4x 980 Ti each). As our little farm contains only 16 PCs, adding 35 PCs would kill our power cabling and air conditioning, and administering that number of PCs is terrible too. 400 hours of rendering versus 30 days is delicious, but it is $20,000 wasted.

Cheers


Paying for a render farm is only wasted money if you think of it as an investment. It isn't; it's something you should tell your client about and bill them for directly. I've done so for the past year and I couldn't be happier: I no longer maintain a farm, I repurposed the dual Xeons into 'quick-preview' workstations, and everything goes to the cloud.
400 hours seems excessive, as they can give you far more than 100 PCs simultaneously. It sounds like a mistake, because 400 hours is about 17 days - hardly much better than 30.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2015-06-22, 11:33:35
Reply #42

fobus

  • Active Users
  • **
  • Posts: 388
    • View Profile
Now just imagine what the time would be if it were Corona GPU.

Corona GPU would not be better than Corona CPU. It is that simple. It may be faster in synthetic tests, but not in the real world, where you have huge scenes, vastly different settings, and limited development resources.

But GPU processing power grows much faster, and scaling is much easier than with CPUs. So Corona GPU could climb to new heights faster just by riding on Nvidia's and ATI's progress.

400 hours seems excessive, as they can give you far more than 100 PCs simultaneously. It sounds like a mistake, because 400 hours is about 17 days - hardly much better than 30.

Of course they can, but neither we nor the client can pay that much more, and we have 30 days.

2015-06-22, 11:49:43
Reply #43

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
But GPU processing power grows much faster, and scaling is much easier than with CPUs. So Corona GPU could climb to new heights faster just by riding on Nvidia's and ATI's progress.

THIS - "you don't have to put in effort, you can just wait for GPUs to become faster" - is exactly why we have so many useless GPU renderers. It is just not true. You have to put in effort - much more than on CPU. You will have to deal with all the limitations and problems. I would rather develop a new adaptive sampler to speed things up on CPU than rewrite half of my application every time a new GPU architecture comes out.
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2015-06-22, 12:12:56
Reply #44

fobus

  • Active Users
  • **
  • Posts: 388
    • View Profile
An adaptive sampler is really what we need to reach clean DoF and motion blur. Even V-Ray RT GPU can't deliver a clean picture in my AA test scene.
But overall speed on detail-rich images won't be affected as much as by raw GPU speed, since adaptivity can't accelerate the calculations across the board (I hope I'm wrong).

From a former Octane developer: adaptive sampling does nothing for detailed scenes with grass, hair, or displacement. It works well only on clean areas like painted walls, ceilings and so on.
« Last Edit: 2015-06-22, 16:26:43 by fobus »

2015-06-22, 15:40:14
Reply #45

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile

Do I smell irony :- ) ?


Not at all, really - I meant what I said; in the end, you have always had very good facts in this regard.

Cheers!

2015-06-22, 19:59:00
Reply #46

DanGrover

  • Active Users
  • **
  • Posts: 21
    • View Profile

Here is the answer to your question: Corona is brand new, built from the ground up with the latest and greatest. Look at the speed and quality of its GI engine, plus its simplicity and shader system; even though it takes longer to clean up a final render, it's still very fast for what it is. Imagine all of that running on the power of GPU, or assisted by the GPU. I believe Ondra has built a fantastic CPU renderer, and I would love to see it be better. That's all... and why not?

I think this is a moot point. We are all in search of the best overall rendering software; why not try to make Corona as good as it can possibly be? Most of us are here and have decided to part with our money because we find Corona better / more desirable than other rendering software as a whole. However, the speed of a renderer is definitely a HUGE part of what makes it "good". Are you saying you wouldn't care if Corona could be 10x as fast as it is currently? 10 hours vs 1 hour - there is simply no arguing with that. Remember the early Maxwell days - quality, but day-long renders! This was a big turn-off for most people.

Because there are limitations to it. It's not simply a case of "Look, 10x quicker, for free!" My point re: Ondra's expertise relative to ours was meant to demonstrate that: you both say you want Corona to be as good as it can be whilst ignoring the guy who knows more about writing render engines than any of us, when he says that what he thinks is best for Corona is not to use GPU rendering. You (dfcorona) then accuse him of being defensive for saying as much. Obviously you're welcome to your opinion, but I - and the company I worked for - moved to Corona because the, for want of a better term, vision that Ondra and the team have for Corona matches my own desires in a render engine. I'd always like it faster, but if I want fast I'll use Quicksilver; if I want uber-high quality I'll use some unbiased 48-hours-per-frame renderer. Everything else fits somewhere in the middle. Speed isn't the main thing I want from a render engine. And if you look at standard V-Ray vs V-Ray RT, and Mental Ray vs iRay, and look at the differences, you'll get some idea of the answer to your "and why not?" question.

2015-06-22, 20:20:42
Reply #47

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile

Here is the answer to your question: Corona is brand new, built from the ground up with the latest and greatest. Look at the speed and quality of its GI engine, plus its simplicity and shader system; even though it takes longer to clean up a final render, it's still very fast for what it is. Imagine all of that running on the power of GPU, or assisted by the GPU. I believe Ondra has built a fantastic CPU renderer, and I would love to see it be better. That's all... and why not?

I think this is a moot point. We are all in search of the best overall rendering software; why not try to make Corona as good as it can possibly be? Most of us are here and have decided to part with our money because we find Corona better / more desirable than other rendering software as a whole. However, the speed of a renderer is definitely a HUGE part of what makes it "good". Are you saying you wouldn't care if Corona could be 10x as fast as it is currently? 10 hours vs 1 hour - there is simply no arguing with that. Remember the early Maxwell days - quality, but day-long renders! This was a big turn-off for most people.

Because there are limitations to it. It's not simply a case of "Look, 10x quicker, for free!" My point re: Ondra's expertise relative to ours was meant to demonstrate that: you both say you want Corona to be as good as it can be whilst ignoring the guy who knows more about writing render engines than any of us, when he says that what he thinks is best for Corona is not to use GPU rendering. You (dfcorona) then accuse him of being defensive for saying as much. Obviously you're welcome to your opinion, but I - and the company I worked for - moved to Corona because the, for want of a better term, vision that Ondra and the team have for Corona matches my own desires in a render engine. I'd always like it faster, but if I want fast I'll use Quicksilver; if I want uber-high quality I'll use some unbiased 48-hours-per-frame renderer. Everything else fits somewhere in the middle. Speed isn't the main thing I want from a render engine. And if you look at standard V-Ray vs V-Ray RT, and Mental Ray vs iRay, and look at the differences, you'll get some idea of the answer to your "and why not?" question.

I do accuse him - read what he wrote. He keeps telling us not to believe the marketing BS, and that "it may be faster in synthetic tests, but not in the real world, where you have huge scenes, vastly different settings, and limited development resources." What synthetic tests? We are using it on real-world projects and reporting our findings. Ondra is entitled to do what he wants - it's his render engine, and I think he did a fantastic job with it; he is a brilliant programmer. I would love to be able to use Corona, but when it takes several times longer to render compared to its rivals at similar quality... well. And I have my answer on V-Ray vs V-Ray RT. The differences? Well, there are some features not supported yet... (YET) - just like when CPU renderers were first being built. But GPU is at a point now where you can complete full projects, and it supports most features. You have to look ahead and not behind. But maybe I'm totally wrong; maybe Intel will decide next year to release a processor 10x faster... we can all hope.

2015-06-22, 21:40:17
Reply #48

fobus

  • Active Users
  • **
  • Posts: 388
    • View Profile
maybe Intel will decide next year to release a processor 10x faster... we can all hope.

Unfortunately, CPU progress seems to be on pause now. The next-gen Intel CPU - Skylake - is only roughly 5% faster than the latest Devil's Canyon (http://wccftech.com/intel-skylake-s-core-i7-6700-k-benchmarks/)

2015-06-22, 21:43:44
Reply #49

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
maybe Intel will decide next year to release a processor 10x faster... we can all hope.

Unfortunately, CPU progress seems to be on pause now. The next-gen Intel CPU - Skylake - is only roughly 5% faster than the latest Devil's Canyon (http://wccftech.com/intel-skylake-s-core-i7-6700-k-benchmarks/)

Exactly my point.

2015-06-23, 09:56:45
Reply #50

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
I disagree with this latest point.

If you compare a 2600K vs a 5820K, you have a 200% improvement in rendering times, and I think those two occupy a similar position within their CPU families.

But it was just like with the 3xxxK/4xxxK CPUs: the improvement vs a 2600K was not so awesome at all. There were other things, like power consumption, but in raw rendering power the difference was not too big.

So I don't think CPU evolution has halted; it just goes at a different pace.

If you go for GPU renderers, are you going to renew your GPUs every year?

IMHO we need MORE speed in Corona. I don't know how - Ondra would know - but it's a fact that we need to speed things up without losing quality.

But GPUs? They're still too expensive. Did you calculate how much each PC with four 980s consumes at 100% load, 24/7?

Cheers.

2015-06-23, 10:39:29
Reply #51

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
IMHO we need MORE speed in Corona. I don't know how - Ondra would know - but it's a fact that we need to speed things up without losing quality.

We were not previously focusing on speed at all, because there were more important things to do. Remember: there is no adaptivity yet whatsoever. Try rendering in V-Ray with adaptivity disabled/enabled and compare the difference. We are hoping to speed up Corona this way one day.
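
For anyone wondering what "adaptivity" buys: instead of giving every pixel the same number of samples, an adaptive sampler estimates per-pixel noise and spends extra samples only where the variance is still high. A toy illustration of the general idea (not Corona's or V-Ray's actual sampler):

```python
import random

def render_adaptive(width, height, sample_pixel,
                    base_spp=16, max_spp=256, noise_threshold=0.01):
    """Toy adaptive sampler: keep sampling a pixel until the standard
    error of its mean drops below the threshold or the budget runs out."""
    image = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            samples = [sample_pixel(x, y) for _ in range(base_spp)]
            while len(samples) < max_spp:
                mean = sum(samples) / len(samples)
                var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
                if (var / len(samples)) ** 0.5 < noise_threshold:
                    break  # pixel is clean enough; stop early
                samples.append(sample_pixel(x, y))
            image[y][x] = sum(samples) / len(samples)
    return image

# Stand-in for a path tracer: a noisy constant-radiance pixel.
noisy = lambda x, y: 0.5 + random.gauss(0.0, 0.2)
img = render_adaptive(8, 8, noisy)
```

Smooth regions converge early and get few samples, while noisy regions (DoF, dark corners, glossy reflections) keep their full budget, which is where the real-world speedup comes from.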
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2015-06-23, 10:50:11
Reply #52

maru

  • Corona Team
  • Active Users
  • ****
  • Posts: 12711
  • Marcin
    • View Profile
there is no adaptivity yet whatsoever
buckets?
Marcin Miodek | chaos-corona.com
3D Support Team Lead - Corona | contact us

2015-06-23, 11:26:42
Reply #53

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
there is no adaptivity yet whatsoever
buckets?

I wrote that one in one night just to get 2 extra points in my Master's rendering course; that does not count ;)
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2015-06-23, 11:27:24
Reply #54

fobus

  • Active Users
  • **
  • Posts: 388
    • View Profile
But GPUs? They're still too expensive. Did you calculate how much each PC with four 980s consumes at 100% load, 24/7?

I wrote up the calculations we did a bit earlier ($45,000 for 35 CPU nodes vs $9,000 for 2 GPU nodes). Each 4790K CPU draws about 120 W under heavy load, and each 980 Ti draws about 300 W under heavy load: 120 W x 35 = 4200 W vs 300 W x 8 = 2400 W (and this is without the rest of the PC infrastructure, where 35 PCs consume much more than 2 PCs). So CPU is the more expensive option either way.
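
Turning those wattages into a monthly electricity bill (a sketch; the per-unit wattages are the post's estimates, and the 0.20 EUR/kWh rate is an assumption):

```python
# Energy cost of rendering 24/7 for 30 days, from the figures above.
KWH_PRICE_EUR = 0.20   # assumed electricity rate
HOURS = 24 * 30

farms = {
    "35x i7-4790K (CPUs only)": 35 * 120,  # watts
    "8x GTX 980 Ti (GPUs only)": 8 * 300,  # watts
}

for name, watts in farms.items():
    kwh = watts / 1000 * HOURS
    print(f"{name}: {watts} W -> {kwh:,.0f} kWh -> {kwh * KWH_PRICE_EUR:,.2f} EUR")
```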

2015-06-23, 11:31:39
Reply #55

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
The bucket adaptivity is pretty basic; it is there, but there is not a BIG difference versus progressive. I prefer buckets over progressive anyway, but that's a personal choice.

I know the main focus before was not speed but quality and features, and that is great. I'm not saying we need more speed because V-Ray is faster or not - I don't care about V-Ray, Mental Ray, or any GPU render engine. I'm just asking for more speed hehehe

When are we going to see the new roadmap? :)

Cheers

2015-06-23, 11:33:58
Reply #56

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
So you're saying that 8 GPUs give you the same speed as 35 CPUs, right?

That is cool, but in practice I'm not so sure about it. Did you take into account the GPU limits, like polygon counts, texture amounts, etc.?

Anyway, if your theory holds, that is cool :) Keep us informed on how it goes, and if possible post your comparison renders (V-Ray / V-Ray RT LC / Corona).

Cheers!

2015-06-23, 12:15:01
Reply #57

fobus

  • Active Users
  • **
  • Posts: 388
    • View Profile
So you're saying that 8 GPUs give you the same speed as 35 CPUs, right?

That is cool, but in practice I'm not so sure about it. Did you take into account the GPU limits, like polygon counts, texture amounts, etc.?

Exactly. In our tests, 8x GTX 980 Ti with V-Ray RT gave us the same speed as 35x 4790K with Corona.

About limits... yep, there are limits - not in polygon or texture counts as such, but in RAM amount. Polygons are stored in GPU RAM, so it's really important to have a lot of it, but textures can sit in system RAM.

P.S.
I gave numbers for our particular project, consisting of 3000+ spherical panoramas of simple interiors, so they can vary for other types of scenes.
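
Since polygon storage is the binding constraint, a rough VRAM budget can be estimated per triangle (a sketch; the ~100 bytes/triangle figure is a ballpark assumption - the real cost varies by engine, vertex attributes, and acceleration-structure overhead):

```python
# Back-of-the-envelope VRAM estimate for geometry alone.
BYTES_PER_TRIANGLE = 100  # assumed: positions, normals, UVs, BVH overhead

for tris in (5e6, 20e6, 60e6):
    gib = tris * BYTES_PER_TRIANGLE / 2**30
    print(f"{tris / 1e6:>4.0f} M triangles = ~{gib:4.1f} GiB of VRAM")
```

On that estimate, a 20-million-polygon scene already eats around 2 GiB before textures and frame buffers, which is why 6 GB cards get tight quickly.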

2015-06-23, 17:33:47
Reply #58

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
When I refer to poly count or texture limits, I mean RAM limits, of course :)

We had a project that we rendered in iRay back in the iRay 2.0 days (a bit before the release; we used a beta build for it), and we were able to do it thanks to iRay, because the required quality was pretty high and with Maxwell it made no sense. There was no other option at that time: Octane was too immature, Arion was a bit slower back then, and that was all.

We managed to finish the project (it's on our site), but we had to lower the polygon count because it could not fit in the 3 GB of the 580, so we had to lower the subdivisions, and you can notice it in some shots.

We are talking about a video of an industrial machine, nothing more on the stage, so it wasn't a complex interior scene or anything. The project we are doing right now (and some of the latest ones) could not fit in 6 or 8 GB of RAM at all, that is for sure: we have several STP models, we have several OpenSubdiv models at pretty high resolution, and the scene is around 20 million polygons, sometimes a bit more. We can't fit that on a GPU. Maybe those 2 computers with 4 GPUs each are great for that one project, and you might use them for it, but what happens when you need to work on more complex projects?

We have a pipeline. At first GPU seemed pretty great because of the speed, but in the end, on the majority of projects, 6 GB of GPU RAM is not enough :P - at least for us. On top of that, add the lack of TONS of features in GPU render engines: several AOVs and, depending on the engine, features like volumetric rendering, etc. We don't like being constrained to a subset of features. Right now in Corona we are more or less constrained in features, but not so much; we can deal with almost anything. We can't say the same about GPU render engines - I speak especially of iRay. Arion has a pretty big and good feature set, but it's aimed more at other markets, and its evolution is not as fast as I would like.

So that is my story, and why we abandoned GPU render engines (among other things). We need speed, but we also need the reliability to deliver our projects, no matter what type of project it is, and the investment for a proper GPU farm is too high, at least for us. And of course the power consumption is massive: just think of 10 computers drawing power with 2 GPUs each. The electricity bill for the month we did the industrial-machine video was around 700€... I've never received such a bill using CPU: we've been rendering 24/7 with 10 computers all month, and the bill for 2 months is going to be 170€... it's a pretty big difference.

Cheers!

2015-06-23, 18:14:03
Reply #59

fobus

  • Active Users
  • **
  • Posts: 388
    • View Profile
You're right, for sure. My particular example is based on one project that we think could be done with 6 GB of GPU RAM, so all the calculations are relative to it (2x less power consumption and 1/5th the price versus CPU-based rendering).

But we see a big difference in cost, render times, and power consumption, and we're awaiting a Corona GPU. If Ondra sees no way to port it, maybe someone else can help him, or maybe he will change his opinion in the future. We're hoping the fabulous adaptive sampler can help improve Corona's speed, but it does little for complex, detailed scenes (a simple fixed sampler is much more efficient than V-Ray's adaptive one with lots of grass, hair, detailed textures, displacement, etc.).

2015-06-23, 18:25:26
Reply #60

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
When I refer to poly count or texture limits I refer to RAM limits of course :)

We had a project that we rendered in iRay back in the iRay 2.0 times (a bit before the release, we used a beta release for it) and we were able to render that project thanks to iRay because the quality required was prety high and with Maxwell it was a non sense, there was no other option at that time, Octane was too inmature, Arion was a bit slower at that time, and that was all.

We managed to finish the project (it's in our site) but we had to lower the polycount because it could not fit in the 3Gb of the 580 so wehad to lower the subdivisions and you can notice that in some shot.

We are talking about a video of an industrial machine, nothing more on the stage, so it wasn't a complex interior scene or anything, the project we are doing right now (and some of the latest ones) could not fit in 6 or 8 Gb of ram at all, that is for sure, we have several STP models, we have several OpenSubDiv models at pretyt high res, the scene is around 20 million polygons, sometimes a bit more, we can't fit this in a GPU, and maybe those 2 computers with 4 GPU each are great for that project, and you may use them for that one project, but what happens when you need to work in more complex projects?

We have a pipeline, at first GPU seemed to be pretty great because of speed, but in the end, in the majority of projects 6Gb's of GPU ram are not enough :P at least for us, plus you have to add the lack of TONS of features in GPU render engines, several AOV's, depending on the engine different features like Volumetric rendering, etc... we don't like to be constrained to a sub set of features, right now in Corona we are more or less contrained in features, but not so much, we can deal with almost anything, we can't say the same with GPU render engines, at least I speak specially for iRay, Arion has a pretty big and good feature set, but it's more aimed towards other markets and it's evolution is not as fast as I would like.

So that is my storey and why we abandoned GPU render engines (amongst other things) we need speed but we also need reliability to realize our projects, no matter wich type of project is it, and the investment for a proper GPU farm is too high, at least for us, and of course the power consumption is massive, just thinking in having 10 computers draining power with 2 GPU's each one, the energy invoice the month we did the industrial machine video was around 700€... I've never received such invoice using CPU, we've been all month rendering 24/7 with 10 computers and the invoice for 2 months is going to be 170€ ... it's a pretty big difference.

Cheers!

I hear you on your situation; I too abandoned GPU rendering earlier on. But that was when GPU cards were less efficient, less powerful, had less VRAM, and had fewer features. Now, working with renderers like Octane (especially the upcoming 3.0, which brings volumetrics support on top of a bunch of other huge features) and V-Ray, which supports a lot of features, things have changed. Also, now that you can buy affordable video cards with 12 GB of VRAM, that's a game changer. Like I always said, GPU rendering is being developed at an enormous rate now; when each renderer started on the CPU, it didn't have many features either. GPU renderers are already getting features like using system RAM so you don't have to worry about VRAM, and next year, when Pascal comes out with NVLink, up to 32 GB of RAM, and a claimed 10x increase in speed, with the renderers supporting most if not all features... well, things are really going to get interesting. To each their own right now; it's great to have so many options, and my hat goes off to Ondra for creating a fantastic CPU renderer.
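For a rough sense of the electricity figures quoted above, here is a back-of-the-envelope sketch. Every number in it (per-machine wattage under load, the EUR/kWh tariff) is an assumption, chosen only because it roughly reproduces the 700€ and 170€ invoices mentioned, not taken from anyone's actual setup:

[code]
// Back-of-the-envelope electricity arithmetic. Every number here is an
// illustrative assumption (per-machine wattage under load, EUR/kWh tariff),
// chosen only because it roughly reproduces the invoices quoted above.
#include <cstdio>

int main() {
    const double tariff = 0.08;                  // assumed EUR per kWh

    // GPU farm: 10 machines at ~1.2 kW each (2 GPUs), one month, 24/7.
    const double gpu_kwh = 10 * 1.2 * 24 * 30;   // 8640 kWh
    // CPU farm: 10 machines at ~0.15 kW each, two months, 24/7.
    const double cpu_kwh = 10 * 0.15 * 24 * 60;  // 2160 kWh

    printf("GPU farm, 1 month : %.0f kWh -> ~%.0f EUR\n", gpu_kwh, gpu_kwh * tariff);
    printf("CPU farm, 2 months: %.0f kWh -> ~%.0f EUR\n", cpu_kwh, cpu_kwh * tariff);
    return 0;   // prints roughly 691 EUR vs 173 EUR
}
[/code]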

2015-06-23, 18:44:41
Reply #61

cecofuli

  • Active Users
  • **
  • Posts: 1577
    • View Profile
    • www.francescolegrenzi.com
I think it's always better to have the possibility to choose (GPU or CPU).
Both have their pros and cons.
Right now, the Corona developers are focused on adding the features essential for a modern rendering engine.
They are not 20 people, and they cannot, physically speaking, also develop a GPU version.
With V-Ray we had to wait almost two or three years after the first demonstration of V-Ray RT CUDA (2009?).
Yes, in the next two or three years we will have top Nvidia video cards with 32 GB of RAM.
So the RAM problem will disappear.
The main problem will be finding the time and energy to develop the GPU version alongside the CPU one.
Though Ondra says no, I bet 10 Coronas (beers, ehehe) that sooner or later it will happen. ^__^

2015-06-23, 18:46:51
Reply #62

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
What is this affordable 12 GB card that you'll be forced to replace in a year because it won't have DX12 support?

Hehe, GPU may be the future, but at least for us it's not the present. There's also being locked to a single GPU vendor: CUDA being proprietary forces us to buy Nvidia cards, so everything will evolve at the speed Nvidia wants.
The promise of Pascal is the same as the promise of the Maxwell chips... did Maxwell change so much? No. More raw power? Yes, for sure. More flexibility? I doubt it; even if in theory there is more flexibility, the only thing I hear from GPU render engine developers is the constant limitations they have to work around. Pascal? We'll see. Maxwell is not what it was supposed to be, at least to my understanding.

As I said, GPU may be the future, but it is not the present, and I think it won't be for a few years yet. We'll see; I may be wrong, of course :)

And some final questions, to hear opinions and thoughts:

- What happens if Intel starts integrating thousands of OpenCL cores into their CPUs?
- Do you think Intel's GPU integration effort is just about getting the GPU inside the CPU?
- What do you think about Intel's interest in the ARM architecture, both as a competitor and as a model for the future?
- Do you think Intel doesn't see that people are looking to GPUs for raw power instead of their CPUs?
- Why do you think Intel developed Embree and their failed computation card?

Intel has not become the giant it is by standing still while competitors gain market share. What happened to the reign of AMD64? AMD was the first to implement an x86-compatible 64-bit architecture... can you compare AMD's power today with Intel's?
I think a lot of things are coming, especially in the CPU world. CUDA is here to stay, but if OpenCL starts growing and receiving support from different vendors... we'll see...

Cheers!

2015-06-23, 18:58:42
Reply #63

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
But we do see a big difference in cost, render times, and power consumption, and we're still waiting for Corona GPU. If Ondra sees no way to port it, maybe someone else can help him, or maybe he will change his opinion in the future.

There is no such thing as "porting something to GPU". You write another program from scratch that works with the same inputs and, if you are lucky, uses roughly the same algorithms ;)
Rendering is magic. | How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)
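To make that concrete, here is a toy sketch (purely illustrative, and nothing to do with Corona's actual code): the same trivial "add an ambient term to every pixel" step, written once as a plain CPU loop and once as a CUDA kernel. Even at this size, the GPU side needs its own memory allocation, host-to-device transfers, a launch configuration, and a bounds guard; scale that up to a full path tracer with materials, textures, and adaptive sampling, and "porting" really does mean writing a second renderer.

[code]
// Toy example only -- not Corona code. The same "add an ambient term to
// every pixel" step, written once for the CPU and once for the GPU.
// Compile with: nvcc port_example.cu -o port_example
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// CPU version: a plain sequential loop over the framebuffer.
void add_ambient_cpu(float* pixels, int n, float ambient) {
    for (int i = 0; i < n; ++i)
        pixels[i] += ambient;
}

// GPU version: the loop disappears into a grid of threads, one per pixel.
__global__ void add_ambient_gpu(float* pixels, int n, float ambient) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                        // guard: the last block is padded
        pixels[i] += ambient;
}

int main() {
    const int n = 1 << 20;            // 1M "pixels"
    std::vector<float> host(n, 0.5f);

    add_ambient_cpu(host.data(), n, 0.1f);   // CPU path: one call, done

    // GPU path: allocate device memory, copy in, pick a launch shape,
    // launch, copy back, free. None of this exists in the CPU version.
    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads;
    add_ambient_gpu<<<blocks, threads>>>(dev, n, 0.1f);
    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("pixel[0] = %f\n", host[0]);      // 0.5 + 0.1 + 0.1 = 0.7
    return 0;
}
[/code]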

2015-06-23, 19:01:32
Reply #64

RobSteady

  • Active Users
  • **
  • Posts: 45
    • View Profile
To add fuel to the fire...
Just kidding, I think Corona is a nice engine and is greatly integrated into Max (you can't say that for Octane) ;)
Here's a 4K, 10-minute Octane render with 2 x 980 Ti and 1 x Titan Z.
(The 980 Ti is a nice card for anyone considering Octane.)

« Last Edit: 2015-06-23, 19:06:30 by RobSteady »

2015-06-23, 19:19:23
Reply #65

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
Octane remains to be seen; the cloud option is gaining more and more traction at Otoy. Let's see what happens to their offline render engine in the future.

The worst thing about current GPU render engines like Octane or iRay is that their parent companies are not to be trusted :P

Corona is to be trusted, at least that's what they've demonstrated so far with the pricing structure and by keeping box licenses alongside subs. That's also an added value when basing your pipeline on a piece of software: if you base your pipeline on something like Octane and suddenly they start focusing their efforts only on the cloud business model... you're going to be forced down the path they want, the same as happens with Nvidia and CUDA.

IMHO there is more than just raw power and features to think about if you are going to base your pipeline and your farm on a specific type of render engine.

Cheers.

2015-06-24, 02:47:04
Reply #66

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
What is this affordable 12 GB card that you'll be forced to replace in a year because it won't have DX12 support?

Hehe, GPU may be the future, but at least for us it's not the present. There's also being locked to a single GPU vendor: CUDA being proprietary forces us to buy Nvidia cards, so everything will evolve at the speed Nvidia wants.
The promise of Pascal is the same as the promise of the Maxwell chips... did Maxwell change so much? No. More raw power? Yes, for sure. More flexibility? I doubt it; even if in theory there is more flexibility, the only thing I hear from GPU render engine developers is the constant limitations they have to work around. Pascal? We'll see. Maxwell is not what it was supposed to be, at least to my understanding.

As I said, GPU may be the future, but it is not the present, and I think it won't be for a few years yet. We'll see; I may be wrong, of course :)

And some final questions, to hear opinions and thoughts:

- What happens if Intel starts integrating thousands of OpenCL cores into their CPUs?
- Do you think Intel's GPU integration effort is just about getting the GPU inside the CPU?
- What do you think about Intel's interest in the ARM architecture, both as a competitor and as a model for the future?
- Do you think Intel doesn't see that people are looking to GPUs for raw power instead of their CPUs?
- Why do you think Intel developed Embree and their failed computation card?

Intel has not become the giant it is by standing still while competitors gain market share. What happened to the reign of AMD64? AMD was the first to implement an x86-compatible 64-bit architecture... can you compare AMD's power today with Intel's?
I think a lot of things are coming, especially in the CPU world. CUDA is here to stay, but if OpenCL starts growing and receiving support from different vendors... we'll see...

Cheers!

There is a whole flip side to your statements. You say you'd be constrained to a GPU vendor; are you not constrained by your CPU vendor? I think you answered your own question, unless for some reason you buy AMD CPUs, and if so I can say the same for their video cards, since some render engines already support OpenCL and soon most will. I'm not sure how much knowledge you have of video cards, but my 12 GB Titan X already supports the DirectX 12 API at feature level 12.1. You're also asking what I'd do if I had to sell it for some reason. That's easy: I sell it on eBay, get most of my money back, and buy the newest card. Let's see you try that with your CPU. Did Maxwell change so much? Yes it did; it's much more efficient and powerful, and it seems that next time, with Pascal, they will focus back on much more performance. Even if Pascal is only 2x faster than Maxwell instead of the 10x they claim, that's a huge win. I would like to see Intel do something like that. Who knows what the future brings; I know gaming is driving video card performance through the roof, which is good for us, while Intel seems to make only minor speed increases. I have a 6-core i7 and waited forever just for a boost of 2 more cores with the 8-core. What's next, a few years for a 10-core? Unless Intel starts getting some competition, they are just going to sail through the years with minimal updates.

2015-06-26, 18:10:57
Reply #67

steyin

  • Active Users
  • **
  • Posts: 375
  • BALLS
    • View Profile
    • Instagram Page

The worst thing about current GPU render engines like Octane or iRay is that their parent companies are not to be trusted :P



I don't know about Octane, but with Autodesk I agree. As far as I'm concerned, iRay is dead. Its online user base/forum is pretty much non-existent now compared to a year or two ago. I enjoyed it at first, but it was way too slow as an engine without forking out $$$ for a super card, plus its development was even slower. But again, look at who's holding the reins on that.

2015-06-27, 13:07:13
Reply #68

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
iRay is a scam. I guess when other renderers weren't developing fast enough to be used as a marketing showpiece by nVidia, they simply set aside some budget for a small team and developed it for a while.
Now it gets a few features per Autodesk cycle, effectively becoming abandonware. The only place it gets any use is as an outsourced core in smaller renderers like Keyshot, etc.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2015-07-09, 05:36:53
Reply #69

fobus

  • Active Users
  • **
  • Posts: 388
    • View Profile
32 GB of RAM on a video card is a reality now (https://forum.corona-renderer.com/index.php/topic,8870.0.html), so there are fewer and fewer reasons not to do these calculations on video cards.

2015-07-09, 11:16:23
Reply #70

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
Great; price and performance?

Because you know you can have 8 multithreaded CPUs in one system with 256 GB of RAM, right? A mini render farm in a single machine; the downside is the price hehehe

BTW, I didn't respond to your previous post (the one where you answered me) for lack of time, but I have it on my list; as soon as possible I'll answer the reasoning you laid out there :)

Cheers!

2015-07-09, 11:43:47
Reply #71

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
Impressive: single GPU, so it's actually 32 GB of VRAM.

The cost will be similar to a Tesla K80/Quadro M6000 I guess, or slightly more; somewhere in the range of 5,000-7,000 dollars.
Performance will likely be in the range of the 390X, which is the counterpart to the Titan X/980 Ti.

Nothing mainstream here, guys :- ) Yet.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2016-02-10, 23:50:01
Reply #72

sebastian___

  • Active Users
  • **
  • Posts: 200
    • View Profile
I read most of the replies here. Some argue that CPU rendering is more efficient and that it's not economical to buy Nvidia cards for rendering.
But the point is that most of us already have powerful cards, some even 2 or 3, which just sit idle while we render on the CPU.

I understand that GPUs can only do specialized work, and that the V-Ray way of having to choose between the "conventional" V-Ray renderer, CPU RT, and GPU RT is confusing; you have to compromise if you want the speed of V-Ray GPU.

The best solution, and probably the only acceptable one, would be if the GPU could be "added" somehow, like adding another CPU, or like adding another computer to the network.
I mean, if the GPU can be used as a general-purpose processor, and is used in audio to compute hall reverb and other music-related effects, it stands to reason that it should be able to compute at least some parts of a render, even if with very low efficiency.

I think Arion did something similar, and I also remember reading, some years ago, a mental ray paper about using the GPU to assist some parts of the rendering alongside the CPU.

2016-02-11, 01:17:00
Reply #73

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
Sebastian, you're the guy with the superb CryEngine work :- ) I remember being in awe of your stuff... are you still active in this?

Regarding current GPU raytracers, I think it's been pretty much proven by now that the pure-GPU ones (Octane and Redshift) are the best developed and fastest; the ones that took the middle route of GPU acceleration (Thea, Maxwell, iRay, etc.) don't perform very impressively in that mode.
There might well be some resurgence in their popularity over the next two years, unless they get killed off by real-time game engines. The lines are getting blurrier...
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2016-02-12, 20:10:04
Reply #74

sebastian___

  • Active Users
  • **
  • Posts: 200
    • View Profile
Thanks. I had to take a long break from CryEngine (a few years), but I hope to resume the work this year. Even though I'm using the old engine (2007), and even with my very long delay, I still think it can stay relevant with my additions, like compositing real actors inside CryEngine, 3D motion blur, 3D DOF, and many more features that are still unavailable in current engines.

And yes, I'm aware the pure-GPU engines are the fastest and most efficient, but that would not be a very good solution for Corona. If someone wants that, they can choose Octane, V-Ray, and so on.
People like Corona for its quality and simplicity.
Imagine the Corona developers taking the V-Ray route and building an additional, separate renderer called Corona GPU: you'd have to select it if you wanted to use the GPU; depending on how it's coded, you'd now have the opposite problem of all your Xeon processors sitting almost idle; you'd wait while the developers slowly add one supported map and one feature at a time; and if you wanted the "full" Corona experience you'd still have to choose the CPU version... It doesn't sound like the spirit of Corona.

But having the GPU contribute transparently, almost invisibly to the user, would I think be best, even if with much lower efficiency. The GPU should also not be a requirement, so you could still easily use your CPU render farm, or the workstation you built especially for Corona around a CPU investment, and any GPU card you added would simply increase the speed.

It would not be the fastest possible way, but the most convenient one, I think.
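As a thought experiment, here is a minimal sketch of what that transparent, optional GPU contribution could look like (all names invented; this is not anything Corona, Arion, or mental ray actually ships): the frame is split into buckets behind a shared atomic counter, CPU threads pull buckets from it, and one extra worker thread feeds buckets to a CUDA kernel. A faster GPU simply ends up claiming more buckets, and without the GPU worker the CPU threads drain the queue on their own, which is exactly the "GPU as just another node" behaviour described above.

[code]
// Minimal sketch, all names hypothetical. The GPU joins a bucket queue as
// one more worker next to the CPU threads. Compile with nvcc.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>
#include <cuda_runtime.h>

constexpr int kBuckets    = 64;     // buckets per frame (toy numbers)
constexpr int kBucketSize = 4096;   // samples per bucket

__global__ void shade_bucket_gpu(float* out, int bucket) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < kBucketSize)
        out[bucket * kBucketSize + i] = 0.5f;   // stand-in for real shading
}

void shade_bucket_cpu(float* out, int bucket) {
    for (int i = 0; i < kBucketSize; ++i)
        out[bucket * kBucketSize + i] = 0.5f;   // same stand-in work
}

int main() {
    std::vector<float> frame(kBuckets * kBucketSize, 0.0f);
    std::atomic<int> next{0};   // the shared bucket queue

    // CPU workers: pull buckets until the queue runs dry.
    auto cpu_worker = [&] {
        for (int b; (b = next.fetch_add(1)) < kBuckets; )
            shade_bucket_cpu(frame.data(), b);
    };

    // GPU worker: the same loop, but each bucket is shaded by a kernel
    // launch and copied back. Remove this worker and the CPU threads
    // simply drain the whole queue on their own.
    auto gpu_worker = [&] {
        float* dev = nullptr;
        cudaMalloc(&dev, frame.size() * sizeof(float));
        for (int b; (b = next.fetch_add(1)) < kBuckets; ) {
            shade_bucket_gpu<<<(kBucketSize + 255) / 256, 256>>>(dev, b);
            cudaMemcpy(frame.data() + b * kBucketSize,
                       dev + b * kBucketSize,
                       kBucketSize * sizeof(float), cudaMemcpyDeviceToHost);
        }
        cudaFree(dev);
    };

    std::vector<std::thread> pool;
    pool.emplace_back(gpu_worker);   // the GPU joins as one more worker
    for (unsigned t = 1; t < std::thread::hardware_concurrency(); ++t)
        pool.emplace_back(cpu_worker);
    for (auto& t : pool) t.join();

    printf("frame[0] = %f\n", frame[0]);   // all buckets shaded to 0.5
    return 0;
}
[/code]

The convenient property is exactly the one described above: the scheduler doesn't care who shades a bucket, so the GPU stays optional and contributes transparently.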

2016-02-29, 18:34:53
Reply #75

rampally

  • Active Users
  • **
  • Posts: 208
    • View Profile
We are observing GPU renderer development, but we are not developing GPU renderer. And we do not plan to do it unless some game-changer GPU architecture appears
Hi Ondra, is Vulkan a game-changer GPU architecture, or just an API?
« Last Edit: 2016-02-29, 18:44:53 by rampally »

2016-02-29, 19:27:27
Reply #76

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
No, that one is for real-time graphics, not for ray tracing.
Rendering is magic. | How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2016-03-02, 12:49:32
Reply #77

rampally

  • Active Users
  • **
  • Posts: 208
    • View Profile
No, that one is for real-time graphics, not for ray tracing.
OK Ondra, thanks... But then, what are you expecting that would make ray tracing good on the GPU?

2016-03-02, 19:37:18
Reply #78

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
OK Ondra, thanks... But then, what are you expecting that would make ray tracing good on the GPU?
I am not expecting anything from the GPU; instead, I focus on making the best use of the CPU ;)
Rendering is magic. | How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2016-03-02, 19:39:27
Reply #79

rampally

  • Active Users
  • **
  • Posts: 208
    • View Profile
OK Ondra, thanks... But then, what are you expecting that would make ray tracing good on the GPU?
I am not expecting anything from the GPU; instead, I focus on making the best use of the CPU ;)
Hahahaha, cool... :)