Author Topic: Corona GPU  (Read 56304 times)

2015-06-23, 18:25:26
Reply #60

dfcorona

  • Active Users
  • **
  • Posts: 292
    • View Profile
When I refer to poly count or texture limits, I mean RAM limits, of course :)

We had a project that we rendered in iRay back in the iRay 2.0 days (a bit before the release; we used a beta for it), and we were able to render it thanks to iRay because the required quality was pretty high and Maxwell made no sense for it. There was no other option at the time: Octane was too immature, Arion was a bit slower back then, and that was all.

We managed to finish the project (it's on our site), but we had to lower the poly count because it could not fit in the 3 GB of the 580, so we had to lower the subdivisions, and you can notice that in some shots.

We are talking about a video of an industrial machine, nothing else in the scene, so it wasn't a complex interior or anything. The project we are doing right now (and some of the latest ones) could not fit in 6 or 8 GB of RAM at all, that is for sure: we have several STP models, several OpenSubdiv models at pretty high resolution, and the scene is around 20 million polygons, sometimes a bit more. We can't fit this on a GPU. Maybe those two computers with 4 GPUs each are great for that one project, and you may use them for it, but what happens when you need to work on more complex projects?

We have a pipeline. At first GPU seemed pretty great because of the speed, but in the end, for the majority of projects, 6 GB of GPU RAM is not enough :P, at least for us. On top of that you have to add the lack of TONS of features in GPU render engines: several AOVs and, depending on the engine, features like volumetric rendering, etc. We don't like being constrained to a subset of features. Right now in Corona we are more or less constrained in features, but not that much; we can deal with almost anything. We can't say the same for GPU render engines, at least speaking specifically about iRay. Arion has a pretty big and good feature set, but it's aimed more towards other markets and its evolution is not as fast as I would like.

So that is my story and why we abandoned GPU render engines (among other things). We need speed, but we also need reliability to deliver our projects, no matter what type of project it is, and the investment for a proper GPU farm is too high, at least for us. And of course the power consumption is massive: just think of 10 computers drawing power with 2 GPUs each. The electricity bill for the month we did the industrial machine video was around €700... I've never received such a bill using CPUs; we've been rendering 24/7 all month with 10 computers and the bill for 2 months is going to be €170... it's a pretty big difference.

Cheers!

I hear you on your situation; I too abandoned GPU rendering earlier. But that was when GPU cards were less efficient, less powerful, had less VRAM and fewer features. Now, working with renderers like Octane, especially 3.0, which is coming with volumetrics support on top of a bunch of other huge features, and V-Ray, which supports a lot of features, things have changed. Also, now that you can purchase affordable video cards with 12 GB of VRAM, that's a game changer. Like I always said, GPU rendering is being developed at an enormous rate now; when each renderer started on the CPU, they didn't have many features either. GPU renderers are already getting features like using system RAM so you don't have to worry about VRAM, and next year, when Pascal comes out with NVLink, up to 32 GB of RAM, and a claimed 10x increase in speed, and with the renderers supporting most of the features if not all... well, things are really going to be interesting. To each their own right now; it's great to have so many options, and my hat goes off to Ondra for creating a fantastic CPU renderer.
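As an editorial aside, here is a rough back-of-the-envelope sketch (Python) of the two constraints argued about above: whether raw geometry of this size fits in VRAM, and what a farm rendering 24/7 costs in electricity. The bytes-per-triangle figure, the wattages, and the EUR/kWh tariff are assumed placeholder values, not numbers from either post; real footprints and invoices depend on the renderer's acceleration structures, textures, the hardware, and local energy prices.

[code]
# Back-of-the-envelope estimates for the VRAM and electricity arguments above.
# All constants are assumptions, not measurements.

def mesh_footprint_gb(triangles, bytes_per_triangle=100):
    # ~100 bytes/triangle is a loose assumption covering positions, normals,
    # UVs and the BVH/acceleration structure a renderer builds on top.
    return triangles * bytes_per_triangle / 1024**3

def monthly_power_cost_eur(machines, watts_per_machine, eur_per_kwh=0.20):
    # 24/7 rendering for a 30-day month at an assumed 0.20 EUR/kWh tariff.
    kwh = machines * watts_per_machine * 24 * 30 / 1000
    return kwh * eur_per_kwh

if __name__ == "__main__":
    # ~20 M polygons before subdivision: geometry alone is already a large
    # slice of a 3-6 GB card, and textures, AOVs and framebuffers come on top.
    print(f"20 M triangles ~ {mesh_footprint_gb(20_000_000):.1f} GB")
    # 10 dual-GPU boxes (assumed ~500 W each) vs 10 CPU-only boxes (~250 W).
    print(f"GPU farm ~ {monthly_power_cost_eur(10, 500):.0f} EUR / month")
    print(f"CPU farm ~ {monthly_power_cost_eur(10, 250):.0f} EUR / month")
[/code]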

2015-06-23, 18:44:41
Reply #61

cecofuli

  • Active Users
  • **
  • Posts: 1577
    • View Profile
    • www.francescolegrenzi.com
I think it's always better to have the possibility to choose (GPU or CPU).
Both have their pros and cons.
Now, Corona developers are focused on adding features essential for a modern rendering engine.
They are not 20 people, and they cannot, physically speaking, also develop a GPU version.
With V-Ray we had to wait almost two to three years after the first demonstration of V-Ray RT CUDA (2009?).
Yes, in the next few years (2-3) we will have TOP Nvidia video cards with 32 GB of RAM.
So the RAM problem will disappear.
The main problem will be finding the time and energy to develop the GPU version alongside the CPU version.
Though Ondra says no, I bet 10 Coronas (beers, ehehe) that sooner or later it will happen. ^__^

2015-06-23, 18:46:51
Reply #62

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
What is that affordable 12 GB card that you are going to be forced to replace in a year because it won't have DX12 support?

Hehe, GPU may be the future, but at least for us it is not the present. There is also being constrained to a single GPU vendor: CUDA being proprietary forces us to buy Nvidia cards, so everything will evolve at the speed Nvidia wants.
The promise of Pascal is the same as the promise of the Maxwell chips... Did Maxwell change so much? No. More raw power? Yes, for sure. More flexibility? I doubt it; even when in theory there is more flexibility, the only thing I hear from GPU render engine developers is the constant limitations they run into to do this thing or that... Pascal? We will see; Maxwell is not what it was supposed to be, at least to my understanding.

As I said, GPU may be the future, but it is not the present, and I think it won't be for a few years yet. We'll see; I may be wrong of course :)

And some final questions to hear opinions and thoughts:

- What happens if Intel starts integrating thousands of OpenCL cores in their CPUs?
- Do you think Intel's GPU integration effort is just so the GPU can live inside the CPU?
- What do you think about Intel's interest in the ARM architecture, as a competitor and as a model for the future?
- Do you think Intel doesn't see that people are looking to GPUs for raw power instead of to their CPUs?
- Why do you think Intel developed Embree and their failed compute card?

Intel has not become the giant it is by standing still and watching competitors gain market share. What happened to the reign of AMD64? AMD was the first to implement an x86-compatible 64-bit architecture... can you compare AMD's power today with Intel's?
I think a lot of things will come, especially in the CPU world. CUDA is here to stay, but if OpenCL starts growing and receiving support from different vendors... we'll see...

Cheers!

Cheers.

2015-06-23, 18:58:42
Reply #63

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
But we see that big difference in cost, render times, and power consumption. And we're awaiting Corona GPU. If Ondra sees no way to port it, maybe someone else can help him, or maybe he will change his opinion in the future.

There is no such thing as "porting something to GPU". You write another program from scratch that works with the same inputs, and, if you are lucky, uses roughly the same algorithms ;)
Rendering is magic.
How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)
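A minimal sketch of what Ondra means, in Python with Numba as a stand-in (the shading math and every name here are hypothetical, and running the GPU half needs a CUDA-capable card): the inputs and the math are identical, but the CPU side is an ordinary loop while the GPU side has to be rewritten as a data-parallel kernel with separately compiled device functions and explicit memory transfers.

[code]
# "Same inputs, different program": a plain CPU loop vs. a CUDA kernel (Numba).
import math
import numpy as np
from numba import cuda

def shade(u):
    # placeholder for the per-sample work a renderer would do
    return 0.5 * math.sqrt(u) + 0.25

def render_cpu(samples):
    # ordinary sequential code: recursion, dynamic allocation, anything goes
    return np.array([shade(s) for s in samples], dtype=np.float32)

@cuda.jit(device=True)
def shade_device(u):
    # the "same" function must be recompiled for the device, under GPU
    # restrictions (no Python objects, limited recursion, no allocation)
    return 0.5 * math.sqrt(u) + 0.25

@cuda.jit
def render_kernel(samples, out):
    i = cuda.grid(1)                 # one thread per sample instead of a loop
    if i < samples.size:
        out[i] = shade_device(samples[i])

def render_gpu(samples):
    d_in = cuda.to_device(samples)               # explicit host -> device copy
    d_out = cuda.device_array_like(samples)
    threads = 256
    blocks = (samples.size + threads - 1) // threads
    render_kernel[blocks, threads](d_in, d_out)
    return d_out.copy_to_host()                  # explicit device -> host copy

if __name__ == "__main__":
    data = np.random.rand(1_000_000).astype(np.float32)
    print(render_cpu(data)[:3], render_gpu(data)[:3])
[/code]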

2015-06-23, 19:01:32
Reply #64

RobSteady

  • Active Users
  • **
  • Posts: 45
    • View Profile
To add fuel to the fire...
Just kidding, I think Corona is a nice engine and is very well integrated into Max (you can't say this about Octane) ;)
Here's a 4K, 10-minute Octane render with 2 x 980 Ti and 1 x Titan Z.
(The 980 Ti is a nice card for anyone considering Octane.)

« Last Edit: 2015-06-23, 19:06:30 by RobSteady »

2015-06-23, 19:19:23
Reply #65

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
How Octane turns out remains to be seen; the cloud option is gaining more and more traction at Otoy. Let's see what happens to their offline render engine in the future.

The worst thing about current GPU render engines like Octane or iRay is that their parent companies are not to be trusted :P

Corona is to be trusted, at least that's what they've demonstrated so far with the pricing structure and by maintaining box licenses + subscriptions. This is an added value when basing your pipeline on a piece of software, because if you base your pipeline on software like Octane and suddenly they start focusing their efforts only on their cloud business model... you are going to be forced down the path they want, the same thing that happens with Nvidia and CUDA.

IMHO there is more than just raw power and features to think about if you are going to base your pipeline and your farm on a specific type of render engine.

Cheers.

2015-06-24, 02:47:04
Reply #66

dfcorona

  • Active Users
  • **
  • Posts: 292
    • View Profile
What is that affordable 12 GB card that you are going to be forced to replace in a year because it won't have DX12 support?

Hehe, GPU may be the future, but at least for us it is not the present. There is also being constrained to a single GPU vendor: CUDA being proprietary forces us to buy Nvidia cards, so everything will evolve at the speed Nvidia wants.
The promise of Pascal is the same as the promise of the Maxwell chips... Did Maxwell change so much? No. More raw power? Yes, for sure. More flexibility? I doubt it; even when in theory there is more flexibility, the only thing I hear from GPU render engine developers is the constant limitations they run into to do this thing or that... Pascal? We will see; Maxwell is not what it was supposed to be, at least to my understanding.

As I said, GPU may be the future, but it is not the present, and I think it won't be for a few years yet. We'll see; I may be wrong of course :)

And some final questions to hear opinions and thoughts:

- What happens if Intel starts integrating thousands of OpenCL cores in their CPUs?
- Do you think Intel's GPU integration effort is just so the GPU can live inside the CPU?
- What do you think about Intel's interest in the ARM architecture, as a competitor and as a model for the future?
- Do you think Intel doesn't see that people are looking to GPUs for raw power instead of to their CPUs?
- Why do you think Intel developed Embree and their failed compute card?

Intel has not become the giant it is by standing still and watching competitors gain market share. What happened to the reign of AMD64? AMD was the first to implement an x86-compatible 64-bit architecture... can you compare AMD's power today with Intel's?
I think a lot of things will come, especially in the CPU world. CUDA is here to stay, but if OpenCL starts growing and receiving support from different vendors... we'll see...

Cheers!

Cheers.

There is a whole flip side to your statements. You say being constrained to a GPU vendor; are you not constrained by your CPU vendor? I think you answered your own question, unless for some reason you buy AMD, in which case I can say the same for their video cards, since some render engines already support OpenCL and soon most will. I'm not sure how much knowledge you have of video cards, but my 12 GB Titan X already supports the DirectX 12 API at feature level 12.1. And you're also asking what I'd do if I had to replace it for some reason; that's easy... I sell it on eBay, get most of my money back and buy the newest card. Let's see you try that with your CPU. Did Maxwell change so much? Yes it did: it's much more efficient and powerful, and with Pascal it seems they will focus back on much more raw performance. Even if Pascal is only 2x faster than Maxwell instead of the 10x they claim, that's a huge win. I would like to see Intel do something like that. Who knows what the future brings; I know gaming is driving video card performance through the roof, which is good for us, while Intel seems to make only minor increases in speed. I have a 6-core i7 and waited forever for a boost of just 2 more cores with the 8-core. What's next, a few more years for a 10-core? Unless Intel starts getting some competition, they are just going to sail through the years with minimal updates.

2015-06-26, 18:10:57
Reply #67

steyin

  • Active Users
  • **
  • Posts: 375
  • BALLS
    • View Profile
    • Instagram Page

The worst thing about current GPU render engines like Octane or iRay is that their parent companies are not to be trusted :P



I don't know about Octane, but with Autodesk I agree. As far as I'm concerned, iRay is dead. Its online user base/forum is pretty much non-existent now compared to a year or two ago. I enjoyed it at first, but it was way too slow as an engine without having to fork out $$$ for a super card, and its development was even slower. But again, look at who's holding the reins on that.

2015-06-27, 13:07:13
Reply #68

Juraj

  • Active Users
  • **
  • Posts: 4761
    • View Profile
    • studio website
iRay is a scam. I guess that when other renderers weren't developing fast enough to be used as a marketing showpiece by nVidia, they simply set aside some budget for a small team and developed it for a while.
Now it gets a few features per Autodesk cycle, effectively becoming abandonware. The only place where it gets any use is as an outsourced core for little renderers like Keyshot, etc.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2015-07-09, 05:36:53
Reply #69

fobus

  • Active Users
  • **
  • Posts: 388
    • View Profile
32 GB of RAM on a video card is a reality now https://forum.corona-renderer.com/index.php/topic,8870.0.html so there are fewer and fewer reasons not to render on video cards.

2015-07-09, 11:16:23
Reply #70

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
Great, price and performance?

Because you know you can have 8 multithreaded CPUs in one system with 256 GB of RAM, right? A mini render farm in one system; the downside is the price hehehe

BTW, I did not respond to your previous post (the one where you answered me) because of a lack of time, but I have it on my list; as soon as I can, I'll answer the points you made there :)

Cheers!

2015-07-09, 11:43:47
Reply #71

Juraj

  • Active Users
  • **
  • Posts: 4761
    • View Profile
    • studio website
Impressive, a single GPU, so it's actually 32 GB of VRAM.

The cost will be similar to a Tesla K80/Quadro M6000 I guess, or slightly more, somewhere in the range of 5,000-7,000 dollars.
Performance will likely be in the range of the 390X, which is the counterpart to the Titan X/980 Ti.

Nothing mainstream here guys :- ) Yet.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2016-02-10, 23:50:01
Reply #72

sebastian___

  • Active Users
  • **
  • Posts: 197
    • View Profile
I read most of the replies here. Some argue that CPU rendering is more efficient and that it is not economical to buy Nvidia cards for rendering.
But the point is that most of us already have powerful cards, some even 2 or 3, which just sit idle while rendering with the CPU.

I understand that GPUs can only do specialized stuff. And the V-Ray way - having to choose either the "conventional" V-Ray renderer, the CPU RT, or the GPU RT - is confusing, and you have to compromise if you want the speed of V-Ray GPU.

The best solution, and probably the only acceptable one, would be if the GPU could be "added" somehow, like adding another CPU or adding another computer on the network.
I mean, if the GPU can be used as a general-purpose processor and is used in music to calculate hall reverb and other audio effects, it stands to reason that it should be able to calculate at least some parts of a render, even if with very low efficiency.

I think maybe Arion did something similar, and I also remember reading, some years ago, a mental ray paper about using the GPU to assist some parts of the rendering in addition to the CPU.
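For what it's worth, the "GPU as just one more worker" idea sebastian describes can be sketched as a shared tile queue: CPU threads and an optional GPU worker pull buckets from the same pool, so an extra device only adds speed and nothing breaks when it is absent. The sketch below (Python) simulates the GPU with a faster placeholder function; it is a hypothetical illustration of the scheduling idea, not how Corona or any shipping engine is implemented.

[code]
# Toy hybrid scheduler: image tiles go into one queue; CPU workers and an
# optional "GPU" worker drain it. Both renderers here are sleep() placeholders.
import queue
import threading
import time

def render_tile_cpu(tile):
    time.sleep(0.02)             # stand-in for tracing the bucket on the CPU
    return tile, "cpu"

def render_tile_gpu(tile):
    time.sleep(0.005)            # stand-in for a (faster) GPU kernel dispatch
    return tile, "gpu"

def worker(tiles, results, render_fn):
    while True:
        try:
            tile = tiles.get_nowait()
        except queue.Empty:
            return                      # queue drained, this worker is done
        results.append(render_fn(tile))

def render(width, height, bucket=64, cpu_threads=4, use_gpu=True):
    tiles = queue.Queue()
    for y in range(0, height, bucket):
        for x in range(0, width, bucket):
            tiles.put((x, y))
    results = []
    workers = [threading.Thread(target=worker,
                                args=(tiles, results, render_tile_cpu))
               for _ in range(cpu_threads)]
    if use_gpu:                        # the GPU is optional: it only adds speed
        workers.append(threading.Thread(target=worker,
                                        args=(tiles, results, render_tile_gpu)))
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results

if __name__ == "__main__":
    done = render(1920, 1080)
    gpu_tiles = sum(1 for _, who in done if who == "gpu")
    print(len(done), "tiles rendered,", gpu_tiles, "by the GPU worker")
[/code]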

2016-02-11, 01:17:00
Reply #73

Juraj

  • Active Users
  • **
  • Posts: 4761
    • View Profile
    • studio website
Sebastian, you're the guy with the superb CryEngine work :- ) I remember being in awe of your stuff... are you still active in this?

Regarding current GPU ray tracers, I think it's been pretty much proven by now that the pure-GPU ones are the best developed and the fastest (Octane and Redshift); the ones that took the middle route of GPU acceleration (Thea, Maxwell, iRay, etc.) do not work very impressively in that mode.
There might definitely be some resurgence in their popularity in the next two years, unless they get killed by real-time game engines. The lines are getting blurrier...
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2016-02-12, 20:10:04
Reply #74

sebastian___

  • Active Users
  • **
  • Posts: 197
    • View Profile
Thanks. I had to take a long break from CryEngine (a few years), but I hope to resume the work this year. Even though I'm using the old engine (2007), and even with my very long delay, I still think it can be relevant with my additions, like compositing real actors inside CryEngine, 3D motion blur, 3D DOF, and many more features which are still unavailable in current engines.

And yes, I'm aware the pure-GPU engines are the fastest and most efficient, but that would not be a very good solution for Corona. If someone wants that, they can choose Octane, V-Ray and so on.
People like Corona for the quality and simplicity.
If the Corona developers took the V-Ray route and built an additional, separate renderer called Corona GPU, which you have to select if you want to use the GPU, then, depending on how it's coded, you would have the opposite problem: all your Xeon processors sitting almost idle. And you would have to wait while the developers slowly add another supported map and another feature from time to time, and if you wanted the "full" Corona experience you would still have to choose the CPU version... It doesn't sound like the spirit of Corona.

But having the GPU contribute transparently, almost invisibly to the user, would I think be best, even if with much lower efficiency. Also, the GPU should not be a requirement, so you can still easily use your CPU render farm, or the workstation with the CPU investment you built especially for Corona, and any GPU card you add would just increase the speed.

It would not be the fastest possible way, but it would be the most convenient one, I think.