Topic: Corona GPU rendering

2015-01-24, 13:32:26

dux

Hi, my big question is:
Will Corona also be GPU-ready? Maybe for the 1.0 release for Max?
CUDA or OpenCL?

I hope it will be. I am using iray, but Corona is faster with just my CPU :)
It would be cool to have CPU+GPU rendering at the same time.

With best regards, Dux

2015-01-24, 13:42:57
Reply #1

FrostKiwi

Hi, my big question is:
Will Corona also be GPU-ready? Maybe for the 1.0 release for Max?
CUDA or OpenCL?

I hope it will be. I am using iray, but Corona is faster with just my CPU :)
It would be cool to have CPU+GPU rendering at the same time.

With best regards, Dux
Due to the nature of GPU rendering restrictions, there are no plans for either GPU or a hybrid model in the foreseeable future.

(Perhaps in a distant future, where VRAM is not an issue and a Corona GPU port would not mean sacrificing 50% compatibility with its own features, unlike e.g. V-Ray.)

Refer to: https://corona-renderer.com/features/proudly-cpu-based/

2015-01-24, 14:16:06
Reply #2

Ondra (Administrator)
I hope it will be. I am using iray, but Corona is faster with just my CPU :)

If Corona is faster using the CPU, why would you want it to change?

2015-01-24, 14:40:06
Reply #3

FrostKiwi

If Corona is faster using the CPU, why would you want it to change?

2015-01-24, 18:57:01
Reply #4

dux

I am an iray user (CPU+GPU), but I tested Corona, and Corona is much faster than iray even though it runs only on the CPU...
So my question was: when GPU support arrives :) it will be even FASTER :)

Big thanks to the programmers of the Corona renderer.

2015-01-24, 19:41:17
Reply #5

maru (Corona Team)

2015-01-24, 23:42:10
Reply #6

juang3d

Hi Dux.

I was also an iray user and switched completely to Corona. It's cheaper in energy terms: a farm loaded with GPUs increases the energy cost A LOT.

I don't want it to be GPU; I want it to be faster, but just as it is now, on the CPU :) No constraints and limitations like in GPU render engines: you have your system RAM, and old-fashioned mechanisms like proxies (which don't suffer from slowdowns like in GPU render engines).

Enjoy Corona!! :D

Cheers.

2015-01-25, 13:34:31
Reply #7

dux

I changed my MAIN renderer from iray to Corona :)
It is much faster, even CPU-only :)

And the BIG reasons why?

 - much faster
 - less grain
 - no fireflies
 - easier material model

...I hope there will be a better material converter from Mental Ray; it still does not support every property.

Dux


2015-01-25, 14:20:13
Reply #8

racoonart

...I hope there will be a better material converter from Mental Ray; it still does not support every property.

Then please make a request in the converter thread for what you would like to have. I'm not an MR user, so I have no idea what people normally use.

2015-01-25, 15:46:53
Reply #9

FrostKiwi

I hope it will be. I am using iray, but Corona is faster with just my CPU :)
I am an iray user (CPU+GPU), but I tested Corona, and Corona is much faster than iray even though it runs only on the CPU...
I changed my MAIN renderer from iray to Corona :)

2015-01-25, 22:09:02
Reply #10

juang3d

...

...

Cheers :)

2015-01-26, 01:08:37
Reply #11

arqrenderz

Looool love those posts!

2015-03-08, 11:30:56
Reply #12

GLG

Given the GPU's limitations, I understand the use of the CPU; however, the developers should be able to find ways or means to use such technological potential.
We need speed in rendering, and there are hardware capacities out there that could serve our needs.
So, developers: yes, it may be harder, but we need all the power of our devices... find a solution...
In computer programming, the only limitation is the capacity of the brain. I'm joking...
There is a challenge here, and Corona seems to be on a better path than the other engines.

I have tried Octane, Thea and Arion, and only Corona offered the fastest way to change our way of working.
Easy workflow, very good integration in 3ds Max, fast, very good render quality and no crashes... strong and stable... I didn't find that in the other render engines.
I'd like it to be developed further, especially for speed, with a GPU solution, and I bought a licence to encourage the team.

I work in France with 5 workstations, but I often go to China, where I cannot bring my whole render farm.
I need a lot of power in China, and Corona may spare me waiting on long renders on a single computer, but I'd like to get more power, as I know how powerful graphics cards are. It would be nonsense, nowadays, not to use them for heavy computation.

I know it is a big debate, and the Corona team does not seem ready to explore this direction; however, I invite them to really think about it.

A last note: we need good displacement integration... really urgent... thank you!
Gaël

2015-03-08, 12:11:54
Reply #13

juang3d

As Ondra stated in other threads, GPUs are not faster than CPUs; it's just a different way of programming. The power of Corona relies on being as fast as a GPU on a CPU.
I used to think, like you, that GPUs are faster per se, but they are not. Their limitations aside, it all depends on the type of algorithms you use for that specific kind of processor (in the end, a GPU is a task-oriented processor). So what Corona gives us is that speed, plus the freedom of the CPU regarding RAM, and other things like using proxies without slowdowns, etc...

I'm sure Ondra can be more specific on this matter and explain why the CPU is not slower than the GPU, specifically using Embree.

I'm also sure that more speed optimizations will come soon, since it is the most voted feature request in the feature requests thread, and I suspect it will stay that way forever hahaha.

I'm talking as someone who was a huge promoter of GPU rendering. I could even have been an Nvidia salesman hahaha. But when things change, changing is a must, and Corona gave us that change for sure :)

Cheers!

2015-03-08, 12:25:04
Reply #14

Juraj

As Ondra stated in other threads, GPUs are not faster than CPUs;

Well, it's not that black-and-white either. It has by now been thoroughly debunked that GPU renderers are 100x to 10,000x :-D faster, as was often advertised, fueled by nVidia's marketing of CUDA.

But on average they are faster, and that is supported by the fact that GPUs have been able to almost double their performance each year, something that hasn't held true at all for CPU performance development.
Previously I had no belief in the sustainability of using a GPU renderer, but right now is exactly the time when things might finally take off for them. 'Mainstream' GPUs with 12GB of memory like the Titan X, with 3k stream cores, under 1000 euros? That's damn cheap and humongously powerful. And renderers are able to overcome most limitations (memory cycling, out-of-core texture storage).

2015-03-08, 15:06:06
Reply #15

Ondra (Administrator)
We are open to all kinds of stuff.

Funny trivia: I am programming a GPU renderer side project right now ;)

2015-03-08, 15:29:04
Reply #16

juang3d

I'll shut up then :)

The thing is that the big cost of GPU rendering is not just the GPU (you also need a computer around it) but the energy: it is much higher than with CPU rendering, and that is a major drawback for me. Great for rendering a few stills, but pretty expensive for rendering a long animation. And 12GB fills up in no time, with slowdowns when you use proxies, so still a lot of problems IMHO :)

BTW Ondra, any more information on that?

Cheers!

2015-03-08, 16:18:33
Reply #17

romullus (Global Moderator)
Funny trivia: I am programming a GPU renderer side project right now ;)

Furry...? ;]

2015-03-09, 03:04:20
Reply #18

philippelamoureux

I wish I could say something like... "I'm coding a GPU renderer in my spare time." hehe

Funny thing: the postman delivered my GTX 980 during the reveal of the Titan X on stream... I was like... meh.

2015-03-09, 04:16:44
Reply #19

FrostKiwi

Funny thing: the postman delivered my GTX 980 during the reveal of the Titan X on stream... I was like... meh.
The whole Titan lineup seems so out of place.
Quote
Double precision
Extra-large VRAM
$1000+
It's a gaming card.
wat?

2015-03-09, 10:34:07
Reply #20

Juraj

What's out of place about it?

2015-03-09, 11:42:35
Reply #21

tomislavn

What's out of place about it?

Probably "gaming" and "double precision" in the same sentence :)

But then again, they can't market it as a workstation card because of the Quadro line, which is a good thing price-wise, since a Quadro of that power would cost 10 times more.

2015-03-09, 12:33:10
Reply #22

Juraj

Everyone kept begging for a card with the capacity of the Quadros/Teslas (high VRAM), with unlocked precision if needed, but at a mainstream price tag. And this second time around they delivered it, in an extremely well-done package.
What's there to complain about? They have to market it as gaming, otherwise they would severely limit its audience to niche professionals. Labeling it as a part-gaming card opens it up to the much bigger audience of enthusiast builders, making it profitable.
Everyone benefits from the deal.

2015-03-09, 13:11:51
Reply #23

juang3d

I still don't see it as great value. I'm reaching the 24GB limit with ease, and that is using proxies, so GPU rendering has become unattainable for me at a good price. Also, I would like to see a comparison between the GTX 580 (pretty old, but a good card, though limited to 3GB, I know) and the Titan X. I'm sure the 580 is several times slower, but I would like to see how much, to make some calculations. Maybe the Titan X is worth the money if you acquire 4 or 5 of them, but that would be for very specific, RAM-limited projects, and I would have to think about the electricity bill, which has historically been high with GPU rendering :P

I think the day GPUs can swap memory with the system RAM, and the day GPU render engines can work with proxies without a significant performance hit, will be the day GPU makes sense for me again. As of today, I still prefer Corona + a 5820K as render nodes: awesome value, 790€+VAT per full node with 16GB of RAM, and awesome performance :) (and up to 64GB and 4 PCI-E slots, just in case GPU comes back in the future hehehe)

Cheers!

2015-03-09, 13:59:02
Reply #24

Ondra (Administrator)
Funny trivia: I am programming a GPU renderer side project right now ;)

Furry...? ;]

stop with the bad jokes ;)

2015-03-09, 14:26:39
Reply #25

Juraj

I still don't see it as great value. I'm reaching the 24GB limit with ease, and that is using proxies [...] I think the day GPUs can swap memory with the system RAM, and the day GPU render engines can work with proxies without a significant performance hit, will be the day GPU makes sense for me again. [...]

So you went from "salesman" (your words) to complete oppositionist?

I don't see where your facts come from. Both CPUs and GPUs have had periods in which their architecture was power-hungry, and periods when it was less so. nVidia's current Maxwell is extremely efficient: overclocked versions of their top range (980, Titan X, etc.) are cards reaching 300W draw at full power (from a default 165W, which isn't much more than the 140W of a top i7). That is a lot, but it's identical to a 5960X i7, for example.
Regarding RAM: GPU renderers, even those that can't cycle memory or go out of core, require less memory than their CPU counterparts, as they're mostly built with memory preservation as a crucial feature. 12GB in V-Ray terms easily corresponds to up to 18GB in a conventional CPU raytracer, and with memory cycling (as in Redshift) the point becomes even more moot. In that case 12GB is a lot, given that the framebuffer and textures don't need to reside in it either. From what I've read, proxies aren't an issue anymore (Redshift).

Why would the Titan X be worth it only if you acquire 4-5 of them? Each has 3072 stream (CUDA) cores, as opposed to the 2048 of the 980, which in turn has single-precision compute power roughly 3-4 times higher than the GTX 580, the best workhorse of the past. That makes the Titan X at least 5 times more powerful, speaking in dirty, approximate numbers.
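
For anyone who wants to reproduce this kind of back-of-the-envelope math: theoretical peak single-precision throughput is roughly cores x clock x 2 (one fused multiply-add per core per cycle). A minimal Python sketch, assuming reference clocks (actual ratios shift with boost and overclocking, and peak FLOPS ignore memory behaviour entirely):

    # Rough theoretical peak: SP GFLOPS ~= cores * clock_GHz * 2 (FMA = 2 FLOPs)
    cards = {
        "GTX 580": (512, 1.544),   # CUDA cores, shader clock in GHz
        "GTX 980": (2048, 1.126),
        "Titan X": (3072, 1.000),  # base clock; boost is higher
    }
    gflops = {name: cores * ghz * 2 for name, (cores, ghz) in cards.items()}
    base = gflops["GTX 580"]
    for name, g in gflops.items():
        print(f"{name}: {g:,.0f} GFLOPS ({g / base:.1f}x GTX 580)")

On these reference numbers the Titan X lands at roughly 4x a GTX 580, in the same ballpark as the "dirty, approximate numbers" above.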

I don't know what awesome performance you are getting with your 5820K, but I am not getting anything boast-worthy with my 200+ cores. So there is always big room for improvement.

I am currently no fan of GPU renderers. Octane is moving at the speed of... no speed; all attention goes to the cloud, Brigade, or other funny gimmicks they are doing. Redshift has the strangest philosophy possible and took over the worst V-Ray traits ever. It's exactly what happens when great engineers fail to understand their public. The UI, the options, their material... lol-worthy. So for now the card can be used as an excellent workhorse for Unreal Engine visualizations. But the tide is evening out again.

2015-03-09, 14:56:18
Reply #26

juang3d

So you went from "salesman" (your words) to complete oppositionist?
Yes, of course. I change my mind if I have to. I am a fan of everything I use, but when I discover something newer and better, I recognize it, recommend it instead of the old thing, and change my mind. In this case it is Corona vs. GPU renderers.
Quote
I don't see where your facts come from. Both CPUs and GPUs have had periods in which their architecture was power-hungry, and periods when it was less so. [...]

So you don't think 140W vs. 300W is a huge difference? And remember you have to run your GPUs plus a CPU. My facts come from my personal experience and the company's energy bills: whenever I did GPU rendering, the bill went through the roof; with CPU rendering I barely noticed it. Why? I can't say exactly, because I don't know how to measure a computer's real power consumption in real time to make the calculations, but if you know how, please tell me :)
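
A plug-through power meter at the wall (or a UPS readout, as mentioned later in this thread) gives the draw in watts; from there the bill is simple arithmetic. A minimal sketch, where the wattages and the 0.20 EUR/kWh rate are illustrative assumptions, not measured values:

    # Energy cost: kW * hours * price per kWh
    def energy_cost_eur(watts, hours, eur_per_kwh=0.20):  # assumed rate
        return watts / 1000 * hours * eur_per_kwh

    month = 30 * 24  # rendering 24/7 for a month
    print(f"CPU node,  500 W: {energy_cost_eur(500, month):6.2f} EUR/month")
    print(f"GPU node, 1500 W: {energy_cost_eur(1500, month):6.2f} EUR/month")

With these assumptions, that is 72 vs. 216 EUR per node per month.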

Quote
Regarding RAM: GPU renderers, even those that can't cycle memory or go out of core, require less memory than their CPU counterparts [...]

That holds as long as you don't use things like motion blur, which can multiply your memory requirements quite a lot, and in the end you are limited to that memory. No matter how much they optimize, they will slow down if you use features such as proxies. Plus, Redshift is a completely biased render engine: a different approach, valid for some people, not valid for me. Nothing personal; I just don't like that kind of render engine anymore. I'm talking about unbiased or nearly unbiased engines (iray, Octane or Arion).
Redshift can be pretty fast, but I don't like its workflow or results. Bear in mind that I'm not imposing my conclusions on anyone; I'm just stating what I decided and why. There are people who love Redshift, V-Ray (I never liked it, even though I know it's an industry standard and probably the most powerful engine) or Mental Ray. I don't like that kind of workflow anymore; I prefer to spend machine time rather than human time, and that is what unbiased renderers give me. Maxwell (the render engine) was out of the equation because its render times are ridiculously long, and unbiased on the CPU was out of the question until Corona arrived.

Quote
Why would the Titan X be worth it only if you acquire 4-5 of them? [...]

That is for me, and it is because only with that number of GPUs could it make a difference against what I currently have in CPUs. Anyway, the investment is pretty high for something dedicated specifically to rendering, and for something that has to sit inside a system; the CPUs can be used for many more things, like simulation for example. I repeat: I'm talking about my situation. Maybe you don't need to do anything but render with your farm; that's not my case :)

Quote
I don't know what awesome performance you are getting with your 5820K, but I am not getting anything boast-worthy with my 200+ cores. [...]

Something is wrong with your 200+ cores if you are not getting great performance XD
What does "great performance" mean to you?

Quote
I am currently no fan of GPU renderers. Octane is moving at the speed of... no speed [...] But the tide is evening out again.

Unreal Engine still works flawlessly on my GTX 580; no need for a Titan X or Z for that hehehe

And about the things you said about GPU renderers: why are they so slow in their evolution? If you ask the iray developers, they are always saying "no can do on the GPU because <insert your preferred GPU limitation here>". They are slow because they can't do anything unless the hardware changes: unless they can communicate with the system and practically make GPUs work like additional CPUs, or, to be more exact, unless the system outside the GPU can be made as fast as the GPU is internally, because the slowdown in those renderers comes from communicating with the rest of the system.

IMHO GPU rendering could be the holy grail of rendering, but so could the CPU in the future. Intel built the Embree technology to demonstrate that the CPU can be as efficient as GPUs. We are all here on the Corona forum, CPU-based; why is that, even though the first Titan came out some time ago? hehehe

Intel can change its architecture just as much as Nvidia can for next year's GPU family :)

You and I have different visions: you prefer to invest in one 5960X, while I prefer to invest in two 5820Ks for the same money. For me, the additional performance of an additional CPU beats the increased performance of a single super-powerful computer. IMHO they are different points of view, both valid for each of us; it's just a matter of what kind of business each of us has to run :)

Cheers!

2015-03-09, 15:09:39
Reply #27

Juraj

Quote
So you don't think 140W vs. 300W is a huge difference?

That's not what I wrote. The default draw is 165W for the 980 and 140W for the 5960X. Both reach 300W at full (overclocked, 100% utilization) draw. The current generation of nVidia cards (only nVidia) is not power-hungry; quite the opposite.

Regarding the existing limitations, Redshift was able to bypass all of them pretty elegantly. I don't like that engine either and don't plan to use it. But what iray or Octane do is irrelevant if Redshift can.

Quote
and it is because only with that number of GPUs could it make a difference against what I currently have in CPUs

OK, but that again is not a relevant comparison. You're basically comparing what you have already accumulated with what you would have to buy to outweigh it.
That makes it an economically inefficient solution in your personal case, but it can't be used as an actual economic comparison. So why even write it? We're not discussing personal situations.

Quote
Unreal Engine still works flawlessly on my GTX 580; no need for a Titan X or Z for that hehehe

It also works perfectly fine on my laptop with a GM750... But you will not fit a full-fledged architectural scene into your 3GB of memory and run it at 60fps.


2015-03-09, 15:20:52
Reply #28

juang3d

OK, but that again is not a relevant comparison. You're basically comparing what you have already accumulated with what you would have to buy to outweigh it.
That makes it an economically inefficient solution in your personal case, but it can't be used as an actual economic comparison. So why even write it? We're not discussing personal situations.

You are not doing it; I am, and I have been since the first post I wrote. I always say that what I say comes from my personal POV and situation. You want the data unbiased and as isolated from subjective opinion as possible; I'm fine with that, but I'm not speaking in those terms. To do that I would have to run a massive number of tests and have access to all that hardware, which I don't. So I speak from my personal situation, and I always did.

And about "economically inefficient": elaborate a bit more, because otherwise this can be tagged the way you tag my words, as not relevant, since you don't add any objective data here.

It also works perfectly fine on my laptop with a GM750... But you will not fit a full-fledged architectural scene into your 3GB of memory and run it at 60fps.

Give me a real demo and I'll tell you its performance.


Cheers!

2015-03-09, 15:29:23
Reply #29

Juraj

I am not really trying to make an actual comparison of where the Titan X or current GPU rendering stands. I don't use it or own any of these cards, I can only extrapolate from others, and I don't like doing that.
I simply wanted to counter-argue the dismissive statements that these cards are expensive, amount to an economically unviable solution, and that GP-GPU rendering is still in its limited infancy.

That really isn't the case anymore. Perpetuating these notions can lead to a cult mentality on forums, as a lot of people just parrot such opinions without really researching them themselves.
I argued against GP-GPU (I wrote a huge, mostly negative response on Redshift on CGarchitect, and numerous times debunked false claims) when it was clearly the far worse solution, but I will also defend it when it starts becoming a viable alternative.
And as a matter of fact, that is slowly starting to happen now.

2015-03-09, 16:52:33
Reply #30

juang3d

I simply wanted to counter-argue the dismissive statements that these cards are expensive, amount to an economically unviable solution, and that GP-GPU rendering is still in its limited infancy.

I agree with that; the only thing is that it is not a viable solution for me, though it could be for other people, of course :)

Quote
And as a matter of fact, that is slowly starting to happen now.

The key word here is "slowly". It depends on Nvidia, on the pace they want to keep for releasing their cards, and on how the technology evolves. What I ask myself is whether Intel will counter-attack again, as they did with Embree, in a future CPU family. I doubt GPUs will simply remain the more powerful option; rather, there will be some kind of leveling-out over time. The question, IMO, is what to use at which moment, and how much you can spend to mutate your environment and adapt it to each evolution of the state of the art.

Cheers!

2015-03-09, 18:08:34
Reply #31

juang3d

Out of curiosity, check this:

https://forum.corona-renderer.com/index.php?action=dlattach;topic=559.0;attach=28939;image

It is from the benchmark thread; RobSteady took the time to do this test:

https://forum.corona-renderer.com/index.php/topic,559.480.html

It's interesting that the benchmark took 3 minutes on a 5930K versus 2 minutes for a similar version (in noise level, especially on the floor and the end wall of the room), and that with 2 Titan Z + 2 Titans...

Not a "valid" benchmark at all, but a curious test :)

Cheers.


2015-03-09, 18:44:57
Reply #32

racoonart

Apart from the fact that I find both images a bit too identical: Octane renders with PT only, while the Corona images are PT + HD cache. Yes, I know, "it's the end result that counts" etc. etc., but still: if this comparison popped up on the Octane forum, or if the places were swapped, people would complain about it.

2015-03-09, 20:17:48
Reply #33

Coronaut

If you spend that much money on any hardware, things will be fast, no question about it; it is 4 Titans...
And he has some detail missing on the curtains...
Anyway, the Corona benchmark is somewhat unoptimized for dual-socket systems: 2x 2680 v3 posts worse results than a 5960X overclocked to 4.2GHz, even though it is more than twice as fast in Corona 1.0... So if one 5960X (4.2GHz) does this bench in about 2min 22sec, the realistic expectation is that 2x 2680 v3 would do it in half that time, and that system is much cheaper than his: 3k of GPU vs. 3k of CPU. Even a single 5960X is very fast.
In both cases you need the same supporting hardware (except the GPUs, for a render node that uses the CPU), but you can't place 4 GPUs on a crap motherboard or with a slow CPU, and that additional cost puts a nail in the GPU rendering coffin (for now).

2015-03-09, 21:28:52
Reply #34

Juraj

but you can't place 4 GPUs on a crap motherboard or with a slow CPU

No, you can actually do just that. GP-GPU rendering isn't gaming; the CPU doesn't become a bottleneck anywhere in the pipeline the way it otherwise would. And 4x PCI-E comes on boards as cheap as 100-euro socket-1150 ones (any Z87, for example), since you don't need all of them running at x8/x16 to get full performance. Almost any CPU provides enough bandwidth (with the exception of the strangely crippled 5820K, perhaps).
It is actually cheaper to build a 4x Titan X machine than an above-average dual-Xeon build (2680 v3 and higher).

2015-03-09, 22:04:57
Reply #35

Coronaut

I know it is possible, but I am talking about a minimum of a 5930K (40 PCI-E lanes) because of the bandwidth needed to feed those GPUs, run Max, etc... well, you get the picture; anything less would be an insane setup, like something you would use for crypto mining.

Let's say 5930K + mobo (around 700e) + 32GB memory (400e) + case and all the other stuff: SSD, cooling etc. (500e) + 4 Titans at around(?!) 1.5k apiece + a PSU that has to be at least 1.5kW (250-300e) = 7900e.
This will draw 1.5kW from the socket under full load...

I don't know how much your setup cost (I see you have something similar), but here is mine, and it is a lot cheaper than this GPU build:
2x 2680 v3 (2300e) + Asus Z10PE-D16 mobo (530e) + 64GB Crucial ECC DDR4 (760e) + case (100e) + 2x Noctua U14S (120e) + 240GB Intel SSD 730 (200e) + 850W PSU (150e) = 4160e.
No GPU...
This draws around 500W from the socket at full load (checked with a UPS).
This is a render node, and even if I use it as a workstation it will get a 780 Ti (more than enough). So how is this more expensive? It is cheaper, faster, more versatile, and above everything else more economical.
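
The arithmetic in these two bills of materials is easy to check; a throwaway sketch using the prices exactly as itemized above (the per-Titan price is the 1.5k guess from this post, not a quote):

    # Totals for the two builds as itemized above (euros)
    gpu_build = 700 + 400 + 500 + 4 * 1500 + 300          # 5930K platform + 4 Titans + 1.5kW PSU
    cpu_build = 2300 + 530 + 760 + 100 + 120 + 200 + 150  # dual 2680 v3 render node
    print(gpu_build, cpu_build, gpu_build - cpu_build)    # 7900 4160 3740

So the quoted totals add up, with the GPU build costing 3740e more before the roughly 3x higher power draw is even counted.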


2015-03-09, 23:06:02
Reply #36

Juraj

Oh, we're going so off-topic, and I have to work :-(, but I have to keep up my end :-D

If you're commenting on my 2680 v2 setup, it was a bit more, I think about 5500 euros including VAT, and almost exactly the same spec, just different brands here and there (Samsung SSD, Fractal case). Anyway, the Xeons were pricier. Where did you get a 2680 v3 for 1150 euros? Even without VAT that would be cheap; the current average price is about 1800 euros including VAT. You could have gotten a good deal (if so, congrats), but don't change your math because of it. Please be objective.

A 4x Titan X build wouldn't be 8000 euros with VAT either... The price is also expected to be 1000 euros, not 1500...
So my math comes out very similar, 5500 euros, give or take.

The watt loads would be roughly as you described, 500 vs. 1500.

Now, I am not using any actual GPU rendering engine, so I can't and don't want to go into comparisons at the moment, but don't try to persuade me that we're comparing equal performance... 4x 3072 Maxwell cores is an absolutely different level of performance from 40 Haswell cores, even if no direct comparison can be made {note}. But really, 2x 2680 v3 =/= 4x Titan X; that's literally just... wrong. This is getting very abstract, though, and the discussion is losing its point.

{ http://www.cs.virginia.edu/kim/docs/ispass11.pdf "Where is the Data? Why You Cannot Debate CPU vs. GPU Performance Without the Answer" }

2015-03-10, 00:12:21
Reply #37

Coronaut

Yes, I agree, there is no real cross-reference here, apart from the fact that we keep seeing more and more render engines that are quite capable of providing the same results on CPU and GPU. It is more of a bike-vs-car thing: someone can go around the world on a bike while someone else can't drive to the store; it all depends on many factors.
I can't discuss it publicly, but the retail price for 2680 v3 processors is insane... The cheapest I found was 1400e, so this was a very good deal (no, it didn't fall off a truck :)).
I can't find a Titan Z (I was discussing those, since they are the ones used for the Octane test) for less than 1.5k. I know the Titan X will be a killer and cheaper, but it isn't out yet; right now the price is a big mystery, with some saying 1k and others 1.3k.
It is a matter of time (years) before the GPU (or some other, more compact architecture) takes over. Just 2-3 years ago you could fry an egg on a mid-range GPU; now you have beasts that don't even get warm (exaggerating :)).
I have used and tried a few GPU render engines. They are not nearly the "flamboyant" types you are willing to jump into bed with when you first lay eyes on them, but then you get them home... :D More the plastic-fantastic types that keep their teeth on the night table next to a garter belt from Topshop... They know how to talk dirty, and sometimes that is exactly what you want, but they are dyslexic, so do not expect them to read you a bedtime story...
I tried 2x GTX 780 and it doesn't even come close to, let's say, 2x 4930K, and I'm talking about several tests across more than one GPU/CPU render engine. I don't have Titans, but the 2680 v3 is a beast, and V-Ray and Corona go well with it.
12GB is a joke; it cramps your style. A few months into a project the attitude becomes "I don't care anymore" (yes, as when you are listening to the Ramones): no one is willing to be extra careful with texture sizes, meshes etc. just so everything fits into 12GB of RAM... I am sure 99% of people would rather save the few days they would spend cleaning scene meshes and assets, and spend those extra days on longer rendering (but it isn't even longer).
I have already seen those papers; they are quite old... Anyhow, lots of new stuff is coming soon. Nvidia has already failed to fulfill its promises over the last couple of years, things are rolling slowly, and I am sure the CPU giant Intel will not let a P4-style fiasco happen again, just as AMD claims to have an ace up its sleeve for 2016.
Things haven't been this openly secretive for quite some time, and who knows what might happen; but I am sure of one thing: the GPU isn't the only one that wants a piece of the cake right now.
I think it is more the standardization of the industry that will win in the end; that is why you can see UE starting to take off, as it is based on something old and more or less standardized. Usually, to change things you need a more revolutionary approach, and for now I don't see anything except Corona doing this (no, this is not fanboyism). I hate the fact that I have to learn a new render engine again, I hate the fact that I had to spend 300e on another piece of software, I hate lots of things, but I have to take a bow to Ondra (and puke on his new yellow patent-leather shoes while taking that bow, and thank him at the end), because standardization based on monopoly is the worst kind...
All this seems like something I've seen before. Oh yes, I have...
Sorry Ondra, I puked on your new shoes; I will buy you new ones next year :D

2015-03-10, 07:11:21
Reply #38

philippelamoureux

I am currently no fan of GPU renderers. Octane is moving at the speed of... no speed; all attention goes to the cloud, Brigade, or other funny gimmicks they are doing.

This is something that saddens me. I think Octane is a formidable renderer, but the absolute lack of documentation/support/communication/tutorials makes it less appealing. The community may be small, but the team is not putting much emphasis on communication to help it. Same thing for their cloud services: we don't know many details, don't know when it's coming, etc. They don't talk, for god's sake lol. The number of times I've asked questions on their forums only to get no answer...

Another thing is that Maxwell (the GPU architecture) is not performing as well as Kepler with Octane yet (though that might be fixed by now; I haven't checked recent patches/benchmarks). It kind of sucks that every time a new GPU architecture comes out, the renderer may not be updated quickly, or even at all...

2015-03-10, 10:59:09
Reply #39

juang3d

Wasn't Otoy under an agreement with Autodesk? There you have why they don't speak.

Rest assured that under that agreement, all of Otoy's business will be revenue-oriented without a thought for the customer :P

In fact, it's possible that in the near future you won't be able to buy an Octane license at all but will be forced onto a SaaS model, paying 100€/month per license (Autodesk thinks 100€/month per node is nothing for anyone; who doesn't have a few thousand in their pocket to rent software monthly? Come oooon!)

Cheers.

2015-03-10, 11:21:29
Reply #40

RobSteady

Hey there, CPU competitors ;)

Some notes:
  • Power draw is 860W max for 2x Titan + 1x Titan Z + system (for big scenes with proxies etc.).
    I've heard that peaks can be much higher, but I've never seen that so far (watching the power meter while rendering).
  • I got my Titans for 900€ each and the Z for 1400€.
  • CPU and GPU RAM consumption behave differently: if you organize your scene a little, you can get away with 6GB very easily.
    There is also the option to use system memory, with a little speed loss. I haven't tested that so far.
  • Support is... OK. Requests get implemented... and sometimes not. I think that's a pity, because the Max plugin could be much better (material converter/editor, light lister etc.).
    We have a long request list running on the forum with very little feedback.

Overall I think it's a great engine and a big relief (coming from V-Ray), but useful requests get ignored for whatever reason.
I do like Corona as well, but have had no time to really test it.



2015-03-10, 11:33:54
Reply #41

RobSteady

For the benchmark scene: are there any light planes/portals in front of the windows? The Octane scene is lit by daylight only.
I will try to optimize the scene a little and also render with "Direct Light", which is not unbiased.

2015-03-10, 13:00:40
Reply #42

cosbu

Hi all. I am a user of both Octane and Corona, but not a high-end visualizer like most guys on these forums. My opinion is that both engines are great and very fast. But you can't do a direct comparison; it's just a matter of user choice, depending on other parameters, like the modeling application. If someone has the money for, let's say, 4x Titan cards or more, I would bet on Octane, because the setup may be easier than a local render farm and you can get almost instant results. Octane's real-time view looks a bit smoother, but Corona is new at this. On the other hand, Corona is very fast without the need to spend a fortune on GPUs + PSUs, and there is the option of cloud render farms for high-end results.

Octane on my single 580 with 1.5GB of RAM was far behind Corona on my i7, but then Otoy did two things: one was "coherent ratio", a mode that almost doubled the render speed, and the other was "out-of-core rendering" for textures only (they are loaded into system RAM if needed). So now they are almost equal, because I can do a clean 2k image in about 2 hours, I think, in both engines, and that gives me flexibility.
One funny thing is how much less RAM a scene consumes once you take it out of 3ds Max. I built a scene first in SketchUp and then in Octane standalone: it was almost 700MB in my GPU's RAM. 3ds Max with Corona took 7GB!

2015-03-10, 13:24:35
Reply #43

Ondra (Administrator)
Sorry Ondra, I puked on your new shoes; I will buy you new ones next year :D

Do not worry, I stopped reading this flame war / fanboy coming-out a long time ago ;)

2015-03-10, 13:49:38
Reply #44

borisquezadaa

Another priceless thread. I expect more to come now that Corona is going commercial. It is always refreshing to read these comments.

2015-03-10, 13:51:29
Reply #45

Coronaut

No need to get mad if I out myself :D Just trolling you a bit; you know I love you, so stop bitching and tripping.
I wish I could have seen your original post; damn, I couldn't catch it. I hope it wasn't just a typo... It must be some serious shit either way.

2015-03-10, 14:07:55
Reply #46

Juraj

I wish I could have seen your original post; damn, I couldn't catch it. I hope it wasn't just a typo... It must be some serious shit either way.

I'd rather not :-D

2015-03-10, 14:44:30
Reply #47

RobSteady

Here's another one, with the scene a little more optimized.
Forget the old ones ;)
39 seconds.

2015-03-10, 15:13:46
Reply #48

Juraj

350MB. Does anyone know how much this scene takes in Corona?

2015-03-10, 18:07:54
Reply #49

borisquezadaa

  • Active Users
  • **
  • Posts: 614
    • View Profile
You mean this?
It goes from 5.16 to 4.4 GB when closing the benchmark here.


2015-03-10, 18:12:14
Reply #50

juang3d

So there's plenty of room for improvement, unless memory is managed differently on the GPU (I don't have a clue how this works at all) :)

Cheers.

2015-03-10, 18:33:00
Reply #51

RobSteady

It's managed differently.
Somehow ;)

2015-03-10, 20:28:47
Reply #52

juang3d

I mean at the programming level: in the end, 1 billion triangles is 1 billion triangles. Why it occupies less memory on the GPU than in system RAM is what I'm asking.

Cheers!

2015-03-10, 21:21:54
Reply #53

Ondra (Administrator)
Corona stores extra data, such as multiple mapping channels, anisotropy support, visible/invisible edges for the edge shader, ForestPack color support, etc... I don't know how many of these features Octane supports.

Also, Corona does not compress textures; they should take 512MB in the benchmark according to my calculation. Octane probably uses some kind of D3D/OGL lossy compression, which comes practically free on GPUs and is already implemented in the system.
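
For context on where a number like that 512MB comes from: an uncompressed texture costs width x height x channels x bytes per channel, and the totals add up quickly. A quick sketch; the texture list below is invented for illustration and is not the actual benchmark content:

    # Uncompressed texture footprint: width * height * channels * bytes_per_channel
    def tex_mb(w, h, channels=4, bytes_per_ch=1):  # RGBA8
        return w * h * channels * bytes_per_ch / (1024 * 1024)

    # Hypothetical texture set; GPU block compression (DXT/BC formats)
    # would shrink each entry by roughly 4-8x at some quality loss.
    textures = [(4096, 4096)] * 6 + [(2048, 2048)] * 8
    print(sum(tex_mb(w, h) for w, h in textures), "MB uncompressed")  # 512.0 MB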

2015-03-10, 22:14:21
Reply #54

borisquezadaa

  • Active Users
  • **
  • Posts: 614
    • View Profile
Agreed. It lacks fine detail on the faraway materials, like the curtain opacity maps.

2015-03-10, 22:32:17
Reply #55

RobSteady

Agreed. It lacks fine detail on the faraway materials, like the curtain opacity maps.

It's only a simple diffuse material with transparency for now. I will update the scene tomorrow so it matches the original better. There is no material converter for Corona scenes (for V-Ray scenes there is one).
I guessed the materials based on the benchmark screenshots.

2015-03-11, 10:47:44
Reply #56

RobSteady

Here is the updated scene.
50 seconds.

Btw, the Titan X is coming; it could be as fast as a Titan Z.
All specs TBC:
http://videocardz.com/55013/nvidia-geforce-gtx-titan-x-3dmark-performance

2015-03-11, 11:05:27
Reply #57

juang3d

Interesting, and a pretty awesome time. I'd find it hard to achieve this in 50 seconds, but I may be wrong :)

I'll try some tests with the 5820Ks when I have time (using DR).

BTW, can you try the same at 1280x720 and 1920x1080, please?

Cheers!

2015-03-11, 11:14:49
Reply #58

RobSteady

BTW, can you try the same at 1280x720 and 1920x1080, please?
Sure, I will post them later.
Maybe also a 4k version.

2015-03-20, 23:09:42
Reply #59

sbrusse

Hey mate,
Any chance you could post that scene for Octane?
I'd like to try my GTXs here :-)

If you're interested, I've done quite a few tests with V-Ray RT GPU as a production renderer here:
https://www.youtube.com/user/sbrusse/videos

I'm just wondering how Octane compares to V-Ray and Corona, obviously.

Cheers

Stan

2015-03-20, 23:34:58
Reply #60

cecofuli

You can download the .max file and convert it for Octane. =)

2015-03-22, 10:49:56
Reply #61

gabrielefx

The Octane test has GI clamped to 1, which means almost zero caustics.
You can't compare the 6 fastest GPUs with a good dual-Xeon rig: less noise, less heat, less power, less space.
The good side of GPU rendering is that we can swap out our GPUs. After 4 years of 24/7 use of 4x GTX 580s, I can replace all of them with 4 Titan Xs and get a 2.2x speedup.
It's not true that the Titan X is as fast as the Titan Z. The Titan X is slightly faster than the Titan Black, not 2x.
The bright side of GPU computing is the feedback. With Corona 1.0 we have good real-time feedback, but it isn't comparable to Octane's RT speed with 4 Titans.
I don't know whether it is possible to port Embree to GPUs; I know that Thea did something similar.
The light distribution in Corona is better than Octane's, but PMC (a sort of Metropolis light transport) produces incredible renders, and we can always use PMC in real time.
Never say never.

2015-03-22, 11:16:31
Reply #62

Ludvik Koutny (VIP)
Buy CPUs equal in price to 4 Titan GPUs and then do an interactivity speed comparison. You will get a dual-Xeon machine for that price. Not to mention it will eat a LOT less power, which pays off on energy bills.

We have a 48-thread dual-Xeon machine at work, and feedback is almost real-time in exterior scenes, and still reasonably fast in interior ones.

2015-03-22, 14:26:41
Reply #63

Ondra (Administrator)
Also, Embree will never be ported to GPU, as it is highly specialized CPU code, plus... you know... it is developed by Intel...