Author Topic: No corona vs redshift comparison so far?  (Read 46797 times)

2014-06-15, 11:31:05
Reply #30

Kramon

  • Active Users
  • **
  • Posts: 16
    • View Profile
The new mental ray is really back in the game; they have fixed so many things and made so many improvements.

2014-06-15, 15:11:25
Reply #31

Animator89

  • Active Users
  • **
  • Posts: 29
    • View Profile
I'm a Redshift customer.
I use it in Maya and it is very fast.
For example, some of my work:
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Spalnya_R1_1200_900.jpg
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Image4_1200_900.jpg
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Image1_1200_900.jpg
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Image3_1200_900.jpg
Each image takes 10 to 15 minutes to render on 2x GTX 780.
All images were rendered with brute force + point cloud GI (similar to Corona's path tracing + HD cache).
For me, Redshift is much faster than Corona, but I like Corona's materials and lighting. So for me, Corona is the more realistic solution because of its materials.
Thanks!
-Pavel
P.S. Sorry for my bad English ;)

2014-06-15, 16:58:59
Reply #32

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
I'm a Redshift customer.
I use it in Maya and it is very fast.
For example, some of my work:
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Spalnya_R1_1200_900.jpg
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Image4_1200_900.jpg
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Image1_1200_900.jpg
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Image3_1200_900.jpg
Each image takes 10 to 15 minutes to render on 2x GTX 780.
All images were rendered with brute force + point cloud GI (similar to Corona's path tracing + HD cache).
For me, Redshift is much faster than Corona, but I like Corona's materials and lighting. So for me, Corona is the more realistic solution because of its materials.
Thanks!
-Pavel
P.S. Sorry for my bad English ;)

You were comparing those dual GTX 780s against what? If you say Redshift is much faster, then you should be comparing it against a dual-CPU setup of comparable cost.

One GTX 780 is approximately 610 USD, so two cost about 1220 USD. You should therefore compare it to a dual Xeon E5-2430 setup, which is the closest to the GTX 780 in price. A Xeon E5-2430 is about 650 USD, so two cost about 1300 USD. Slightly more expensive, but you also need to factor in that those Xeon processors consume vastly less electricity than GTX 780s do.

So unless you have a sound basis for comparison, it's the usual apples vs. oranges.
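A quick back-of-the-envelope check of that price-parity argument (the prices are the approximate 2014 figures quoted above; the script is purely illustrative and ignores motherboards, PSUs and electricity):

```python
# Hardware cost comparison using the street prices quoted in this thread.
gpu_price = 610    # USD per GTX 780
cpu_price = 650    # USD per Xeon E5-2430

dual_gpu = 2 * gpu_price   # 1220 USD
dual_cpu = 2 * cpu_price   # 1300 USD

print(f"2x GTX 780:      {dual_gpu} USD")
print(f"2x Xeon E5-2430: {dual_cpu} USD")
print(f"CPU premium:     {dual_cpu - dual_gpu} USD ({100 * (dual_cpu / dual_gpu - 1):.0f}%)")
```

On raw hardware cost alone the two setups are within roughly 7% of each other; the real differences come from the surrounding platform and running costs discussed later in the thread.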

2014-06-15, 17:00:55
Reply #33

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
The new mental ray is really back in the game; they have fixed so many things and made so many improvements.

bullshit, bullshit, bullshit, bullshit, bullshit, bullshit, bullshit, bullshit, bullshit, bullshit and bullshit.

Sorry, but just total BS.

I have been a long-standing mental ray user, checking back every year... NOTHING has improved, and it gets a bit slower with every release; it's about 50% slower overall since version 3.6. Final gather has also become a bit slower and blotchier. They have broken many things and fixed nothing.

2014-06-15, 21:16:15
Reply #34

boumay

  • Active Users
  • **
  • Posts: 96
    • View Profile
I don't know if it's just me, but the more I look at Redshift renders, the more I find that the quality isn't so good, as if the sample counts were somehow low; the contact shadows/AO and the highlights aren't very detailed. It just feels a little cheap at the end of the day, as if they had reached this speed at the cost of quality.
Also, it would be interesting to know which processor is roughly equivalent to the GTX 780 (approximately, of course), so we could make some kind of comparison.

And a last but important point: does having a second computer give you twice the speed in Corona? I've heard that GPU cards scale very well, i.e. when you add a second GPU you simply double the speed, but this may not hold for CPUs. Am I right?

« Last Edit: 2014-06-15, 22:26:44 by boumay »

2014-06-16, 00:49:35
Reply #35

cecofuli

  • Active Users
  • **
  • Posts: 1577
    • View Profile
    • www.francescolegrenzi.com
Don't forget that you can use a good dual Xeon workstation for many things: After Effects rendering, 3ds Max tasks, Marvelous Designer simulations, RealFlow simulations, etc.

2014-06-16, 01:50:33
Reply #36

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
One GTX 780 is approximately 610 USD, so two cost about 1220 USD. You should therefore compare it to a dual Xeon E5-2430 setup, which is the closest to the GTX 780 in price. A Xeon E5-2430 is about 650 USD, so two cost about 1300 USD.

Partly true, but not entirely: a dual-CPU Xeon build costs considerably more because of the special motherboard and RAM it needs; those parts are much more expensive.

On the other hand, I would also like to know what CPU was behind those GTX 780s, to know what performance you are experiencing and what you are comparing it against.

Cheers!


2014-06-16, 09:18:24
Reply #37

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
One GTX 780 is approximately 610 USD, so two cost about 1220 USD. You should therefore compare it to a dual Xeon E5-2430 setup, which is the closest to the GTX 780 in price. A Xeon E5-2430 is about 650 USD, so two cost about 1300 USD.

Partly true, but not entirely: a dual-CPU Xeon build costs considerably more because of the special motherboard and RAM it needs; those parts are much more expensive.

On the other hand, I would also like to know what CPU was behind those GTX 780s, to know what performance you are experiencing and what you are comparing it against.

Cheers!

Motherboards are a bit more expensive, but I don't know about any special RAM requirements; ECC RAM is optional, not mandatory, AFAIK.

And for dual GPUs you need a stronger, more expensive PSU in exchange, which again goes hand in hand with the huge power draw when both GPUs run at their max.
« Last Edit: 2014-06-16, 15:50:51 by Rawalanche »

2014-06-16, 15:09:43
Reply #38

Juraj

  • Active Users
  • **
  • Posts: 4762
    • View Profile
    • studio website
I'd have to see how Redshift's memory cycling works, but I'm pretty sure it still comes at a performance cost, and you can't just go for the lowest amount of memory possible without setbacks (the 780 Ti ships with 3 GB by default; that's no miracle).

Xeons have artificial margins... but they need to be compared against GPUs that are priced likewise. A Tesla K40/Quadro K6000, while practically identical in performance to a 780, costs 4000 euros but brings 12 GB of VRAM to the table.
A "production-ready" high-end station built purely on CPUs or purely on GPUs ends up equally expensive, and GPU prices actually scale even more steeply. GPUs are therefore NOT any cheaper.
In fact, you can get an octa-CPU Ivy Bridge Xeon machine amounting to 256 cores for about 40,000 euros, while nVidia's octa-Kepler boxes go for about 50,000 euros each. I would say the performance would be very similar in practical terms.

The power requirements Rawalanche mentioned also apply: an average Ivy Bridge (non-workstation) Xeon draws about 120 W, a Kepler nVidia card about 240 W. So on average, double the heat, noise and electricity, although that might not matter much given what both setups cost in total :- )
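As a rough illustration of the power argument (the 120 W / 240 W figures are the ones quoted in this post; the duty cycle and electricity price are placeholder assumptions, not numbers from the thread):

```python
# Ballpark yearly electricity cost for a dual-CPU vs dual-GPU render node.
cpu_tdp_w = 120          # average Ivy Bridge (non-WS) Xeon, per CPU (from the post)
gpu_tdp_w = 240          # Kepler-class nVidia card, per GPU (from the post)
units = 2                # dual CPU vs dual GPU
hours_per_day = 8        # assumed rendering load (placeholder)
days_per_year = 250      # assumed working days (placeholder)
eur_per_kwh = 0.20       # assumed electricity price (placeholder)

def yearly_cost(tdp_w: float) -> float:
    kwh = units * tdp_w / 1000 * hours_per_day * days_per_year
    return kwh * eur_per_kwh

print(f"2x CPU: ~{yearly_cost(cpu_tdp_w):.0f} EUR/year")   # ~96 EUR with these assumptions
print(f"2x GPU: ~{yearly_cost(gpu_tdp_w):.0f} EUR/year")   # ~192 EUR with these assumptions
```

Whatever the exact assumptions, the GPU box draws roughly twice the power, and heat and noise scale the same way.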

2014-06-16, 15:52:43
Reply #39

boumay

  • Active Users
  • **
  • Posts: 96
    • View Profile
I'd have to see how Redshift's memory cycling works, but I'm pretty sure it still comes at a performance cost, and you can't just go for the lowest amount of memory possible without setbacks (the 780 Ti ships with 3 GB by default; that's no miracle).

Xeons have artificial margins... but they need to be compared against GPUs that are priced likewise. A Tesla K40/Quadro K6000, while practically identical in performance to a 780, costs 4000 euros but brings 12 GB of VRAM to the table.
A "production-ready" high-end station built purely on CPUs or purely on GPUs ends up equally expensive, and GPU prices actually scale even more steeply. GPUs are therefore NOT any cheaper.
In fact, you can get an octa-CPU Ivy Bridge Xeon machine amounting to 256 cores for about 40,000 euros, while nVidia's octa-Kepler boxes go for about 50,000 euros each. I would say the performance would be very similar in practical terms.

The power requirements Rawalanche mentioned also apply: an average Ivy Bridge (non-workstation) Xeon draws about 120 W, a Kepler nVidia card about 240 W. So on average, double the heat, noise and electricity, although that might not matter much given what both setups cost in total :- )

Thank you, that's informative.

2014-06-16, 16:13:06
Reply #40

Animator89

  • Active Users
  • **
  • Posts: 29
    • View Profile
Redshift has an out-of-core architecture, so it fits a large amount of polygons/textures into 3 GB of RAM. I used 16,000,000 tris without any problems.
I compared Corona on an Intel Core i7-4960X Extreme Edition overclocked to 4.5 GHz against 2x GTX 780 3 GB.
I can never get a noise-free 2K image out of Corona if my render time is under 1:40 or 2 hours, BUT the result is more realistic (for me).
But when I need speed, or when I render video, Redshift wins.
So I prefer Corona for stills and shading quality, and Redshift for video or situations where I don't need a very realistic result.
I think we (professionals) don't need to compare CPU vs GPU; we need to compare them against the tasks we want to do.
So:
Redshift has very fast GI, hair rendering, volumetric rendering, camera DOF (with Redshift I've forgotten about doing shitty DOF in post), motion blur, and overall the workflow is amazing when you talk about speed, but the shading realism is poor for architectural rendering (for me).
So I use Corona for stills and architecture, and I will buy Corona for those tasks ;)
I would like to see Corona on something like Xeon Phi or Caustic cards (with OpenRL) ;)
For me the GPU is the easy-to-upgrade solution. The CPU, for me, is the solution for other tasks (water and cloth simulation, hair sim, fast animation playback, etc.).
So I love Corona and I love Redshift :)
Thanks!
-Pavel
P.S. Sorry for my English again  ;)

2014-06-16, 16:42:25
Reply #41

Captain Obvious

  • Active Users
  • **
  • Posts: 167
    • View Profile
I'd have to see how Redshift's memory cycling works, but I'm pretty sure it still comes at a performance cost, and you can't just go for the lowest amount of memory possible without setbacks (the 780 Ti ships with 3 GB by default; that's no miracle).

Xeons have artificial margins... but they need to be compared against GPUs that are priced likewise. A Tesla K40/Quadro K6000, while practically identical in performance to a 780, costs 4000 euros but brings 12 GB of VRAM to the table.
A "production-ready" high-end station built purely on CPUs or purely on GPUs ends up equally expensive, and GPU prices actually scale even more steeply. GPUs are therefore NOT any cheaper.
In fact, you can get an octa-CPU Ivy Bridge Xeon machine amounting to 256 cores for about 40,000 euros, while nVidia's octa-Kepler boxes go for about 50,000 euros each. I would say the performance would be very similar in practical terms.

The power requirements Rawalanche mentioned also apply: an average Ivy Bridge (non-workstation) Xeon draws about 120 W, a Kepler nVidia card about 240 W. So on average, double the heat, noise and electricity, although that might not matter much given what both setups cost in total :- )
There is a performance hit when you go out of core (i.e. use more memory than the available VRAM). How big a hit you'll take depends on numerous factors. First of all, images aren't as problematic as geometry. In fact, Redshift defaults to a GPU texture cache of just 128 MB; it simply will not use more than that for image maps, no matter how many you have. Streaming them in from system memory is apparently really fast, so image usage is basically not a problem.

Things like irradiance or SSS point caches must fit into VRAM; if such caches grow too large, the render will simply fail.

Geometry works much like images, except that it gets whatever memory is left over, and the performance hit is much larger. It's still usable, though, up to very large data sets. I saw someone testing it against Arnold using a GeForce with two or three gigs of VRAM, and Arnold didn't outperform Redshift until several hundred million unique triangles. It is worth noting that Arnold did eventually outperform Redshift by a decent margin, so GPU rendering is still somewhat memory-limited; it just takes gigabytes upon gigabytes of data to get there.
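Redshift's actual implementation isn't public, so the following is only a toy sketch of the general out-of-core idea described above: keep a fixed-size cache of geometry "pages" on the device and stream pages in from host memory on demand, evicting the least recently used ones. All names and numbers are made up for illustration.

```python
from collections import OrderedDict

class OutOfCoreCache:
    """Toy LRU cache standing in for a fixed-size VRAM budget.

    All pages live in 'host' memory (a dict); at most `capacity`
    of them are resident on the 'device' at any one time.
    """

    def __init__(self, host_pages: dict, capacity: int):
        self.host = host_pages          # all geometry pages, in system RAM
        self.device = OrderedDict()     # resident pages, in LRU order
        self.capacity = capacity
        self.misses = 0

    def fetch(self, page_id):
        if page_id in self.device:              # cache hit: cheap access
            self.device.move_to_end(page_id)
            return self.device[page_id]
        self.misses += 1                        # miss: stream over the bus
        if len(self.device) >= self.capacity:
            self.device.popitem(last=False)     # evict least recently used page
        self.device[page_id] = self.host[page_id]
        return self.device[page_id]

# 100 pages of "geometry", but room for only 10 on the device.
host = {i: f"triangles-{i}" for i in range(100)}
cache = OutOfCoreCache(host, capacity=10)

# A sliding-window access pattern: each batch of rays touches a few nearby pages,
# so most accesses hit the cache and only the occasional new page is streamed in.
for i in range(1000):
    cache.fetch((i // 20 + i % 5) % 100)

print(f"misses: {cache.misses} / 1000 accesses")
```

The point of the sketch is the ratio printed at the end: with coherent access the streaming cost stays small, while incoherent access over a data set much larger than the cache (the heavy out-of-core geometry case above) turns most fetches into bus transfers.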

2014-06-16, 18:21:30
Reply #42

Juraj

  • Active Users
  • **
  • Posts: 4762
    • View Profile
    • studio website
Well, until nVidia comes out with a mainstream Maxwell card with 8 GB of VRAM (some 880 Ti maybe?) I remain quite sceptical. But maybe I just really need to see it for myself :- )

So their out-of-core streaming enables a pretty much unlimited number of textures? Does it also bypass the texture count limit CUDA previously had (which still seems to be the case for Octane, or not)?
Hi-res textures are pretty much the biggest memory eater in my scenes. A few 4K maps to start with and it quickly adds up.

16 million polys in 3 GB of VRAM is nice, but 16 million is still nothing. How does it handle displacement?

One guy from their team replied to my cgarchitect post about my thoughts on GPU rendering. He seems quite nice and humble, but I still don't believe his claims much :- )

2014-06-16, 19:08:24
Reply #43

Animator89

  • Active Users
  • **
  • Posts: 29
    • View Profile
Well, until nVidia comes out with a mainstream Maxwell card with 8 GB of VRAM (some 880 Ti maybe?) I remain quite sceptical. But maybe I just really need to see it for myself :- )

So their out-of-core streaming enables a pretty much unlimited number of textures? Does it also bypass the texture count limit CUDA previously had (which still seems to be the case for Octane, or not)?
Hi-res textures are pretty much the biggest memory eater in my scenes. A few 4K maps to start with and it quickly adds up.

16 million polys in 3 GB of VRAM is nice, but 16 million is still nothing. How does it handle displacement?

One guy from their team replied to my cgarchitect post about my thoughts on GPU rendering. He seems quite nice and humble, but I still don't believe his claims much :- )
There is no texture count or resolution limit in Redshift. I often use many hi-res (6K) textures with Redshift.
But the main question is still there: what do you need it for? If it's for archviz, then I don't see any problem with Corona + 3ds Max.

You can simply download the free demo and test it yourself.
I also have an Octane Render license (standalone + 3ds Max), and I want to say to everyone who wants to migrate from Corona to Octane: DON'T DO THAT :)
Corona is MUCH faster (mostly because of the HD cache), even if you compare 4x Titan against my six-core i7. Corona has no limits on memory or textures, and it has more features.
Did you know that Octane 2.0 is slower by 15-20%? :)
But yes... Octane is very realistic... not quite like Maxwell, but very close. Maybe because of the spectral rendering...
Thanks!
-Pavel
P.S. I will no longer apologize for my terrible English :)

2014-06-16, 20:15:57
Reply #44

Captain Obvious

  • Active Users
  • **
  • Posts: 167
    • View Profile
So their out-of-core streaming enables a pretty much unlimited number of textures? Does it also bypass the texture count limit CUDA previously had (which still seems to be the case for Octane, or not)?
Basically, yes. In a simple test I just did, using a 2k by 1k HDRI resulted in a whopping 556 kB of GPU memory used for textures; it's obviously only loading the parts it needs. It doesn't have a "max number of textures" limit like Octane. Presumably performance might suffer if you have thousands upon thousands of images, but there is no set limit as far as I know.
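To put that 556 kB figure in perspective, here is the arithmetic (the 2k-by-1k resolution and the 556 kB are from the test above; the 4-channel 32-bit float layout is my assumption for an uncompressed HDRI):

```python
# Full-resolution footprint of a 2048x1024 HDRI vs what was actually resident on the GPU.
width, height = 2048, 1024
channels, bytes_per_channel = 4, 4           # assumed RGBA, 32-bit float per channel
full_size = width * height * channels * bytes_per_channel   # 32 MiB uncompressed
resident = 556 * 1024                        # 556 kB reported used on the GPU

print(f"full image: {full_size / 2**20:.0f} MiB")
print(f"resident:   {resident / full_size * 100:.1f}% of the full image")
```

Under those assumptions, well under 2% of the map was actually resident, which matches the "only loads the parts it needs" behaviour.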


Quote
16 million polys in 3 GB of VRAM is nice, but 16 million is still nothing. How does it handle displacement?
In the same simple test I mentioned earlier, I rendered 38 million (unique) triangles on a card with 1.6 gigs of free memory. Out of the 1.6 gigs available to Redshift, the texture cache used up 128 MB, and various other things accounted for a bit more. In the end, there was 1.2 gigs available for geometry, and it used 1.1 gigs for the 38 million triangles. It stands to reason that if you had a 6-gigabyte card used just for rendering (to avoid Windows' overhead), you could fit about 190 million triangles before worrying about going out of core.
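The same extrapolation written out (the measured numbers are from the test above; the ~0.4 GB of non-geometry overhead assumed for a dedicated 6 GB card is a rough guess):

```python
# Bytes per triangle measured in the test above, extrapolated to a 6 GB card.
tris_measured = 38e6
geometry_bytes = 1.1e9                           # memory the 38M triangles used
bytes_per_tri = geometry_bytes / tris_measured   # ~29 bytes per unique triangle

card_vram = 6e9                                  # dedicated card, no display attached
overhead = 0.4e9                                 # texture cache + misc, rough guess
fit = (card_vram - overhead) / bytes_per_tri

print(f"{bytes_per_tri:.0f} bytes per triangle")
print(f"~{fit / 1e6:.0f} million triangles before going out of core")
```

That lands at roughly 190 million triangles, consistent with the estimate above.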


Quote
How does it handle displacement?
Displacement is generated on the CPU and the resulting triangles are streamed to the GPU as needed, same as with regular geometry. It doesn't do texture-space displacement rendering, as far as I know (like V-Ray's 2D displacement effect).




Octane isn't great. I'd rather use Corona. It's faster, more reliable, easier to use, and produces better results.