Author Topic: No corona vs redshift comparison so far?  (Read 46609 times)

2014-06-08, 21:14:00

boumay

  • Active Users
  • **
  • Posts: 96
    • View Profile
I'm surprised that we don't see active "Corona vs Redshift" tests, as Redshift is really fast, so the results would be very interesting. I know we can't compare exactly, because Redshift uses GPU + CPU while Corona is CPU-only, but approximately.
I've only seen one post with a simple test.
Are you afraid of seeing Corona beaten by Redshift? lol, just joking.

2014-06-08, 21:40:49
Reply #1

racoonart

  • Active Users
  • **
  • Posts: 1446
    • View Profile
    • racoon-artworks
I don't think there will be a problem if you do some comparisons between Corona and Redshift. I guess the reason for the lack of threads comparing Corona with other renderers is simple: why would you do it?
What do you want to compare? I mean, Redshift is a biased GPU renderer and Corona is an (un)biased CPU renderer; both have their pros and cons.
I think I've tested pretty much every renderer available for 3dsmax (not Redshift, of course, since it's not available for Max), and there's not much sense in comparing images. There are so many variables that determine whether a tool is useful or "fun to work with" for me, and the technical side is even more complicated. If you don't know exactly what you're doing, comparing two renderings is completely useless.

So, feel free to make some comparisons but make sure you don't do a "Giant Shark vs. cherry-banana-shake" one ;)
Any sufficiently advanced bug is indistinguishable from a feature.

2014-06-08, 21:57:52
Reply #2

boumay

  • Active Users
  • **
  • Posts: 96
    • View Profile
There is a huge benefit in this case, actually: it could determine whether you invest in graphics cards or in new PC nodes to build your render farm. That's exactly what I'm trying to decide these days. Redshift is interesting, and the setup I build depends directly on which renderer I pick, which itself depends on this kind of comparison.

2014-06-09, 02:49:01
Reply #3

Kramon

  • Active Users
  • **
  • Posts: 16
    • View Profile
I think Redshift will be the fastest, but it is biased, so... it doesn't count :)

2014-06-09, 09:12:04
Reply #4

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
I think Redshift will be the fastest, but it is biased, so... it doesn't count :)
Why shouldn't it count?
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2014-06-09, 09:15:10
Reply #5

boumay

  • Active Users
  • **
  • Posts: 96
    • View Profile
Well, thank you for this info. I didn't know Redshift was biased; I always thought it was unbiased. That's a good thing to know.

2014-06-09, 11:00:06
Reply #6

Captain Obvious

  • Active Users
  • **
  • Posts: 167
    • View Profile
Well, thank you for this info. I didn't know Redshift was biased; I always thought it was unbiased. That's a good thing to know.
It's kind of all over their website. ;-)


I've been testing Redshift a bit, but the lack of a 3ds Max version makes it kind of tricky to compare with Corona. It's a hassle. However, here are my impressions of how Redshift compares to Corona:

Redshift is much more focused on film/TV production features. It does support progressive rendering, but that is mostly designed as a fast interactive preview, not for final rendering. Final rendering in Redshift is buckets with adaptive anti-aliasing, which is very different from Corona's approach. While Corona's focus has been on realistic light transport and shaders, the Redshift team concentrates instead on things like SSS, proxies and memory cycling ("out-of-core").

I'd say Corona is better for that last bit of realism, but Redshift has more "production features," is (typically) faster, and deals better with high scene complexity. They have different goals, so a comparison wouldn't be entirely apt. I'd say Corona's GPU competition is Octane rather than Redshift.

2014-06-09, 12:16:46
Reply #7

boumay

  • Active Users
  • **
  • Posts: 96
    • View Profile
Well, thank you for this info. I didn't know Redshift was biased; I always thought it was unbiased. That's a good thing to know.
It's kind of all over their website. ;-)

lol, I must admit I didn't search. But that wasn't my motivation for this topic anyway : )

2014-06-09, 14:03:34
Reply #8

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
I haven't given it a test (and won't), but from my observation of their website, I would just repeat what Captain said.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2014-06-09, 21:06:15
Reply #9

Kramon

  • Active Users
  • **
  • Posts: 16
    • View Profile
I think Redshift will be the fastest, but it is biased, so... it doesn't count :)
Why shouldn't it count?

Generally I would not compare biased with unbiased; it is kinda cheating...

2014-06-09, 21:36:31
Reply #10

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
I think Redshift will be the fastest, but it is biased, so... it doesn't count :)
Why shouldn't it count?

Generally I would not compare biased with unbiased; it is kinda cheating...

Why, when the results look the same?

BTW: there are probably no truly unbiased engines on the market right now, so it does not matter ;)
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2014-06-10, 11:26:38
Reply #11

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
The focus of a comparison is not to find out which engine is more scientifically accurate; it's to find out which render engine will fit our pipeline better, and which is the fastest one for the best results. I don't care if it's biased or unbiased, as long as it delivers what I want it to deliver.

The advantage of unbiased engines is the lack of configuration needed; they are pretty simple, unlike the biased ones. But I always need the best possible result, so if I have to use a biased one, I will.

From my POV, Corona is one of the best render engines I've seen in my life, because it tries to stay as unbiased as possible while being as fast as possible, and if it has to bias the scene a bit to be faster, it gives the user the option to do so.

The biased vs. unbiased fight is nonsense; the render fight should always be about speed at the best possible quality the render engine can deliver, and that is where the Redshift vs. Corona comparison should focus, IMHO.

I just wanted to throw in my 2 cents :)

Cheers.

2014-06-10, 12:25:29
Reply #12

boumay

  • Active Users
  • **
  • Posts: 96
    • View Profile
The focus of a comparison is not to find out which engine is more scientifically accurate; it's to find out which render engine will fit our pipeline better, and which is the fastest one for the best results. I don't care if it's biased or unbiased, as long as it delivers what I want it to deliver.

The advantage of unbiased engines is the lack of configuration needed; they are pretty simple, unlike the biased ones. But I always need the best possible result, so if I have to use a biased one, I will.

From my POV, Corona is one of the best render engines I've seen in my life, because it tries to stay as unbiased as possible while being as fast as possible, and if it has to bias the scene a bit to be faster, it gives the user the option to do so.

The biased vs. unbiased fight is nonsense; the render fight should always be about speed at the best possible quality the render engine can deliver, and that is where the Redshift vs. Corona comparison should focus, IMHO.

This is exactly what I meant by this thread.

2014-06-12, 14:51:24
Reply #13

lasse1309

  • Active Users
  • **
  • Posts: 70
    • View Profile
Comparing things is really hard in general. If something is faster and more accurate... well, how would you measure that? You can multiply your render time in V-Ray, for example, just by changing one digit after the decimal point in the DMC sampler; you can also let your Corona render "cook" a few hours longer.
It is really hard to define the point at which it is "good" - thus it is hard to compare.

Regarding fitting into a pipeline: there are a lot more variables in a production pipeline than the render engine. The first and most important would be the desired result :D

all the best
L

2014-06-12, 16:00:58
Reply #14

boumay

  • Active Users
  • **
  • Posts: 96
    • View Profile
Comparing things is really hard in general. If something is faster and more accurate... well, how would you measure that? You can multiply your render time in V-Ray, for example, just by changing one digit after the decimal point in the DMC sampler; you can also let your Corona render "cook" a few hours longer.
It is really hard to define the point at which it is "good" - thus it is hard to compare.

Regarding fitting into a pipeline: there are a lot more variables in a production pipeline than the render engine. The first and most important would be the desired result :D


Thank you for the input.
But I meant some kind of approximate comparison. We know, for example, that Redshift is faster than V-Ray, even if the test isn't really accurate.

2014-06-12, 17:25:59
Reply #15

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
@lasse1309 - xoio

It's not that hard... if you have a scene with some materials and one render result in a specific time, you can convert the scene to achieve the exact or a similar visual look, with renderer-specific materials and the same (or as similar as possible) lighting, and check the time.
The catch is that you have to dedicate time to configuring the scene as well as you can so it fits every renderer you want to compare, and that's time-consuming :)

The thing is, the client is going to see the final result; that is what matters. Everything in the middle... well, the client doesn't care about it.

It's the same with animation, or with every type of project: the final output and the final render time are what matter.

So I can compare oranges vs. oranges, because even if they were apples and bananas in their seed form, they are both oranges in their final form, and the client wants oranges. So you can definitely compare render engines, results and times, no matter what technology the renderer uses.

Cheers.

2014-06-13, 00:16:10
Reply #16

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
@lasse1309 - xoio

It's not that hard...

Well, just by using an "I look and see" approach, you can't really evaluate whether you've reached the same GI and AA quality, for example.

Lasse's example stands quite right. By improving your AA by a small amount in V-Ray, for example, you can multiply your render time dramatically. And these nuances can be hard to judge, let alone compare, by eye.
Therefore any such comparison is doomed to be very far off, almost pointless. How can you even prove you reached a similar result without using some overlay software technique? And that accounts purely for GI and AA;
there are also hundreds of little things in how each engine deals with caustics, dispersion, motion blur... Even creating a scene that tests all aspects thoroughly and fairly can be dauntingly challenging.
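
(To make the render-time explosion concrete: Monte Carlo noise falls only with the square root of the sample count, so halving the noise costs roughly four times the samples, and thus roughly four times the render time. A toy Python sketch of that law, not any renderer's actual sampler:)

```python
import random, statistics

# Noise of a Monte Carlo estimate falls as 1/sqrt(samples): quadrupling
# the samples (~4x render time) only halves the noise. Toy estimator.
def noise(samples, trials=200):
    estimates = [statistics.mean(random.random() for _ in range(samples))
                 for _ in range(trials)]
    return statistics.stdev(estimates)

for n in (64, 256, 1024):         # 4x more samples each step...
    print(n, round(noise(n), 4))  # ...noise roughly halves each step
```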

With underlying algorithms being very different, comparing renderers will almost always be "apples and oranges".

With that said, I LOVE comparisons, but I dislike easy conclusions like "It is faster!", which usually aren't very objective and are biased in proportion to the tester's expertise with each engine.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2014-06-13, 01:42:04
Reply #17

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
I never said you can't overlay results.

What I said is that you must have a starting baseline, a V-Ray render for example, and then work on the scene to match the visual quality and configuration of that scene in Corona, for example, as closely as possible; then you can keep improving it until you reach the desired quality, including AA or whatever else you need.

You may not get the "exact" AA, but what you want is a result; and if you increase the render time by increasing AA settings, then increase the AA settings until you reach your desired production AA and quality, use that picture as the baseline, and then match it in Corona.

I still stand by it: it's not that hard, it's just time, and any comparison is fair no matter what technology lies behind the render engine. We just need results for our clients. What renderer can deliver results faster and with better quality? That's the question a comparison test should answer.

Now, regarding overlaying results: if what you mean is checking whether you can mix render engines to render different layers of the same scene... that is another story, and there you may need an exact comparison of the rendered pictures. Otherwise... it's time.

What is your client going to see? That is where the difference lies... The internal differences don't matter; I don't care if the AA is slightly different as long as I reach my desired quality, and the client's eye is going to judge. So the comparison should be made by eye IMHO, not mathematically; the only thing that matters is knowing what helps you reach your desired result in less time and with less work.

Of course you can do a more techie test, but that is not for decision making, more for curiosity.

Cheers.
« Last Edit: 2014-06-13, 01:45:51 by juang3d »

2014-06-13, 02:45:32
Reply #18

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
the render fight should always be about speed at the best possible quality the render engine can deliver

I argued why you can't objectively measure speed/quality. You can only subjectively decide when some arbitrary quality level you settled on matches a former result (which you eyeball against the previous renderer's output),
at a speed you attained based on your skill with that particular renderer. It then yields absolutely no value to anyone other than yourself; it varies wildly in proportion to your skill with each renderer.

Regarding overlays: no, that's not what I meant. I simply put that forward as an imaginary example of an analytical tool that could measure the difference/likeness of results in technical terms.

Regarding the end result for the client, that's an entirely different matter. Almost all renderers can give you an identical result if you try.

I think what you want to compare is how much effort, in terms of human resources and time spent, is required with each renderer to get a likeable/desired result. But that can't be used as an objective 'comparison' between renderers ("abc" vs "xyz"); it works
only as a personal view on a particular renderer (i.e. "I like Corona because... abc"). If it's falsely used as the former, it always (in 100 percent of cases) leads to wildly biased fanboy fights on behalf of amateur skills.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2014-06-13, 09:50:21
Reply #19

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2014-06-13, 18:17:56
Reply #20

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
“To put it into perspective, when Pixar was developing the movie ‘Cars 2,’ its average rendering time per frame was 11.5 hours,” says Bassett. “We’re rendering at seven seconds a frame using Octane. It changes everything. There are now Hollywood movies being made using this technology.”

http://www.bdcnetwork.com/hyper-speed-rendering-how-gensler-turns-bim-models-beauty-shots-seconds?eid=216311880&bid=881711

You can think Corona is fast, but it will never be as fast as Octane in Gensler's office. They're 600 times faster than Pixar.

Jeez, marketing these days is just...
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2014-06-13, 18:31:49
Reply #21

racoonart

  • Active Users
  • **
  • Posts: 1446
    • View Profile
    • racoon-artworks
You can think Corona is fast, but it will never be as fast as Octane in Gensler's office. They're 600 times faster than Pixar.

It's exactly THIS kind of bullshit that makes clients think we're just trying to rip them off when we charge for render farm time.
Any sufficiently advanced bug is indistinguishable from a feature.

2014-06-13, 18:33:15
Reply #22

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
Maybe we are thinking about comparison tests in a different way and with a different objective in mind... I do them, and I like them, to decide what my production render engine is going to be and where to spend my money if I have to. I need to compare, and I need to compare based on the end result I'm going to get; it is not about "this renderer is better than that one". The FurryBall page is a great example of a biased comparison: for a start, that test only covers interior viz, and not a very complex one, and in the end they didn't even try to get the same result in the best time with every renderer. It is a biased test used commercially to convince people that FurryBall is better. IMHO this test is easily beatable; in fact, this is where an eyeball comparison can tell you something is wrong with it: different refraction bounces, different noise levels, different AA. Those times and render results are completely arbitrary.

But if a good test is done, like some of the tests out there comparing V-Ray with mental ray, it can help you decide whether a render engine is good for your project or not.

Now, this is a completely subjective matter. I mean, I can reach my own conclusions: I can think Corona is better than V-Ray based on my experience, or that Corona is better than iray based on my tests and the feature set, but that is subjective. A good comparison is always going to be subjective, so you try to do the best test you can, being as fair as you can with each render engine, and maybe you will get some results that are helpful for your type of job.

Now, I'm not trying to say that a comparison will tell us which renderer is the best. What I'm trying to say is that it's completely valid to compare V-Ray, with its biased methods, against a completely unbiased Corona if you want, because the idea is what you said in the quoted phrase:

Regarding the end result for the client, that's an entirely different matter. Almost all renderers can give you an identical result if you try.

"Almost all renderers can give you an identical result if you try." YES, but in what time? That's the test I meant :)

Cheers.

EDIT: "“To put it into perspective, when Pixar was developing the movie ‘Cars 2,’ its average rendering time per frame was 11.5 hours,” says Bassett. “We’re rendering at seven seconds a frame using Octane. It changes everything. There are now Hollywood movies being made using this technology.”" This is marketing crap; it's nonsense :P
« Last Edit: 2014-06-13, 18:37:41 by juang3d »

2014-06-13, 18:39:36
Reply #23

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
Sorry, I meant 6000 times :- D My math sucks. Damn, they are fast as hell. We must all be doing something very wrong...
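
(For the record, the arithmetic behind the correction, using the article's own figures:)

```python
# 11.5 hours per frame for Pixar vs. the 7 seconds per frame claimed
# for Octane at Gensler (figures from the quoted article).
pixar_seconds = 11.5 * 3600           # 41,400 s per frame
print(round(pixar_seconds / 7))       # -> 5914, i.e. ~6000x, not 600x
```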


Juang3d: It got a bit confusing now :- ) But I agree that tests are helpful, just not decisive about a renderer's capacity. But of course, I love to see them anyway.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2014-06-13, 20:25:38
Reply #24

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
http://furryball.aaa-studio.eu/aboutFurryBall/compare.html

nuff said.

2014-06-13, 23:53:55
Reply #25

Captain Obvious

  • Active Users
  • **
  • Posts: 167
    • View Profile
http://furryball.aaa-studio.eu/aboutFurryBall/compare.html

nuff said.
Ugh, don't get me started on Furryball. I emailed them my MODO version of that classroom, you know. Vastly better quality than any of the images on their comparison page, and it took less than five minutes on a run-of-the-mill i7. Ten minutes on my quad-core laptop. They didn't want to post it, though, because "only Maya plugins" -- despite the fact that MODO is available as a plugin for Maya via moma. If they can include Maxwell, which is also an export-to-standalone-based plugin, surely the same setup for MODO is also valid. Bah!



Anyway, back to Redshift... They're releasing a 3ds Max plugin eventually and when they do I'll post some tests here.



edit: also, all this talk about performance is mostly irrelevant. What matters is what you need, how you prefer to work, and how much slowness you're willing to accept. That's why there's room in the marketplace for both Arnold and Unreal Engine. Arnold is pretty goddamned slow, but it handles pretty much anything you throw at it. It will, as far as I've heard, basically never fail to produce good results; it just takes a very long time to do so. So if reliability with complex scenes and high quality are your top priorities and you're willing to pay a lot of money for rendering, then it's a valid choice. But if you need stuff in actual real-time, you're obviously better off with Unreal Engine, even though it means compromising on flexibility, quality, reliability, etc.

Corona is not the fastest engine around. It really isn't. Many engines, even mental ray, and definitely V-Ray, can be tweaked to produce good results in less render time than Corona would typically take. But the main advantage of Corona is that it gives you quality that competes with the best of what unbiased rendering has to offer, with a feature set that you typically only get in classical biased rendering, performance that isn't too far behind (and sometimes ahead), and a workflow that is better than either. That's a pretty cool thing. Don't get lost in the whole "rendertime" issue.
« Last Edit: 2014-06-14, 00:09:03 by Captain Obvious »

2014-06-14, 10:25:19
Reply #26

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
Many engines, even mental ray, and definitely V-Ray, can be tweaked to produce good results in less render time than Corona would typically take. But the main advantage of Corona is that it gives you quality that competes with the best of what unbiased rendering has to offer, with a feature set that you typically only get in classical biased rendering, performance that isn't too far behind (and sometimes ahead), and a workflow that is better than either. That's a pretty cool thing. Don't get lost in the whole "rendertime" issue.

V-Ray yes, but mental ray no. No matter how you tweak mental ray, it will always produce results that are either vastly inferior or take extremely long to render.

2014-06-14, 13:22:26
Reply #27

boumay

  • Active Users
  • **
  • Posts: 96
    • View Profile
edit: also, all this talk about performance is mostly irrelevant. ...

...
Corona is not the fastest engine around. It really isn't. Many engines, even mental ray, and definitely V-Ray, can be tweaked to produce good results in less render time than Corona would typically take.


But why would people have been seduced by Corona or Redshift, if V-Ray or mental ray were just as fast after tweaking their parameters a little bit, if it weren't that they saw not only a difference, but a huge increase in speed in the common scenarios they face in everyday life?
« Last Edit: 2014-06-14, 16:08:44 by boumay »

2014-06-15, 02:39:09
Reply #28

Captain Obvious

  • Active Users
  • **
  • Posts: 167
    • View Profile
Many engines, even mental ray, and definitely V-Ray, can be tweaked to produce good results in less render time than Corona would typically take. But the main advantage of Corona is that it gives you quality that competes with the best of what unbiased rendering has to offer, with a feature set that you typically only get in classical biased rendering, performance that isn't too far behind (and sometimes ahead), and a workflow that is better than either. That's a pretty cool thing. Don't get lost in the whole "rendertime" issue.

V-Ray yes, but mental ray no. No matter how you tweak mental ray, it will always produce results that are either vastly inferior or take extremely long to render.
Notice that I didn't say mental ray could produce as good results in less time, merely that it could produce good results -- good in this case meaning good enough. That was kind of my point. Corona is great if you need really great quality. If you only need decent quality, it might not be the best choice.

2014-06-15, 05:31:56
Reply #29

boumay

  • Active Users
  • **
  • Posts: 96
    • View Profile
Notice that I didn't say mental ray could produce as good results in less time, merely that it could produce good results -- good in this case meaning good enough. That was kind of my point. Corona is great if you need really great quality. If you only need decent quality, it might not be the best choice.

Ok, I see. Makes sense.
« Last Edit: 2014-06-15, 13:15:14 by boumay »

2014-06-15, 11:31:05
Reply #30

Kramon

  • Active Users
  • **
  • Posts: 16
    • View Profile
The new mental ray is really back in the game; they fixed so much stuff and made so many improvements...

2014-06-15, 15:11:25
Reply #31

Animator89

  • Active Users
  • **
  • Posts: 29
    • View Profile
I'm a Redshift customer.
I use it in Maya and it is very fast.
For example, my works:
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Spalnya_R1_1200_900.jpg
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Image4_1200_900.jpg
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Image1_1200_900.jpg
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Image3_1200_900.jpg
Each image takes from 10 to 15 minutes of render time on 2x GTX 780.
All images were rendered with brute force + point cloud GI (something like path tracing + HD cache in Corona).
For me, Redshift is much faster than Corona, but I like Corona's materials and lighting. So for me, Corona is the more realistic solution because of the materials.
Thanks!
-Pavel
P.S. Sorry for my bad English ;)

2014-06-15, 16:58:59
Reply #32

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
I'm a Redshift customer.
I use it in Maya and it is very fast.
For example, my works:
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Spalnya_R1_1200_900.jpg
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Image4_1200_900.jpg
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Image1_1200_900.jpg
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Image3_1200_900.jpg
Each image takes from 10 to 15 minutes of render time on 2x GTX 780.
All images were rendered with brute force + point cloud GI (something like path tracing + HD cache in Corona).
For me, Redshift is much faster than Corona, but I like Corona's materials and lighting. So for me, Corona is the more realistic solution because of the materials.
Thanks!
-Pavel
P.S. Sorry for my bad English ;)

You were comparing those dual GTX 780s versus what? If you say Redshift is much faster, then you should be comparing it against dual processors of comparable cost.

One GTX 780 is approximately 610 USD; two cost about 1,220 USD. So you should compare them to a dual Xeon E5-2430 setup, which is the closest to the GTX 780 in price: a Xeon E5-2430 is about 650 USD, so two cost about 1,300 USD. Slightly more expensive, but you also need to factor in that those Xeon processors consume vastly less electricity than GTX 780s do.
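
(A rough sketch of that math in Python; the TDP figures and the electricity price below are my assumptions, not measurements:)

```python
# Price parity from the post: 2x GTX 780 vs. 2x Xeon E5-2430 (2014 USD).
gpu_price, cpu_price = 610, 650
print("2x GTX 780:", 2 * gpu_price, "USD")    # 1220 USD
print("2x E5-2430:", 2 * cpu_price, "USD")    # 1300 USD

# Running cost: assumed TDPs (~250 W per GTX 780, ~95 W per E5-2430),
# 8 hours of rendering per day at an assumed 0.15 USD/kWh.
def yearly_cost(watts, hours=8, rate=0.15):
    return watts * hours * 365 / 1000 * rate

print(round(yearly_cost(2 * 250)), "USD/year for the GPUs")   # ~219
print(round(yearly_cost(2 * 95)), "USD/year for the Xeons")   # ~83
```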

So unless you have a good base for comparison, it's the usual apples vs oranges comparison.

2014-06-15, 17:00:55
Reply #33

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
The new mental ray is really back in the game; they fixed so much stuff and made so many improvements...

bullshit, bullshit, bullshit, bullshit, bullshit, bullshit, bullshit, bullshit, bullshit, bullshit and bullshit.

Sorry, but just total BS.

I have been a long-standing mental ray user, checking it every year... NOTHING improved, and it gets a bit slower with every release; it is about 50% slower than version 3.6 by now. Final gather also got a bit slower and blotchier. They also broke many things and didn't fix anything.

2014-06-15, 21:16:15
Reply #34

boumay

  • Active Users
  • **
  • Posts: 96
    • View Profile
I don't know if it's just me, but the more I check Redshift renders, the more I find that the quality isn't so good, as if the samples or something were kind of low; I see that the contact shadows/AO, or the highlights, aren't so detailed. It just feels a little cheap at the end of the day, as if they had reached this speed at the cost of quality.
Also, it would be interesting to know which processor is roughly equal to the GTX 780 (approximately guessing, of course), so we could compare.

And a last but important point: does having a second computer give you twice the speed in Corona? I've heard that GPU cards scale very well, meaning that when you get a second GPU you just double the speed, but this may not hold for CPUs. Am I right?

« Last Edit: 2014-06-15, 22:26:44 by boumay »

2014-06-16, 00:49:35
Reply #35

cecofuli

  • Active Users
  • **
  • Posts: 1577
    • View Profile
    • www.francescolegrenzi.com
Don't forget that you can use a good dual-Xeon workstation for many things: After Effects rendering, 3ds Max tasks, Marvelous Designer simulations, RealFlow simulations, etc. etc.

2014-06-16, 01:50:33
Reply #36

juang3d

  • Active Users
  • **
  • Posts: 636
    • View Profile
One GTX 780 is approximately 610 USD; two cost about 1,220 USD. So you should compare them to a dual Xeon E5-2430 setup, which is the closest to the GTX 780 in price: a Xeon E5-2430 is about 650 USD, so two cost about 1,300 USD.

Partly true, but not entirely, since a dual-CPU Xeon setup costs much more due to its special motherboard and RAM requirements; those parts are much more expensive.

On the other hand, I would also like to know what CPU was behind those GTX 780s, to know what performance you are experiencing and what you are comparing it against.

Cheers!


2014-06-16, 09:18:24
Reply #37

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
One GTX 780 is approximately 610 USD; two cost about 1,220 USD. So you should compare them to a dual Xeon E5-2430 setup, which is the closest to the GTX 780 in price: a Xeon E5-2430 is about 650 USD, so two cost about 1,300 USD.

Partly true, but not entirely, since a dual-CPU Xeon setup costs much more due to its special motherboard and RAM requirements; those parts are much more expensive.

On the other hand, I would also like to know what CPU was behind those GTX 780s, to know what performance you are experiencing and what you are comparing it against.

Cheers!

Motherboards are a bit more expensive, but I don't know about any special RAM requirements; ECC RAM is optional, not mandatory, AFAIK.

Also, for dual GPUs you need a stronger, more expensive PSU in exchange. And that, again, comes with a huge power drain if both GPUs run at their max.
« Last Edit: 2014-06-16, 15:50:51 by Rawalanche »

2014-06-16, 15:09:43
Reply #38

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
I have yet to see how the Redshift memory cycling works, but I am pretty sure it still comes at a performance cost, and you can't just go for the lowest memory possible without setbacks (the 780 Ti has 3 GB by default; that's not a miracle).

Xeons have artificial margins... but they need to be compared to GPUs, which have them likewise. The Tesla K40/Quadro K6000, while identical in performance to a 780, costs 4,000 euros each but brings 12 GB of VRAM to the table.
The costs of "production-ready" high-end stations based purely on CPUs or on GPUs are equally steep, but GPUs actually scale even more steeply in price. GPUs are therefore NOT any cheaper.
In fact, you can go for an octa-CPU Ivy Bridge Xeon single machine amounting to 256 cores for about 40,000 euros. For a similar price, nVidia offers their octa-Kepler boxes, which are about 50,000 euros each. I would say performance would be very similar in practical terms.

The power requirements Rawalanche wrote about also apply: an average Ivy Bridge (non-WS) Xeon is 120 W, a Kepler nVidia card 240 W. So, on average, double the heat, noise and electricity, although this might not matter much on the grand scale of what both schemes cost :- ).
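
(The same comparison in numbers, taking the post's figures at face value; the performance parity between the two boxes is an assumption here, not a benchmark:)

```python
# Octa-CPU vs. octa-GPU box, figures from the post (rounded, 2014 EUR).
boxes = {
    "octa-Xeon box":   {"price_eur": 40_000, "units": 8, "watts": 120},
    "octa-Kepler box": {"price_eur": 50_000, "units": 8, "watts": 240},
}
for name, b in boxes.items():
    total_w = b["units"] * b["watts"]
    print(f"{name}: {b['price_eur']} EUR, ~{total_w} W under load")
# -> octa-Xeon box: 40000 EUR, ~960 W; octa-Kepler box: 50000 EUR, ~1920 W
```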
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2014-06-16, 15:52:43
Reply #39

boumay

  • Active Users
  • **
  • Posts: 96
    • View Profile
I have yet to see how the Redshift memory cycling works, but I am pretty sure it still comes at a performance cost, and you can't just go for the lowest memory possible without setbacks (the 780 Ti has 3 GB by default; that's not a miracle).

Xeons have artificial margins... but they need to be compared to GPUs, which have them likewise. The Tesla K40/Quadro K6000, while identical in performance to a 780, costs 4,000 euros each but brings 12 GB of VRAM to the table.
The costs of "production-ready" high-end stations based purely on CPUs or on GPUs are equally steep, but GPUs actually scale even more steeply in price. GPUs are therefore NOT any cheaper.
In fact, you can go for an octa-CPU Ivy Bridge Xeon single machine amounting to 256 cores for about 40,000 euros. For a similar price, nVidia offers their octa-Kepler boxes, which are about 50,000 euros each. I would say performance would be very similar in practical terms.

The power requirements Rawalanche wrote about also apply: an average Ivy Bridge (non-WS) Xeon is 120 W, a Kepler nVidia card 240 W. So, on average, double the heat, noise and electricity, although this might not matter much on the grand scale of what both schemes cost :- ).

Thank you, that's informative.

2014-06-16, 16:13:06
Reply #40

Animator89

  • Active Users
  • **
  • Posts: 29
    • View Profile
Redshift has an out-of-core architecture, so it fits a high number of polys/textures into 3 GB of RAM. I have used 16,000,000 tris without a problem.
I compared Corona on an Intel Core i7-4960X Extreme Edition overclocked to 4.5 GHz vs. Redshift on 2x GTX 780 3 GB.
I can never get a noise-free 2K image out of Corona if my render time is less than 1:40 or 2 hours, BUT the render result is more realistic (for me).
But when I need speed, or when I render video, Redshift wins.
So I prefer Corona for stills and quality of shading, and Redshift for video or situations where I don't need a very realistic result.
I think we (professionals) don't need to compare CPU vs. GPU; we need to compare them on the tasks we want to do with them.
So:
Redshift has very high-speed GI, hair rendering, volumetric rendering, camera DOF (with Redshift I forgot about doing shitty DOF in post), motion blur, and overall it is amazing in workflow when you talk about speed, but its shading realism is poor for architectural rendering (for me).
So I use Corona for stills and architecture. And I will buy Corona for those tasks ;)
I would like to see Corona on something like Xeon Phi or Caustic cards (with OpenRL) ;)
For me, GPU is the easy-to-upgrade solution. CPU for me is the solution for other tasks (like water and cloth simulation, hair sim, fast animation playback, etc.)
So I love Corona and I love Redshift :)
Thanks!
-Pavel
P.S. Sorry for my English again ;)

2014-06-16, 16:42:25
Reply #41

Captain Obvious

  • Active Users
  • **
  • Posts: 167
    • View Profile
I have yet to see how the Redshift memory cycling works, but I am pretty sure it still comes at a performance cost, and you can't just go for the lowest memory possible without setbacks (the 780 Ti has 3 GB by default; that's not a miracle).

Xeons have artificial margins... but they need to be compared to GPUs, which have them likewise. The Tesla K40/Quadro K6000, while identical in performance to a 780, costs 4,000 euros each but brings 12 GB of VRAM to the table.
The costs of "production-ready" high-end stations based purely on CPUs or on GPUs are equally steep, but GPUs actually scale even more steeply in price. GPUs are therefore NOT any cheaper.
In fact, you can go for an octa-CPU Ivy Bridge Xeon single machine amounting to 256 cores for about 40,000 euros. For a similar price, nVidia offers their octa-Kepler boxes, which are about 50,000 euros each. I would say performance would be very similar in practical terms.

The power requirements Rawalanche wrote about also apply: an average Ivy Bridge (non-WS) Xeon is 120 W, a Kepler nVidia card 240 W. So, on average, double the heat, noise and electricity, although this might not matter much on the grand scale of what both schemes cost :- ).
There is a performance hit to going out of core (using more memory than the VRAM available). How big a hit you'll take depends on numerous factors. First of all, images aren't as problematic as geometry. In fact, Redshift defaults to a GPU texture cache of just 128 megs. It simply will not use more than that for image maps, no matter how many you have. Streaming them into VRAM is apparently really fast, so image usage is basically not a problem.

Things like irradiance or SSS point caches must fit into VRAM. If such caches grow too large, the render will simply fail.

Geometry works much like images, except it'll use whatever memory is left, and the performance hit is much larger. It's still usable, though, up to very large data sets. I saw someone testing it against Arnold using a GeForce with two or three gigs of VRAM, and Arnold didn't outperform Redshift until several hundred million unique triangles. It is certainly worth noting that Arnold did eventually end up outperforming Redshift by a decent margin; GPU rendering is still somewhat memory limited. It just takes gigabytes upon gigabytes of data to get there.
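
(For anyone wondering what "out of core" means in practice, here is a minimal sketch of the idea in Python, assuming a fixed VRAM budget with least-recently-used eviction. Redshift's actual implementation is proprietary and certainly far more sophisticated; this just illustrates why coherent access patterns matter:)

```python
from collections import OrderedDict

PAGE_SIZE = 1 << 20  # hypothetical 1 MiB paging granularity

class OutOfCoreCache:
    """Fixed-size 'VRAM' cache; missing pages stream in from host RAM."""
    def __init__(self, budget_bytes):
        self.max_pages = budget_bytes // PAGE_SIZE
        self.pages = OrderedDict()   # page_id -> data resident "on the GPU"
        self.misses = 0

    def fetch(self, page_id, load_from_host):
        if page_id in self.pages:             # hit: cheap
            self.pages.move_to_end(page_id)
            return self.pages[page_id]
        self.misses += 1                      # miss: costs PCIe bandwidth
        if len(self.pages) >= self.max_pages:
            self.pages.popitem(last=False)    # evict least recently used
        self.pages[page_id] = load_from_host(page_id)
        return self.pages[page_id]

# Bucket rendering keeps nearby rays touching the same pages, so hit
# rates stay high; random-order sampling would thrash the cache instead.
cache = OutOfCoreCache(budget_bytes=128 << 20)   # a 128 MB texture budget
for bucket_page in range(4):
    for _ in range(1000):                        # coherent access per bucket
        cache.fetch(bucket_page, lambda pid: b"texels")
print("stream-ins:", cache.misses)               # -> 4, one per bucket
```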

2014-06-16, 18:21:30
Reply #42

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
Well, until nVidia comes out with a mainstream Maxwell card with 8 GB of VRAM (some 880 Ti, maybe?), I remain quite sceptical. But maybe I just really need to see for myself :- )

So their out-of-core streaming enables a pretty much unlimited number of textures? Did it also bypass the texture count limit CUDA previously had (and which still seems to be the case for Octane, or not)?
Hi-res textures are pretty much the biggest memory eater in my scenes. A few 4K maps to start with and it starts to add up.

16 mil. polys for 3 GB of VRAM is nice, but 16 mil. is still nothing. How does it handle displacement?

One guy from their team replied to my cgarchitect post about my thoughts on GPU rendering. He seems quite nice and humble, but I still don't believe his claims much :- )
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2014-06-16, 19:08:24
Reply #43

Animator89

  • Active Users
  • **
  • Posts: 29
    • View Profile
Well, until nVidia comes out with a mainstream Maxwell card with 8 GB of VRAM (some 880 Ti, maybe?), I remain quite sceptical. But maybe I just really need to see for myself :- )

So their out-of-core streaming enables a pretty much unlimited number of textures? Did it also bypass the texture count limit CUDA previously had (and which still seems to be the case for Octane, or not)?
Hi-res textures are pretty much the biggest memory eater in my scenes. A few 4K maps to start with and it starts to add up.

16 mil. polys for 3 GB of VRAM is nice, but 16 mil. is still nothing. How does it handle displacement?

One guy from their team replied to my cgarchitect post about my thoughts on GPU rendering. He seems quite nice and humble, but I still don't believe his claims much :- )
There is no texture count or resolution limit in Redshift. I often use many hi-res (6K) textures with Redshift.
But the main question is still here: what do you need it for? If for archviz, then I don't see any problems with Corona + 3ds Max.

You can simply download the free demo and test it yourself.
I also have an Octane render license (standalone + 3ds Max), and I want to say to all the people who want to migrate from Corona to Octane: DON'T DO THAT :)
Corona is MUCH faster (mostly because of the HD cache), even if you compare 4x Titan vs. my six-core i7. Corona doesn't have limits on memory or textures, and it has more features.
Did you know that Octane 2.0 is slower by 15-20%? :)
But yes... Octane is very realistic... Not like Maxwell, but very close. Maybe because of the spectral rendering...
Thanks!
-Pavel
P.S. I will no longer apologize for my terrible English :)

2014-06-16, 20:15:57
Reply #44

Captain Obvious

  • Active Users
  • **
  • Posts: 167
    • View Profile
So their out-of-core streaming enables a pretty much unlimited number of textures? Did it also bypass the texture count limit CUDA previously had (and which still seems to be the case for Octane, or not)?
Basically, yes. In a simple test I just did, using a 2k by 1k HDRI resulted in a whopping 556 kB of texture memory used on the GPU. It's obviously only loading the parts it needs. It doesn't have a "max number of textures" limit like Octane. Presumably performance might suffer if you have thousands upon thousands of images, but there is no set limit as far as I know.


Quote
16 mil. polys for 3 GB of VRAM is nice, but 16 mil. is still nothing. How does it handle displacement?
In the same simple test I mentioned earlier, I rendered 38 million (unique) triangles on a card with 1.6 gigs of free memory. Out of the 1.6 gigs available to Redshift, the texture cache used up 128 megs, and various other things accounted for a bit more. In the end, there was 1.2 gigs available for geometry, and it used 1.1 gigs for 38 million triangles. It stands to reason that if you had a 6 gigabyte card used just for rendering (to avoid Windows' overhead), you could fit about 190 million triangles before worrying about going out of core.
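
(That estimate checks out with simple arithmetic; the half-gigabyte reserved for caches and overhead below is my assumption, extrapolated from the test:)

```python
# 1.1 GB held 38 million triangles in the test above.
bytes_per_tri = 1.1e9 / 38e6               # ~29 bytes per triangle
usable_vram   = 6e9 - 0.5e9                # 6 GB card minus assumed overhead
print(round(usable_vram / bytes_per_tri / 1e6), "million triangles")  # ~190
```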


Quote
How does it handle displacement?
Displacement is generated on the CPU, and the resulting triangles are streamed to the GPU as needed, the same as with regular geometry. It doesn't do texture-space displacement rendering, as far as I know (like V-Ray's 2D displacement effect).




Octane isn't great. I'd rather use Corona. It's faster, more reliable, easier to use, and produces better results.

2014-06-16, 22:07:05
Reply #45

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
Thanks for the answers guys, you satisfied my curiosity :- ).

Animator, you got me wrong; I don't need anything and I'm not looking to migrate :- ). I am very content using Corona and V-Ray at the moment. Once Corona matures (and it is doing so quickly!) I will be fully content to use it exclusively.
It's exactly what I want a renderer to look like.

I am simply curious about what the other kids on the block are doing.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2014-06-17, 18:57:53
Reply #46

Alexp

  • Active Users
  • **
  • Posts: 153
    • View Profile
I'm thinking about upgrading my PC (I have $4,000 for this).

What do you think is better, in performance terms:

My Intel Core i7-4820K with 4x GeForce GTX Titan for GPU rendering, or a dual Intel Xeon E5-2680 2.7 GHz?
The CPU option costs about $3,500 and the GPU option about $4,000, more or less...

Thanks!

2014-06-17, 20:43:19
Reply #47

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
Get some nice i7, use Corona, and use the rest of the money to hit Vegas ;)
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2014-06-17, 20:55:04
Reply #48

Alexp

  • Active Users
  • **
  • Posts: 153
    • View Profile
Good idea! But that appears to be a third option.

Seriously, between these 2 options, which do you think is better for professional purposes right now?

Of course I'd like to get a Corona license; I'm waiting for you...

2014-06-18, 22:01:07
Reply #49

Animator89

  • Active Users
  • **
  • Posts: 29
    • View Profile
Good idea! But that appears to be a third option.

Seriously, between these 2 options, which do you think is better for professional purposes right now?

Of course I'd like to get a Corona license; I'm waiting for you...

If you work in archviz, then of course 2x CPU is better than a GPU workstation. Simply because of:
1) Scenes that are heavy on GI
2) Archviz needs more realistic results
3) You have many renderers for CPU, while for 3ds Max there is no decent one for GPU

If you want to render spheres, or boxes, or spheres and boxes, or spheres and boxes in a room with 4 walls and one light source (like most GPU renderer developers do), then buy just one Titan and one cheaper card for display.
I think the only way to get a very comfortable solution with GPU renderers today is if you do animations in Maya/Softimage and render with Redshift. Redshift is not as realistic as Corona, but in animations it really doesn't matter.
After compositing in Nuke you can get very nice results :)
-Pavel

2014-06-18, 23:33:22
Reply #50

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
If you want to render spheres, or boxes, or spheres and boxes, or spheres and boxes in a room with 4 walls and one light source (like most GPU renderer developers do), then buy just one Titan and one cheaper card for display.

burn! :D
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2014-06-20, 12:46:56
Reply #51

Alexp

  • Active Users
  • **
  • Posts: 153
    • View Profile
Thanks for your answer, Animator89.
I was considering that possibility because I saw some nice images from Octane and Redshift. But I really don't know whether, at present, GPU renderers are capable of handling big scenes like the CPU-based ones can. Maybe the VRAM limit of GPUs doesn't permit rendering such archviz scenes?

Thanks guys!

2014-06-20, 21:28:51
Reply #52

Animator89

  • Active Users
  • **
  • Posts: 29
    • View Profile
If you're talking about Redshift, then it has a nice out-of-core architecture that allows rendering very large scenes with lots of textures. If you're talking about the others (Octane, iray, Arion, etc.), then I simply can't render big scenes with them because they're very slow. Yes, they render boxes with one HDR light faster than Corona or V-Ray, but when you add more lights and increase scene complexity, you get a very slow (but truly brute force) render process. I don't want a shitty unbiased render that has no visual difference from a render using point cloud or HD cache solutions... The Octane dev team is making out-of-core for their renderer, but feature development at Otoy is too slow... Otoy also provides some cloud compute solutions; what do you think the priority is for the Octane dev team: a fast renderer with few cloud users, or a slow renderer with a large cloud user base? ;)
I also saw some very nice images from Corona ;)


2014-06-22, 16:06:47
Reply #53

Captain Obvious

  • Active Users
  • **
  • Posts: 167
    • View Profile
Thanks for your answer, Animator89.
I was considering that possibility because I saw some nice images from Octane and Redshift. But I really don't know whether, at present, GPU renderers are capable of handling big scenes like the CPU-based ones can. Maybe the VRAM limit of GPUs doesn't permit rendering such archviz scenes?
Redshift deals with large scenes better than most CPU-based renderers on the market. The rest of the GPU renderers? Not so much. OTOY are working on an out-of-core architecture for Octane, but I'm a little bit sceptical, because Octane renders pixels in a random order, which means even more random access to textures and geometry, which puts even more stress on the PCIe bandwidth and slows things down. Redshift can render in bucket mode, which helps a bit even with path tracing.

2014-06-25, 14:33:11
Reply #54

Alexp

  • Active Users
  • **
  • Posts: 153
    • View Profile
Thanks for the info. Now I think the Xeon option is better.

Best regards