Author Topic: Corona render speed

2014-09-21, 18:44:08
Reply #60

boumay


...it has a huge cost vs Corona render, I can get more CPU nodes than GPU nodes for the same price...

With Corona I have an outstanding node for 800€, with any GPU render engine this price is impossible...

Can you share your setup (CPU model, RAM, etc.)? Because, from my research, a CPU node is actually much more expensive than a GPU. A high-end PC (4930K-based) would cost roughly 1500 euros, while a GTX 780 6GB is 500 euros.
I would have liked to build a CPU-based render farm, but...
And I took the example of high-end hardware here because, IMO, a mid-range CPU isn't really worth the investment; what we're after is maximum firepower! :)

2014-09-21, 19:40:20
Reply #61

juang3d

i7-5820K, the cheapest ATX LGA 2011-3 motherboard you can find, the cheapest RAM you can find (preferably one 32GB kit for every 2 nodes), the cheapest hard disk you can find, a decent 500W or 600W PSU, a cheap but decent liquid cooling system, the cheapest but decent case you can find, and a passively cooled Nvidia GPU with 2GB of video memory. You can find all of this for around 800€ :)

For me a node MUST be as cheap as it can be, except for the CPU, which has to be the one with the best performance/price ratio; in this case the 5820K, IMO.
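To make the performance-per-euro reasoning concrete, here is a rough back-of-envelope sketch in Python. The benchmark scores and the 8000€ example budget are placeholders, not measured numbers; swap in your own multi-core benchmark results and local prices.

```python
def farm_throughput(budget_eur, node_price_eur, node_score):
    """How much total benchmark score a given budget buys (whole nodes only)."""
    node_count = budget_eur // node_price_eur
    return node_count, node_count * node_score

# Placeholder scores -- replace with your own multi-core benchmark results.
for label, price, score in [("800 EUR i7-5820K node", 800, 1000),
                            ("1500 EUR i7-4930K node", 1500, 1050)]:
    nodes, total = farm_throughput(8000, price, score)
    print(f"{label}: {nodes} nodes for 8000 EUR, combined score {total}")
```

Even if the dearer node scores a little higher per box, the cheaper node usually wins on aggregate throughput for the same budget.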

If something breaks I can find a replacement in no time and for very little money :)

You may think that with a cheap mobo or cheap RAM you lose some performance, but in my experience over the years that difference is minimal.

For that GTX you still need a node; the GTX alone does nothing :) Of course you can assemble a node around that GPU for 400€ or 500€ with a cheap CPU, but then you are limited to GPU rendering. What will you do when you have to render a scene that does not fit in 4GB of video RAM?

Cheers!

2014-09-21, 20:26:07
Reply #62

Captain Obvious

For that GTX you still need a node; the GTX alone does nothing :) Of course you can assemble a node around that GPU for 400€ or 500€ with a cheap CPU, but then you are limited to GPU rendering. What will you do when you have to render a scene that does not fit in 4GB of video RAM?
What will you do with your CPU farm when the scene doesn't fit in 16 gigs of memory? Add more to each machine?

The Octane people are adding out-of-core stuff. Redshift already has it. It won't be very long before out-of-core is the standard for GPU renderers, at which point 4 gigs of VRAM will be plenty for all but the craziest of scenes.

2014-09-21, 21:27:13
Reply #63

Juraj

Well, I definitely need to try Redshift eventually, but it will be, like... December until I get there :- ). At the moment I find it hard to believe out-of-core rendering doesn't come at a significant performance cost, so I doubt 4GB could become the staple; 8GB could be decent, though. Let's see if any vendor comes forward with a 980 version carrying that much.

The Luxmark difference in favour of the 980 vs the 780Ti is interesting, but it seems to stand alone. Maybe the bigger memory bus? Otherwise I have to defend this new line-up: the 980 is not supposed to be a contender against the 780Ti, and a 980Ti will surely come eventually in some form.
The performance increase over the past 3 generations of GPUs is also quite a bit better than what we've seen on the CPU side. I really wanted to buy an 8GB 980, but there is none right now... I will buy one anyway for gaming, but if I could snag an 8GB version I would be happy.

"iRay speed" is such oxymoron :- )

2014-09-21, 22:11:35
Reply #64

Captain Obvious

At the moment I find it hard to believe out-of-core rendering doesn't come at a significant performance cost
Oh, it most certainly does! But how big the performance detriment is depends on numerous factors. Specifically: if you render a scene where the geometry is so heavy and the rendering so complex that it cannot keep all the geometry needed for one bucket in memory at the same time, things can get awfully slow. However, if all that happens is that it can't keep the entire scene in memory and does most of the "paging" to RAM between buckets instead, the performance cost is much smaller. Basically, the less you go out of core, the better. A little bit of out-of-core only hurts performance by a very small amount, but doing it constantly means you might as well not render on the GPU at all.

For image maps, apparently the performance hit is so small that they don't even try to keep everything in memory. The default is a 128 megabyte cache that they just stream everything to, chunk by chunk. Because it can load individual pixels straight from the images in RAM, it doesn't really matter much.
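To illustrate the idea, here is a minimal sketch of that kind of fixed-budget streaming cache. This is a generic pattern, not Redshift's actual code; the class name and tile size are my own illustrative choices, and only the 128MB budget comes from the description above.

```python
from collections import OrderedDict

class TileCache:
    """Illustrative fixed-budget LRU cache for texture tiles.

    Full textures stay in host RAM; only the tiles that shading samples
    actually touch are streamed into a small fixed-size budget, evicting
    the least recently used tile once the budget is full.
    """

    def __init__(self, budget_bytes=128 * 1024 * 1024, tile_bytes=64 * 1024):
        self.capacity = budget_bytes // tile_bytes   # how many tiles fit in the budget
        self.tiles = OrderedDict()                   # tile_id -> tile data held "on the device"

    def fetch(self, tile_id, load_from_ram):
        if tile_id in self.tiles:                    # hit: already resident, cheap
            self.tiles.move_to_end(tile_id)
            return self.tiles[tile_id]
        data = load_from_ram(tile_id)                # miss: stream this chunk from host RAM
        self.tiles[tile_id] = data
        if len(self.tiles) > self.capacity:          # over budget: evict the coldest tile
            self.tiles.popitem(last=False)
        return data
```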

Of course even if they do add out-of-core memory management to Octane it will probably benefit less than Redshift, since Redshift is a bucket renderer and Octane renders pixels "randomly," which would be terrible for geometry swapping.


The rumours are that the 8-gig 980s will be out later this year. I heard November-December.

2014-09-21, 23:00:34
Reply #65

juang3d

What will you do with your CPU farm when the scene doesn't fit in 16 gigs of memory? Add more to each machine?

Of course; can you say the same about the GPU? Plus, the performance hit of going out of core is not as big on the CPU as on the GPU.
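Roughly why the hit is so asymmetric, as far as I understand it (the bandwidth numbers below are ballpark spec-sheet figures, not measurements): a CPU renderer that outgrows its working set is still reading from system RAM, while a GPU that outgrows VRAM has to pull data across the PCIe bus, which is an order of magnitude slower than its own memory.

```python
# Ballpark peak bandwidths in GB/s (spec-sheet figures, for illustration only).
bandwidth_gbps = {
    "GDDR5 VRAM (GTX 780-class)": 288,
    "dual-channel DDR3 system RAM": 25,
    "PCIe 3.0 x16 link": 16,
}

vram = bandwidth_gbps["GDDR5 VRAM (GTX 780-class)"]
pcie = bandwidth_gbps["PCIe 3.0 x16 link"]
print(f"GPU fetching out-of-core data over PCIe: ~{vram / pcie:.0f}x slower than VRAM")
# A CPU renderer that merely spills past its caches still reads system RAM at
# full speed, so its relative penalty is far smaller (swapping to disk is another story).
```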

Anyway, I'm not closing that door; it's just that for me it's not a good investment. In the future... who knows. I don't believe too much in out-of-core on the GPU, basically because even people from Nvidia say it cannot be done without a considerable loss of performance. This was theoretically going to change with the Maxwell family; I'm not sure whether it has or not.

Please take a scene whose geometry and textures need 16GB of RAM and try it in Redshift or Octane. Of course software advances day by day; it will evolve and maybe it will be better at some point, but let me say that currently the total cost is not worth the investment, at least for me :)

Cheers!

2014-09-21, 23:28:22
Reply #66

Captain Obvious

I saw some tests comparing Arnold and Redshift while constantly upping the amount of geometry. The machine had enough RAM to cope, but only 2-3 gigs of VRAM. Somewhere around the 200 million triangle mark, Arnold overtook Redshift because of out-of-core issues. But still, that's 200 million unique, un-instanced triangles with just a couple of gigs of VRAM, rendered with full GI.

2014-09-21, 23:34:05
Reply #67

juang3d

Did you try Arnold? It is SLOOOOOOOOOOOW... So if Arnold can outperform Redshift, a biased render engine designed to be fast at the cost of quality...

Cheers

2014-09-22, 11:34:10
Reply #68

Captain Obvious

Arnold may be slow at some things, but it's fast at dealing with extremely heavy geometry. What do you think would happen if you tried to render 200 million unique triangles in Corona?

2014-09-22, 12:06:44
Reply #69

juang3d

It would crash, of course; the out-of-core tech has not been developed yet, AFAIK.

Arnold is slow in every respect. It's a great production renderer, very solid, and it delivers astonishing quality, but it is on the Maxwell side of the fence: quality at the expense of an excessive amount of time.

Still, as I've said, GPU is not a good investment for me, at least not yet, and I won't recommend it to anyone for the time being. I did in the past, and I have to admit I was wrong. Maybe with the next generation (after the 980 family) it will be awesome, but for the time being, with Corona in the game, I'll stay with CPU.

BTW, I say all this because Corona exists. If there were no Corona I'd be back to GPU rendering, even with all its flaws, though I'd prefer Octane, iRay or even Redshift over going back to mental ray or having to buy V-Ray.

Consider also that having CPU nodes lets me use them for more tasks, like distributed simulations or offering a personalized CPU farm service to small artists and studios. The GPU market is not a good investment for me yet, and I already have several GPUs distributed across my farm for GPU rendering.

But... for me... Corona wins! Hahaha

Cheers.

2014-09-23, 10:17:24
Reply #70

Captain Obvious

But... for me... Corona wins! Hahaha
Haha, same here. :-)

2014-09-27, 16:24:43
Reply #71

photomg1



What are you using as your bridge from modo? Or are you just using Max now? I've noticed you are not around as much over there anymore.

2014-09-28, 15:21:25
Reply #72

Captain Obvious

I still use modo a lot. I don't really do that much arch viz any more to be honest. I mostly write code.

2014-10-02, 21:28:14
Reply #73

yagi

While we're talking renderers and speed, GPU and CPU... I'd like to ask the house: I'm thinking of upgrading to the new 8-core Haswell-E processor and I would like to know if it's worth it for Corona's sake :) I'm currently using an i7 at 3.45GHz, so I would like to know what the difference could be in terms of speed (render times). Assuming the benchmark scene rendered on my i7 in 6 minutes, what could the render time be on the Haswell-E processor? Keymaster should hopefully have a clear idea of the likely outcome, right? It's urgent, so I'd know whether spending all that money on a badass processor is worth it... over 2,000 dollars is no joke. Thanks.

2014-10-03, 01:23:10
Reply #74

Captain Obvious

If you consider the price of an upgrade to be a lot of money, then no, it's not worth it. If you've got a six-core @ 3.45 GHz right now, the 8-core Haswell-E isn't going to be that much faster. It might cut your 6 minutes down to 4-5 minutes. Is that really worth two grand?
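For rough context, naive scaling by aggregate core-GHz backs that estimate up. The exact clocks below are assumptions about which CPU models are meant, and this ignores Haswell-E's IPC gains, which would push the number a bit lower.

```python
def estimate_render_time(current_minutes, cur_cores, cur_ghz, new_cores, new_ghz):
    """Naive estimate: render time scales inversely with aggregate core-GHz.
    Ignores IPC differences, turbo behaviour and memory bandwidth."""
    return current_minutes * (cur_cores * cur_ghz) / (new_cores * new_ghz)

# Assumed setup: 6-core i7 @ 3.45 GHz today vs. an 8-core Haswell-E (5960X) @ 3.0 GHz.
print(f"~{estimate_render_time(6.0, 6, 3.45, 8, 3.0):.1f} minutes")  # roughly 5 minutes
```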