Author Topic: instance objects to corona scatter ...  (Read 4493 times)

2016-04-13, 23:24:07

guest_guest

  • Active Users
  • **
  • Posts: 43
  • Marcopolo in rendering world!
    • View Profile
Is there any way to convert instanced objects to Corona Scatter??

2016-04-13, 23:34:35
Reply #1

FrostKiwi

  • Active Users
  • **
  • Posts: 686
    • View Profile
    • YouTube
Is there any way to convert instanced objects to Corona Scatter??
The question is unclearly phrased.
Corona Scatter scatters objects, and any object it scatters is automatically instanced; no further user interaction is required.
You can scatter any geometry, so there is no need for conversion of any sort.

It's best to scatter incredibly large amounts of things in patches, like grass for instance, but the memory difference is small.
1,000,000 grass blades are mathematically best distributed as 1,000 instanced patches of 1,000 blades each, but again the gain is minimal and the average user shouldn't need to concern himself with such details.
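For what it's worth, the 1,000-patch figure falls out of a simple memory model. The per-blade geometry cost \(c_g\) and per-instance bookkeeping cost \(c_i\) below are illustrative assumptions, not measured Corona numbers:

```latex
% N blades split into P instanced patches of N/P blades each:
% unique patch geometry costs c_g per blade, each instance costs c_i.
M(P) = \frac{N}{P}\,c_g + P\,c_i,
\qquad
\frac{dM}{dP} = -\frac{N c_g}{P^{2}} + c_i = 0
\;\Rightarrow\;
P^{*} = \sqrt{\frac{N\,c_g}{c_i}}
% With c_g \approx c_i and N = 10^6: P^* = \sqrt{10^6} = 1000 patches
% of 1000 blades each.
```

The minimum is shallow, which is why the practical gain around the optimum is minimal.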

If that didn't cover it, try to rephrase.
I'm 🐥 not 🥝, pls don't eat me ( ;  ;   )

2016-04-14, 00:03:09
Reply #2

guest_guest

  • Active Users
  • **
  • Posts: 43
  • Marcopolo in rendering world!
    • View Profile
I feel that the 1.4 daily builds have a problem with instances or scatter plugins like MultiScatter ... bad RAM management with massive instancing ... or a Corona proxy bug??

2016-04-14, 00:55:56
Reply #3

FrostKiwi

  • Active Users
  • **
  • Posts: 686
    • View Profile
    • YouTube
I feel that the 1.4 daily builds have a problem with instances or scatter plugins like MultiScatter ... bad RAM management with massive instancing ... or a Corona proxy bug??
Proxies simply offload the geometry until render time; they don't save memory beyond what the viewport consumes. That is not what proxies are meant for, and they don't change instancing behaviour.
AFAIK, geometry management has not been touched since 1.0.
Scatter plugins don't change the engine's memory management either. They simply distribute objects and create fast previews.
It is entirely possible that dailies break things; they are not made for stable operation, but for quick iteration and feedback in "exchange" for state-of-the-art features.

If you believe you have found an issue, make it reproducible.
Create a test scene with millions of teapots and make the bad memory behaviour vs 1.3 reproducible. Then submit a ticket over at corona-renderer.com/bugs
This is the prime way of getting the devs onto something.
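As a rough sketch of such a repro, a MaxScript snippet along these lines would fill a grid with instanced teapots; the grid size, spacing and teapot parameters are placeholders, nothing Corona-specific:

```maxscript
-- Repro sketch: build a grid of instanced teapots, render,
-- then compare peak RAM between 1.3 and the 1.4 daily.
delete objects                -- start from an empty scene
src = Teapot radius:5 segs:4  -- source object every instance shares
gridSize = 1000               -- 1000 x 1000 = 1,000,000 instances; start lower
for i = 1 to gridSize do
(
    for j = 1 to gridSize do
    (
        tp = instance src     -- instance, not copy, so geometry is shared
        tp.pos = [i * 15.0, j * 15.0, 0]
    )
)
```

If the same scene peaks noticeably higher in 1.4 than in 1.3, that is exactly the kind of reproducible evidence a ticket needs.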

Right now I don't think there are any reports of memory degradation vs 1.3 (since that code probably wasn't touched). As such, it suggests that your setup is wrong.
But if you can reproduce something as severe as worsened memory management, the devs will jump on it immediately. No doubt about that.
« Last Edit: 2016-04-14, 01:05:26 by SairesArt »
I'm 🐥 not 🥝, pls don't eat me ( ;  ;   )

2016-04-14, 02:30:38
Reply #4

guest_guest

  • Active Users
  • **
  • Posts: 43
  • Marcopolo in rendering world!
    • View Profile
that your setup is wrong.

Render setup or scene setup??

My scene is nothing special ... a simple HDRI + dozens of Corona proxies scattered by the MultiScatter plugin + a water surface with displacement + Anima plugin characters + the usual shading setup (no nested setups) ...
The render setup is default ... I just changed the pass limit ...

I came across a curious case when rendering a specific view ...

Render time on the 1st try is about 8 min.
Render time on the 2nd try is about 1 h. [same view, nothing changed / Max still open / same session]
Render time on the 3rd try is about 5 h. [same view, nothing changed / Max still open / same session]
Render time on the 4th try => HAAAAANG !!! [same view, nothing changed / Max still open / same session]

On restarting Max and trying again, it takes a different time!?
« Last Edit: 2016-04-14, 03:20:51 by guest_guest »

2016-04-14, 11:38:38
Reply #5

FrostKiwi

  • Active Users
  • **
  • Posts: 686
    • View Profile
    • YouTube
Render setup or scene setup??
On restarting Max and trying again, it takes a different time!?
This is classic paging behaviour. Windows' pagefile is set to dynamic by default and does a resize calculation each time it fills up; every time you re-render, you push data around and fragment the pagefile further. All in all, this means the wrong data gets pulled at the wrong time, each time the process is repeated. Ironically enough, proxies can make this worse: to load one into RAM, IOPs have to read it from disk, flush the requested amount of RAM to the pagefile, and then flush again a second later just to get the next asset.

We established in your last thread that you only have 16 GB of RAM. If you create a scene that exceeds that limit, then by definition your setup is wrong. 7 million instances + displacement + a crowd simulation plugin is not realistic to work with on that little RAM.

Disable displacement (displacement especially eats a lot of memory) and the crowd. Lower the HDRI resolution, use a low-res version for lighting and composite the high-res one afterwards, and merge similar instances into one object. If that doesn't help:
Open a new Max file, import just the geometry with no displacement, and render that; render the crowd in another Max file and composite them together in post. You are stuck otherwise.

As such, there is still no evidence of bad memory management.
I'm 🐥 not 🥝, pls don't eat me ( ;  ;   )