Author Topic: More intelligent 'Render only Mask'  (Read 6366 times)

2020-02-19, 23:13:14

Dan Rodgers

  • Active Users
  • Posts: 55
Currently I am rendering some extra masks out for an animation, basically the attached image.

If I render normally with 10 passes, it takes 19 seconds a frame, 10 seconds of which is rendering.

If I render with a region drawn (either a Max or Corona VFB region, it makes no difference), the time drops to 7-8 seconds a frame, and the render time is around 1 second for the exact same mask.

Drawing regions won't really work in scenes where you need lots of additional masks or have more extreme movement.

Can we have a more intelligent system that only renders/calculates what it needs to, instead of calculating the entire scene? This shot is 150 frames, so without drawing a region I would be waiting over 45 minutes instead of about 18.
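For context on what "only renders what it needs" could mean: the logic amounts to finding, per frame, where the masked content actually lands on screen and sampling only that rectangle plus a small margin. Below is a rough Python/numpy illustration of that idea; it has nothing to do with Corona's internals, and the mask contents, margin and resolution are made-up values.

Code:
import numpy as np

def mask_region(mask, margin=8):
    """Return (x0, y0, x1, y1) covering every non-zero mask pixel, or None if the mask is empty."""
    ys, xs = np.nonzero(mask > 0)
    if xs.size == 0:
        return None  # nothing to render on this frame
    h, w = mask.shape
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, w - 1)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, h - 1)
    return int(x0), int(y0), int(x1), int(y1)

# e.g. a frame where the masked object covers only a small part of the image
mask = np.zeros((3160, 6144), dtype=np.uint8)
mask[1200:1900, 2500:4200] = 255
print(mask_region(mask))  # (2492, 1192, 4207, 1907)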



« Last Edit: 2020-02-19, 23:28:44 by Dan Rodgers »

2020-02-20, 13:45:28
Reply #1

TomG

  • Administrator
  • Active Users
  • Posts: 5434
If this is a separate render, as you already have the animation itself rendered, do you have "Render Masks Only" enabled? That way it skips doing the Beauty renders and just does the masks (then there's no need for regions, as it's super fast anyway).
Tom Grimes | chaos-corona.com
Product Manager | contact us

2020-07-29, 03:16:13
Reply #2

Dan Rodgers

  • Active Users
  • Posts: 55
So I am going to dig this up. I don't think the problem was understood before, so here goes for try #2.

I am rendering the attached image: a render selection of just a car's body panels at 6K.

Without a region drawn around the car, Corona predicted around 4 hours to finish the render.

With the region drawn around the objects, the render took 10 mins.

The overall image is obviously identical regardless of whether I draw a region or not, so Corona seems to be wasting a lot of resources rendering nothing.

2020-07-29, 13:50:18
Reply #3

sebastian___

  • Active Users
  • Posts: 200
I said this before here, but I'm not sure anyone got it. You can sort of animate a mask.

I think I did it by using the 3ds Max "render selected object" function, designating the selected object to be a simple animated plane which is made non-renderable. The plane is animated to move and sit wherever you need the rendered area to be.


This method is also good when you have a huge resolution and a scene which is mostly static but has a small animated object or objects; it would be a big waste of time to render entire frames for that small object only.
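For anyone who wants to try it, the setup can be scripted too. This is only a rough sketch using 3ds Max's Python API (pymxs); the plane name, frame numbers and positions are invented, and the point is simply that the helper plane is flagged non-renderable and keyframed to sit over the area you want rendered, then used as the selection for "render selected".

Code:
import pymxs
rt = pymxs.runtime

plane = rt.getNodeByName("MaskPlane")  # helper plane created by hand beforehand
plane.renderable = False               # make sure it never shows up in the image

with pymxs.animate(True):              # the pymxs equivalent of MaxScript's "animate on"
    with pymxs.attime(0):
        plane.pos = rt.Point3(-50, 0, 0)
    with pymxs.attime(150):
        plane.pos = rt.Point3(60, 0, 0)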

2020-07-29, 18:50:35
Reply #4

burnin

  • Active Users
  • Posts: 1532
A feature request then... already posted here "Render sampling mask"

& a few examples from other engines:

- Extra Sampling in Maxwell Render
Quote
Broadly speaking, to render a scene, Maxwell casts rays to all pixels in the image (considering the camera sensor as a whole). But very often, the noise ends up being concentrated in certain areas, and so sometimes it would be useful to focus the rays only on those areas selected by the user. Very often the render looks generally fine except for those parts where, due to their particular lighting and material characteristics, they are still noisy and need more rendering. So instead of continuing to render the whole frame (which will require rendering effort in areas that are already clean), you can put all the render power into refining specifically the areas that you choose. This is when the Extra Sampling feature can help. With this feature, you can define an area to be rendered to a higher Sampling Level than the general frame, so you can distribute the rendering effort on your scene in a smarter way and save a huge amount of time, optimizing the render process as it refines only the areas of your choice. In fact, the saving in time is directly related to the proportion of pixels sampled in the mask relative to the global frame.

- Texture-Controlled Pixel Renderer in appleseed
Quote
We've added a way to control how many samples each pixel will receive based on a user-provided black-and-white mask. This allows you to get rid of sampling noise in specific parts of a render without adding samples in areas that are already smooth. This is yet another tool in the toolbox, complementing the new adaptive tile sampler introduced in appleseed 2.0.0-beta and the per-object shading quality control that has been present in appleseed since its early days. In the following mosaic, the top-left image (1) is the base render using 128 samples for each pixel; the top-right image (2) is a user-painted mask where black corresponds to 128 samples/pixel, white corresponds to 2048 samples/pixel and gray levels correspond to intermediate values; the bottom-left image (3) is the render produced with the new texture-controlled pixel renderer using the mask; the bottom-right image (4) is the Pixel Time AOV where the color of each pixel reflects the relative amount of time spent rendering it (the brighter the pixel, the longer it took to render it):
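The mapping both of the quoted features describe is tiny: a grayscale mask drives how many samples each pixel receives, with Maxwell's Extra Sampling being the special case of a binary mask. A toy numpy sketch of just that budget calculation, using appleseed's 128/2048 numbers (nothing engine-specific):

Code:
import numpy as np

def samples_from_mask(mask, low=128, high=2048):
    """mask: grayscale image with values in [0, 1]; returns per-pixel sample counts."""
    return np.rint(low + (high - low) * np.clip(mask, 0.0, 1.0)).astype(np.int32)

mask = np.array([[0.0, 0.5, 1.0]])  # black, mid-gray, white
print(samples_from_mask(mask))      # [[ 128 1088 2048]]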



And the old (now gone) LuxRender had a feature that enabled the user to paint a mask directly on the image while it was being rendered :o

EDIT: & a new one, LuxCoreRender (standalone), has it too :)

PS
Even with adaptive sampling it comes in handy for doing extra work on hard-to-clean shadowy areas, SSS materials...
« Last Edit: 2020-07-29, 19:49:55 by burnin »

2020-11-18, 10:53:03
Reply #5

maru

  • Corona Team
  • Active Users
  • Posts: 12711
  • Marcin
Confirmed and reported.
(Internal ID=597096944)

What is strange is that we already fixed a very similar issue in the past, so this could be some sort of regression.
Marcin Miodek | chaos-corona.com
3D Support Team Lead - Corona | contact us