Topic: one sampling pattern might not be enough

2020-08-05, 15:24:57

Phasma

Hi there

I am currently writing our own farm distribution system. The goal is to distribute full images, but each with a fraction of the passes. If I need 100 passes for an image to look clean, 10 passes on 10 computers will do the work a lot faster. It is similar to Corona DR, however I want those slaves to be independent and controllable by a manager, not by a fixed list or a "searching for new slaves during render" thing. As an additional advantage, I can control the image merging myself - automatically leaving out failed images from blades and still getting a correct final output. To merge the images together I use something similar to this:
- however I do it with ImageMagick automatically at render end. I quickly found out that the images look a lot cleaner if the noise pattern changes on every blade. This might not be 100% physically accurate, but it looks way better.
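
The merge itself is basically just a per-pixel mean or median over the per-blade EXRs - roughly like this (a simplified Python/numpy sketch rather than my actual ImageMagick call; it assumes the linear, non tone-mapped images are already loaded as float arrays):

Code:
import numpy as np

def merge_blade_renders(images, mode="mean"):
    # images: list of float32 arrays (H, W, C) - the linear EXR data from each blade.
    stack = np.stack(images, axis=0)   # shape (n_blades, H, W, C)
    if mode == "mean":
        # Mean is roughly what letting one machine render all the passes would give.
        return stack.mean(axis=0)
    if mode == "median":
        # Median suppresses values that only a few blades saw (fireflies),
        # but it also dims rare, legitimate highlights.
        return np.median(stack, axis=0)
    raise ValueError("unknown merge mode: " + mode)

On the command line, ImageMagick's -evaluate-sequence Mean / Median operator should produce the equivalent in one call.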

So my current thought is that it might be nice to change the sampling pattern during a local rendering, blending the images together on the fly to get rid of fireflies and hard-to-render bokeh balls - built into Corona.

Opinions?

2020-08-05, 15:32:01
Reply #1

maru (Corona Team)

Sorry, but I really did not understand what exactly you mean here. You can affect the sampling pattern by using:
- "Lock sampling pattern" option in the Performance tab
- Changing the random sampler in the devel/debug rollout (https://coronarenderer.freshdesk.com/support/solutions/articles/12000021288)


2020-08-05, 15:37:47
Reply #2

Phasma

Hi

Yeah, I can change it for every rendering I do, but I want the sampling pattern and random seed to be changed during rendering. When I send images to the farm with my solution, it also disables the "Lock sampling pattern" option. I get back images with different sampling patterns, and when I combine them using "mean" or "median" I get a cleaner image than a locally rendered image with a lot of passes. I found this to be useful, but afaik it is not possible to render the same image locally, as the pattern would need to be changed during rendering.

I can also combine your own two sample images from here:
https://coronarenderer.freshdesk.com/support/solutions/articles/12000039645-what-is-the-new-improved-sampler-

The only thing that is needed is the same image rendered with different noise seeds. But having this locally, without saving the image after some passes and hitting render again, would be cool.

Alex


2020-08-05, 16:24:16
Reply #3

maru (Corona Team)

I think something like this is already happening between each pass, and that's why the image sampling is "random".

If you just blend the passes on top of each other, you ignore some important features:
- pixel weights (that is why some of the pixels appear as bright fireflies when you render, and that is why, I guess, your experiment has fewer fireflies and less intense reflections)
- adaptivity

 I am not a developer though, and spreading bro science is the last thing I would like to do ;) so I will log some more info here once I learn more.

Update:
Yep, confirmation from the Corona HQ:
- if you render many images, each with a random seed, and combine them yourself using mean as the combination method, you should get the same thing as rendering one image - so there is no advantage / no difference
- if you use median, you will most likely get a cleaner image, but some features may disappear (it is a bit like a simple firefly removal method)
Other than that, the pattern changes during rendering by itself - otherwise the rendered image would not continue improving.
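
To make the difference concrete, here is a tiny illustration with made-up numbers for a single pixel that catches a firefly on one blade only (purely an illustration, not actual render data):

Code:
import numpy as np

# One pixel's linear value across 10 blades; one blade hit a tiny bright
# light through a lucky path (made-up numbers, purely illustrative).
pixel = np.array([0.40, 0.50, 0.45, 120.0, 0.42, 0.48, 0.50, 0.44, 0.46, 0.43])

print(pixel.mean())      # ~12.41 -> mean keeps the firefly's energy (same expected value as one long render)
print(np.median(pixel))  # ~0.455 -> median drops the firefly, but its energy is lost (biased)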




2020-08-05, 16:51:02
Reply #4

Phasma

Thanks a lot for your reply!

What random sampler should I try to get this result?

However, I am not understanding the adaptivity thing. If I render 10x10 passes with adaptivity on, it will still focus the rays on the important locations on every machine. Combining them all together would not make the adaptive sampling disappear.

2020-08-05, 16:53:29
Reply #5

maru (Corona Team)

Quote
Thanks a lot for your reply!

I will try the random sampler, but I am not understanding the adaptivity thing. If I render 10x10 passes with adaptivity on, it will still focus the rays on the important locations on every machine. Combining them all together would not make the adaptive sampling disappear.

If you render fewer than 5 passes, there is no adaptivity whatsoever.
Also, there is a chance that a bright spot will only appear after more than 10 passes (let's say 100 or 200) and only then will adaptivity kick in in that area (imagine some tiny bright light). Obviously that's an edge-case example.

2020-08-05, 17:10:26
Reply #6

Phasma

I set the adaptive recalc down to just 2 passes and always make sure every blade renders at least 10 passes. I also did some tests, and adaptivity is used and is important.

Whatever I set as the random sampler though, it seems I am not able to get the same results as my mean/median combined images...

Quote
you should get the same thing as rendering one image - so there is no advantage / no difference

So I think this is not true :-(

2020-08-05, 18:17:55
Reply #7

Phasma

So I did a comparison now:

All random image samplers and mean/average/median merges compared - psycho was nice to watch cleaning up :-)

I used the "New Improved" image sampler for the 10 median images.

I think there is a huge difference, and I am still of the opinion that this would make sense to have built in.

2020-08-05, 19:19:33
Reply #8

romullus (Global Moderator)

While your median examples are effective at firefly removal, they are actually a lot worse at DOF noise compared to a single image rendered at 100 passes. I'm not sure what you're expecting to gain from your requested feature.

2020-08-05, 21:33:19
Reply #9

Phasma

Mostly firefly removal and distributed rendering. Sometimes the DOF is better, sometimes not that good. However, we also don't render with DOF that often.

Also, with some more passes those DOF areas would clear up, but not those noisy bokeh balls/fireflies.

2020-08-06, 11:45:05
Reply #10

maru (Corona Team)

Sorry, but if a simple solution like this existed, and if it brought measurable benefits, it would have been described in scientific papers and I am sure our developers would have known about it already.
It seems that what you are describing is either worse than some other solutions, or is already implemented in a better way.

2020-08-06, 11:51:48
Reply #11

Phasma

Here is a more production-related test.

As I said, I was mostly after a solution to distribute renderings to a farm. That was because with tiles and strips etc. we cannot use:
- adaptivity
- bloom/glare
- camera distortion
- screen-mapped stuff
- denoising

But if you look at the first two examples, which were rendered with absolutely the same settings except that one was distributed on the farm and merged with median, I think there is more than just the distribution advantage. To me it looks less noisy and the fireflies are removed.

And yes - it is super simple, and it is not 100% physically correct, as it might get rid of things like medium-bright bokeh balls etc. - but these would be hard to sample anyway. So this might be a way to get a cleaner image faster, with the compromise of giving up some highlight intensities.

2020-08-06, 12:37:12
Reply #12

pokoy

As a workaround, and only if the scene is static, you could render an animation with each PC rendering a different frame. The noise pattern would change on each frame, giving you what you're after. Small overhead from scene data transfer, of course.

2020-08-06, 12:58:01
Reply #13

Phasma

That is basically what I am doing. I spawn X jobs rendering the same frame with passlimit = total_passes / X_jobs. Currently I do this with Backburner. A post-render script makes sure that: a) output luminosity and other meta info is saved once the job is done, and b) if every job for an image is done, it calls ImageMagick to average them all together, render element by render element - but only if the luminosity does not differ from the median luminosity of all rendered "pass bunches". This makes sure that if a blade was missing a texture or Forest Pack was not rendered properly, those passes will not be used for the assembled median image.
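
The failed-blade check is roughly this (a simplified sketch with illustrative names and thresholds, not my actual script; it assumes "luminosity" means the mean Rec. 709 luminance of the linear render element):

Code:
import numpy as np

REL_TOLERANCE = 0.05  # illustrative: reject blades deviating more than 5% from the median

def mean_luminance(img):
    # img: float32 array (H, W, 3) of linear RGB; Rec. 709 luminance weights.
    return float(np.dot(img.reshape(-1, 3).mean(axis=0), [0.2126, 0.7152, 0.0722]))

def filter_failed_blades(images):
    # Drop "pass bunches" whose overall brightness deviates from the rest,
    # e.g. because a blade was missing a texture or a Forest Pack object.
    lums = np.array([mean_luminance(img) for img in images])
    reference = np.median(lums)
    keep = np.abs(lums - reference) <= REL_TOLERANCE * reference
    return [img for img, ok in zip(images, keep) if ok]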

With this we can render distributed without missing out on nice Corona features, as we render full images. This also works with a locked sampling pattern, giving the same result as when all passes are rendered locally (or with Corona DR). However, as you can see above, if you unlock the sampling pattern within this process, you can gain some quality.

If no one is interested in having this happen locally (as an option in Corona), without saving and restarting the rendering with a different sampling pattern and doing all this manually, we don't need to talk any further here. It was just a suggestion that is super simple and might not be in technical papers, but could be beneficial for more people and not just for us.

2020-08-06, 15:09:39
Reply #14

Frood

I know how hard it can be to convincingly request, or even discuss, things like this, especially when it leaves the standard territory of how things should be done or touches any holy grail. But what about turning this into a feature request to improve DR? The alternate merge variant may at least be worth checking - I cannot judge this. Additionally, it looks like this is one of your main reasons to look for alternatives:

Quote
It is similar to Corona DR, however I want those slaves to be independent and controllable by a manager, not by a fixed list or a "searching for new slaves during render" thing.

Yes, "search for lan" is a well-intentioned feature to keep things simple but absolutely unusable and impractical for us - never used it because DrServers are running everywhere and permanently. I shortly describe how I "manage" them in a running BB job with activated DR, maybe it adds another aspect for you to consider: every node rendering anything (part of a mandatory prerender script) periodically looks for a script dedicated to the job and executes it if present. In that script I can do many things I like (but not all) during processing: finish the job, dump the VFB to get a snapshot of the current state, change renderer properties, also changing the slave list. But to let the changes of the slave list take effect, you have to disable an re-enable DR again. Unfortunately this kicks out all participating slaves of course. They rejoin, but have to load the complete job once again.

So if the slave list were periodically evaluated during rendering (and not only when toggling DR on), we would have an important feature for writing managers/custom solutions without the drawback of disconnecting and rejoining slaves.

Edit: Unfortunately, in Corona v6 that amazing feature (DR on/off toggling during rendering) has been removed. DR gets less flexible with every release in order to fix bugs, it seems.

Good Luck




2020-08-06, 15:41:46
Reply #15

Phasma

Thanks! Exactly these are the reasons why I did not choose DR as a platform for all of this. Backburner, however, also seems sketchy and has its limitations. For example, I cannot submit all of this as one job with X tasks; BB only accepts tasks if they are actually different frames. Like pokoy mentioned, this is only possible for absolutely static scenes. Pre-render scripts that adjust the frame will also cause Backburner to always add the same numbering to the filename, and therefore the output will overwrite itself all the time. If I also change the filename in the pre-render script, Backburner somehow has issues writing these files at all (I think it is because the job metadata still has the other filenames saved...). So for now I have to submit in a loop, which takes some time if the scene is heavy - uncool. My idea might be to have one dedicated job spawner in the farm that I just give instructions to...

2020-08-10, 10:27:44
Reply #16

Phasma

So I did another test, this time with adaptivity on all the time. I only compared it that way in the last example because we were not able to utilize adaptive sampling with our current render distribution method. As I am writing here for the sake of making the median available internally and for local renders, I think this comparison is much more important.

So while the fireflies are handled a lot better, the general noise with the median is higher, just as romullus mentioned before.

However, I still think that this would be helpful. If you render scenes with a lot of blur, be it motion blur or out-of-focus areas, the median image renders these areas cleaner, and sometimes they would never fully clear up otherwise.

In the second example, again a simple teapot scene, I can throw 500 passes at it and the blur will not clear up. If I split it into 10x50 or 20x25 passes, the results look much more usable (and of course it rendered a lot quicker on 10/20 machines in my case).

2020-08-10, 10:36:47
Reply #17

romullus (Global Moderator)

Just a random thought - since the median merge is mostly effective at suppressing fireflies, it would be interesting to see how it compares to lowering the MSI parameter. Could you do an additional test - single machine, 100 passes, but lower MSI? Maybe that would give you a similar, or even better, result than the median merge?

2020-08-10, 11:19:48
Reply #18

Phasma

I already thought about this as well. However - especially in this case - fireflies tend to have a very high intensity and will not be cut off by a low value. But since you wanted me to try it specifically, I did it with a value of 5 (tried 10 as well).

I can also try to play with GIvsAA - not sure if this could help...

But thanks for the hint.

2020-08-10, 14:14:26
Reply #19

Phasma

Here is 2 vs 64 GIvsAA.

While 64 at least looks better, it took 2 hours to finish those 500 passes. The image with a value of 2 rendered in 10 minutes :-D

-> and still not as good as the median.

2020-08-10, 16:22:39
Reply #20

maru (Corona Team)

1) When comparing different GIvsAA values, it is best to use a time limit (e.g. 10 minutes) and then compare the results. With lower GIvsAA you will get better quality of AA, DOF, motion blur, and fine textures, at the cost of GI and direct light quality.
2) Your results look pretty much like clamped highlights. Are you saving your passes in a 32-bit format before blending them?
3) Can you try increasing Highlight clamping in the System tab (not Highlight compression in the VFB/camera!) and compare it with your method? You would have to use IR to find the optimal value. Too low values (like 1 or less) will completely clamp bright areas, too high values will introduce fireflies. I would try 2 and then increase until the clamping is barely visible on strong highlights.

2020-08-10, 17:01:50
Reply #21

Phasma

Thanks for the input.

Regarding 1):
True - didn't think of that.
Regarding 2):
If I blend 32-bit EXRs I get the same result. The highlights look clamped because they get more and more "underrepresented", as different blades with different noise patterns cover them differently. So if one blade shows a strong highlight at a specific x,y coordinate but most of the other machines show none there, it averages out and leaves just a "clamped-looking" highlight. Highlights that are bright on most if not all of the machines, however, stay intact and as bright as they should be.
Which brings me to 3):
If I introduce the highlight clamping in the System tab, I will lose all the highlights, especially the strong ones like on the teapot in the foreground, although they are the kind of "proven highlights" (they show up in every image from every blade) that are sampled well and that I don't have to worry about. However, you are right: I am able to reduce the background noise of the bokeh balls using the highlight clamping. I would have to render two or more images though and manually paint in the areas I want in Photoshop (as I would lose some of the other highlights that do not cause any sampling problem).

2020-08-11, 07:42:16
Reply #22

Mohammadreza Mohseni

Really an interesting topic. I will surely test this method.

2020-08-11, 11:12:21
Reply #23

Ondra (Administrator)

Hi, are you using any sort of highlight compression? Because if yes, this is basically simulating the "subpixel mapping" that was used in V-Ray back in the day, but was abandoned because it decreases image quality (it removed DOF- and MB-related highlights). The basic idea of the method used in astronomy has been used in all realistic ray tracers since day one and is done in an optimal way that you cannot improve upon by doing it manually.

2020-08-11, 14:00:04
Reply #24

Phasma

Hi

If you mean the color mapping highlight compression - then yes. I can however set this to 1 as a test and see what comes out then.

Basically I also thought that this technique must be inherent to path tracing and progressive rendering - how else would different passes not just overwrite the previous one? Also, if I median-merge more and more images together, the effect of the result clearing up gets less and less noticeable - same as with progressive rendering in general. However - and that's why I wrote in the title that "one sampling pattern might not be enough" - it seems that the patterns Corona renders with are not too different. If I render without the sampling pattern locked, the results are really different from each other. I mean, in the end all these out-of-focus highlights really need is some more different rays to clear up, but somehow very often these regions never fully develop. They seem to be stuck in one sampling pattern.

2020-08-11, 15:00:29
Reply #25

Ondra (Administrator)

Corona tries to keep the sampling pattern the same across multiple renders, and we jumped through a LOT of hoops to do this, on purpose. It improves the noise situation in animations and helps reproducibility (when there is some random bug after many passes). Yes, it also means you cannot average images yourself after the render. But don't worry - during a single rendering, when resuming rendering, and when doing DR, the sampling pattern is automatically unlocked.

When averaging images, the out-of-focus highlights are not really cleared, they are removed from the image because of the image clamping. If you average images without postprocessing applied, you should get +- the same result as when you just let it render as a single image.
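
For illustration, with a Reinhard-style curve standing in for whatever compression the VFB applies (just a sketch of the order-of-operations issue, not Corona's actual formula): averaging already-compressed values dims a highlight that only some blades caught, while averaging the linear values first and compressing afterwards keeps it.

Code:
import numpy as np

def compress(x, k=1.0):
    # Reinhard-style highlight compression; a stand-in for the VFB curve.
    return x / (1.0 + k * x)

# Linear pixel values from 10 blades: only one blade caught the bright highlight.
linear = np.array([0.2] * 9 + [50.0])

print(compress(linear.mean()))  # ~0.84 -> average first, compress after: the highlight survives
print(compress(linear).mean())  # ~0.25 -> compress each image, then average: it looks clamped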

Also note that thanks to the blue-noise sampling and DMC, Corona actually picks better noise patterns across passes than you could achieve with purely random sampling - either in real-world photography or by re-rendering the image multiple times with the sampling pattern unlocked. You can switch between the DMC and purely random per-pixel samplers in the experimental settings to see how much improvement this brings.

2020-08-11, 15:04:07
Reply #26

Ondra (Administrator)

You can also achieve the same effect with highlight clamping in the System -> VFB render settings.

2020-08-17, 16:07:31
Reply #27

Phasma

I could not believe it, but it is true:

When rendering to EXR with highlight compression set to 1, both (100 passes and 10x10) look basically the same.

So I could still ask for my feature to be implemented, I thought... but this would mean that the internal image blending within the VFB would need to be done with highlight compression turned up, or in general at a lower bit depth. Correct me if I am wrong here, but this would not be something I would like to have :-D

So the median is something that can be applied later, once the color mapping is done, and it can help reduce fireflies. It is something nice to do when distributing work over multiple computers, but nothing for the internal rendering process - because Corona basically already does this properly and automatically on the non-color-mapped images.

If that is all correct - we can close the thread :-)