I think something like this is already happening between each pass, and that's why the image sampling is "random".
If you just blend the passes on top of each other, you ignore some important features:
- pixel weights (that is why some pixels appear as a bright firefly when you render, and that is also why, I guess, your experiment has fewer fireflies and less intense reflections)
- adaptivity
I am not a developer though, and spreading bro science is the last thing I would like to do ;) so I will log some more info here once I learn more.
Update:
Yep, confirmation from the Corona HQ:
- if you render many images, each with a random seed, and combine them yourself using the mean as the combination method, you should get the same result as rendering one image, so there is no advantage and no difference
- if you use the median instead, you will most likely get a cleaner image, but some features may disappear (it acts a bit like a simple firefly-removal filter)
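To see why the median cleans up fireflies while the mean does not, here is a minimal sketch (my own illustration, not Corona's internals) using NumPy. It simulates several independent passes of one pixel, plants a firefly outlier in a single pass, and compares the two combination methods. All the names and values here are made up for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical setup: 8 independent renders of the same pixel.
# The true radiance is 1.0; each pass adds a little noise.
true_value = 1.0
passes = true_value + rng.normal(0.0, 0.05, size=8)

# One pass happens to catch a firefly: a rare, extremely bright sample.
passes[3] = 50.0

mean_combined = passes.mean()        # the single outlier drags the mean way up
median_combined = np.median(passes)  # the outlier is simply discarded

print("mean:  ", mean_combined)      # far above 1.0, dominated by the firefly
print("median:", median_combined)    # close to 1.0, firefly gone
```

This also shows the downside mentioned above: the median throws the firefly's energy away entirely, so any legitimately bright feature that only shows up in a minority of passes (a rare caustic, for example) disappears the same way. The mean, by contrast, converges to the same result as one long render with the combined sample count.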
Other than that, the sampling pattern changes by itself during rendering; otherwise the rendered image would not keep improving.