Author Topic: AI super-resolution speedup  (Read 9538 times)

2019-04-15, 10:08:57
Reply #15

romullus

  • Global Moderator
  • Active Users
  • ****
  • Posts: 8779
  • Let's move this topic, shall we?
    • View Profile
    • My Models
Now that would be a terrific feature to have. A thousand times more interesting than upscaling, IMHO.
I'm not a Corona Team member. Everything I say is my personal opinion only.
My Models | My Videos | My Pictures

2019-04-15, 13:27:36
Reply #16

burnin

  • Active Users
  • **
  • Posts: 1532
    • View Profile
Hmmm... quite doubtful about it. From the "Performance Evaluation of Evotis within a VFX Environment" study by Tim Klink (August 2018), it seemed as if there's way too much overhead.

As Ondra put it years ago: "not in the foreseeable future ;)"

« Last Edit: 2019-04-15, 13:31:20 by burnin »

2019-04-15, 15:50:42
Reply #17

FrostKiwi

  • Active Users
  • **
  • Posts: 686
    • View Profile
    • YouTube
Hmmm... quite doubtful about it. From the "Performance Evaluation of Evotis within a VFX Environment" study by Tim Klink (August 2018), it seemed as if there's way too much overhead.

As Ondra put it years ago: "not in the foreseeable future ;)"
I mean - yeah. With 16 passes you will effectively have the storage footprint of 16 full-res EXRs. (Although the math works out differently.)
That storage hit is no joke.

But Evotis had different goals in mind - subpixel-perfect compositing. Deep compositing mostly solved this in the high-end VFX world. I would just use the samples to rebuild a dynamic-resolution image. There is basically no overhead in terms of calculation. Although running 100 passes' worth of samples through a reconstruction filter like the default "Tent" seems like a lot, the averaging is very quick. (Considering every progressive renderer like Corona does this every time the VFB updates for one pass anyway...)
Compositing every frame of an animation that way is deadly to performance. Updating the VFB for a different res on a single image? Quite easy on resources.
Also, this would be a checkbox type of thing, if such an idea were ever implemented. Saving samples to disk is not really difficult to do. Which is why I wanna tackle it as a side project :]
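For what it's worth, the "rebuild a dynamic resolution image from samples" idea fits in a few lines. This is just a minimal sketch, not Corona's actual code; the sample layout (normalized positions plus RGB) and the radius-1 tent filter are my own assumptions:

```python
import numpy as np

def reconstruct(samples, width, height):
    """Rebuild an image at any resolution from raw samples using a
    tent (triangle) filter with a radius of 1 target pixel.
    samples: (N, 5) array of [u, v, r, g, b], with u and v in [0, 1)."""
    img = np.zeros((height, width, 3))
    wsum = np.zeros((height, width))
    # Continuous sample positions in the target grid's pixel coordinates
    # (pixel centers sit at integer coordinates).
    px = samples[:, 0] * width - 0.5
    py = samples[:, 1] * height - 0.5
    for x, y, rgb in zip(px, py, samples[:, 2:]):
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        for j in (y0, y0 + 1):        # the up-to-four pixels whose
            for i in (x0, x0 + 1):    # tent support contains the sample
                if 0 <= i < width and 0 <= j < height:
                    w = max(0.0, 1.0 - abs(x - i)) * max(0.0, 1.0 - abs(y - j))
                    img[j, i] += w * rgb
                    wsum[j, i] += w
    covered = wsum > 0
    img[covered] /= wsum[covered][:, None]  # normalize accumulated weights
    return img
```

Because the positions are stored in continuous [0, 1) coordinates rather than on a pixel grid, the same sample set can be splatted to 480p or 4k alike; only the filter footprint changes.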
I'm 🐥 not 🥝, pls don't eat me ( ;  ;   )

2019-04-15, 17:03:40
Reply #18

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
[rant]
Ever since a thread linked a product called Evotis (their website is no longer public facing, but there are press reports and the Wayback Machine), which saved samples instead of pixels to create subpixel-perfect masks, I have been very hopeful for a resolution-independent renderer.
Instead of losing a sample's information in a pixel through the reconstruction filter, it would be interesting to save all the samples to disk. Then you could set the resolution after the fact.

Kinda like how RAW lets you set white balance after the shot, because debayering has not been done yet - saving samples would let you set the resolution after rendering has finished, because the samples have not been collapsed into pixels yet. Be it 480p or 4k. Ignoring pixel-grid alignment, 16 passes at 1080p would equate to 4 passes at 4k with no loss in sharpness, allowing you to switch back and forth.
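The pass equivalence above is easy to sanity-check, since (ignoring pixel-grid alignment) the noise level is driven by the total sample count:

```python
# Total samples = passes x pixel count; equal totals mean equal noise.
hd = 1920 * 1080    # 1080p pixel count
uhd = 3840 * 2160   # 4k (UHD) pixel count, 4x as many pixels
assert 16 * hd == 4 * uhd   # 16 passes at 1080p = 4 passes at 4k
```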
In the world of offline rendering this would be way more useful than upscaling.
Hope to code up a prototype of this sometime this year.
[/rant]

Just to get an idea: we could very easily do this. Do some render, take your "samples/s" value, multiply it by 12 and by the number of your render elements. That is the bandwidth produced. If you can store it somewhere, then we can talk about coding this ;).
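Taking that formula literally (and assuming the 12 is bytes per sample per element, i.e. one RGB triple of 32-bit floats - my reading, not stated in the post), a quick back-of-envelope with made-up numbers:

```python
# Hypothetical figures: 5 M samples/s from the VFB stats, 10 render elements.
samples_per_sec = 5_000_000
render_elements = 10
bytes_per_sec = samples_per_sec * 12 * render_elements  # the formula above
print(bytes_per_sec / 2**20, "MiB/s")  # roughly 572 MiB/s of raw sample data
```

Even a mid-range scene would keep a SATA SSD busy, which is presumably the point.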

I wouldn't worry about devs jumping on the bandwagon ;- ). It took half a year (or a year?) to get the nVidia denoiser, which is amazing, and less than a day to get the Intel one, which is pure shit and no one asked for it. It's not done on the basis of request intensity.

Not sure how serious this jab was, but you need to consider that for the nVidia denoiser we had to solve CUDA deployment in Corona and add the concept of realtime denoising, which was not previously present. For the Intel denoiser we did not have to do anything but compile and link a new library. Also, we did get requests for the Intel denoiser. As always, I have no problem with people asking me daily to implement something, but what I really hate is some people dissing the feature requests of others as "nobody asked for it" or "that is useless" or "people want this only because they are noobs" etc.
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2019-04-15, 18:08:37
Reply #19

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
That was more a reply to Romullus than a jab ;- ). Still... the Intel one is rather crap.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2019-04-15, 18:13:43
Reply #20

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
They seem keen to cooperate with us and improve it, though. We are sharing our scenes with them to incorporate into the training set, and they also want to make the denoiser compatible with the new high-quality filtering (nVidia might do the same; dunno if we got a reply yet). The memory usage thing was already fixed.
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2019-04-15, 18:33:21
Reply #21

Frood

  • Active Users
  • **
  • Posts: 1903
    • View Profile
    • Rakete GmbH
The memory usage thing was already fixed.

Oh, that's great news!


Good Luck



Never underestimate the power of a well placed level one spell.

2019-04-15, 18:35:35
Reply #22

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
That was more a reply to Romullus than a jab ;- ). Still... the Intel one is rather crap.
I've got to say, we just used the Intel denoiser on an animation with Corona and it was by far the best denoiser we have used - and we've used quite a few. It cleaned up all the noise without losing any detail, and did it very fast. It was a life saver. What exactly did you find that you don't like about it?

2019-04-15, 18:59:27
Reply #23

Juraj

  • Active Users
  • **
  • Posts: 4743
    • View Profile
    • studio website
What resolution was the animation? I find that at 2k, even the Corona native one is decently fast to use on animation, with good quality.

The quality of both the nVidia and Intel AI denoisers is simply not good enough (or even close to good enough) for finals in my eyes at all (with Intel being worse at refraction), but at least nVidia is sky-high fast, making it a very cool IR companion. The only benefit I've seen for Intel is that render nodes don't have GPUs, so denoising on them can only be done with the native one or the Intel one. But if someone finds the AI denoise acceptable for finals, that's up to him; I find it to be very far below the acceptable threshold.
I really don't want final images from one of the best ray-tracers on the market focused on photorealism to be smeared and painterly, like from photon mapping in 1995. I might as well fully switch to Unreal instead and have a sharp result in zero time.

Quote
without losing any detail

I don't find this to be true at all from my standpoint, but you can post a single frame if you would like (ideally before & after). If you are satisfied, though, that's good; that's all that matters.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2019-04-15, 22:11:33
Reply #24

burnin

  • Active Users
  • **
  • Posts: 1532
    • View Profile
Hmmm... quite doubtful about it. From the "Performance Evaluation of Evotis within a VFX Environment" study by Tim Klink (August 2018), it seemed as if there's way too much overhead.

As Ondra put it years ago: "not in the foreseeable future ;)"
I mean - yeah. With 16 passes you will effectively have the storage footprint of 16 full-res EXRs. (Although the math works out differently.)
That storage hit is no joke.

But Evotis had different goals in mind - subpixel-perfect compositing. Deep compositing mostly solved this in the high-end VFX world. I would just use the samples to rebuild a dynamic-resolution image. There is basically no overhead in terms of calculation. Although running 100 passes' worth of samples through a reconstruction filter like the default "Tent" seems like a lot, the averaging is very quick. (Considering every progressive renderer like Corona does this every time the VFB updates for one pass anyway...)
Compositing every frame of an animation that way is deadly to performance. Updating the VFB for a different res on a single image? Quite easy on resources.
Also, this would be a checkbox type of thing, if such an idea were ever implemented. Saving samples to disk is not really difficult to do. Which is why I wanna tackle it as a side project :]
"A general increase was to be expected, as in a flat only one set of values per pixel gets saved, regardless of its contents, producing, not accounting for compression, content-independent file sizes, whereas Evotis' file sizes greatly depend on the image's content. Nevertheless, a file size, on average, 140 times larger, especially for such a simple scene, for the adaptively optimized Evotis renderings, exceeds the scope of possibly being usable by far. Even the resampled 2-8 version is unlikely to be properly usable, as the files are, on average, 16.9 times as large as the flat rendering."

It's not just the extra data, but also ~3x longer render times (power consumption) and, after that, extra artistic & engineering work. The latest beta tested, on which the study was performed, didn't have deep support...

"In conclusion it is very difficult to predict whether Evotis will be successful and widely accepted in the industry this early in its development. The many advantages, non-uniform images, resolution independence, appending samples and sub-pixel-perfect object separation, as well as the disadvantages, no samples in depth, longer render times, insufficient optimization options and larger files, have all been explained in detail. While including depth sampling will be essential, improving render times and minimizing file size will be important, but not as critical for the short term, 1-2 years, progression. After having included deep support broadening the Nuke support and developing new techniques and approaches based on a sample workflow, not easily possible with flats, will be decisive, while constantly improving performance.
If this development phase will be successful and Evotis becomes an open standard it could well be possible for Evotis to be an industry-wide replacement for deep within the next 5-7 years, but it will probably never replace flats, just as deeps will never be able to replace flats.
The other question is: will this timeframe be fast enough considering all the movement within the industry at the moment? Possibly a new approach will emerge over the next few years making rendered images as an intermediate obsolete altogether."

Source: "Performance Evaluation of Evotis within a Visual Effects Environment" by Tim Klink
https://www.hdm-stuttgart.de/vfx/alumni/bamathesis/pdf_025

... and then some: Flame2020, IFX Clarisse Builder, Houdini, Pixar, ChaosGroup, AMD, Apple, IBM... even Blender. Humanity surprises me bit by bit. Interesting times for my humble little mind.
« Last Edit: 2019-04-15, 22:19:08 by burnin »

2019-04-15, 22:43:21
Reply #25

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
What resolution was the animation? I find that at 2k, even the Corona native one is decently fast to use on animation, with good quality.

The quality of both the nVidia and Intel AI denoisers is simply not good enough (or even close to good enough) for finals in my eyes at all (with Intel being worse at refraction), but at least nVidia is sky-high fast, making it a very cool IR companion. The only benefit I've seen for Intel is that render nodes don't have GPUs, so denoising on them can only be done with the native one or the Intel one. But if someone finds the AI denoise acceptable for finals, that's up to him; I find it to be very far below the acceptable threshold.
I really don't want final images from one of the best ray-tracers on the market focused on photorealism to be smeared and painterly, like from photon mapping in 1995. I might as well fully switch to Unreal instead and have a sharp result in zero time.

Quote
without losing any detail

I don't find this to be true at all from my standpoint, but you can post a single frame if you would like (ideally before & after). If you are satisfied, though, that's good; that's all that matters.
Here is an example with a 7.0 noise limit and Intel denoise - it did a fantastic job and kept all the detail, especially in the grass and vegetation. It only added 10 sec onto a 21 min render, but saved at least a third of the render time. These are of course straight out of the VFB.

2019-04-16, 13:17:39
Reply #26

FrostKiwi

  • Active Users
  • **
  • Posts: 686
    • View Profile
    • YouTube
Thanks for the awesome resource! A really interesting read.
Hope that Bachelor Thesis got a 1.0 :]

So they already did the independent-resolution thing, as was to be expected. 240p to 1080p - the results are basically perfect, as I would've imagined. (See image attached.)
"500% zoom-in of the resulting scaled up Evotis (a), of a flat rendered natively at full HD (b), and of a flat, 462x260px, scaled up to full HD using the cubic filtering algorithm (c) are shown."

"but it will probably never replace flats, just as deeps will never be able to replace flats." - Yes, it's a checkbox sidegrade to your workflow, specifically for stuff like print. Obviously a minor improvement at the cost of insane space requirements...

If you can store it somewhere, then we can talk about coding this ;)
One harddrive per rendered frame, what's the issue? /s
Ohh shoot, passes, totally forgot :S
Well, it shall live on as a programming showcase to bolster my portfolio and self-esteem...
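The joke is not far off, either. A rough per-frame estimate with hypothetical numbers (1080p, 100 passes, 10 render elements, with the same factor of 12 bytes per sample per element that Ondra quoted):

```python
# Rough storage for one frame of stored samples (all figures hypothetical).
width, height = 1920, 1080
passes = 100     # i.e. samples per pixel
elements = 10    # render elements, ~12 bytes per sample each
total_bytes = width * height * passes * elements * 12
print(total_bytes / 2**30, "GiB")  # about 23 GiB for a single frame
```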
I'm 🐥 not 🥝, pls don't eat me ( ;  ;   )

2019-04-16, 14:47:57
Reply #27

romullus

  • Global Moderator
  • Active Users
  • ****
  • Posts: 8779
  • Let's move this topic, shall we?
    • View Profile
    • My Models
Here is an example with a 7.0 noise limit and Intel denoise - it did a fantastic job and kept all the detail, especially in the grass and vegetation. It only added 10 sec onto a 21 min render, but saved at least a third of the render time. These are of course straight out of the VFB.

Sorry, I don't get it. The two images are almost identical. The denoiser didn't do anything there, it just added another 10 seconds to your render time.
I'm not a Corona Team member. Everything I say is my personal opinion only.
My Models | My Videos | My Pictures

2019-04-16, 15:40:02
Reply #28

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
Here is an example with a 7.0 noise limit and Intel denoise - it did a fantastic job and kept all the detail, especially in the grass and vegetation. It only added 10 sec onto a 21 min render, but saved at least a third of the render time. These are of course straight out of the VFB.

Sorry, I don't get it. The two images are almost identical. The denoiser didn't do anything there, it just added another 10 seconds to your render time.
Lol, you really don't see the difference? Denoising in production is only meant for the last 10% of noise left. But that last 10% can often mean 1/3 of the render time. I can definitely tell the difference between the two, and if we did not denoise the renders, the animation would be a mess of dancing noise.

2019-04-16, 16:50:06
Reply #29

dfcorona

  • Active Users
  • **
  • Posts: 290
    • View Profile
Here is another version, if this helps. It's at a 12.0 noise limit: 8 min 7 sec with no denoise and 8 min 2 sec with denoise. Don't ask me how it rendered faster with the denoiser; that wasn't the first render with no denoise.