Author Topic: Neural rendering - Nvidia RTX

2025-01-23, 17:12:48

Jpjapers

NVIDIA recently revealed a ton of neural features in their new 50-series cards.



A couple of them stood out to me.

RTX Mega Geometry:

Real-time LOD generation that enables enormous amounts of geometric detail, down to sub-pixel triangles, allowing highly detailed models without normal maps.
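
For intuition, here is a minimal sketch of the kind of screen-space test a continuous-LOD system might run to decide how far to subdivide a patch. The function name, parameters, and thresholds are my own illustration, not NVIDIA's actual API:

import math

def required_lod(edge_len_world, distance, fov_y_rad, screen_height_px,
                 target_px=1.0):
    """Pick a subdivision level so projected triangle edges approach
    the target pixel size (sub-pixel detail when target_px < 1).
    Illustrative only -- real systems amortise this per cluster."""
    # World-space size covered by one pixel at this viewing distance.
    px_world = 2.0 * distance * math.tan(fov_y_rad / 2.0) / screen_height_px
    # Each subdivision level halves the edge length.
    ratio = edge_len_world / (target_px * px_world)
    return max(0, math.ceil(math.log2(ratio))) if ratio > 1.0 else 0

# A 1 m edge viewed from 10 m, 60-degree FOV, 2160-pixel-tall frame -> 8 levels:
print(required_lod(1.0, 10.0, math.radians(60.0), 2160))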

RTX Neural Materials:

Small AI models that assist shading through compression and faster material evaluation, resulting in high-fidelity materials that render extremely quickly.
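
As I understand it, "neural material" roughly means replacing a stack of baked textures with a small per-texel latent code plus a tiny network that decodes it into BRDF parameters at shading time. A hypothetical numpy sketch of that decode step (all sizes and names are mine, not NVIDIA's):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: an 8-float latent code per texel instead of
# several full-resolution texture channels.
LATENT, HIDDEN, OUT = 8, 16, 5  # OUT: albedo RGB + roughness + metalness

W1 = rng.normal(size=(LATENT, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(HIDDEN, OUT));    b2 = np.zeros(OUT)

def decode_material(latent):
    """Tiny 2-layer MLP: latent code -> BRDF parameters in [0, 1]."""
    h = np.maximum(latent @ W1 + b1, 0.0)        # ReLU
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid

texel_code = rng.normal(size=LATENT)  # would be fetched from a latent texture
albedo, rough, metal = np.split(decode_material(texel_code), [3, 4])
print(albedo, rough, metal)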



Whilst I understand that Corona is, and always will be, a CPU renderer for many reasons, each time I see a new development in real-time rendering it feels like the quality gap between offline and real-time content is shrinking, and it makes much of the offline workflow feel slow. Corona has often relied on the crutch of absolute realism to deflect the question of GPU/hybrid rendering, but we now have a built-in upscaler and have had a denoiser for some time, both of which make assumptions and take shortcuts that reduce overall accuracy. So can that really be an argument against a hybrid approach anymore?

My questions to the team:

- Can any of these features be used to speed up our offline workflow?
- Will you ever look at a hybrid approach, now that the current generation of GPUs has so many neural capabilities?

- When the result of something like neural shading looks 99% the same, but the shader can be rendered to final quality in real time, does the absolute accuracy of your path tracing really matter?
- Could rendering be accelerated by any of these developments, at the cost of accuracy, if the user deems that acceptable?
- Could these technologies help reduce memory consumption? (Rough numbers below.)
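
Some back-of-envelope arithmetic on that last question, using made-up but plausible sizes (a conventional 4K PBR texture set versus a hypothetical lower-resolution latent grid; the figures are assumptions, not benchmarks):

# Illustrative only: compare a conventional 4K PBR texture set against
# a hypothetical latent representation.
res = 4096
channels = 3 + 3 + 1 + 1             # albedo RGB, normal XYZ, roughness, metal
conventional = res * res * channels  # bytes at 8 bits/channel, uncompressed

latent_res, latent_floats = 1024, 8  # coarser grid of 8 half-floats per texel
neural = latent_res * latent_res * latent_floats * 2  # 2 bytes per half-float

print(f"conventional: {conventional / 2**20:.0f} MiB")  # 128 MiB
print(f"latent:       {neural / 2**20:.0f} MiB")        # 16 MiB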

It feels like there is an inevitable limit to how far CPU rendering can go before it needs to become a hybrid renderer in order to stay relevant. I think that time is coming around pretty quickly. It's not here just yet, but it's becoming ever more apparent that people will accept slightly lower-quality images if it means getting them in less than half the time. With the quality gap between real-time and offline continuing to shrink, it's only a matter of time before that gap becomes imperceptible to the vast majority of clients and audiences. Eventually clients are going to expect those faster results because of what they see in real-time, and if offline rendering doesn't find some major speed improvements somewhere, it will become less and less important to be 100% accurate with your path tracing and more and more important to LOOK accurate.

« Last Edit: 2025-01-23, 18:14:55 by Jpjapers »

2025-01-23, 18:56:38
Reply #1

James Vella

I would love to see some AI integration similar to how tyDiffusion works in 3ds Max, as a VFB operator. This would be awesome in cases where you can flag certain geometry, like people or plants, and have it built straight into the render. Keeping on topic, I'm sure this would make good use of the GPU. I tested tyDiffusion for half a day and it was pretty amazing.
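
To illustrate the idea, here is my own glue-code sketch, assuming the open-source Hugging Face diffusers library and a stock inpainting checkpoint; this is not tyDiffusion's or any Corona API:

# Hypothetical post-process: inpaint flagged regions of a finished render.
# Assumes: pip install diffusers torch pillow, a CUDA GPU, and that a mask
# for the flagged geometry (e.g. a people render element) already exists.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

render = Image.open("beauty_pass.png").convert("RGB")  # the raw render
mask = Image.open("people_mask.png").convert("L")      # white = flagged geometry
result = pipe(prompt="photorealistic people, natural lighting",
              image=render, mask_image=mask).images[0]
result.save("beauty_pass_inpainted.png")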