Author Topic: Nvidia real-time raytracing  (Read 30483 times)

2018-08-29, 14:01:30
Reply #30

cecofuli

  • Active Users
  • **
  • Posts: 1578
    • View Profile
    • www.francescolegrenzi.com

2018-08-29, 19:44:20
Reply #31

burnin

  • Active Users
  • **
  • Posts: 1591
    • View Profile
;) Yup, I've been aware of the Imagination tech - the GPU architecture in the PowerVR RT cards - since their demos in 2014 (Apple later bought it, implemented it in almost all of their devices, and is optimizing code now...). So NV rushing to release theirs didn't surprise me - I am wary of their strategy of hyping up the ignorant crowd...

Some more food for thought from Panos (RS dev.):
Quote
At some point I’ll be preparing a longer post than this but just wanted to quickly offer some insight on ray tracing hardware acceleration and ensure that user expectations are reasonable.

Most renderers work by executing the following three basic operations:

1) They generate rays (initially from the camera),
2) They shoot these rays into the scene (i.e. they do ray tracing),
3) They run shaders at the intersection points of the rays.

Shading typically spawns new rays for reflection/refraction/GI/etc purposes which means going back to step 1.
This 1-2-3 process happens as many times as there are ray bounces.
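
As a rough illustration of that 1-2-3 loop, here is a toy sketch in Python - not how Redshift or any production renderer is actually structured; every name and number in it is made up purely to show the generate/trace/shade cycle:
Code:
import random

def generate_camera_rays(width, height):
    # Step 1: ray generation - one primary ray per pixel (a toy stand-in).
    return [(x, y) for y in range(height) for x in range(width)]

def trace(ray, scene):
    # Step 2: ray tracing - find the nearest hit. Here a coin flip stands in
    # for BVH traversal; this is the part RT hardware accelerates.
    return random.random() < scene["hit_probability"]

def shade(ray, scene):
    # Step 3: shading - run the material, which may spawn a secondary ray
    # (reflection/refraction/GI), sending us back to step 2.
    spawns_secondary = random.random() < scene["reflectivity"]
    return scene["albedo"], spawns_secondary

def render(scene, width=4, height=4, max_bounces=3):
    image = {}
    for ray in generate_camera_rays(width, height):
        color = 0.0
        for bounce in range(max_bounces):                       # the 1-2-3 loop, once per bounce
            if not trace(ray, scene):                           # step 2
                break
            contribution, spawns_secondary = shade(ray, scene)  # step 3
            color += contribution * (0.5 ** bounce)
            if not spawns_secondary:
                break
        image[ray] = color
    return image

print(render({"hit_probability": 0.8, "reflectivity": 0.3, "albedo": 0.7}))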

Hardware accelerated ray tracing primarily speeds up the second step: i.e. the ‘core’ ray tracing. If the renderer uses really simple shading, then the ray tracing step becomes the most expensive part of the renderer. For example, if you use extremely simple shaders that (say) just read a flat texture and return it, you could easily find out that the ray tracing step takes 99% of the entire render time and shading just takes 1%. In that case, accelerating ray tracing 10 times means that the frame renders 10 times faster, since ray tracing takes most of the time.

Unfortunately, production scenes do not use quite as simple shading as that.

Both we and other pro renderer vendors have found cases where shading takes a considerable chunk of the render time. I remember reading a Pixar paper (or maybe it was a presentation) where they were claiming that their (obviously complicated) shaders were actually taking *more* time than ray tracing! Let’s say that, in such a scenario, shading takes 50% of the entire frame time and tracing takes the other 50% (I intentionally ignore ray generation here). In that scenario, speeding up the ray tracer a hundred million times means that you make that 50% ray tracing time go away but you are still left with shading taking the other 50% of the frame! So even though your ray tracer became a hundred million times faster, your entire frame only rendered twice as fast!
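
Put as arithmetic, that is just Amdahl's law. A quick sketch (the 99/1 and 50/50 splits are the hypothetical figures from above, not measurements):
Code:
def frame_speedup(tracing_fraction, tracing_speedup):
    # Amdahl's law: only the traced fraction of the frame gets faster;
    # the shading fraction is untouched by the RT hardware.
    remaining_time = (1.0 - tracing_fraction) + tracing_fraction / tracing_speedup
    return 1.0 / remaining_time

# Simple shading: tracing is 99% of the frame, so 10x faster tracing ~ 9.2x faster frame.
print(frame_speedup(0.99, 10))
# Production shading: tracing is 50% of the frame, so even a 100-million-x
# speedup of the tracer caps the whole frame at ~2x.
print(frame_speedup(0.50, 1e8))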

All this is to say that when you read claims about a new system making rendering several times faster, you have to ask yourself: was this with simple shading? Like the kind you see in videogames? Or was it in a scene which (for whatever reason) was spending a lot of time during tracing and not shading?

In more technical terms: the RT cores accelerate ray tracing while the CUDA cores accelerate shading and ray generation. The RT hardware cannot do volume rendering and, I think, no hair/curve tracing either - so these two techniques would probably fall back to the CUDA cores as well, which means no benefit from the RT hardware there.
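
Extending the same back-of-the-envelope model: if part of the frame is volume or hair/curve work that stays on the CUDA cores, only the remaining triangle tracing benefits from the RT cores. The stage split below is invented purely for illustration:
Code:
def frame_speedup_mixed(shading, triangle_tracing, volumes_and_hair, rt_speedup):
    # Fractions of the original frame time; only triangle tracing runs on the RT cores,
    # while shading and volume/hair work stay on the CUDA cores at the old speed.
    assert abs(shading + triangle_tracing + volumes_and_hair - 1.0) < 1e-9
    remaining_time = shading + volumes_and_hair + triangle_tracing / rt_speedup
    return 1.0 / remaining_time

# Hypothetical frame: 50% shading, 35% triangle tracing, 15% volumes/hair.
# Even a 10x RT-core speedup on the triangles gives well under 2x overall.
print(frame_speedup_mixed(0.50, 0.35, 0.15, 10))   # ~1.46x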

All this is not to say that we’re not excited to see developments on the ray tracing front! On the contrary! But, at the same time, we wanted to ensure that everyone has a clear idea on what they’ll be getting when ray tracing hardware (and the necessary software support) arrives. We have, as explained in other forum posts, already started on supporting it by re-architecting certain parts of Redshift. In fact, this is something we’ve been doing silently (for RS 3.0) during the last few months and in-between other tasks. Hopefully, not too long from now, we’ll get this all working and will have some performance figures to share with you.


Thanks

-Panos

PS
The V-Ray demos are running on Quadros.
« Last Edit: 2018-08-29, 19:50:16 by burnin »

2018-08-29, 22:35:18
Reply #32

JViz

  • Active Users
  • **
  • Posts: 139
    • View Profile
    • Behance
Nobody wants software that needs a specific piece of hardware to run. That piece of hardware will carry a very high price - look at Quadro. That's because companies like Nvidia/AMD will take advantage of the market's need for that hardware and jack the price up. Use what's already there and keep it that way: use GeForce instead of Quadro or fancy RTX, like FStorm does. CPUs will have hundreds of cores soon, with new materials enabling smaller dies that push way deeper into Moore's law curve - we are set for great stuff in the CPU world. Sit tight.
Although a purist, my work is anything but.
https://www.behance.net/ImageInnate

2018-08-30, 10:17:16
Reply #33

Nejc Kilar

  • Corona Team
  • Active Users
  • ****
  • Posts: 1297
    • View Profile
    • My personal website
Well, I'm sitting here wondering... RTX utilizes the RT cores. Does DXR utilize the RT cores too? Do you need to access the RT cores through the new RTX framework (I think it's called OptiX Ray Tracing)? What does that mean for AMD GPUs?

The biggest drawback of GPU rendering, for me personally, was always the closed-off ecosystem. You either run on Nvidia CUDA-compliant GPUs or you don't run at all. I dislike Otoy for the weird corporate crap they do, but supposedly they are getting closer and closer with their Vulkan implementation, which will work on pretty much any popular vendor going forward (Intel?).

I am mentioning that because if DXR doesn't work well with RT cores and whatever AMD comes up with isn't compatible... we are even more locked in to Nvidia. And we all know how that turned out when Intel was on top. Or Apple. Or Comcast... You get the picture.

Disclaimer: I am by no means saying that Nvidia produces crap hardware. On the contrary, for the past couple of years they have been the performance leaders, no doubt. I do remember the fun times when the Radeon 9700 Pro made things go round, and how nice it is for there to actually be competition on the high end as well.

edit: It would appear that the RT cores can be programmed through Nvidia's OptiX library, DXR and Vulkan.
Source -> https://www.chaosgroup.com/blog/what-does-the-new-nvidia-rtx-hardware-mean-for-ray-tracing-gpu-rendering-v-ray
« Last Edit: 2018-08-30, 12:10:40 by nkilar »
Nejc Kilar | chaos-corona.com
Educational Content Creator | contact us

2018-08-30, 10:48:58
Reply #34

karnak

  • Primary Certified Instructor
  • Active Users
  • ***
  • Posts: 76
    • View Profile
@burnin Thank you for the quote.
Corona Academy (May 2017)

2018-09-16, 20:03:00
Reply #35

burnin

  • Active Users
  • **
  • Posts: 1591
    • View Profile
GPU Ray Tracing for Film and Design: RenderMan XPU by Max Liani, Pixar Animation Studios

Elsewhere...
Cards coming in...
It was also mentioned (speculated?) that NVLink memory stacking on non-Quadro RTX cards is possible, but it has to be coded in-engine - and a V-Ray dev has already done it.

2018-09-16, 20:09:46
Reply #36

Juraj

  • Active Users
  • **
  • Posts: 4797
    • View Profile
    • studio website
Quote
GPU Ray Tracing for Film and Design: RenderMan XPU by Max Liani, Pixar Animation Studios

Elsewhere...
Cards coming in...
It was also mentioned (speculated?) that NVLink memory stacking on non-Quadro RTX cards is possible, but it has to be coded in-engine - and a V-Ray dev has already done it.

Well, of course he did :- ). I can't wait to see the results!
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2018-09-16, 22:06:50
Reply #37

Nejc Kilar

  • Corona Team
  • Active Users
  • ****
  • Posts: 1297
    • View Profile
    • My personal website
Quote
...
It was also mentioned (speculated?) that NVLink memory stacking on non-Quadro RTX cards is possible, but it has to be coded in-engine - and a V-Ray dev has already done it.

Apparently that is the case. I would be furious if Nvidia decided to call it NVLink but disabled any kind of memory pooling (on the GeForce cards, that is). They might as well call it SLI 3.0 in that case.

There is one publication that had something to say about that, though: Guru3D.com (https://www.guru3d.com/articles_pages/nvidia_turing_geforce_2080_(ti)_architecture_review,7.html).

"Unfortunately, though, the new NVLINK bridge is just used for SLI, there will be no GPU or memory sharing as previously expected. Think of NVLINK as SLI version three. It is just an interface yet many times faster."

I am unsure where they got that info from, to be honest, but they are pretty specific about it. Maybe they are just referring to gaming scenarios. From my quick glance at the Nvidia reviewer documents I couldn't find any info about that. In any case, I wouldn't be surprised if they somehow managed to lock it at the driver level.

All that being said, only a couple of days to go and we'll have more info :)

edit: Given that these graphics cards are supposed to drive 4K games, that 8 GB framebuffer seems almost too small. Hence I can see how an NVLink implementation with memory pooling would help in that regard.
« Last Edit: 2018-09-16, 22:28:58 by nkilar »
Nejc Kilar | chaos-corona.com
Educational Content Creator | contact us

2018-09-17, 20:37:46
Reply #38

SharpEars

  • Active Users
  • **
  • Posts: 103
    • View Profile
With regard to NVLink, Chaos Group have already stated on their blog that NVLink will support memory pooling on the new RTX cards, specifically the 2080 and 2080 Ti. Read the NVLink section at the following blog post by Vlado: https://www.chaosgroup.com/blog/what-does-the-new-nvidia-rtx-hardware-mean-for-ray-tracing-gpu-rendering-v-ray

2018-09-17, 20:41:39
Reply #39

Juraj

  • Active Users
  • **
  • Posts: 4797
    • View Profile
    • studio website
He doesn't directly mention the 2080 & 2080 Ti, but given that he uses them as an example:

Quote
For example, if we have two GPUs with 11GB of VRAM each and connected with NVLink, V-Ray GPU can use that to render scenes that take up to 22 GB

..I guess the answer is yes :- ). This will be a big one..
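
For a rough sense of what that buys, here is a back-of-the-envelope sketch (it assumes ideal pooling with no duplicated data and no NVLink overhead, which a real renderer won't quite reach):
Code:
def fits_in_vram(scene_gb, cards_gb, pooled):
    # With pooling, capacity is the sum of the cards' memory; without it,
    # each card must hold the whole scene, so the smallest card is the limit.
    capacity_gb = sum(cards_gb) if pooled else min(cards_gb)
    return scene_gb <= capacity_gb

two_2080_tis = [11, 11]
print(fits_in_vram(18, two_2080_tis, pooled=False))  # False: 18 GB > 11 GB per card
print(fits_in_vram(18, two_2080_tis, pooled=True))   # True: 18 GB <= 22 GB pooled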
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2018-09-20, 07:19:49
Reply #40

Nejc Kilar

  • Corona Team
  • Active Users
  • ****
  • Posts: 1297
    • View Profile
    • My personal website
So... The reviews have been out for like a day now. Any impressions? https://videocardz.com/78054/nvidia-geforce-rtx-2080-ti-rtx-2080-review-roundup
Nejc Kilar | chaos-corona.com
Educational Content Creator | contact us

2018-09-20, 09:54:26
Reply #41

Juraj

  • Active Users
  • **
  • Posts: 4797
    • View Profile
    • studio website
I would call that disappointing.

2070 = No NVLink, too weak to use RTX & DLSS properly. A worthless model in the lineup.
2080 = Same brute performance as the 1080 Ti; 8 GB is too little for games & viewports under Win10. Too weak to use RTX properly.

2080 Ti = OK brute speed-up, but not worth the price increase. Great speed-up with DLSS (I really like it). RTX in games is great, but I'm not willing to play at 1080p for it. Maybe in the next generation.

Who did the V-Ray benchmarks that I've seen? Chaos Group itself? They have had RTX cards for months and they don't actually utilize RTX in that benchmark yet. I find that pretty disappointing as well.

Right now this generation seems great only for GPU rendering guys who will benefit massively from NVLink.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2018-09-20, 11:25:56
Reply #42

Jpjapers

  • Active Users
  • **
  • Posts: 1684
    • View Profile
Quote
I would call that disappointing.

2070 = No NVLink, too weak to use RTX & DLSS properly. A worthless model in the lineup.
2080 = Same brute performance as the 1080 Ti; 8 GB is too little for games & viewports under Win10. Too weak to use RTX properly.

2080 Ti = OK brute speed-up, but not worth the price increase. Great speed-up with DLSS (I really like it). RTX in games is great, but I'm not willing to play at 1080p for it. Maybe in the next generation.

Who did the V-Ray benchmarks that I've seen? Chaos Group itself? They have had RTX cards for months and they don't actually utilize RTX in that benchmark yet. I find that pretty disappointing as well.

Right now this generation seems great only for GPU rendering guys who will benefit massively from NVLink.

As far as I'm aware, absolutely nothing that consumers can get their hands on supports RTX right now?

2018-09-20, 13:09:58
Reply #43

Nejc Kilar

  • Corona Team
  • Active Users
  • ****
  • Posts: 1297
    • View Profile
    • My personal website
Quote
I would call that disappointing.

2070 = No NVLink, too weak to use RTX & DLSS properly. A worthless model in the lineup.
2080 = Same brute performance as the 1080 Ti; 8 GB is too little for games & viewports under Win10. Too weak to use RTX properly.

2080 Ti = OK brute speed-up, but not worth the price increase. Great speed-up with DLSS (I really like it). RTX in games is great, but I'm not willing to play at 1080p for it. Maybe in the next generation.

Who did the V-Ray benchmarks that I've seen? Chaos Group itself? They have had RTX cards for months and they don't actually utilize RTX in that benchmark yet. I find that pretty disappointing as well.

Right now this generation seems great only for GPU rendering guys who will benefit massively from NVLink.

Well, there are some reviewers who've used the V-Ray GPGPU benchmark. Here is one of them:
https://www.guru3d.com/articles_pages/geforce_rtx_2080_ti_founders_review,34.html

There were a couple of others that tested with LuxMark and V-Ray GPU. I can't seem to find the links right now.

edit:
One of the LuxMark benchmarks -> https://hothardware.com/reviews/nvidia-geforce-rtx-performance-and-overclocking?page=3

There was also an RTX benchmark preview posted on Twitter and retweeted by Otoy. Supposedly it uses an early version of the RTX implementation via OptiX. It essentially shows that 1x 2080 Ti = 2x 1080 Ti on the RTX code path.

https://pbs.twimg.com/media/DnenQ7dV4AAhTzP.jpg:large

Not a fan of Nvidia at all. The cards seem OK for rendering though.

@jpjapers
I think you are correct :) The Octane 2018.1 beta should be out this year, and AFAIK that and V-Ray are currently in front in terms of RT core adoption.
Nejc Kilar | chaos-corona.com
Educational Content Creator | contact us

2018-09-20, 13:22:43
Reply #44

Juraj

  • Active Users
  • **
  • Posts: 4797
    • View Profile
    • studio website
I've seen numbers like 2x to 8x mentioned for when RTX is utilized in offline GPU rendering, but not the actual (real-world production scene) scenarios where that happens.

Patiently waiting to see that being showcased first.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!