Author Topic: Nvidia real-time raytracing

2018-08-14, 13:51:00

soso

Guys, here's the new Nvidia Quadro RTX (Real time ray tracing) GPU:
https://www.ustream.tv/NVIDIA

In other words, the GPU now has dedicated ray-tracing hardware (RT Cores) that is far more powerful at ray tracing than the most powerful CPUs these days. Btw, the next Nvidia GeForce cards will launch with the same ray-tracing capability:
https://wccftech.com/nvidia-geforce-rtx-graphics-card-turing-launch/

And I'm just wondering: could it work with a CPU render engine like Corona? (because the GPU now has ray-tracing capability, not just rasterization)
« Last Edit: 2018-08-14, 14:38:16 by soso »

2018-08-14, 18:12:09
Reply #1

cecofuli

"the GPU has much more powerful CPU (RT Cores) than the most powerful CPU's this days."

Are you sure? I mean, in a real scene, in a "normal" office?
Today we have 32-core CPUs (1700 euro). And, next year, maybe a single 64-core CPU.
Do you know that the new Quadro with 48 GB is priced at 10,000 euro!?
And you need specific software written for this architecture.
You cannot simply switch from CPU to GPU.
Also, all our plug-ins must be converted, etc...
Maybe in 10 years all rendering engines (Corona too) will run on the GPU with RTX technology.
But, for now, it's only a preview. It's like "look at the direction we are going". Only very big companies can buy these cards, and you need specific software.

Do you remember the first DEMO of V-Ray IR, 10 years ago?

Everybody was shocked! Now we are in the same situation with RTX =)
Now, in 2018, after 10 years, IR is usable in production for almost everybody.
But we had to wait... 10 years to see this technology become mature, stable and affordable =)

But yes, Corona must watch the GPU direction too, and not only for the VFB or denoising.
Otherwise they risk being out of the game.
But it means more money, more devs, more employees, etc...
And maybe there is some "hidden" agreement between Chaos and RL: "V-Ray will go in the GPU direction, and Corona in the CPU direction." Who knows... ;-)



 
« Last Edit: 2018-08-14, 18:53:46 by cecofuli »

2018-08-15, 04:16:21
Reply #2

soso

"the GPU has much more powerful CPU (RT Cores) than the most powerful CPU's this days."

Are you sure? I mean, in real scene, in "normal" office?
Now we have 32 Core CPU (1700 euro). And, next year, maybe 64 single Core CPU
Do you know that the new Quadro with 48 GB price is 10.000 euro!?

Yes, I'm sure. Here:

Single Quadro RTX 8000                         = 10,000,000,000 rays per second  ($10,000)
Single Quadro RTX 5000                         =  6,000,000,000 rays per second  ($2,300)
Single GeForce RTX 2080                        =  8,000,000,000 rays per second  ($699)
Single Threadripper 2990WX 32-Core Processor   =     13,024,100 rays per second  ($1,799)

Source:
https://cdn.wccftech.com/wp-content/uploads/2018/08/NVIDIA-Turing-RTX-Die-Breakup.png
https://corona-renderer.com/benchmark/cpu/AMD%20Ryzen%20Threadripper%202990WX

Btw, the "AMD Ryzen Threadripper 2990WX 32-Core Processor (×4)" is only 1 CPU, not 4 CPUs. Look at the cores/threads count. "(×4)" is the die count in the 2990WX CPU itself (4 dies). To corona team, please fix this benchmark or update it to the recent version of Corona ;)


And you need specific software written for this architecture.
You cannot simply switch from CPU to GPU.
Also, all our plug-ins must be converted, etc...
Maybe in 10 years all rendering engines (Corona too) will run on the GPU with RTX technology.
But, for now, it's only a preview. It's like "look at the direction we are going". Only very big companies can buy these cards, and you need specific software.
You know, I'm just messing with the Corona team to wake 'em up, and I hope they are aware of this. Look at this:
http://www.cgchannel.com/2018/08/video-chaos-groups-neat-new-real-time-ray-tracing-tech/

I think many render engines will implement it within the next 1-2 years. And I don't wanna see this amazing render engine left behind by the others. You know, these days technology is growing faster than we expected, thanks to Nvidia's technology, the AMD vs Intel competition, and AI (artificial intelligence).

NOTE: the GPU is the future, not the CPU anymore, like Skynet's brain in Terminator, if you know what I mean ;)
« Last Edit: 2018-08-31, 16:32:24 by soso »

2018-08-15, 16:25:16
Reply #3

Ryuu

  • Former Corona Team Member
DISCLAIMER: Anything said in this post (and my subsequent replies) is just my personal opinion and is definitely not any kind of official statement.

In other words, the GPU now has dedicated ray-tracing hardware (RT Cores) that is far more powerful at ray tracing than the most powerful CPUs these days.

You do understand that CPU cores and GPU cores are vastly different, and therefore comparing them head to head does not make much sense, right? :) Also, "more powerful" is kinda relative. Are GPUs generally more powerful than CPUs at trivial number crunching? Definitely. Are GPUs generally more powerful than CPUs at parsing C++ source files? I wouldn't be so sure about that.

Today we have 32-core CPUs (1700 euro). And, next year, maybe a single 64-core CPU.

I kinda doubt we'll see a 64-core CPU in the next AMD generation or anytime soon. 48 cores is a bit more likely, but I still wouldn't bet on that for the next generation.

Also, all our plug-ins must be converted, etc...

Yes, this is one of the major benefits of using the CPU. Unless a plugin has special requirements of the renderer, any new sexy plugin you find will work from day one. If Corona were a GPU renderer, you would have to request that we support the plugin, then wait at the very least a few days until we do, then finally try it with Corona, but still wait for us to debug it, and after many weeks, when all this is finally done and you get a real chance to try the plugin, you find out it's actually useless for your needs ;) Of course, reality is not that simple: some plugins may need compatibility tweaking even for CPU rendering, and with good APIs most plugins may work out of the box with a good GPU renderer.

But yes, Corona must watch the GPU direction too, and not only for the VFB or denoising.

We'll definitely start with baby steps by moving all the image post-processing to the GPU, and then we'll see where we get from there.

Single Quadro RTX 8000                         = 10,000,000,000 rays per second   ($10,000)
Single Quadro RTX 5000                         =  6,000,000,000 rays per second   ($2,300)
Single GeForce RTX 2080                        =  6,000,000,000 ? rays per second ($699 ?)
Single Threadripper 2990WX 32-Core Processor   =     13,024,100 rays per second   ($1,799)

What exactly does "ray" mean in this context? Is it just computing a single ray-triangle or ray-box intersection? Is it traversing the whole scene and finding out which primitive the ray hit? Does it also include shading the hit, evaluating all the maps, etc.? Is this just for coherent primary rays, or are the numbers still the same for the wildly incoherent secondary rays? You're comparing two sets of numbers which can mean very different things.

My home path tracing code can process 30 megarays per second on a single core. I don't really think this means it's more powerful than Corona :)

I'm definitely not saying that GPUs are not powerful. An optimized GPU renderer may be able to process more data than an optimized CPU renderer (depending on the specific GPU and CPU). But these numbers don't really prove it unless we know what exactly they mean.
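
Just to illustrate how slippery a bare "rays per second" figure is, here's a toy sketch (purely illustrative: one hard-coded triangle, all rays sharing one origin, no BVH traversal, no shading, no textures) that counts nothing but ray-triangle intersection tests. Even plain NumPy will report a healthy-sounding "megarays per second" this way, and it obviously says nothing about full path-tracing performance:

Code:
# Times ONLY Moller-Trumbore ray-triangle intersection tests; the printed
# throughput is NOT comparable to a full renderer's "rays per second".
import time
import numpy as np

def hit_triangle(orig, dirs, v0, v1, v2, eps=1e-8):
    """Test N rays (shared origin) against one triangle; returns a hit mask."""
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(dirs, e2)                         # (N, 3)
    det = pvec @ e1                                   # (N,)
    inv_det = 1.0 / np.where(np.abs(det) > eps, det, 1.0)
    tvec = orig - v0                                  # (3,)
    u = (pvec @ tvec) * inv_det
    qvec = np.cross(tvec, e1)                         # (3,)
    v = (dirs @ qvec) * inv_det
    t = (e2 @ qvec) * inv_det
    return (np.abs(det) > eps) & (u >= 0) & (v >= 0) & (u + v <= 1) & (t > eps)

# One arbitrary triangle and a batch of random ray directions from the origin.
rng = np.random.default_rng(0)
v0, v1, v2 = np.array([0., 0., 3.]), np.array([1., 0., 3.]), np.array([0., 1., 3.])
dirs = rng.normal(size=(1_000_000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

start = time.perf_counter()
hits = hit_triangle(np.zeros(3), dirs, v0, v1, v2)
elapsed = time.perf_counter() - start
print(f"{dirs.shape[0] / elapsed / 1e6:.1f} 'megarays/s' (intersection tests only)")

So whatever marketing decides to count as a "ray" matters enormously before two such figures can be compared.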

As for the speculation about the 2080 - I'm not following the news; are future consumer GPUs supposed to feature the Tensor cores, or are these just a Quadro feature?

Btw, the "AMD Ryzen Threadripper 2990WX 32-Core Processor (×4)" is only 1 CPU, not 4 CPUs. Look at the cores/threads count. "(×4)" is the die count in the 2990WX CPU itself (4 dies). To corona team, please fix this benchmark or update it to the recent version of Corona ;)

Yep, I know about that. It's not nice, but on the other hand it's not really critical enough to warrant releasing a new version of the benchmark just because of that. Releasing a new version of the benchmark would mostly invalidate all the previous results. We might do a new version once we finish porting Corona to another platform.

You know, I'm just messing with the Corona team to wake 'em up, and I hope they are aware of this. Look at this:
http://www.cgchannel.com/2018/08/video-chaos-groups-neat-new-real-time-ray-tracing-tech/

I guess that I don't really have to mention that we knew about project Lavina before this blog post went public, right? ;)

2018-08-15, 17:39:39
Reply #4

agentdark45

While we are talking about GPU rendering, the biggest upside for me is the performance you get relative to the cost.

For example, you could stick 4x1080ti's in a barebones machine and end up with an absolute powerhouse of a system for FStorm.

In a few years, when NVLink is the norm and shared GPU memory is a thing, I can easily see CPU-only renderers dying out (especially when factoring in the ludicrous DDR4/DDR5 prices). Add in networked GPU rendering + NVLink and it's game over.

Ideally what I'd like to see is CPU/GPU hybridisation (for rendering, not just VFB tasks). I realise what a mammoth task this might be from reading the above post, but there just seems to be so much untapped performance to be had.

Maybe Chaos group can lend a helping hand? ;)

2018-08-16, 10:33:55
Reply #5

soso

Watch here:

Arnold render (GPU) with RTX at 0:35

Arnold render with Threadripper 2990WX 32 Core at 5:10

2018-08-16, 11:39:19
Reply #6

Nejc Kilar

  • Corona Team
Watch here:

Arnold render (GPU) with RTX at 0:35

Arnold render with Threadripper 2990WX 32 Core at 5:10

Don't think it is wise for anyone to compare such vastly different scenes.

I have a 4790K that will render one clay scene in 10 seconds, while another clay scene takes 20 seconds just to load the scene geometry on a 2x 2969v3. I mean really, those are two vastly different scenes. You can't come to any conclusions like that imho.

As far as the RTX itself goes... Nvidia has hyped things up in the past. To their "credit", so has AMD. As always, it will be a wait-and-see game to find out how easily this technology can be leveraged and whether it really helps speed up the rendering process. Anyone following the industry for a long(er) time undoubtedly remembers that every few years we get THE MOST DISRUPTIVE TECH ON THE PLANET (8x-10000x faster than what we had before, yada yada) which ultimately never takes off. In general though, I like the push towards DXR and RTX. Seems like a legit thing.

What I like is that some companies are accessing RTX through DXR, which means we aren't locked to a single hardware vendor - at least from my understanding.


2018-08-17, 02:24:02
Reply #7

soso

You know, I'm just messing with the Corona team to wake 'em up, and I hope they are aware of this. Look at this:
http://www.cgchannel.com/2018/08/video-chaos-groups-neat-new-real-time-ray-tracing-tech/

I guess that I don't really have to mention that we knew about project Lavina before this blog post went public, right? ;)
I mean to tell you that with this new invention, CPU rendering may not be interesting anymore, maybe a year from now. Who knows? And people will try, or even switch their pipeline to, GPU-based render engines. Nvidia already predicted years ago that the GPU is the future for the film/CG/DCC/design/visualization industry. That's why they increased the VRAM in their Quadro and Tesla cards like crazy, far beyond their gaming GPUs (GeForce). And now they have added Tensor cores and RT cores to their new GPUs.

Quadro RTX has 48 GB GDDR6
RTX 2080     has   8 GB GDDR6
RTX 2080 Ti has 11 GB GDDR6

Btw, according to the leaks, the next GeForce RTX cards will have Tensor cores and support NVLink.
« Last Edit: 2018-08-17, 03:07:55 by soso »

2018-08-17, 08:06:11
Reply #8

soso

From what I watched in the keynote:

Are GPUs generally more powerful than CPUs at parsing C++ source files? I wouldn't be so sure about that.
I'm not a developer, I'm just an artist. But can it be done in a hybrid (CPU + GPU + Tensor core) way?

New generation of hybrid rendering
00:33:37.936

What about this:
https://developer.nvidia.com/how-to-cuda-c-cpp
https://developer.nvidia.com/gpu-accelerated-libraries

What exactly does "ray" mean in this context? Is it just computing a single ray-triangle or ray-box intersection? Is is traversing the whole scene and finding out which primitive did the ray hit? Does it also include shading the hit, evaluating all the maps, etc.? Is this just for coherent primary rays, or are the numbers still the same for the wildly incoherent secondary rays? You're comparing two sets of numbers which can mean very different things.
I think yes, it really has CPU-like ray-tracing capability, according to Jensen Huang. Watch from:
00:29:39.616 - 00:30:53.620

That's why they have been researching the RT core for the last ten years.

Please watch the Cornell box demo here to see the CPU-like capability of this RTX:
00:36:47.092 - 00:46:34.572



4 x Tesla v100 ($60,000) = 5 rays per pixel -> reflection, area lights, dynamic area lights, soft shadows.
00:20:59.000

1 x Quadro RTX ($10,000) = 10 Giga rays per second (from RT core)
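
Just to relate the two kinds of figures above (rays per pixel per frame vs. rays per second), a rough back-of-the-envelope conversion; the 1080p resolution and 30 fps are my own assumptions for illustration, not numbers from the demos:

Code:
# Assumption-heavy conversion between "rays per pixel per frame" and "rays per second".
width, height = 1920, 1080          # assumed 1080p, for illustration only
fps = 30                            # assumed real-time frame rate
rays_per_pixel = 5                  # figure quoted for the 4x Tesla V100 demo

rays_per_second = width * height * fps * rays_per_pixel
print(f"{rays_per_second / 1e9:.2f} gigarays/s")                    # ~0.31 gigarays/s

# Compare with the "10 gigarays/s" quoted for a single Quadro RTX:
print(f"~{10e9 / rays_per_second:.0f}x more rays/s, IF both count rays the same way")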

As for the speculation about the 2080 - I'm not following the news; are future consumer GPUs supposed to feature the Tensor cores, or are these just a Quadro feature?
I think yes, like the Tesla V100 and Titan V (both Volta architecture), which both have Tensor cores. I think it applies to the GeForce RTX too.


Btw, here's the realtime ray tracing capability from the RTX GPUs:

Realtime ray tracing "Porsche car"
00:52:56.689

Realtime ray tracing "Dancing Robot"
01:20:57.423

2018-08-17, 14:35:07
Reply #9

Nejc Kilar

  • Corona Team
I've been keeping tabs on the rumor mill for the new RTX series, and what excites me the most at this point is that I've read quite a few rumors saying that even the GeForce cards will have an NVLink connector. If it works, that means we'll effectively be able to double the RAM by going multi-GPU (afaik it works only in 1+1 mode, so with an 8 GB baseline card you'd get 16 GB of VRAM if you pair two of them).

I'm still not sold on the RTX thing yet. The way it was presented made it seem like it's going to offer a buttload of performance, which usually means it's only marginally faster. Time will tell :)

@soso
As for your comment about Nvidia predicting stuff... Theatrical marketing promos sure do predict a lot, but the actual reality is sometimes not quite as clear-cut as most companies would like to admit. Just remember that real-time ray tracing was about to break through in, like, the early 2000s, or was it the 90s? We still aren't quite there, are we? :)

I like innovative stuff so I'm hopeful both CPU and GPUs get better each year :)

2018-08-17, 15:25:41
Reply #10

soso

I've been keeping tabs on the rumor mill for the new RTX series, and what excites me the most at this point is that I've read quite a few rumors saying that even the GeForce cards will have an NVLink connector. If it works, that means we'll effectively be able to double the RAM by going multi-GPU (afaik it works only in 1+1 mode, so with an 8 GB baseline card you'd get 16 GB of VRAM if you pair two of them).

Here, if you wanna see more:

Titan RTX     12 GB   NVLink x2
RTX 2080 Ti   11 GB   NVLink x2
RTX 2080       8 GB   NVLink x1
RTX 2070       8 GB   NVLink x1
GTX 2060       6 GB   no NVLink


@soso
As for your comment about Nvidia predicting stuff... Theatrical marketing promos sure do predict a lot, but the actual reality is sometimes not quite as clear-cut as most companies would like to admit. Just remember that real-time ray tracing was about to break through in, like, the early 2000s, or was it the 90s? We still aren't quite there, are we? :)

Maybe you mean path tracing on the GPU in the 2000s. Real-time path tracing on the GPU is old news; it could be done long before Turing. But full ray tracing is much more complex to do on a GPU than on a CPU. Take Arnold's GPU renderer: they already had a prototype in 2014:
http://www.cgchannel.com/2014/08/solid-angle-to-preview-gpu-based-version-of-arnold/

But why didn't they just finish it and sell it in 2015-2016? Why is it taking so long to release? Because of the GPU's limitations in computing complex things like reflections, GI, caustics, etc. Those need to be "faked", or you need to write some "magic" code to do them.

But if you just need path tracing for your scene using Unreal Engine, you can do it very fast, though it still has those limitations. You need to "fake" things to hide the limitations of the GPU.

I think it's better to start a GPU render project now than to be too late later, when nothing can be done to compete with the other GPU render engines. No offense guys :)
« Last Edit: 2018-08-18, 17:09:04 by soso »

2018-08-17, 15:40:47
Reply #11

agentdark45

I think it's better to start a GPU render project now than to be too late later, when nothing can be done to compete with the other GPU render engines. No offense guys :)

Or Chaos group could buy out the FStorm dev. Could you imagine what he could do with their resources and team behind him?

2018-08-17, 15:54:04
Reply #12

soso

Or Chaos group could buy out the FStorm dev. Could you imagine what he could do with their resources and team behind him?
Then the FStorm guy resigns and builds his renderer again, this time based on Chaos source code :))

2018-08-18, 11:22:20
Reply #13

Nejc Kilar

  • Corona Team
I've been keeping tabs on the rumor mill for the new RTX series, and what excites me the most at this point is that I've read quite a few rumors saying that even the GeForce cards will have an NVLink connector. If it works, that means we'll effectively be able to double the RAM by going multi-GPU (afaik it works only in 1+1 mode, so with an 8 GB baseline card you'd get 16 GB of VRAM if you pair two of them).

Here, if you wanna see more:

Titan RTX     12 GB   NVLink x2
RTX 2080 Ti   11 GB   NVLink x2
RTX 2080       8 GB   NVLink x1
RTX 2070       8 GB   NVLink x1
GTX 2060       6 GB   no NVLink


@soso
As for your comment about Nvidia predicting stuff... Theatrical marketing promos sure do predict a lot, but the actual reality is sometimes not quite as clear-cut as most companies would like to admit. Just remember that real-time ray tracing was about to break through in, like, the early 2000s, or was it the 90s? We still aren't quite there, are we? :)

Maybe you mean path tracing on the GPU in the 2000s. Real-time ray tracing on the GPU is old news too; it could be done long before Turing with rasterization tricks. But it's much more complex to do that way than with a CPU. Take Arnold's GPU renderer: they already had a prototype in 2014:
http://www.cgchannel.com/2014/08/solid-angle-to-preview-gpu-based-version-of-arnold/

But why didn't they just finish it and sell it in 2015-2016? Why is it taking so long to release? Because of the GPU's limitations in computing complex things like reflections, GI, caustics, etc. Those need to be "faked", or you need to write some "magic" code to do them.

But if you just need simple ray tracing for your scene using Unreal Engine, you can do it very fast, though it still has those limitations. You need to "fake" things to hide the ray-tracing limitations of the GPU.

I think it's better to start a GPU render project now than to be too late later, when nothing can be done to compete with the other GPU render engines. No offense guys :)

Thanks for the info (probably from videocardz, right?) but at this point I don't think anybody from Nvidia confirmed that the NVLink on the Geforce RTX line will actually be enabled. I mean it sure looks that way but I guess we'll have 100% confirmation on Monday. Don't want it to be just a PCB tongue that does nothing. Exciting times for sure though!

As for ray tracing on the GPU in games... From my understanding, yeah, you can do partial ray tracing, which is what DXR / RTX do. You can trace reflections, maybe a couple of light bounces, but it's still fairly crude compared to an offline renderer. Like really crude. So I politely need to disagree with you on that.

Yes, path tracing on the GPU has been done since they went the whole GPGPU route. That said, even today, even in the Quadro RTX 6000 offline-renderer demos, it still is not real time - you still get noise, you still need to wait for things to clear up.

For games, rasterization is still what they are doing, and that is why it is real time. With the addition of DXR / RTX, however, they are now introducing _some_ ray tracing into the whole picture. The main thing is reflections, which can now include off-screen geometry (a big limitation of rasterization), plus a few extra things. I don't see how we can call that real-time ray tracing, though. Partial real-time ray tracing, sure, but there is a long way to go still :) I do like the initiative.

A good read about the basic differences between rasterization  and raytracing -> https://blogs.nvidia.com/blog/2018/03/19/whats-difference-between-ray-tracing-rasterization/
« Last Edit: 2018-08-18, 11:25:50 by nkilar »

2018-08-18, 13:54:06
Reply #14

soso

Thanks for the info (probably from videocardz, right?) but at this point I don't think anybody from Nvidia confirmed that the NVLink on the Geforce RTX line will actually be enabled. I mean it sure looks that way but I guess we'll have 100% confirmation on Monday. Don't want it to be just a PCB tongue that does nothing. Exciting times for sure though!
No, it's from a Chinese leaker a few days ago. Well, let's just wait until Monday, though...


As for ray tracing on the GPU in games... From my understanding, yeah, you can do partial ray tracing, which is what DXR / RTX do. You can trace reflections, maybe a couple of light bounces, but it's still fairly crude compared to an offline renderer. Like really crude. So I politely need to disagree with you on that.
Sorry, my previous comment was wrong; I already edited it. Yeah, the quality of real-time path tracing in Unreal isn't good enough if we compare it with ray-traced CPU render quality (offline rendering). But now they support real-time ray tracing using the RTX GPUs (hybrid rendering using the GPU, RT cores & Tensor cores).


Yes, path tracing on the GPU has been done since they went the whole GPGPU route. That said, even today, even in the Quadro RTX 6000 offline-renderer demos, it still is not real time - you still get noise, you still need to wait for things to clear up.
Do you mean this video?
http://www.cgchannel.com/2018/08/video-chaos-groups-neat-new-real-time-ray-tracing-tech/

It's pure ray tracing, not path tracing, and real-time, not offline rendering. They said:
Quote
This video is a screen capture of one of several demos being presented using standard vrscenes exported from 3ds Max and Maya. It has Project Lavina running on a Lenovo ThinkStation P920 workstation with a single Quadro RTX 6000 for its GPU. Everything you see is purely ray traced and runs at real-time frame rates at HD resolution. The materials and lighting are direct conversions from the vrscene, and we’re enabling one bounce of global illumination.

On YouTube, they said:
Quote
You’re looking at over 300 billion triangles, rendering in HD at 24-30 frames per second – in real-time, with no loss in detail.
It doesn't mean offline rendering. They said "screen capture", which can be done with Bandicam etc. The demo runs at 720p (HD) and gets 24-30 fps in real time, with over 300 billion triangles in the scene. It uses a real-time Chaos denoiser written in HLSL, which also allows it to run on almost any GPU. Btw, they are only enabling one bounce of global illumination.

If you compare it to "SIGGRAPH 2018 - NVIDIA CEO Jensen Huang - Reinventing Computer Graphics" video, they are using real time hybrid rendering (GPU, RT core & tensor core) in Unreal. The realtime denoiser is used using tensor core. Here's the explaination in the cornell box realtime render:
00:36:47.092 - 00:46:34.572
« Last Edit: 2018-08-18, 17:31:07 by soso »

2018-08-19, 16:26:03
Reply #15

Nejc Kilar

  • Corona Team
Yeah, I was not referring to project Lavina. What I meant was this video ->

As for the NVLink, it sure does seem like we'll have that on all the RTX GPUs. Including the 2070. Should be an interesting presentation on Monday.

Still, I would prefer if AMD had something ready as well. I really dislike the current high end monopoly. It reminds me of Apple and how they kind of shafted their own content creators...


2018-08-20, 06:34:33
Reply #16

soso

Yeah, I was not referring to project Lavina. What I meant was this video ->
I don't think they are using hybrid GPU rendering in that video (GPU + RT cores + Tensor cores). Maybe it's just using the RT cores; I'm not sure. And I don't know how effective this experimental V-Ray GPU build with a pre-release Quadro RTX 6000 and driver is. If you compare it to the Unreal RTX video, there they are using hybrid rendering (GPU + RT cores + Tensor cores). I'm also not sure whether the Unreal Engine build in the RTX video is simply better than 3ds Max & V-Ray at real-time ray tracing.

Well, let's just wait for RTX news from the other GPU renderers too.

2018-08-21, 14:14:15
Reply #17

Juraj

Was that video supposed to be impressive? Maybe it is (I didn't test it with a 1080 Ti), but it's a simple non-GI, near-empty scene at 1000 px...

I really need to know if it speeds up the OptiX denoiser. If the RTX engine or the Tensor cores inside the 2080/2080 Ti somehow magically boost that performance, it could be the deciding factor.

Otherwise they can go f*** with these prices.

2018-08-21, 18:33:24
Reply #18

danio1011

Was that video supposed to be impressive? Maybe it is (I didn't test it with a 1080 Ti), but it's a simple non-GI, near-empty scene at 1000 px...

I really need to know if it speeds up the OptiX denoiser. If the RTX engine or the Tensor cores inside the 2080/2080 Ti somehow magically boost that performance, it could be the deciding factor.

Otherwise they can go f*** with these prices.

I know it's just a proof of concept, but I'm surprised they would show such a simple scene as the first public test of RTX ray tracing. People have been seeing "single object" scenes in V-Ray RT for years. I can't really tell what's going on with V-Ray for Max these days; it has made me hold off on upgrading to Next, even though I find it (GPU, adaptive dome, auto exposure, etc.) intriguing.

2018-08-22, 09:17:42
Reply #19

Nejc Kilar

  • Corona Team
- Bear in mind, people, that apparently the RTX 2070 will not be getting NVLink support, so there is some product segmentation within the RTX line itself.

(https://videocardz.com/newz/nvidia-geforce-rtx-2070-does-not-have-nvlink) & (https://www.guru3d.com/news-story/nvidia-announces-geforce-rtx-2080-and-2080-ti.html)

- Also, NVLink for the GeForce brand appears to be limited to a speed of 50 GB/s instead of 100 GB/s. That is apparently a Quadro vs GeForce segmentation issue. I don't have any sources to share, but I think it's listed on Nvidia's website.

- Another thing to note... Some people are saying that the GeForce cards will not support NVLink in a card-next-to-card setup. That means you will not be able to link more than two GPUs together in a single case, because those two will use up four slots (I think it was mentioned there will be a 3-slot NVLink bridge or something like that). That is unless you are running a fancy setup with PCIe risers, I guess. I don't have a source to share, but apparently it "makes sense" because the FE cards are not blower-style cards anymore, so it is not wise to stack them right next to each other. It goes without saying that the Quadro line will have smaller NVLink bridges available, so there is some segmentation there too, I guess.

So all in all... It's weird :))

2018-08-23, 14:59:15
Reply #20

Juraj

The GeForce NVLink still seems fast enough. After all, the full NVLink is made to be sufficient for the top-range Teslas and Quadros, some of which feature 6k+ CUDA cores, so if the GeForce version has half the bandwidth, that is still more than enough for the kind of performance a 2080 Ti might yield.

But does it have the full functionality? Is the memory stacked?

22 GB of VRAM is A LOT for GPU rendering.

2018-08-23, 16:07:53
Reply #21

burnin

Have no doubt - "If it's not clearly written, then it doesn't exist."
(Either contracts, bills, invoices, inquiries, news, legislation, obituaries, articles, theses, ... - it's common knowledge & every person should be aware of that.)

So,
a.) for the top-tier gaming cards (2080/Ti), the spec states: NVLink (SLI ready)
b.) while on the other hand, for the whole RTX Quadro line it clearly states: "NVIDIA NVLink® to combine two GPUs with a high-speed link to scale memory capacity up to 96GB and drive higher performance with up to 100GB/s of data transfer."

Go for a long walk and drink water. ;)

2018-08-23, 17:31:28
Reply #22

Juraj

:- D

2018-08-29, 06:33:49
Reply #23

soso


2018-08-29, 09:50:58
Reply #24

Juraj

So Vlado hints that NVLink is identical in terms of memory pooling for all RTX cards.

This is a super revolution for GPU rendering?! Christ, I can't even imagine what some people will do with 4x 2080 Ti.

Maybe it's time to try V-Ray GPU once they implement support for the RT cores. Ultra speed, support for Corona materials, plus all the nice things we've been waiting forever for (VRscans).

2018-08-29, 11:53:55
Reply #25

bluebox

Hey there guys. We're at a crossroads here. Soon we will have to get some more raw rendering power and will have to decide between going CPU or, as it finally seems viable (?), GPU. Is my understanding of the topic and of what you guys suggest correct - that I could stack 4x 2080 Ti in one machine and get 44 GB of VRAM? As far as I know, 44 GB of VRAM would be enough to fit (at least with FStorm and its built-in texture resizing) virtually any scene, no?

2018-08-29, 12:05:05
Reply #26

Nejc Kilar

  • Corona Team
Hey there guys. We're at a crossroads here. Soon we will have to get some more raw rendering power and will have to decide between going CPU or, as it finally seems viable (?), GPU. Is my understanding of the topic and of what you guys suggest correct - that I could stack 4x 2080 Ti in one machine and get 44 GB of VRAM? As far as I know, 44 GB of VRAM would be enough to fit (at least with FStorm and its built-in texture resizing) virtually any scene, no?

Technically yes, that appears to be true. We still need to wait for an official response from Nvidia on whether the GeForce NVLink = Quadro NVLink, or whether it is just a cut-down SLI version of sorts. Considering that it is called NVLink, shares the same physical connector, and is known to have sufficient speed (50 GB/s compared to 100 GB/s on Quadros), I think it is pretty much the same thing.

That being said, we do know that Nvidia currently only sells 3- and 4-slot NVLink bridges for the GeForce cards. That means you can probably NVLink a bunch of 2080 Tis, although you'll need to use PCI-Express risers to fit them all on your motherboard.

I have 6 PCI-E slots ready, but with the GeForce NVLink bridges I could probably only fit 2 cards in a standard chassis without any risers. Technically though, I could easily fit 4 dual-slot cards without NVLink (4x 1080, for example).

Quadros come with 2-slot NVLink bridges, so with those you can easily NVLink 2 + 2 cards in your system and make it all fit, much like with the current-gen GPUs.

Hope that helps :) I am looking into it myself too, but I will wait until the cards are actually released to see if everything is "as it should be" for rendering :)

2018-08-29, 12:10:17
Reply #27

Juraj

Do you actually see multiple cards being connected? I only see two cards, even in the Quadro/Tesla range.

The 3- and 4-slot bridges are only there to accommodate a wider spread between PCIe slots, not to cover 3-4 cards, at least that is my understanding.

So with the RTX 2080 Ti generation, 22 GB of VRAM would be the current limit. Which is a heck of a lot in GPU rendering.

2018-08-29, 12:20:25
Reply #28

Nejc Kilar

  • Corona Team
My bad, I totally didn't address that properly.

You can only pool memory across two cards (hence 2+2), which gives you access to the combined VRAM of those two cards. So if you have 2x 2080, your VRAM total will be 16 GB. That being said, you can put 2 + 2 + 2... GPUs in one system; the pooled VRAM total stays the same, but your rendering performance will obviously be higher. :)

So yeah, you are totally right. The 3- and 4-slot bridges are there to create some room between the cards. That's pretty much it, afaik.
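
To put that in numbers, a tiny sketch (assuming, as described above, that pooling only ever spans one NVLink pair):

Code:
# Assumption from the posts above: NVLink pooling only spans one pair of cards,
# so extra pairs add rendering speed but not addressable VRAM.
vram_2080_gb, vram_2080ti_gb = 8, 11

print("2x RTX 2080    ->", 2 * vram_2080_gb,   "GB pool")            # 16 GB
print("4x RTX 2080 Ti ->", 2 * vram_2080ti_gb, "GB pool per pair")   # still 22 GB, not 44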

2018-08-29, 12:25:15
Reply #29

Juraj

I really can't wait for Vlado to post his benchmarks once the NDA lifts. His words: "You will be positively surprised" :- ).

And that is still without the RT cores, I believe.

2018-08-29, 14:01:30
Reply #30

cecofuli


2018-08-29, 19:44:20
Reply #31

burnin

;) Yup, I've been aware of the Imagination tech (the GPU architecture in the PowerVR ray-tracing cards) since their demos in 2014 (Apple later licensed it, implemented it in almost all their devices, and is optimizing the code now...), so NV rushing to release theirs didn't surprise me. I am wary about their strategy, though: hyping up the ignorant crowd...

Some more food for thought from Panos (RS dev.):
Quote
At some point I’ll be preparing a longer post than this but just wanted to quickly offer some insight on ray tracing hardware acceleration and ensure that user expectations are reasonable.

Most renderers work by executing the following three basic operations:

1) They generate rays (initially from the camera),
2) They shoot these rays into the scene (i.e. they do ray tracing),
3) They run shaders at the intersection points of the rays.

Shading typically spawns new rays for reflection/refraction/GI/etc purposes which means going back to step 1.
This 1-2-3 process happens as many times as there are ray bounces.

Hardware accelerated ray tracing primarily speeds up the second step: i.e. the ‘core’ ray tracing. If the renderer uses really simple shading, then the ray tracing step becomes the most expensive part of the renderer. For example, if you use extremely simple shaders that (say) just read a flat texture and return it, you could easily find out that the ray tracing step takes 99% of the entire render time and shading just takes 1%. In that case, accelerating ray tracing 10 times means that the frame renders 10 times faster, since ray tracing takes most of the time.

Unfortunately, production scenes do not use quite as simple shading as that.

Both us and other pro renderer vendors have found cases where shading takes a considerable chunk of the render time. I remember reading a Pixar paper (or maybe it was a presentation) where they were claiming that their (obviously complicated) shaders were actually taking *more* time than ray tracing! Let’s say that, in such a scenario, shading takes 50% of the entire frame time and tracing takes the other 50% (I intentionally ignore ray generation here). In that scenario, speeding up the ray tracer a hundred million times means that you make that 50% ray tracing time go away but you are still left with shading taking the other 50% of the frame! So even though your ray tracer became a hundred million times faster, your entire frame only rendered twice as fast!

All this is to say that when you read claims about a new system making rendering several times faster, you have to ask yourself: was this with simple shading? Like the kind you see in videogames? Or was it in a scene which (for whatever reason) was spending a lot of time during tracing and not shading?

In more technical terms: the RT cores accelerate ray tracing while the CUDA cores accelerate shading and ray generation. The RT hardware cannot do volume rendering and I think no hair/curve tracing either - so these two techniques would probably also fall back to CUDA cores too - which means no benefit from the RT hardware.

All this is not to say that we’re not excited to see developments on the ray tracing front! On the contrary! But, at the same time, we wanted to ensure that everyone has a clear idea on what they’ll be getting when ray tracing hardware (and the necessary software support) arrives. We have, as explained in other forum posts, already started on supporting it by re-architecting certain parts of Redshift. In fact, this is something we’ve been doing silently (for RS 3.0) during the last few months and in-between other tasks. Hopefully, not too long from now, we’ll get this all working and will have some performance figures to share with you.


Thanks

-Panos

PS
VRay demos are running on Quadros.
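
Panos's 50/50 example is essentially Amdahl's law. A quick sketch of the arithmetic (the shading/tracing splits are his illustrative figures, not measurements):

Code:
# Amdahl-style arithmetic behind the quote above: if only the ray-tracing part
# of a frame is accelerated, overall speedup is capped by the shading part.
def frame_speedup(trace_fraction, trace_speedup):
    shade_fraction = 1.0 - trace_fraction
    new_time = shade_fraction + trace_fraction / trace_speedup
    return 1.0 / new_time

print(frame_speedup(0.99, 10))     # simple shaders, 10x faster tracing: ~9.2x overall
print(frame_speedup(0.50, 10))     # heavy shaders, 10x faster tracing:  ~1.8x overall
print(frame_speedup(0.50, 1e8))    # "a hundred million times faster" tracing: ~2x overall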
« Last Edit: 2018-08-29, 19:50:16 by burnin »

2018-08-29, 22:35:18
Reply #32

JViz

Nobody wants software that needs a specific piece of hardware to run. That piece of hardware will carry a very high price; look at Quadro. That's because companies like Nvidia/AMD will take advantage of the market's need for that hardware and jack the price up. Use what's there and keep it that way: use GeForce instead of Quadro or fancy RTX, like FStorm does. CPUs will have hundreds of cores soon, with new materials enabling smaller dies that push much deeper into Moore's law curve; we are set for great stuff in the CPU world. Sit tight.
Although a purist, my work is anything but.
https://www.behance.net/ImageInnate

2018-08-30, 10:17:16
Reply #33

Nejc Kilar

  • Corona Team
Well, I'm sitting here wondering... RTX utilizes the RT cores. Does DXR utilize the RT cores too? Do you need to access the RT cores through the new RTX framework (I think it's called OptiX ray tracing)? What does that mean for AMD GPUs?

The biggest drawback of GPU rendering for me personally was always the closed-off ecosystem. You either run on Nvidia CUDA-compliant GPUs or you don't run at all. I dislike Otoy for the weird corporate crap they do, but supposedly they are getting closer and closer with their Vulkan implementation, which will work on pretty much any popular vendor going forward (Intel?).

I am mentioning that because if DXR doesn't work well with the RT cores and whatever AMD comes up with isn't compatible... we are even more locked in to Nvidia. And we all know how that turned out when Intel was on top. Or Apple. Or Comcast... You get the picture.

Disclaimer: I am by no means saying that Nvidia produces crap hardware. On the contrary, for the past couple of years they have been the performance leaders, no doubt. I do remember the fun times when the Radeon 9700 Pro made things go round, and how nice it is for there to actually be competition at the high end as well.

edit: It would appear that the RT cores can be programmed through Nvidia's Optix library, DXR and Vulkan.
Source -> https://www.chaosgroup.com/blog/what-does-the-new-nvidia-rtx-hardware-mean-for-ray-tracing-gpu-rendering-v-ray
« Last Edit: 2018-08-30, 12:10:40 by nkilar »

2018-08-30, 10:48:58
Reply #34

karnak

  • Primary Certified Instructor
@burnin Thank you for the quote.
Corona Academy (May 2017)

2018-09-16, 20:03:00
Reply #35

burnin

GPU Ray Tracing for Film and Design: RenderMan XPU by Max Liani, Pixar Animation Studios

Elsewhere...
Cards coming in...
It was also mentioned (speculated?) that NVLink memory stacking on non-Quadro RTX cards is possible, but it must be coded in-engine, and the V-Ray devs have already done it.

2018-09-16, 20:09:46
Reply #36

Juraj

GPU Ray Tracing for Film and Design: RenderMan XPU by Max Liani, Pixar Animation Studios

Elsewhere...
Cards coming in...
It was also mentioned (speculated?) that NVLink memory stacking on non-Quadro RTX cards is possible, but it must be coded in-engine, and the V-Ray devs have already done it.

Well of course he did :- ). I can't wait to see the results !

2018-09-16, 22:06:50
Reply #37

Nejc Kilar

  • Corona Team
...
It was also mentioned (speculated?) that NVLink memory stacking on non-Quadro RTX cards is possible, but it must be coded in-engine, and the V-Ray devs have already done it.

Apparently that is the case. I would be furious if Nvidia decided to call it NVLink but disabled any kind of memory pooling option (on the GeForce cards, that is). They might as well call it SLI 3.0 in that case.

There is one publication that had something to say about that though, Guru3D.com - (https://www.guru3d.com/articles_pages/nvidia_turing_geforce_2080_(ti)_architecture_review,7.html).

"Unfortunately, though, the new NVLINK bridge is just used for SLI, there will be no GPU or memory sharing as previously expected. Think of NVLINK as SLI version three. It is just an interface yet many times faster."

I am unsure where they get that info from to be honest but they are pretty specific about it. Maybe they are just referring to gaming scenarios. From my quick glance at the Nvidia reviewers documents I couldn't find any info about that. In any case, I kind of wouldn't be surprised if they somehow managed to lock it at the driver level.

All that being said, only a couple of days to go and we'll have more info :)

edit: Given that these graphics cards are supposed to drive 4K games, that 8 GB framebuffer seems almost too small. Hence I can see how the NVLink implementation would help with memory pooling in that regard.
« Last Edit: 2018-09-16, 22:28:58 by nkilar »

2018-09-17, 20:37:46
Reply #38

SharpEars

With regard to NVLink, Chaos Group have already stated on their blog that NVLink will support memory pooling on the new RTX cards, specifically the 2080 and 2080 Ti. Read the NVLink section at the following blog post by Vlado: https://www.chaosgroup.com/blog/what-does-the-new-nvidia-rtx-hardware-mean-for-ray-tracing-gpu-rendering-v-ray

2018-09-17, 20:41:39
Reply #39

Juraj

He doesn't directly mention the 2080 & 2080 Ti, but given that he uses them as an example:

Quote
For example, if we have two GPUs with 11GB of VRAM each and connected with NVLink, V-Ray GPU can use that to render scenes that take up to 22 GB

..I guess the answer is yes :- ). This will be a big one..

2018-09-20, 07:19:49
Reply #40

Nejc Kilar

  • Corona Team
So... The reviews have been out for like a day now. Any impressions? https://videocardz.com/78054/nvidia-geforce-rtx-2080-ti-rtx-2080-review-roundup

2018-09-20, 09:54:26
Reply #41

Juraj

I would call that disappointing.

2070 = No NVLink, too weak to use RTX & DLSS properly. A worthless model in the lineup.
2080 = Same brute performance as a 1080 Ti; 8 GB is too little for games & viewports under Win10. Too weak to use RTX properly.

2080 Ti = OK brute speed-up, but not worth the price increase. Great speed-up with DLSS (really like it). RTX in games is great, but I'm not willing to play at 1080p for it. Maybe in the next generation.

Who did the V-Ray benchmarks that I've seen? Chaos Group itself? They had RTX cards for months and they don't actually utilize RTX yet in that benchmark. I find that pretty disappointing as well.

Right now this generation seems great only for the GPU rendering guys, who will benefit massively from NVLink.

2018-09-20, 11:25:56
Reply #42

Jpjapers

I would call that disappointing.

2070 = No NVLink, too weak to use RTX & DLSS properly. A worthless model in the lineup.
2080 = Same brute performance as a 1080 Ti; 8 GB is too little for games & viewports under Win10. Too weak to use RTX properly.

2080 Ti = OK brute speed-up, but not worth the price increase. Great speed-up with DLSS (really like it). RTX in games is great, but I'm not willing to play at 1080p for it. Maybe in the next generation.

Who did the V-Ray benchmarks that I've seen? Chaos Group itself? They had RTX cards for months and they don't actually utilize RTX yet in that benchmark. I find that pretty disappointing as well.

Right now this generation seems great only for the GPU rendering guys, who will benefit massively from NVLink.

As far as I'm aware, absolutely nothing that consumers can get their hands on supports RTX right now?

2018-09-20, 13:09:58
Reply #43

Nejc Kilar

  • Corona Team
I would call that disappointing.

2070 = No NVLink, too weak to use RTX & DLSS properly. A worthless model in the lineup.
2080 = Same brute performance as a 1080 Ti; 8 GB is too little for games & viewports under Win10. Too weak to use RTX properly.

2080 Ti = OK brute speed-up, but not worth the price increase. Great speed-up with DLSS (really like it). RTX in games is great, but I'm not willing to play at 1080p for it. Maybe in the next generation.

Who did the V-Ray benchmarks that I've seen? Chaos Group itself? They had RTX cards for months and they don't actually utilize RTX yet in that benchmark. I find that pretty disappointing as well.

Right now this generation seems great only for the GPU rendering guys, who will benefit massively from NVLink.

Well there are some reviewers who've used the V-Ray GPGPU benchmark. Here is one of them:
https://www.guru3d.com/articles_pages/geforce_rtx_2080_ti_founders_review,34.html

There were a couple of others that tested with Luxmark and VRay GPU. Can't seem to find the links right now.

edit:
One of the LuxMark benchmarks -> https://hothardware.com/reviews/nvidia-geforce-rtx-performance-and-overclocking?page=3

There was also this RTX benchmark preview posted on Twitter and retweeted by Otoy. Supposedly it is using an early version of the RTX implementation via OptiX. It essentially shows 1x 2080 Ti = 2x 1080 Ti on the RTX code path.

https://pbs.twimg.com/media/DnenQ7dV4AAhTzP.jpg:large

Not a fan of Nvidia at all. The cards seem OK for rendering though.

@jpjapers
I think you are correct :) Octane 2018.1 beta should be out this year and afaik that and V-Ray are currently in front in terms of RT core adoption.

2018-09-20, 13:22:43
Reply #44

Juraj

I've seen numbers like 2x to 8x being mentioned once RTX is utilized in offline GPU rendering, but not the actual (real-world production scene) scenario where that happens.

Patiently waiting to see that being showcased first.

2018-09-25, 16:11:45
Reply #45

Marian

According to Linus, the RTX series will not allow sharing (expanding) graphics memory through NVLink.
  Just hit 04min13s

2018-09-26, 08:19:48
Reply #46

Nejc Kilar

  • Corona Team
So far we've got confirmation from Nvidia that memory pooling isn't built into NVLink per se, but developers will have the option to implement it themselves.

To quote somebody from the GPU groups on other sites:
"Memory pooling is possible for GeForce RTX according Nvidia’s Director of Technical Marketing, Tom Peterson, during HotHardware on their 2.5 Geeks podcast:

Petersen explained that this would not be the case for GeForce RTX cards. The NVLink interface would allow such a use case, but developers would need to build their software around that function.

“While it's true this is a memory to memory link; I don't think of it as magically doubling the frame buffer. It's more nuanced than that today,” said Petersen.

According to Jules from OTOY, NVLink memory pooling is going to be implemented in Octane 2018.1. I think Chaos Group has hinted at doing that as well in some of their blog posts.

It is not 100% confirmed, but it does seem fairly clear.

2018-09-26, 11:54:41
Reply #47

agentdark45

From what I can gather, memory pooling is not practical on the RTX cards for gaming, due to the massive data throughput needed and the requirement of not introducing real-time lag. For this to happen, the NVLink bridge would need to be as fast as the GDDR6 chips to avoid bottlenecking.

However, rendering seems to be a different use case, and pooling should in theory be possible there, as it's not as hamstrung by real-time demands (taking into account everything Vlado has mentioned in the blog posts).
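
For a rough sense of the gap, a back-of-the-envelope comparison; the ~616 GB/s figure is the published 2080 Ti local memory bandwidth, the 50 GB/s figure is the GeForce NVLink speed mentioned earlier in the thread, and both are approximate:

Code:
# Approximate figures only; illustrates why pooling hurts games more than offline rendering.
gddr6_gb_per_s = 616    # published RTX 2080 Ti local GDDR6 bandwidth
nvlink_gb_per_s = 50    # GeForce NVLink figure quoted earlier in the thread

print(f"Remote memory over NVLink is ~{gddr6_gb_per_s / nvlink_gb_per_s:.0f}x slower than local GDDR6")
print(f"Data budget per 16.7 ms game frame over NVLink: ~{nvlink_gb_per_s / 60:.2f} GB")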

2018-10-09, 21:49:25
Reply #48

burnin


2018-10-12, 19:25:30
Reply #49

maru

  • Corona Team

2018-10-13, 11:55:28
Reply #50

Juraj

Performance of 6x 1080 Ti and 22 GB of VRAM. That's a pretty solid setup :- )

2018-10-13, 16:28:19
Reply #51

bryanwrx

Some GPU benchmark testing:


2018-10-15, 07:54:10
Reply #52

Nejc Kilar

  • Corona Team
There is this cool Reddit thread (is it called a topic? Oh, nvm...) where the user "daffy_ch" is collecting all the bits and pieces that devs / GPU renderer folks post online regarding the new RTX cards. I recommend checking it out.

https://www.reddit.com/r/RenderToken/comments/9j0zdq/10_gigarays_translate_to_32_gigarays_in_real/