Chaos Corona for 3ds Max > [Max] General Discussion

Realtime PT engine for videogames


cecofuli:
I don't know if this is the right place for this thread - I think you know this guy well - but I'm very impressed by his latest video!

I ask you: where is the secret? Does he use some "fake"? Because I don't understand how "near realtime" is possible with PT+PT,
when all the other engines (V-Ray, Octane, Corona, mental ray, etc.) need minutes or hours to render an image!

EDIT: he wrote:  "that kernel I was talking about is unbiased path tracing without any filtering.
It has a specific optimization which costs about 10-15% in performance but can drastically reduce the noise in
interiors compared to using multiple importance sampling alone."

"the beauty of Brigade is that you don't need to do tessellation, we can do 60 billion polygons at 30 fps :)
We'll be showing that very soon at the GTC. Also, yesterday we tested a ZBrush model with 28 million unique polygons on Octane
(over 4 GB of mesh and texture data) and we were able to get 40 fps on 1 GTX Titan with path tracing enabled.
Just to say that extremely detailed meshes are not a problem any longer :) We also have some ideas to animate these meshes in real-time."
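The quoted kernel is described as an improvement over plain multiple importance sampling (MIS). As background for that claim, here is a minimal 1-D sketch of MIS with the balance heuristic; the integrand, function names, and sample counts are my own illustrative choices, not anything from Brigade or Octane.

```python
# Minimal sketch of multiple importance sampling with the balance
# heuristic on a toy 1-D integral. Everything here is illustrative.
import math
import random

def balance_weight(pdf_this, pdf_other):
    """Balance-heuristic weight for a sample drawn from one strategy."""
    return pdf_this / (pdf_this + pdf_other)

def mis_estimate(n_samples=20000):
    """Estimate the integral of f(x) = x^2 on [0, 1] (true value 1/3)
    by combining two strategies: uniform sampling (pdf = 1) and
    linear sampling (pdf = 2x), weighted with the balance heuristic."""
    f = lambda x: x * x
    total = 0.0
    for _ in range(n_samples):
        # Strategy A: uniform on [0, 1], pdf = 1
        xa = random.random()
        total += balance_weight(1.0, 2.0 * xa) * f(xa) / 1.0
        # Strategy B: pdf = 2x, sampled by inverting the CDF x^2;
        # 1 - random() lies in (0, 1], which avoids a zero pdf
        xb = math.sqrt(1.0 - random.random())
        total += balance_weight(2.0 * xb, 1.0) * f(xb) / (2.0 * xb)
    return total / n_samples
```

In expectation the two weighted terms sum to the full integral, so the combined estimator stays unbiased while damping the variance spikes that either strategy alone would produce - which is why renderers lean on MIS in the first place.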

Chakib:
Impressed by the speed and realism of Brigade, I've tested the Octane demo (since this programmer works at OTOY), and it's really fast. But I'll tell you one thing: if new great programmers join Ondra's team, I'm sure this GPU speed won't impress us any more. I have faith in Corona!

cecofuli:
Octane Standalone is very good. The 3ds Max integration is done by Karba (the MultiScatter programmer). From my point of view it is slow (the integration in Max, not Octane itself).
This is the main reason I didn't buy the Max license. But I am an Octane Studio licensed user - I bought the first beta, many years ago, for 49 euros =)
I hate Studio/Standalone software, though. I remember the first version of Maxwell: going back and forth between 3ds Max and a Studio version... brrr... It's not for me and my job.
But the main point, as Sam Lapere said, is that GPU power doubles every year. Do you remember the good old, hot GTX 480, 3-4 years ago? In comparison, the Titan is about 10x faster.
And looking at Brigade, it can handle so many polygons! The 480 came with 1.5 GB of RAM; the Titan now has 6 GB. The next GPUs will have 10 GB or more of VRAM!

Can we say the same for top prosumer CPUs (not dual/quad Xeons)? Do they double in power every year? Absolutely not!
CPU development, from my point of view, has stagnated for many years!
And replacing a PC is very expensive (motherboard, CPU, RAM, cooler, Windows license, etc...), and you need space and many cables.
Two Titans cost about as much as one PC, but with more power: no extra space, no cables, nothing. A very scalable system.
I can't imagine how fast Corona would be in a GPU version!!!!
But Ondra is clear: no GPU version for Corona. Still... guys, running through a city in realtime (30 FPS at 1080p) with PT/PT + skinned meshes + ragdolls + rigid body simulation is really cool!
 
I know the Corona programmers are very quick and good =)
But the NVIDIA GPU programmers/engineers run very fast too! Every year more and more...
Look at the first post on Sam's blog (2010?), and look at what they have been able to achieve two years later, thanks to the new NVIDIA GPUs: a realtime path-traced videogame!

Maybe I'm naive and ill-informed, and I'm not criticizing anyone.
I only want to understand and discuss this with you, with people definitely more knowledgeable than me ^__^


Polymax:
What is the texture size limit in Octane?
And I think there is a trick in this video: the number of bounces is likely set to 1, and that is not physically correct!
If the number of bounces in Octane were set to infinity, would it still render this quickly?
Unfortunately, so far this is only for games :(
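Polymax's suspicion about a bounce cap can be illustrated with a toy model. The sketch below uses a single grey albedo in place of a real scene (an assumption of mine, not how Brigade works) to show that capping bounces at 1 darkens the result, while Russian roulette allows unbounded bounces yet still terminates every path.

```python
# Toy model: a bounce cap biases the result; Russian roulette keeps
# unbounded bounce depth unbiased. One grey albedo stands in for a scene.
import random

ALBEDO = 0.5  # fraction of light carried on to each further bounce

def radiance_capped(max_bounces):
    """Truncated geometric series sum(ALBEDO^k): what a bounce cap computes."""
    return sum(ALBEDO ** k for k in range(max_bounces + 1))

def radiance_roulette(n_paths=100000):
    """Unbounded-depth Monte Carlo estimate of the same quantity.
    Each path survives to the next bounce with probability ALBEDO.
    Because the throughput lost per bounce (ALBEDO) equals the survival
    probability, the per-bounce weight stays 1 and the estimator is
    unbiased for the full series 1 / (1 - ALBEDO) = 2."""
    total = 0.0
    for _ in range(n_paths):
        bounces = 1.0                        # the first hit always contributes
        while random.random() < ALBEDO:      # Russian roulette survival test
            bounces += 1.0
        total += bounces
    return total / n_paths
```

A 1-bounce cap gives 1.5 here, 25% darker than the true value of 2.0 - which is why a capped demo can look fast but is not physically correct, exactly the doubt raised above.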

Ondra:
it looks almost too good to be true. It claims about 100 times more rays/s than Corona; I've seen previous highly optimized GPU implementations that had 10 times more. But the problem with such solutions is that they do not scale well with renderer complexity. As soon as you start requesting things like translucency, shadow disabling, custom texmaps, background overrides, etc., the speed drops drastically (or the feature cannot be integrated at all). This is why I don't do a GPU implementation - it would be maybe up to 5 times faster, but unusable for any serious work outside of such nice demos. That being said, it is absolutely plausible to use this thing in a game ;)

BTW: this is nothing new. Such ultrafast specialized implementations have existed for ages. See for example this paper:
http://graphics.cg.uni-saarland.de/fileadmin/cguds/papers/2001/Wald_2001_IRCRT2/InteractiveRenderingWithCoherentRayTracing.pdf
They report 200k-1.5M rays/s... in the year 2001! Imagine what kind of hardware you had in 2001: they used a dual Pentium III @ 800 MHz and got similar rays/s to Corona on modern CPUs. Imagine the performance if this approach were scaled to modern hardware - it would be as fast as Brigade ;)).
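Ondra's last point can be turned into a back-of-envelope calculation. The 2001 figures come from the Wald et al. paper he cites; the modern-CPU core count, clock, and SIMD width below are assumptions of mine, chosen only to show the order of magnitude.

```python
# Back-of-envelope version of the scaling argument above. The 2001 numbers
# come from the cited paper; the "modern CPU" figures are assumptions.
cores_2001, ghz_2001, simd_2001 = 2, 0.8, 1   # dual Pentium III @ 800 MHz, scalar
cores_now, ghz_now, simd_now = 8, 3.5, 8      # assumed: 8 cores, 8-wide SIMD

# Naive peak-throughput ratio between the two machines
scale = (cores_now * ghz_now * simd_now) / (cores_2001 * ghz_2001 * simd_2001)

rays_2001 = 1.5e6   # upper end of the reported 200k-1.5M rays/s

print(f"naive hardware scaling: {scale:.0f}x")             # 140x
print(f"projected rays/s today: {rays_2001 * scale:.1e}")  # 2.1e+08
```

Even this crude estimate lands in the hundreds of millions of rays/s, i.e. the same ballpark as the Brigade demo, which is the point of the comparison: much of the "magic" is specialization plus a decade of hardware.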
