Chaos Corona Forum
General Category => General CG Discussion => Topic started by: Fluss on 2020-04-29, 14:57:40
-
Have you seen this?
Lightfields are almost there!
Imagine rendering some views of a scene and then being able to visit it in VR? The future is exciting!
paper : http://www.matthewtancik.com/nerf
edit : paper link
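For reference, here is my rough sketch of the core idea in the linked paper as I read it (not the authors' code, just an illustration): a small MLP maps a 3D point plus view direction to a color and a density, and a pixel is formed by volume-rendering many samples along each camera ray. The real paper also uses positional encoding and hierarchical sampling, which I skip here.
Code:
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),   # 3D position + 3D view direction (no positional encoding here)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),              # outputs: RGB (3) + density (1)
        )

    def forward(self, pts, dirs):
        out = self.net(torch.cat([pts, dirs], dim=-1))
        rgb = torch.sigmoid(out[..., :3])      # color in [0, 1]
        sigma = torch.relu(out[..., 3])        # non-negative density
        return rgb, sigma

def render_ray(model, origin, direction, near=0.1, far=6.0, n_samples=64):
    # Sample points along the ray and alpha-composite front to back.
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction               # (n_samples, 3)
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(pts, dirs)
    delta = t[1] - t[0]                                  # uniform step size
    alpha = 1.0 - torch.exp(-sigma * delta)              # opacity of each segment
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans                               # contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)            # final pixel color

Training then just means comparing these rendered pixels against the input photos (or renders) and backpropagating; that is how it can rebuild the in-between views.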
-
The crazy thing to me is that it mostly just needs to generate a depth map, right? Can't we feed it CG data with Z-depth (and world normals, etc.) and get even better results, faster?
-
Quote: The crazy thing to me is that it mostly just needs to generate a depth map, right? Can't we feed it CG data with Z-depth (and world normals, etc.) and get even better results, faster?
That's how I understand it too; the first steps do not seem that different from photogrammetry... then the AI brings the magic. As for the CGI input, that's exactly what came to my mind as well. Also using the render passes to help solve the tricky parts such as reflections and refractions. I have not read the paper yet, but this stuff looks really impressive.
edit : Also
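Purely speculative sketch of that "feed it CG passes" idea: if the renderer already gives us a ground-truth Z-depth pass, the expected depth along each ray could be supervised alongside the usual color loss. The names pred_depth / gt_depth here are made up for illustration, not something from the paper.
Code:
import torch

def loss_with_depth_pass(pred_rgb, pred_depth, gt_rgb, gt_depth, depth_weight=0.1):
    # Standard photometric term against the rendered (or photographed) image.
    color_loss = torch.mean((pred_rgb - gt_rgb) ** 2)
    # Extra term comparing the model's expected ray depth to the CG Z-depth pass.
    depth_loss = torch.mean((pred_depth - gt_depth) ** 2)
    return color_loss + depth_weight * depth_loss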
-
As for getting faster results, imagine reconstructing a whole scene from just a couple of carefully placed 360s.
-
Ah... I actually imagined the exact opposite usage :- ).
I never do 360 "VR" renders for clients; people rarely ask for them. But I can imagine that if we could look around in them... they would suddenly look a lot more impressive. And a LOT more than Unreal, which naturally can already provide this effect because the data is there in real time...
That was my first thought on seeing this: using it to move around in statically rendered 360 images. High quality from offline ray tracing, but with the freedom to look around as in real time from a static position (which is the future; even Half-Life: Alyx uses a static position, because they did the research and apparently locomotion sucks).
-
Quote: Ah... I actually imagined the exact opposite usage :- ).
I never do 360 "VR" renders for clients; people rarely ask for them. But I can imagine that if we could look around in them... they would suddenly look a lot more impressive. And a LOT more than Unreal, which naturally can already provide this effect because the data is there in real time...
That was my first thought on seeing this: using it to move around in statically rendered 360 images. High quality from offline ray tracing, but with the freedom to look around as in real time from a static position (which is the future; even Half-Life: Alyx uses a static position, because they did the research and apparently locomotion sucks).
I tried this and I wasn't that impressed; it takes ages to render and the definition is bad. Not to mention that refraction is not handled nicely: https://www.presenzvr.com/