Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - genesys

1
It shouldn't be too hard to use the motion vectors of the current frame to look up samples from the previous frame (something that's often done in games, for example to implement temporal antialiasing). If the framebuffer were initialized this way for each frame, this could greatly decrease render times, especially for fly-through animations, since the noise threshold would be reached much more quickly.
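A rough sketch of what I mean, assuming per-pixel motion vectors and an accumulation buffer are available (all names here are made up, this is obviously not Corona's internal API):

#include <vector>
#include <algorithm>

struct Vec2  { float x, y; };
struct Color { float r, g, b; };

// Nearest-neighbour lookup, kept simple for the sketch; a real implementation
// would use bilinear filtering and reject disoccluded pixels (e.g. via a depth test).
Color lookup(const std::vector<Color>& buf, int w, int h, float px, float py) {
    int x = std::clamp(int(px), 0, w - 1);
    int y = std::clamp(int(py), 0, h - 1);
    return buf[y * w + x];
}

// Warp the previous frame's accumulated radiance into the current frame using
// per-pixel motion vectors (current -> previous, in pixel units).
void reprojectPreviousFrame(const std::vector<Color>& prevAccum,
                            const std::vector<Vec2>&  motion,
                            std::vector<Color>&       currAccum,
                            int w, int h) {
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float px = x + 0.5f + motion[y * w + x].x;
            float py = y + 0.5f + motion[y * w + x].y;
            if (px >= 0 && px < w && py >= 0 && py < h)
                currAccum[y * w + x] = lookup(prevAccum, w, h, px, py);
        }
}

The reprojected values would only seed the framebuffer; newly traced samples would still refine it, so errors from the warp get averaged out.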

2
When using propagate masks for CGeometry_Zdepth with reflection, the returned distance is the distance from the reflecting surface (i.e. the mirror) to the reflected surface (whatever is reflected).

This, however, isn't a useful value, and it is unlikely to be the value that people using this feature are interested in.

What you really want to return is the distance from the camera to the reflecting surface plus the distance from the reflecting surface to the reflected surface, since that's the value you need if, for example, you want to correctly render depth-of-field effects in reflections (in the same way that you want the distance from the camera to a refractive surface plus the distance from the refractive surface to the refracted surface in order to properly render depth of field for objects behind windows).
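In pseudocode terms, just to make the requested value explicit (the names are made up):

// What I'd expect CGeometry_Zdepth to contain for a pixel whose primary ray
// hits a perfect mirror with 'propagate masks' set to 'through reflection':
//
//   depth = |camera -> mirror hit| + |mirror hit -> reflected hit|
//
float zdepthThroughReflection(float distCameraToMirror, float distMirrorToReflected) {
    // Accumulate the full path length instead of restarting at the mirror surface.
    return distCameraToMirror + distMirrorToReflected;
}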

3
https://coronarenderer.freshdesk.com/support/solutions/articles/12000066916-how-to-use-masks-with-reflection-refraction states that CGeometry_Zdepth should work with propagate masks. However, when I create a mirror and set the mirror material's propagate masks to 'through reflection', I still get the distance to the mirror surface instead of the distance to the reflection in the CGeometry_Zdepth output.

Using Corona 6.

4
I have a static scene (no animations) for which I need to render tens of thousands of low-res frames. I have 48 render cores and the rendering speed itself is decent, but scene parsing becomes a considerable part of the overall render duration when there are many frames at low resolution. Other than the camera location, nothing moves, so the same parsed scene could be used to raytrace all frames. Is there any way to avoid reparsing for every frame, or to cache the parsing results and load them rather than recreating them?

I understand that Corona uses Embree, and by my understanding of Embree it should be no problem at all to perform ray queries from different camera locations against the same acceleration structure.
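At the Embree level, what I have in mind looks roughly like this (a sketch against Embree's public C API, not Corona's internals; whether Corona can cache its parsed scene this way is exactly my question):

#include <embree3/rtcore.h>
#include <limits>

// The committed scene (BVH) is built once; after that it can be queried with rays
// from any camera position, so per-frame re-parsing shouldn't be needed for a
// static scene. Only the ray origins/directions change between frames.
void renderAllFrames(RTCScene scene /* geometry attached and rtcCommitScene() called once */) {
    for (int frame = 0; frame < 10000; ++frame) {
        RTCIntersectContext ctx;
        rtcInitIntersectContext(&ctx);

        RTCRayHit rh = {};
        rh.ray.org_x = 0.0f;   // this frame's camera position (placeholder values)
        rh.ray.org_y = 0.0f;
        rh.ray.org_z = float(frame);
        rh.ray.dir_x = 0.0f;   // per-pixel ray direction (placeholder)
        rh.ray.dir_y = 0.0f;
        rh.ray.dir_z = -1.0f;
        rh.ray.tnear = 0.0f;
        rh.ray.tfar  = std::numeric_limits<float>::infinity();
        rh.ray.mask  = 0xFFFFFFFFu;
        rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;

        rtcIntersect1(scene, &ctx, &rh);   // same acceleration structure, new camera
    }
}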

So, is there any way to do it?

Thanks!

5
Corona renders out CGeometry_ZDepth with objects far away as black (0.0) and objects close to the camera as white (1.0).
This representation has several disadvantages:

1. precision
32-bit floating-point numbers (which depth values are stored as when saving to EXR) have higher precision around 0.0 than around 1.0. If we map, for example, a depth range of 100 km to the [0,1] interval, we get a precision of about 6 mm at the high end (100 km * 2^-24) and, at the low end, a resolution close to the Planck length (the smallest physical size there is). With white = close and black = far, we get that coarse 6 mm resolution close to the camera and the high resolution far away, which doesn't make sense. It'd be better to store it inverted (especially when saving to half-floats); see the quick sketch after this list.

2. clamping
It'd be nice if values beyond 1 could be rendered out, so that depth isn't clamped at the far value. With the current mapping, where far goes towards zero, unclamped values would go negative, which isn't very elegant. So having close to the camera = 0, far = 1, and everything beyond far > 1 (proportionally, obviously) would make the most sense.
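Quick illustration of the precision argument from point 1 (my own back-of-the-envelope numbers, not Corona code):

#include <cmath>
#include <cstdio>

int main() {
    const double range = 100000.0;                              // 100 km depth range mapped to [0,1], in metres
    double stepNearOne  = 1.0 - std::nextafterf(1.0f, 0.0f);    // spacing of 32-bit floats just below 1.0: 2^-24
    double stepNearZero = std::nextafterf(0.0f, 1.0f);          // spacing just above 0.0: smallest denormal
    std::printf("worst-case step near 1.0: %.2f mm\n", stepNearOne  * range * 1000.0);  // ~6 mm
    std::printf("worst-case step near 0.0: %.3g mm\n", stepNearZero * range * 1000.0);  // vanishingly small
    return 0;
}

With the current far = 0 / close = 1 mapping, that ~6 mm worst case lands on the objects closest to the camera, which is exactly where you want precision the most.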

6
How can I add a mirror to a scene and render only the mirror object, but still see the whole scene reflected in it (only in the reflection, not otherwise)?

7
[Max] I need help! / Fastest way to render depth only
« on: 2021-08-12, 14:12:12 »
What's the fastest way to render out ONLY CGeometry_ZDepth for a sequence, preferably without changing the materials in the scene? Is there an easy way to deactivate all lighting computation and all shading altogether and only output CGeometry_ZDepth of primary rays?

8
I would like to add some planar mirror objects with perfect reflectivity to a scene, but instead of getting the per-pixel distance to the mirror surface in the depth buffer, I would like to get the total depth of the reflection (i.e. the distance the ray travelled from the camera to the mirror surface plus the distance from the mirror surface to the first intersection of the reflection ray).
How can I do that?

9
Let's say we render a 4x4 pixel image for a camera with 90° vertical and horizontal FOV and disable antialiasing using

bool shading.enableAa = false

Are the sample locations for the 16 generated rays guaranteed to be at the pixel centers, and do the outer borders of the outer pixels align exactly with the camera frustum?

If the sample locations are jittered or offset instead, how can I get the exact sample location for each pixel?
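For reference, this is the pixel-center convention I'm assuming (my own sketch of a pinhole camera, not Corona's code):

#include <cmath>
#include <cstdio>

int main() {
    const int   W = 4, H = 4;
    const float fov = 90.0f * 3.14159265f / 180.0f;
    const float halfExtent = std::tan(0.5f * fov);          // = 1 for 90 degrees
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            float u  = (x + 0.5f) / W;                       // pixel center in [0,1]
            float v  = (y + 0.5f) / H;
            float px = (2.0f * u - 1.0f) * halfExtent;       // camera-plane coords at z = -1
            float py = (1.0f - 2.0f * v) * halfExtent;       // outer pixel borders land exactly on the frustum
            std::printf("pixel (%d,%d): dir = (%+.3f, %+.3f, -1)\n", x, y, px, py);
        }
    return 0;
}

Under this convention the 16 centers sit at ±0.25 and ±0.75 on the camera plane; the question is whether Corona matches that with AA disabled.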

10
Porting and API / how to develop custom camera?
« on: 2021-06-29, 15:27:48 »
I developed a custom camera plugin for V-Ray (3ds Max) to capture the light field within a specific volume.

How can I do the same for Corona Renderer? How can I develop a custom camera? Is there a plugin SDK?
