Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - genesys

1
It shouldn't be too hard to use the motion vectors of the current frame to look up samples from the previous frame (something that's often done in games, for example to implement temporal antialiasing). If the framebuffer were initialized this way for each frame, render times could decrease greatly, especially for fly-through animations, since the noise threshold could be reached much more quickly.
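For illustration, a minimal sketch of that reprojection idea, assuming a linear color framebuffer and per-pixel screen-space motion vectors in pixel units (all names and the data layout here are hypothetical, not Corona's internals):

Code:
#include <cmath>

struct Vec3 { float r, g, b; };
struct Vec2 { float x, y; };

// Seed the current frame's accumulation buffer by pulling each pixel's
// color from its reprojected location in the previous frame.
void reprojectPreviousFrame(const Vec3* prevFrame, const Vec2* motion,
                            Vec3* currFrame, float* sampleWeight,
                            int width, int height)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const int i = y * width + x;
            // The motion vector points from the current pixel back to
            // where this surface point was visible in the previous frame.
            const int px = (int)std::lround(x + motion[i].x);
            const int py = (int)std::lround(y + motion[i].y);
            if (px >= 0 && px < width && py >= 0 && py < height) {
                currFrame[i] = prevFrame[py * width + px];
                sampleWeight[i] = 1.0f;  // reprojected history counts as one sample
            } else {
                currFrame[i] = Vec3{0.0f, 0.0f, 0.0f};  // disoccluded: start from scratch
                sampleWeight[i] = 0.0f;
            }
        }
    }
}

Disoccluded pixels (history outside the frame, or hidden in the previous frame) would still need to be rendered from scratch, which is why this helps most for slow fly-through moves.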

2
When using propagate masks for CGeometry_Zdepth with reflection, the returned distance is the distance from the reflecting surface (i.e. the mirror) to the reflected surface (whatever is reflected).

This, however, isn't a useful value, and it's unlikely to be the value people using this feature are interested in.

What you really want to return is the distance from the camera to the reflecting surface plus the distance from the reflecting surface to the reflected surface, since that's the value you need if, for example, you want to correctly render depth-of-field effects in reflections (in the same way that you want the distance from the camera to the refractive surface plus the distance from the refractive surface to the refracted surface in order to properly render depth-of-field for objects behind windows).
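To make the reasoning concrete: with a thin-lens camera model, the blur amount in a post-process depth-of-field pass depends on the total optical path length, not on the distance to the mirror. A sketch using the standard thin-lens circle-of-confusion formula (function and parameter names are mine):

Code:
#include <cmath>

// Circle-of-confusion diameter for a thin-lens camera (all lengths in meters).
// aperture  : lens aperture diameter
// focal     : focal length
// focusDist : distance the lens is focused at
// depth     : optical path length from the camera to the shaded point
float circleOfConfusion(float aperture, float focal, float focusDist, float depth)
{
    return aperture * focal * std::fabs(depth - focusDist)
                    / (depth * (focusDist - focal));
}

// For a point seen in a mirror, the depth that matters is the full path:
// camera -> mirror plus mirror -> reflected object. Feeding in only the
// camera -> mirror distance blurs the reflection as if everything in it
// sat on the mirror's surface.
float reflectionDepth(float camToMirror, float mirrorToObject)
{
    return camToMirror + mirrorToObject;
}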

3
https://coronarenderer.freshdesk.com/support/solutions/articles/12000066916-how-to-use-masks-with-reflection-refraction states that CGeometry_Zdepth should be working with propagate masks. However, when I create a mirror and set the mirror material's propagate masks to 'through reflection', I still get the distance of the mirror surface instead of the distance of the reflection in the CGeometry_Zdepth output.

Using Corona 6.

4
Perfect, thank you!

To clarify:
Quote
However any further bounces (diffuse, reflections, ...) will still be random.
What exactly does that mean for a perfectly reflective surface? Will the origin of the reflection ray be exactly at the intersection of the primary ray (which originates from the pixel center), or will the origin of the reflection ray be offset to some other location within the area of the rendered pixel?


Quote
Out of curiosity - why do you need to know this? :)

We're using Corona to generate data for machine learning and need to reconstruct the ray that was used for each rendered pixel, so knowing the exact sample location is important to avoid introducing small errors.

5
Quote
Select the mirror and use "Render Selected" - while it renders only that object, it does correctly render shadows on the object, reflections and refractions in that object, etc.

I tried this, but it doesn't work. When only the mirror is selected, I don't see any objects in its reflection. Only selected objects are visible in the reflection (but then they are visible to primary rays as well).

6
Great, I'll try that. What about the clamping, though?

7
How about adding an option to write the parsed scene out to a file and to load it from that file rather than rebuilding it? Then one could reuse the same acceleration structure for all frames. Perfect for fly-throughs of scenes without animated content.

8
I have a static scene (no animations) for which I need to render tens of thousands of low-res frames. I have 48 render cores, and rendering speed itself is decent, but scene parsing becomes a considerable part of the overall render duration when there are many frames at low resolution. Other than the camera location, nothing moves, so the same parsed scene could be used to raytrace all frames. Is there any way to avoid reparsing for every frame - or to cache the parsing results and load them rather than recreating them?

I understand that Corona uses Embree, and from my understanding of Embree it should be no problem at all to perform ray queries from different camera locations against the same acceleration structure.

So, is there any way to do it?

Thanks!
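For what it's worth, tracing many frames against one committed acceleration structure is exactly how Embree is designed to be used. A minimal standalone sketch against the Embree 3 C API (an illustration of the concept, not Corona's actual integration):

Code:
#include <embree3/rtcore.h>

int main()
{
    RTCDevice device = rtcNewDevice(nullptr);
    RTCScene scene = rtcNewScene(device);

    // A single triangle stands in for the full static scene.
    RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
    float* verts = (float*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_VERTEX,
        0, RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
    unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_INDEX,
        0, RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
    verts[0] = 0; verts[1] = 0; verts[2] = 5;
    verts[3] = 1; verts[4] = 0; verts[5] = 5;
    verts[6] = 0; verts[7] = 1; verts[8] = 5;
    idx[0] = 0; idx[1] = 1; idx[2] = 2;
    rtcCommitGeometry(geom);
    rtcAttachGeometry(scene, geom);
    rtcReleaseGeometry(geom);

    // The expensive part: build the acceleration structure ONCE.
    rtcCommitScene(scene);

    // The cheap part: trace from a different camera origin every "frame"
    // without ever touching the BVH again.
    for (int frame = 0; frame < 10000; ++frame) {
        RTCIntersectContext ctx;
        rtcInitIntersectContext(&ctx);
        RTCRayHit rh = {};
        rh.ray.org_x = 0.001f * frame;  // moving camera
        rh.ray.dir_z = 1.0f;            // looking down +z
        rh.ray.tfar  = 1e30f;
        rh.ray.mask  = (unsigned)-1;
        rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;
        rtcIntersect1(scene, &ctx, &rh);
    }

    rtcReleaseScene(scene);
    rtcReleaseDevice(device);
    return 0;
}

Whether Corona could also serialize that structure to disk between runs is a separate question, but keeping it in memory across the frames of one job would already remove the per-frame rebuild.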

9
Corona renders out CGeometry_ZDepth with objects far away as black (0.0) and objects close as white (1.0).
This representation has several disadvantages:

1. Precision
32-bit floating-point numbers (which depth values are stored in when saved as EXR) have higher precision around 0.0 than around 1.0. If we map, for example, a depth range of 100 km to the [0, 1] interval, we get a precision of about 6 mm at the high end (100 km * 2^-24) and, at the low end, a resolution close to the Planck length (the smallest physical size there is). With white = close and black = far, we get the coarse 6 mm resolution close to the camera and the high resolution far away, which doesn't make sense. It'd be better to store it inverted (especially when saving to half floats).

2. Clamping
It'd be nice if values beyond 1 could be rendered out, so that depth isn't clamped at the far value. With the current mapping, where far goes towards zero, unclamped values would become negative, which isn't very elegant. So having close to the camera = 0, far = 1, and everything beyond far > 1 (proportionally, obviously) would make the most sense. See the sketch below for both points.
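A quick standalone way to check both points, using the spacing between adjacent float32 values (an illustration, not Corona code):

Code:
#include <cmath>
#include <cstdio>

int main()
{
    const float farPlane = 100000.0f;  // 100 km mapped to the [0, 1] interval

    // Distance to the adjacent representable float32 just below 1.0 and
    // just above 0.0: the depth resolution at each end of the range.
    const float ulpNearOne  = 1.0f - std::nextafterf(1.0f, 0.0f);  // 2^-24
    const float ulpNearZero = std::nextafterf(0.0f, 1.0f);         // ~1.4e-45 (denormal)

    std::printf("resolution near 1.0: %g m (about 6 mm)\n", ulpNearOne * farPlane);
    std::printf("resolution near 0.0: %g m\n", ulpNearZero * farPlane);

    // With black = far (stored value 1 - d/far), the coarse ~6 mm steps
    // land right next to the camera. Storing d/far instead puts the fine
    // steps up close, and values > 1 can represent geometry beyond the
    // far plane instead of going negative.
    return 0;
}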

10
I'm creating simulation data for machine learning and need exact reflection depths. But I think I figured out how - it seems this should be possible using propagate masks: https://coronarenderer.freshdesk.com/support/solutions/articles/12000066916-how-to-use-masks-with-reflection-refraction

11
How can I add a mirror to a scene and render only the mirror object, but still see the whole scene reflected in it (only in the reflection, not otherwise)?

12
What about depth propagation for reflections?


What would be best would be to make use of OpenEXR's deep-data feature, which allows storing multiple values per pixel. Of course, with multiple reflections AND refractions, or glossy reflections/refractions that require multiple samples, there would be some ambiguity about how to store these values, but even just storing three values per depth pixel for non-glossy reflections and refractions, like [primary ray depth][first reflection depth][first refraction depth after exit], could be very helpful.
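Short of true deep data, the fixed three-value layout suggested above already fits into a regular multi-channel EXR. A sketch using the OpenEXR C++ API (the channel names are made up for illustration; this writes fixed channels rather than real deep samples):

Code:
#include <ImfOutputFile.h>
#include <ImfHeader.h>
#include <ImfChannelList.h>
#include <ImfFrameBuffer.h>

using namespace Imf;

// Write three depth values per pixel as separate float channels.
void writeDepthExr(const char* path, int w, int h,
                   const float* primary, const float* reflect, const float* refract)
{
    Header header(w, h);
    header.channels().insert("depth.primary",    Channel(FLOAT));
    header.channels().insert("depth.reflection", Channel(FLOAT));
    header.channels().insert("depth.refraction", Channel(FLOAT));

    const size_t xs = sizeof(float), ys = sizeof(float) * w;
    FrameBuffer fb;
    fb.insert("depth.primary",    Slice(FLOAT, (char*)primary, xs, ys));
    fb.insert("depth.reflection", Slice(FLOAT, (char*)reflect, xs, ys));
    fb.insert("depth.refraction", Slice(FLOAT, (char*)refract, xs, ys));

    OutputFile file(path, header);
    file.setFrameBuffer(fb);
    file.writePixels(h);
}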

13
[Max] I need help! / Fastest way to render depth only
« on: 2021-08-12, 14:12:12 »
What's the fastest way to render out ONLY CGeometry_ZDepth for a sequence? Preferably without changing the materials in the scene. Is there an easy way to deactivate all lighting computation and all shading altogether and only output CGeometry_ZDepth from primary rays?

14
I would like to add some planar mirror objects with perfect reflectivity to a scene, but instead of getting the distance to the mirror surface per pixel in the depth buffer, I would like to get the total depth of the reflection (i.e. the distance the ray traveled from the camera to the mirror surface plus the distance from the mirror surface to the primary intersection of the reflection ray).
How can I do that?

15
Let's say we render a 4x4 pixel image for a camera with 90° vertical and horizontal FOV and disable antialiasing using

bool shading.enableAa = false

Are the sample locations for the 16 generated rays guaranteed to be at the pixel centers, and do the outer borders of the outer pixels align exactly with the camera frustum?

If the sample locations are jittered or offset instead, how can I get the exact sample location for each pixel?
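For reference, this is the reconstruction I'd like to rely on, assuming samples at pixel centers and a frustum that exactly spans the outer pixel borders (a sketch under those assumptions, not a statement about Corona's internals):

Code:
#include <cmath>
#include <cstdio>

int main()
{
    const int W = 4, H = 4;
    const float fov = 90.0f;  // horizontal and vertical, in degrees
    // Half-extent of the image plane at distance 1; tan(45 deg) = 1.
    const float extent = std::tan(fov * 0.5f * 3.14159265f / 180.0f);

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // (x + 0.5)/W puts the sample at the pixel center; remapping to
            // [-1, 1] makes the outer pixel BORDERS (not centers) land
            // exactly on the frustum planes.
            const float sx = ((x + 0.5f) / W * 2.0f - 1.0f) * extent;
            const float sy = (1.0f - (y + 0.5f) / H * 2.0f) * extent;  // image y points down
            // Camera-space ray direction (camera looking down -z).
            std::printf("pixel (%d,%d): dir = (%f, %f, %f)\n", x, y, sx, sy, -1.0f);
        }
    }
    return 0;
}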
