Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - pokoy

Pages: [1] 2 3 ... 123
[Max] I need help! / Re: VR 8k issue
« on: 2024-02-27, 17:16:36 »
I guess you could just increase render size in steps and see where it 'breaks' and what RAM consumption looks like when it happens.
I've rendered out bigger spherical images with 64GB RAM so most likely yes, it is RAM and 30/32GB probably means it's at the limit. When this happens you should also see the OS becoming slow to work with, freezing for a few seconds etc.
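To put some rough numbers on the RAM guess: the raw 32-bit framebuffers for a large spherical render are easy to estimate. This is an illustrative back-of-the-envelope sketch - the resolutions, channel counts and element counts below are assumptions, not Corona internals:

```python
# Rough, illustrative framebuffer memory estimate for a large spherical
# render. Assumes 32-bit float RGBA buffers; Corona's actual internal
# buffers and overhead will differ.

def framebuffer_mb(width, height, channels=4, bytes_per_channel=4, elements=1):
    """Raw buffer size in MiB, multiplied by the number of render elements."""
    return width * height * channels * bytes_per_channel * elements / 2**20

# An 8K x 4K spherical beauty pass alone:
print(round(framebuffer_mb(8192, 4096)))              # 512 MiB
# With, say, 9 buffers (beauty + 8 render elements):
print(round(framebuffer_mb(8192, 4096, elements=9)))  # 4608 MiB
```

That's just the buffers - geometry, textures and the OS come on top, which is why 30/32 GB can already be the limit.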

Edit: Specifically, how to remove this clear reflection without using roughness - see attached.
EDIT 2: And without using fake anisotropy
Yeah that area is not looking good, same for the harsh reflection/highlight cutoff on the right box. What would be needed to get rid of these?
At least similar fireflies are there in Fstorm too, these would be my main concern in both renderers, wonder if they'd clean up without clamping highlights.

[Max] Daily Builds / Re: New VFB Skin and functionality
« on: 2024-02-20, 14:27:20 »
If I had a wish, for me it's how the denoiser workflow is implemented:
1. Let us denoise *during* a final render, with whatever the denoiser is set to. I'd love to see if the noise level is good enough while I'm rendering. Sometimes I'll set way too many passes, and having a way to preview the denoiser without stopping/cancelling the render would be great.
2. Currently, a denoised render will save to the history only if it was a final render, IR will save to history without the denoiser... This is inconsistent and makes it impossible to compare IR vs final renders since you can't save/view the denoised version from IR.
3. Denoiser checkbox in the VFB is inconsistent - for final rendering it will work only after a render is stopped/finished. For IR it can't be disabled without re-rendering (and has no function there), plus, it has to be disabled in a different place. It just doesn't do what it implies the way it's designed now.

For me it's the small things, and consistency between final/IR rendering is not where it should/could be with regard to denoising.

1) Is it possible to link an HDRI image to a camera so that when you rotate the camera, the environment follows?

Corona Bitmap allows you to link environment rotation to an arbitrary object's transform.
Yeah totally forgot about that - true, way easier than wiring parameters.

Thank you! I will check every step! This is the HDRI that I am using: 

As you can also see in the attached screenshot, it is quite difficult to tell whether the height of the camera was 1.8 m or 2 m, for example.

And I have 2 other questions that I haven't found a solution for on this forum yet. (Do I have to create a new thread?)

1) Is it possible to link an HDRI image to a camera so that when you rotate the camera, the environment follows?

2) If I change the camera's focal length, the HDRI changes automatically. Is there a way to detach this connection? Would that be stupid? Thank you!

1.8m or 2m - with a value range so narrow it will probably be impossible to tell without knowing some dimensions in the HDRI, for example the distance between the tile borders or those cylindrical whatever-they-are. Also, I'm not sure the roof in that HDRI is level, if it isn't this introduces another level of uncertainty.

1. I guess it should be possible with parameter wiring, where you'd wire the HDRI's U offset to camera Z rotation. But I'm not sure this will help as parameter wiring in Max tends to be slow and with the texture being recalculated whenever you rotate/change the camera it might be too slow to work interactively.
2. Yes, this is correct and expected. If you think about it, changing the FOV should of course show less or more of the world surrounding you. I don't think you can expect this to work independently without creating other problems - for example, you could change the scaling of the U/V parameters of the HDRI, but this will cause problems where the HDRI wraps around; the seams wouldn't match anymore. Plus, you're likely to see that something's not right.
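The wiring in point 1 boils down to one simple relationship - an equirectangular HDRI spans 360 degrees horizontally, so the U offset that keeps the environment "attached" to the camera is just the camera's Z rotation divided by 360. A minimal sketch of that math (the function name is illustrative, not an actual Max wiring expression):

```python
def u_offset_for_rotation(z_rotation_deg):
    """Map a camera Z rotation in degrees to a 0-1 U offset for an
    equirectangular environment map (360 degrees = one full U wrap)."""
    return (z_rotation_deg % 360.0) / 360.0

print(u_offset_for_rotation(90))    # 0.25 -> a quarter turn of the environment
print(u_offset_for_rotation(-90))   # 0.75 -> same turn the other way
```

Depending on the map's orientation convention you may need to negate the rotation, and as said above, the texture re-evaluation per camera change may make this too slow interactively.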

I wonder if offering numeric inputs wouldn't be good, along with a color swatch so you can either enter RGB values in 0-255 OR click the swatch to open the color picker and input there as you would now, whatever is more convenient to the user. So basically offer the same input for LDR values as you do for HDR linear.

Opening the picker, assigning the correct color space, choosing the color, pressing OK and still having to use the dropdown is quite a lot of mouse clicks for a single color assignment. I totally get why OP feels this is a bit too much for such a simple thing.
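For what it's worth, the conversion such a numeric 0-255 input would have to do under the hood is just the standard sRGB decoding curve - a sketch, assuming the input is plain sRGB and the target is linear float:

```python
def srgb_255_to_linear(v):
    """Convert one sRGB 0-255 component to a linear float using the
    standard sRGB decoding curve (IEC 61966-2-1)."""
    c = v / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# Mid grey 128 in sRGB is ~0.2158 linear, not 0.5 - which is exactly
# why LDR and HDR linear inputs can't share one naive number field:
print(round(srgb_255_to_linear(128), 4))
```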

For outdoor HDRIs, the horizon line will be at the vertical center probably for all HDRs that you come across, unless the panorama was taken really high up which is improbable.
For indoor HDRIs, dome mapping really makes sense since light sources and objects will be closer to the camera and will make a difference when rendered from the correct vs incorrect height. Even using a wrong camera height can be interesting sometimes.
Dome mapping is preferable too when you render an object on a surface that uses the HDRI's ground projection with a shadow catcher, for example. Anything close to the surface will definitely look better, as there's quite a difference between how the surroundings look to an object at 0.1m vs 1.6m.
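A quick angle check illustrates why the camera height matters so much for nearby surroundings (the heights and distances below are made up for illustration):

```python
import math

def elevation_deg(observer_height, feature_height, distance):
    """Angle in degrees at which a point at `feature_height`, `distance`
    metres away, appears above (+) or below (-) the observer's horizon."""
    return math.degrees(math.atan2(feature_height - observer_height, distance))

# The top of a 1 m tall object 2 m away, seen from near ground level
# (0.1 m) vs from a typical pano camera height (1.6 m):
print(round(elevation_deg(0.1, 1.0, 2.0), 1))   # ~ +24.2 deg (looking up)
print(round(elevation_deg(1.6, 1.0, 2.0), 1))   # ~ -16.7 deg (looking down)
```

A ~40 degree swing for the same object - far-away features barely move, which is why outdoor HDRIs are forgiving and indoor/close-range ones are not.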

In order to determine a somewhat correct height at which the cam was placed, you'll need to know the physical scale of a feature visible in the pano, ideally one that's on the ground. It can be anything like street markings, a tile, a curb - something whose size you know with some certainty; the bigger the object/feature the better.
Create a plane at 0/0/0 big enough to cover the ground of your HDRI and assign a UVW modifier in spherical mode to it.
Create a material for the plane, assign the HDRI in question to it and display it in the viewport.

Once you have the plane displaying the HDRI, you can move the UVW mod's gizmo up/down and see how the projection/mapping of the HDRI on the plane changes. You might have to increase the plane's tessellation for a better result in the viewport.
Now create an object similar in size to the feature in the HDRI and place it where the feature is projected on the plane. Move the gizmo until the feature projected onto the plane and the object you created match in size, then read the Z value of the UVW mod's gizmo - that's roughly the height the pano was taken at.

I'm saying 'roughly' because the method is not accurate. Accuracy will improve with increasing size of the feature though, that's why it's important not to use small objects as features. It will also work only somewhat reliably on the things that are on the ground closer to the camera. The farther the object from the camera the more it gets distorted when projected on the plane, making it hard to determine the correct height.

Still, this method can work surprisingly well since it 'maps' the surroundings onto a plane and helps to get an idea of their physical scale. Once you have some other 3D geometry in the scene to compare the projected HDRI's feature to, you quickly see whether the scale (or rather the projection center, i.e. the UVW mod's gizmo) is far off or not.
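The geometry behind the plane trick can also be written down directly. Assuming flat ground and a level pano, a ground point seen at depression angle θ below the horizon projects to a distance of h/tan(θ) from a camera at height h, so a feature's projected length scales linearly with h - which is exactly why matching a known-size object against the projection lets you read off the camera height. A sketch with illustrative angles:

```python
import math

def ground_distance(h, depression_deg):
    """Distance at which a ray cast from height h, at `depression_deg`
    below the horizon, hits flat ground."""
    return h / math.tan(math.radians(depression_deg))

def estimate_height(real_length, far_dep_deg, near_dep_deg):
    """Solve for camera height given a ground feature's real length and
    the depression angles of its far and near edges in the pano."""
    # Projected span of the feature per metre of camera height:
    span_per_metre = ground_distance(1.0, far_dep_deg) - ground_distance(1.0, near_dep_deg)
    return real_length / span_per_metre

# A 2 m long ground feature whose edges sit 20 and 30 degrees below
# the horizon implies a camera height of roughly 1.97 m:
print(round(estimate_height(2.0, 20.0, 30.0), 2))
```

Small angles (far-away features) make the 1/tan terms huge and the estimate unstable - the numeric version of "the farther the object, the harder it is to determine the correct height".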

Note - I'm not in front of the PC right now so I can't check - I've done this a lot in the past but can't remember if I used camera projection or spherical projection from a UVW modifier on the plane. I know one of these methods had an advantage because it works better in the viewports. If the above method doesn't work for you, let me know so I can check.

[Max] Feature Requests / Re: Faster Rendering
« on: 2024-02-06, 15:17:40 »
This is something that would probably be a nightmare to solve w.r.t. UI/UX but it's actually a useful thing to have. Let's say Corona would have a dedicated material for this where you can override any slot/value with a single one across all materials, but keep other material properties.
Brazil r/s had something like this. The concept was that you'd use a special material, and everything you wanted to keep would get a special pass-through map (meaning it wouldn't be changed and would use the object's material properties in that place - it would pass through and not be replaced), but everything you assigned a map to would use that map instead. Hard to explain, but it was designed to work well with minimal setup work/time.

Not passing geometry to other tools in Max makes no sense, especially since you provide a way to convert Scatter instance to Max instances. Please just make sure it's passing its output internally, no need for additional scripts then.

I am not sure what you mean. What I was saying is that e.g. the Array modifier places the geometry as-is, and it can be readily converted to editable mesh/poly. Scattering plugins, on the other hand, place/show instances in the viewport and generate the mesh at render time; there is no readily available/interactive geometry to be converted. In fact, not even a scatter with full mesh display can be converted to editable poly. And when watching the MaxScript listener while converting scatter geometry into Max instances, it actually reads the transform data of the instances and creates them one by one. That is another reason why the conversion can take a very long time, not to mention the display of high-poly meshes in the viewport.

I think if a user needs/wants to convert a Scatter object to a mesh, he's probably aware of the mesh's polycount - there is no need to 'protect' him from a wrong choice if the choice was deliberate or needed for whatever reason in the first place.
If the mesh is shown in the viewport it already *exists* as a mesh, Corona is just not passing the data internally (Frood is hopefully proving me wrong with his script above, didn't test).

I guess the story went like this:
1. Corona develops a Scatter plugin, doesn't add passing geometry data internally because of either being overly protective (Scatter needs to be purchased/installed) or because no one thought of it.
2. Users request an option to convert to Max instances, it is added. Because of that, no one thinks passing geometry internally is needed anymore so it's left in that state.

We might have valid ideas and requests that don't make sense to developers/support. A user might have to hand his scene to another studio that doesn't use Corona. Or I might want to convert all my Scatter objects because I know I'll have to re-render the scene in the coming years and I don't want to lose the scene's content to whatever the future holds for me as a client or for Corona as a software/company. There are many reasons why I might want or need to use meshes.

Also, not everyone has access to the latest Max tools like Array - for example, I personally don't, as I didn't go with the SaaS licensing for Max and am stuck with Max 2022.

Below my tests of different values...

int lights.envResolution = 1000 vs int lights.envResolution = 8000

Green = sun object, red = HDR image aligned to the sun.
Left is a circle shaped hole, right is a square shaped hole in the camera obscura's wall facing the lights.

I remembered I had a camera obscura test setup for it and tested now with the string option:

int lights.envResolution = {value}

And yes, it actually is the reason why HDR environment maps produce a 'pixelated' sun shape or shadows.
Setting it to 1000 - which I believe is the default value - will produce a square shaped sun in a camera obscura, setting it to the map's original resolution (8000 in my case) will produce a rounded shape.

Not really a bug but a side effect of Corona downsizing HDR images internally. This should probably be added to the online help with instructions on how to use the string option.

Thanks, but this... didn't do anything. It created a new scatter object with the same properties, no new meshes :(

Oh sorry, I assumed it must work since it works with anything else that produces/instances geometry procedurally. Seems Scatter was deliberately blocked from passing geometry to these methods in Max.

Not passing geometry to other tools in Max makes no sense, especially since you provide a way to convert Scatter instance to Max instances. Please just make sure it's passing its output internally, no need for additional scripts then.

Hi there, I received some models from a client with more than 200 Scatter objects and I need to convert them to meshes. Is there a way to convert them in batch or an easy way to do it? Like I said, there are more than 200 Scatters; it's going to take a lot of time to do this.

It will be great to have this operation in the Lister too.
Making a mesh snapshot is probably a fast way to do it (choose the Scatter object you want to convert > Top Menu > Tools > Snapshot, choose Mesh), but it would not retain the individual instances - every Scatter object would become a single mesh, so I'm not sure if that's what you're after. The advantage is that you can select all of the Scatter objects, and when choosing Snapshot it will convert all of them in one go.

Small bug in Corona Decal.

In the object include/exclude.

If an object is included to receive the decal, any object you link in a child/parent relationship to that object will also receive the decal. I highly doubt this is intended behavior.

Generally, this behavior needs to be optional, as it's found everywhere in Corona (for example in mask render elements). Please devs, make it optional per case - it's really problematic in scenes where you just cannot change hierarchies, for example in animations. It's a nice feature if you really need it, but a real headache if you don't.

I wondered if this has something to do with the fact that internally Corona downsizes an HDR quite drastically for lighting, basically resulting in the sun being a single super-bright pixel, which would then display as a tiny square light source. Earlier tests might not have looked like this because this behavior might have been introduced at a later point.
