Messages - Frood

46
[Max] I need help! / Re: Slow DR Parsing etc
« on: 2023-11-14, 10:37:13 »
But out of interest, when optimizing: what's more important, texture resolution or texture file size? I.e. if the texture is 8k but the file is only a few MB, does it need optimizing?

The size on disk does not matter; it gets extracted into memory anyway. So for the rendering process, it makes no difference whether you load 8k from a small JPG or from an uncompressed file format. But if you use CoronaBitmaps, you save a lot of memory when using the out-of-core feature.
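To put a rough number on it (assuming an uncompressed 8-bit RGBA bitmap): 8192 x 8192 px x 4 bytes ≈ 268 MB in RAM - regardless of whether the file on disk is 3 MB or 200 MB.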

Anyway, my first test: I've just been watching the render with all of the windows open via remote desktop on one of the nodes.

The "slowwwww" parts just come from loading assets imho. Your master already has a lot of them loaded, to set up the viewports for example. Network/disk usage (depending on the location of your stuff) should be high at the same time.

Max Process (underlined blue) does not exceed 90GB at any point during any of the above.

Those insane commit sizes you listed are exactly the issue I see on most jobs. And they are responsible for crashes, even if the process does not seem to need or use the provided virtual memory. It is just crazy to see the system paging gigabytes of RAM for a scene that can actually render with a fraction of it.

I was also expecting these circled numbers to match, or have I got that wrong?

No W11 here; the memory page in Task Manager would have been useful, and I don't know what W11 shows there. If it is like W10, then yes. Additionally, I don't know how exactly Corona displays the values there (gibibytes vs. gigabytes, i.e. 2^30 vs. 10^9 bytes per "GB"). But I assume the values are fractional gibibytes.

Edit: Task Manager seems to show used RAM in your screenshot, while the DR tab shows the (system) commit size.
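As a quick sanity check on the unit difference: 90 GiB = 90 x 2^30 bytes ≈ 96.6 GB, so the same amount of memory reads roughly 7% higher when displayed in decimal gigabytes.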

There are a couple of things I find odd.

Agree. I would like to know the answers as well. Apart from the duplicated scene Max stores, I have no idea what is causing all that trouble.

But as for the slow DR, same as above: DR server spawns a Max instance, and everything that is already loaded on the master when you press render interactively has to be processed first on the node. If you look at your Max.log when loading a scene, you will notice a line like "Done loading file: (...)". Note the timestamp and see how many minutes you have to wait for Max to become responsive when loading a scene interactively. That time between loading the scene and having a "renderable" scene adds up on the DR nodes, because the scene is (currently) loaded on the slave every time you start rendering on the master.
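If you want a number instead of watching the clock, here is a minimal MAXScript sketch (the scene path is just a placeholder). Note that it only measures the blocking load call; the extra wait until the viewports are responsive comes on top:

Code:
-- time how long a scene takes to load on this machine
t0 = timeStamp()
loadMaxFile @"D:\scenes\myScene.max" quiet:true -- hypothetical path, replace with your scene
format "Scene loaded after % seconds\n" ((timeStamp() - t0) / 1000.0)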


Good Luck





47
Should I try to update GPU drivers, or did the Corona team maybe forget to package a needed DLL in the installer?

Looks like just a wrong additional "optix" directory (I haven't used the installer, but if I extract the package, it's there as well). Go to

C:\Program Files\Corona\optix

and if there is another "optix" subdirectory, copy (or move) the DLL inside it one level higher.
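If you prefer doing the move from an (elevated) command prompt, something like this should do it, assuming the stray DLL is the only one sitting in the nested directory:

move "C:\Program Files\Corona\optix\optix\*.dll" "C:\Program Files\Corona\optix"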


Good Luck




48
Hi,

A shot in the dark, but could you please try replacing

C:\Program Files\Corona\Corona Renderer for 3ds Max\2022\LegionLib_Release.dll

with the same file from another, working node (or from an extracted Corona 10.2 package)?


Good Luck


49
[Max] I need help! / Re: Slow DR Parsing etc
« on: 2023-11-10, 10:19:19 »
First question, less than 2 minutes.

Oh, amazing. Starting Max alone takes about 30 seconds on my computer :) So it loads, parses and starts rendering within 2 minutes, in fact?

Edit: seen your edit :)

The point is, regardless of Task Manager details etc., I can render it on all 3 individually if I open the scene on each and just render. But I can't through DR.

That's interesting, more later.

Another issue while I remember: when I first open a scene (fresh 3ds Max, open scene and render), it struggles with all of the parsing and loading and I get RAM warnings. However, if I cancel the render and render again, it all goes through much quicker with no issues.

That's even more interesting, because with scenes using all available RAM, I have struggled with this phenomenon forever. Look at the graph (an older one I captured, but it still applies). It shows a BB render node crunching a job. At the first bubble, I logged in, started another Max session and rendered some scene locally (flat part of the passes graph). The second render stops at the second bubble. Look at the RAM consumption of the first job: it drops from 15+GB to about 6GB and stays at that level to the end. I can observe this to this day, also with larger scenes and larger impact. Sometimes it's even enough to log on and do anything on the node.

One possible explanation is: 3ds Max holds an entire copy of the scene while the renderer has another. At some point, a kind of purge seems to happen. I never found out how to trigger it on purpose.
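For completeness, these are the usual manual purge calls in MAXScript. As said, none of them reliably triggers that particular drop here, but they are cheap to try:

Code:
gc light:true       -- lightweight garbage collection
freeSceneBitmaps()  -- release cached bitmap memory
clearUndoBuffer()   -- drop the undo stack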

But such a gulf between them seems crazy.

I still don't see that until you check the working set (and not the commit size) of the Max process on the DR node; curious what you will find.


Good Luck




50
[Max] I need help! / Re: Slow DR Parsing etc
« on: 2023-11-10, 09:20:49 »
Why are DR nodes so slow to join in?

Counter question: how long does it take on the master to load the scene, press render, and have it actually rendering the first pass?

Also I can't help but come back to that "DR nodes use more RAM than the main workstation" issue from years ago. It still blows my mind. RAM shoots up to over 180GB+ on the nodes, which then causes them to fail rendering. But the main PC renders it fine and happily churns along at 75GB RAM usage.

You are comparing the wrong data. The 75GB in the render status window is the active working set of the 3ds Max process, while the 180GB in the DR status display is the RAM commit size (additionally: for the complete system, though this should not make a significant difference here if the nodes are dedicated and nothing else is running). You'd have to compare the 180GB to the second value in the render status window of the master: 235GB. So according to your screenshots the master uses more RAM, not the other way round.

Best is to compare using Task Manager with all RAM-relevant columns activated in the "Details" tab. Do this on master and slave. Still the question is: who/what requests (and does not use) such a large amount of memory? Do you use any fancy plugin here? I'm aware of strange commit sizes with Max generally, but 75GB vs. 180+GB (commit size) does look extreme. Does the master use "only" 75GB for the process from the start, or is it much larger while loading/starting to render and only later goes down to 75GB?
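If you want to read both values straight out of the Max process instead, here is a small MAXScript sketch. I'm assuming sysinfo.getMAXMemoryInfo() returns the usual GetProcessMemoryInfo array with the working set at index 4 and the pagefile (commit) usage at index 9 - check the MAXScript help for your Max version:

Code:
mem = sysinfo.getMAXMemoryInfo()
format "Working set: % GiB\n" (mem[4] / 2.0^30)
format "Commit size: % GiB\n" (mem[9] / 2.0^30)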

As for the fail: do the nodes actually crash? As always: logs, logs, logs from the slave will help:

- 3ds Max log: "Max.log"
- DrServer log: "DrLog.txt"
- Corona log: "CoronaMax2024_log.txt", "CoronaMax2024_errors.txt"


Good Luck




51
You can use the Corona render stamp to save important information in CXR files along the way, because Corona always stores the stamp in CXRs - even if it is switched off in the frame buffer settings.

1. Define a render stamp with the information you want to store, for example:

%d | %i | %c / %ct | Passes: %pp | Noise: %pe | Total time: %pt | Corona %bn - %b

2. Render CXRs

3. To read that information back later, run exiftool against a CXR like this:

exiftool -s3 -CoronaRenderstamp <filename>
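To dump the stamps for a whole folder of renders at once, exiftool also takes wildcards:

exiftool -s3 -CoronaRenderstamp *.cxr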


Good Luck




52
[Max] I need help! / Re: object attributes in Corona
« on: 2023-11-01, 10:26:10 »
but something isn't playing nicely now.

Maybe you are just not aware that you need to restart interactive rendering after changing object properties to see the result, because your example just works here :)

And yes, if assigning IDs is OK for you, plug all maps into a CoronaMultimap node, check only "Object GBuffer ID" and that's it.
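In case you need to hand out distinct IDs quickly, here is a minimal MAXScript sketch that numbers the current selection (gbufferChannel is the "Object ID" from the Object Properties dialog):

Code:
id = 1
for o in selection do
(
    o.gbufferChannel = id -- drives "Object GBuffer ID" in CoronaMultimap
    id += 1
)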

You can use it in combination with CoronaMultiMap of course.

What exactly are you referring to? While you can use both an OSL named attribute and CoronaUserProperty to get a value, you cannot use either to control CoronaMultimap or CoronaSelectMap afaik.


Good Luck




53
One of the main things I'm concerned about is that if a slave is rendering and someone closes the slave, what it was rendering (within the sync interval) won't be transferred.

Yes, that's true, but as said, it's at most one sync interval of render time that is lost (if you stop DR at the master, in contrast, even fractions get transferred). So if you render a pano for 2h, it's negligible imho.

Will another system pick this up or will it create a dodgy area on the final render where that system was rendering?

The render should be fine; slaves render the full format. The "area" is the entire image, from which parts get transferred (according to the "Max pixels to transfer at once" setting). You just need enough passes overall. Problems may arise when DR slaves differ a lot in render speed, but that's a general issue of using DR.

In the case of VRs that go on headsets, this'd cause issues with the images. We haven't used DR for years, and when we did (Corona v5/v6) it'd create dodgy blocks on the render.

We also do not use DR much these days, mainly because DR server does not handle Max/scene cool-down times correctly (large scene DR rendering, cancel, start again -> hiccup, because DR server wants to control a Max instance that is still loading the first job). But for a "final" scene it generally works fine. Would be interesting to see such a bad result as you mentioned.

Generally I'd try to avoid DR rendering, since single-frame rendering is always better in every respect. What works well for top/down panos is to split top and bottom across different nodes using crop renders, i.e. submitting two crop jobs to Backburner/Deadline - whatever. This also lowers RAM consumption, especially if you render a lot of render elements or light mix stuff alongside.

If another system doesn't pick this up, then maybe it's best to keep the interval at its default setting.

If it does, what would you suggest I raise the sync interval to, based on what I'm currently rendering?

It depends more on your network. If 11 nodes deliver render results every minute on average (sent as CXR strips, so quite large, but also depending on "Max pixels to transfer"), you may run into transfer issues. So I'd be cautious and set the interval to something higher, like 3 minutes, while keeping the max transfer value.

Do you know if the sync interval is scene or system based?

It's scene based (renderer property "dr_synchInterval").
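So you can read or set it per scene via MAXScript, assuming Corona is the active renderer:

renderers.current.dr_synchInterval = 180 -- value in seconds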


Good Luck




54
[Max] Feature Requests / Re: Corona Material Library
« on: 2023-10-31, 09:53:00 »
You can easily convert the current library to a standard Max material library if you want to keep it. Not sure whether that is allowed by the EULA, though.


Good Luck




55
If I understand this correctly, you want to know what happens if a DR node shuts down during rendering? You would only lose some render progress - at most what the node could render in the time specified as "Synchronization Interval" on the "Distributed Rendering" tab. That's 60 seconds by default, so not much of a loss (I'd go higher when rendering on 11 nodes at 8k, though). We run DR server as a Windows service, so the DR servers can be started and stopped at random depending on system load, and this has never been an issue. There are other glitches using DR :)


Good Luck




56
but it can get a bit tricky when you are importing 100+ people proxies.

I'm sure it will be fixed, since even starting to render does not load the proxy - definitely a bug.


Good Luck




57
anyone experiencing this?

Yes, me. Simply pressing "Reload from disk" works for the moment (if by "copying" you mean merging into a scene).


Good Luck





58
General CG Discussion / Re: Scripting
« on: 2023-09-29, 08:54:59 »
Hi,

just check the MAXScript help and search the web. http://www.scriptspot.com and https://forums.cgsociety.org are good resources for script stuff if you are stuck.

Although this is the "I need help" section regarding Corona, here is a quickly written script as a start for you. It converts the selection to Editable Poly, adds a Vertex Weld modifier with a threshold of your choice (thres), and then collapses the stack:

Code:
thres = 0.01 -- vertex weld threshold

-- snapshot the selection so collapsing the stack does not disturb the loop
for o in (selection as array) do
(
    try
    (
        format "Processing object '%'\n" o.name
        convertTo o Editable_Poly
        weldMod = VertexWeld()
        weldMod.threshold = thres
        addModifier o weldMod
        maxOps.collapseNodeTo o 1 true -- collapse the whole stack to the base object
    )
    catch
    (
        format "Error processing object '%'\n" o.name
    )
)



Good Luck




59
It's proof that Corona did not load at all. And at the end of the log (starting at 12:45:36) you can see a failed spawn attempt of the CoronaDRServer and a quite unambiguous line:

2023/09/22 12:45:59 INF: [14196] [25392] SYSTEM: Production renderer is changed to Missing Renderer. Previous messages are cleared.

I installed the corona render-NODE version

Not sure what you mean here. Could it be that you accidentally installed only DrServer? DrServer is just a "spawner" that launches and controls Max. You need a Max with a fully installed Corona for that version to use it as a render node - no matter whether you use BB or DrServer or any other client/server setup. You'd have to tick "3ds Max 2023" if you do a custom install. Reinstalling with the correct options should solve it.


Good Luck


