Hi Maru
If you look at the other thread I posted a link to, all of this information was in there, and it was you I was discussing it with - the thread ran for almost 2 years and then just went cold.
I haven't gone through the helpdesk because that previous thread looked like it was gaining traction - keeping it on the forums also meant other users could chip in if they had the same issue, which they did. I've also opened 3 official tickets previously, all of which are now closed and only 1 was solved (by me). I tend to have much more success on the forums.
But to answer the questions here for the benefit of others, and I'll copy and paste into an official support ticket -
Which Corona Version - every version I can remember, back to whatever I was using in 2017 in that original thread
Which Max Version - I use Max 2018 (can't vouch for anyone else)
Which Network Rendering Method - Corona DR - fire up DR Server on the nodes, press render on the master workstation.
Is this happening in a specific scene - nope, every single scene I've created since first noticing the issue.
Is this happening in a scene with just 1 teapot - probably, but I've never tested; with RAM usage that low you'd likely not notice. It appears cumulative - the heavier the scene, the bigger the discrepancy between master and nodes.
Are you using some 3rd Party Plugins - the only plugins I use are Forest Pack, Railclone very very rarely and Multitexture/Floorgen - nothing unusual
Can you send any problematic scene to us - this would suggest you can't replicate the issue at your end? Is that right? Does this not happen if you open any random heavy scene you have access to and render using the method above?
Have you tried using the conserve memory option? - no, because the render speed hit isn't one I can afford to take 90% of the time, especially after investing heavily in 128GB RAM machines. I don't see how it would solve the discrepancy either - surely it would just reduce RAM usage by the same amount on each machine, while the nodes still use more.
Following the RAM Guides - this isn't really about trimming RAM usage to save RAM. The scenes don't use as much RAM as my machines have when I render them locally (I made sure of this by spending big on lots of RAM); it's only when I render using DR that they use more. So I shouldn't have to spend time on every scene reducing RAM usage when it should really just work.
This is the bit I get frustrated with too - whenever I have issues, the solutions always feel like backwards steps or counterintuitive workarounds, or go against what Corona's own documentation has led me to believe: delete this, delete that, don't use this, don't use that, compress this, compress that - or they just don't get resolved and I have to live with the issue. The biggest consumer of RAM in most of my scenes is displacement, no doubt about it. Enter 2.5D Displacement - and we know how that turned out
https://forum.corona-renderer.com/index.php?topic=26782.0 - the solution in that thread was either to go back to the old displacement, i.e. back to square one with the high RAM usage, or to subdivide the geometry until the artifacts became unnoticeable - not very practical on entire scenes, because that in turn increases the poly count and subsequently the RAM usage, completely negating the RAM saving in the first place. It's almost at the point where I need to factor 'troubleshooting' into every project timeline I issue - if only I could charge that time back to clients and have them understand why the renders might not be ready on time. I genuinely can't remember the last project that ran trouble-free, whether it be displacement, RAM, DR, caustics, tonemapping or some other issue.
Finally, the other reason I like to stick to the forums is that if more people chime in with the same problem, it becomes harder to pass off as user error. Other users may have the same issue and not report it, thinking it's something they're doing wrong. Had I gone through the support system, nobody would know I have the same issue too.
Mini rant over :)