Author Topic: Memory consumption in 1.3  (Read 8249 times)

2015-10-26, 17:39:29

Nekrobul

  • Primary Certified Instructor
  • Active Users
  • ***
  • Posts: 1028
    • View Profile
I will try to describe the issue in a few words.

I have an exterior scene with 30M polys and 26,000M instances, and 16GB of RAM + a 50GB page file (which easily covered even larger scenes in previous builds).

But now some sort of black magic happens. When memory consumption reaches 99% of RAM, parsing becomes insanely long (see image 1), and when the parsing completes, the number of rays/s is a dramatic 40-50k max, not 1.5M-2.5M as it should be (image 2).

After that I uninstalled 1.3 and installed 1.2 back, opened the scene, and to my surprise I got the numbers from image 3. And the scene itself rendered without any problems.

PS - I can upload the scene, but it is damn large even as an archive: 2-2.5 GB or more.

---------------------------------------------------------------
https://www.blackbellstudio.com/
https://www.behance.net/blackbell3d
CEO at "Blackbell Studio"

2015-10-26, 18:02:32
Reply #1

maru

  • Corona Team
  • Active Users
  • ****
  • Posts: 13625
  • Marcin
    • View Profile
Can you tell us more about the scene? Show screen/final render from 1.2? Displacement?
Marcin Miodek | chaos-corona.com
3D Support Team Lead - Corona | contact us

2015-10-26, 18:05:35
Reply #2

Nekrobul

  • Primary Certified Instructor
  • Active Users
  • ***
  • Posts: 1028
    • View Profile
Can you tell us more about the scene? Show screen/final render from 1.2? Displacement?

No displacement, only scattered objects and normal maps. Nothing special, mostly.

I cannot share the renders due to a commercial agreement, but I can try to share the scene.

Or recreate the situation in some other scene later today.

PS - is there any other way to share the archive? Google Drive maybe?
---------------------------------------------------------------
https://www.blackbellstudio.com/
https://www.behance.net/blackbell3d
CEO at "Blackbell Studio"

2015-10-27, 20:18:10
Reply #3

Nekrobul

  • Primary Certified Instructor
  • Active Users
  • ***
  • Posts: 1028
    • View Profile
Minor update on the situation.

I succeeded in recreating parsing this long in 1.2, but still: after the parsing is complete it renders, and the numbers are OK - 1.6M rays/s, not 40K as in 1.3. I also tried 1.3 RC3, and the problem is still there.

PS - Uploaded the scene to Dropbox; the file is called SCENE_PROBLEMATIC_EXT.

« Last Edit: 2015-10-28, 09:58:49 by Nekrobul »
---------------------------------------------------------------
https://www.blackbellstudio.com/
https://www.behance.net/blackbell3d
CEO at "Blackbell Studio"

2015-10-28, 11:47:08
Reply #4

Nekrobul

  • Primary Certified Instructor
  • Active Users
  • ***
  • Posts: 1028
    • View Profile
Most probably today I will check this scene on a dual-Xeon with 48 GB of RAM, plus 4 dual-Xeon blade nodes with 12 GB of RAM + a 50GB pagefile each, just to see if the problem occurs with distributed rendering too.
---------------------------------------------------------------
https://www.blackbellstudio.com/
https://www.behance.net/blackbell3d
CEO at "Blackbell Studio"

2015-10-28, 14:16:43
Reply #5

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
Well, the scene takes 23.7 GB of RAM in 1.3 and 23.6 GB in 1.2 here. If you try to fit that into 16GB of RAM, you are going to have problems no matter what Corona version you use. The fact it somehow worked in a previous version is a miracle ;)
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2015-10-28, 15:01:20
Reply #6

Nekrobul

  • Primary Certified Instructor
  • Active Users
  • ***
  • Posts: 1028
    • View Profile
Well, the scene takes 23.7 GB of RAM in 1.3 and 23.6 GB in 1.2 here. If you try to fit that into 16GB of RAM, you are going to have problems no matter what Corona version you use. The fact it somehow worked in a previous version is a miracle ;)

In the previous version it used the pagefile if there was not enough memory; now it does not. That is what I have been saying all this time. (

For example, each blade of our render server has 12GB of RAM, because RAM for those motherboards is a little expensive; that is why each of them has a separate HDD with a 50 GB pagefile to increase the virtual memory limit. It will be a big problem if Corona will not use it during rendering, making the server completely useless.
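The headroom argument above can be sketched with simple arithmetic. This is a minimal illustration, not anything from Corona itself; the node figures (12 GB RAM + 50 GB pagefile) and scene sizes are the ones quoted in this thread, and the function name is made up:

```python
# Rough classification of how a scene's memory footprint fits on a node:
# entirely in physical RAM, only within RAM + pagefile (the commit limit,
# where heavy paging makes rendering very slow), or not at all.

def classify_fit(scene_gb: float, ram_gb: float, pagefile_gb: float) -> str:
    """Classify how a scene of `scene_gb` GB fits on a given node."""
    if scene_gb <= ram_gb:
        return "fits in RAM"
    if scene_gb <= ram_gb + pagefile_gb:
        # Memory can be committed, but expect slow parsing/rendering.
        return "fits only with pagefile (expect heavy paging)"
    return "does not fit at all"

# Render node from this thread: 12 GB RAM + 50 GB pagefile, 17 GB scene.
print(classify_fit(17.0, 12.0, 50.0))
# Workstation from this thread: 16 GB RAM + 50 GB pagefile, 23.7 GB scene.
print(classify_fit(23.7, 16.0, 50.0))
```

Both examples land in the middle case: the scene commits, but only by spilling past physical RAM into the pagefile, which is exactly the regime where 1.3 reportedly stalls and 1.2 limped through.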

Also, it was working not only in 1.2, it was working in A6, which is one of the main reasons we chose Corona as our main render engine. Every scene whose geometry was a problem to calculate in, for example, V-Ray was not a problem for Corona.

UPD - About distributed rendering: just confirmed, a different scene which consumes 17 GB of RAM won't render either. Guess we have to stay on 1.2.
« Last Edit: 2015-10-28, 15:20:33 by Nekrobul »
---------------------------------------------------------------
https://www.blackbellstudio.com/
https://www.behance.net/blackbell3d
CEO at "Blackbell Studio"

2015-10-28, 15:39:20
Reply #7

atelieryork

  • Active Users
  • **
  • Posts: 283
    • View Profile
    • Atelier York
If it can be fixed, this would be great. But in my opinion 12GB for rendernodes is really not sufficient for much these days. You'd surely find your renders are faster and more reliable if you have a decent amount of ram in them, even if this issue is fixed.

For clarity I have not come across this issue. My nodes have 24/32/64GB ram in.
Alex York
Atelier York
www.atelieryork.co.uk
max 2016 sp1, corona 1.3 final, win 8.1. pro

2015-10-28, 15:43:23
Reply #8

Nekrobul

  • Primary Certified Instructor
  • Active Users
  • ***
  • Posts: 1028
    • View Profile
If it can be fixed, this would be great. But in my opinion 12GB for rendernodes is really not sufficient for much these days. You'd surely find your renders are faster and more reliable if you have a decent amount of ram in them, even if this issue is fixed.

For clarity I have not come across this issue. My nodes have 24/32/64GB ram in.

The thing is that there are 16 of them in one stack, 3 stacks in a cold box. And replacing 6x2GB modules with 6x4GB modules for 48 nodes will be very expensive.
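For scale, that upgrade can be counted out (module counts are from the post above; prices are deliberately left out, since none are given in the thread):

```python
# Back-of-the-envelope scale of the proposed RAM upgrade:
# 48 nodes, each moving from 6 x 2 GB to 6 x 4 GB modules.
nodes = 48
modules_per_node = 6

new_modules = nodes * modules_per_node       # modules to purchase: 288
added_gb_per_node = modules_per_node * (4 - 2)  # +12 GB per node
total_added_gb = nodes * added_gb_per_node   # +576 GB across the farm

print(new_modules, added_gb_per_node, total_added_gb)
```

Nearly three hundred modules, which is why relying on a cheap 50 GB pagefile per blade was the attractive alternative.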
---------------------------------------------------------------
https://www.blackbellstudio.com/
https://www.behance.net/blackbell3d
CEO at "Blackbell Studio"

2015-10-28, 15:46:36
Reply #9

atelieryork

  • Active Users
  • **
  • Posts: 283
    • View Profile
    • Atelier York
If it can be fixed, this would be great. But in my opinion 12GB for rendernodes is really not sufficient for much these days. You'd surely find your renders are faster and more reliable if you have a decent amount of ram in them, even if this issue is fixed.

For clarity I have not come across this issue. My nodes have 24/32/64GB ram in.

The thing is that there are 16 of them in one stack, 3 stacks in a cold box. And replacing 6x2GB modules with 6x4GB modules for 48 nodes will be very expensive.

No doubt it will be expensive. That's quite a nice farm there. But you were always going to run into a memory barrier with 12GB sooner or later. I got to that point about a year ago and upgraded them to 24. At some point soon you'll be forced to upgrade them whether you want to or not I guess. RAM prices have come down a lot recently too. I take it they're not DDR4 ECC? That stuff is crazy expensive...
Alex York
Atelier York
www.atelieryork.co.uk
max 2016 sp1, corona 1.3 final, win 8.1. pro

2015-10-28, 15:52:21
Reply #10

Nekrobul

  • Primary Certified Instructor
  • Active Users
  • ***
  • Posts: 1028
    • View Profile
If it can be fixed, this would be great. But in my opinion 12GB for rendernodes is really not sufficient for much these days. You'd surely find your renders are faster and more reliable if you have a decent amount of ram in them, even if this issue is fixed.

For clarity I have not come across this issue. My nodes have 24/32/64GB ram in.

The thing is that there are 16 of them in one stack, 3 stacks in a cold box. And replacing 6x2GB modules with 6x4GB modules for 48 nodes will be very expensive.

No doubt it will be expensive. That's quite a nice farm there. But you were always going to run into a memory barrier with 12GB sooner or later. I got to that point about a year ago and upgraded them to 24. At some point soon you'll be forced to upgrade them whether you want to or not I guess. RAM prices have come down a lot recently too. I take it they're not DDR4 ECC? That stuff is crazy expensive...

You do not get the idea. The page file is needed exactly for the situations when you run out of RAM. 24 GB is great, but even this amount could not solve all the problems. Our exterior scenes are mostly overkill; we had a couple of situations when we had to rebuild almost half of a street to achieve the desired quality, and the scenes were eating up around 30-35GB. If it were not for the help of the pagefile, we would have been in big trouble.
---------------------------------------------------------------
https://www.blackbellstudio.com/
https://www.behance.net/blackbell3d
CEO at "Blackbell Studio"

2015-10-28, 15:55:38
Reply #11

atelieryork

  • Active Users
  • **
  • Posts: 283
    • View Profile
    • Atelier York
I do understand, but I also wonder why you're allowing the pagefile to kick in at all when throwing enough RAM into your farm would solve all your worries. It would be costly but you wouldn't have to worry about this again, for a long time. Isn't there a significant penalty in terms of rendertime when falling back to the PF over loading it fully in RAM? It just sounds to me like your farm is not adequate for your needs any more in terms of RAM.
Alex York
Atelier York
www.atelieryork.co.uk
max 2016 sp1, corona 1.3 final, win 8.1. pro

2015-10-28, 16:02:54
Reply #12

Nekrobul

  • Primary Certified Instructor
  • Active Users
  • ***
  • Posts: 1028
    • View Profile
I do understand, but I also wonder why you're allowing the pagefile to kick in at all when throwing enough RAM into your farm would solve all your worries. It would be costly but you wouldn't have to worry about this again, for a long time. Isn't there a significant penalty in terms of rendertime when falling back to the PF over loading it fully in RAM? It just sounds to me like your farm is not adequate for your needs any more in terms of RAM.

As I said before, increasing the amount of RAM will not solve ALL of the problems; the page file is a reserve which actually provides a guarantee that everything will keep working no matter what happens.
---------------------------------------------------------------
https://www.blackbellstudio.com/
https://www.behance.net/blackbell3d
CEO at "Blackbell Studio"

2015-10-30, 10:56:29
Reply #13

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
Sorry, but Corona was never intended to work with more than half of its memory in the page file. If it worked before, it was a happy coincidence, not intended behavior.
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2015-10-30, 11:35:13
Reply #14

Nekrobul

  • Primary Certified Instructor
  • Active Users
  • ***
  • Posts: 1028
    • View Profile
Then we will just use 1.2 ...
---------------------------------------------------------------
https://www.blackbellstudio.com/
https://www.behance.net/blackbell3d
CEO at "Blackbell Studio"