Author Topic: About Corona 1.4 denoise CPU usage, temperature and... overclocking  (Read 6655 times)

2016-06-01, 08:57:04

albatetra

  • Users
  • *
  • Posts: 4
    • View Profile
Hello,
my first post here as a proud and happy user of Corona Renderer.
Now, I have some questions for the experts.
I've lately been rendering high-resolution still frames and, as you know, each minute spent rendering costs money, so I opted for a moderate and safe overclock of my workstation. Overall I'm gaining about 30% in rendering time by overclocking from 3.6 GHz to 4.6 GHz.
During rendering, of course, all 6 cores of my CPU are at 100% usage, and the temperature stays stable in a safe range of about 76 degrees Celsius. But once the rendering is finished, and ONLY during the denoise process, the temperature rises to a warning level of 86 degrees Celsius (the cores, of course, stay at 100% usage).
So, apparently, the denoise function is far more CPU intensive than the rendering itself; or, if that is not the case, the rendering process is not as efficient as the denoise function.
Do you have an explanation for this behaviour?

Thanks

2016-06-01, 09:07:11
Reply #1

FrostKiwi

  • Active Users
  • **
  • Posts: 686
    • View Profile
    • YouTube
moderate [...] overclocking of my workstation.
3.6GHz to 4.6GHz
Holy moly, our definitions of moderate vastly differ :D
Welcome to the forum.
No, the code is not inefficient. This has to do with how the CPU architecture crunches numbers. Instructions are split between the FPU and the ALU: one crunches floating-point operations, the other integer and logic operations. Rendering itself relies heavily on floating-point calculation, so while the heavy floating-point crunching is going on, the logic units may idle, even if only for milliseconds. By definition the CPU has reached its operational limit and CPU usage reads 100%, but physically only one part of it is doing most of the number crunching.
So the temperature does not rise as high as under a workload that happens to strike the perfect balance between the two. I have noticed this as well: denoising happens to strike that balance by chance, delivering just enough logic operations for the FPU to keep up, so both are used fully and everything cooks.

On my FX 8350, and all AMD FX chips for that matter, there are only 4 FPUs on the supposed 8-core chip. I can run the Corona benchmark and Cinebench at almost 5 GHz and thus have the fastest 8350 in the benchmark :D
This is because I basically "only use half the chip".
As soon as I run Prime95 with small FFTs, or any other FPU-light but logic-heavy workload, the system crashes within a second, or, if I bump the voltage, cooks itself to metal-melting temperatures.

edit:
This is the same reason that at 100% CPU load you can sometimes still use Windows, while with workloads that eat up the logic side, at the very same 100% usage, Windows becomes unusable.
« Last Edit: 2016-06-01, 09:11:03 by SairesArt »
I'm 🐥 not 🥝, pls don't eat me ( ;  ;   )

2016-06-01, 09:15:12
Reply #2

albatetra

  • Users
  • *
  • Posts: 4
    • View Profile
Hi SairesArt,

thanks for taking the time for this clear explanation. Now it's clear what's going on. I think I will have to reduce the OC a bit, or install a more efficient cooler on the CPU, if I want to keep using the denoise function safely.

Greetings

2016-06-01, 09:51:54
Reply #3

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
Actually, the rendering itself is much more ALU/general-instruction heavy than denoising; denoising is hardcore FPU stuff. I think it has more to do with the fact that there are no cache misses or branch mispredictions in denoising, since the workload is fairly simple and predictable. The CPU does not have to wait for the pipeline to repopulate after a branch misprediction, or for data to be loaded from RAM, so the FPUs are utilized much better.
Rendering is magic. How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2016-06-01, 10:23:39
Reply #4

Jann

  • Active Users
  • **
  • Posts: 142
    • View Profile
How were the OC stability and temps tested? I always use LinX, even though it loads the CPU more than rendering does. If it doesn't crash and temps are acceptable, I'm sure it won't overheat during extended rendering sessions etc.

2016-06-01, 10:24:35
Reply #5

FrostKiwi

  • Active Users
  • **
  • Posts: 686
    • View Profile
    • YouTube
Actually, the rendering itself is much more ALU/general-instruction heavy than denoising; denoising is hardcore FPU stuff. I think it has more to do with the fact that there are no cache misses or branch mispredictions in denoising, since the workload is fairly simple and predictable. The CPU does not have to wait for the pipeline to repopulate after a branch misprediction, or for data to be loaded from RAM, so the FPUs are utilized much better.
Thanks for the clarification! We had a raytracer-building assignment at university, and I was surprised how many things (polygon checks by ray intersection and so on) actually had no need for floating-point precision. This clears it up nicely.
It also reminds me how the Intel Burn Test uses Linpack for floating-point benchmarking, and how it is supposedly the most heat-intensive task for Intel CPUs.

2016-06-01, 11:07:17
Reply #6

albatetra

  • Users
  • *
  • Posts: 4
    • View Profile
How were the OC stability and temps tested? I always use LinX, even though it loads the CPU more than rendering does. If it doesn't crash and temps are acceptable, I'm sure it won't overheat during extended rendering sessions etc.

All overclocking, testing and monitoring are done within the same application, Intel Extreme Tuning Utility.
Here is a screenshot with some detail about a render that is currently running. It's not finished yet, so I don't have the denoise temperature to show right now. I may upload a new chart later if it's of any interest.

Thanks to everybody for bringing light to this and helping to keep my beloved CPU safe.


2016-06-01, 11:12:00
Reply #7

FrostKiwi

  • Active Users
  • **
  • Posts: 686
    • View Profile
    • YouTube
Try the Intel Burn Test while you are at it and see how toasty it gets.
Quote
Use the same stress-testing engine that Intel uses to test their products before they are packed and put on shelves for sale

2016-06-01, 12:49:39
Reply #8

albatetra

  • Users
  • *
  • Posts: 4
    • View Profile
Attached is the screenshot where the denoising is visible; the temperature reached 88 degrees.