Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - marchik

Pages: [1] 2 3 ... 20
1
Hardware / Re: Low benchmarks on Ryzen 9950X
« on: 2025-08-25, 10:28:38 »
Thank you Marchik!
This is invaluable info.
Yes, I don't know all the nuances of course, but this is the basis from which you can start.

And for those struggling to get 4 RAM modules running: the MSI X870E Carbon board handles this surprisingly well. Without any additional tuning, people have been able to run four 48-64 GB modules of mediocre memory at frequencies up to 6400 MT/s with good timings.

2
I get the idea about the fabric material; the current fabric material is conceptually a bit different and, more importantly, a base to grow further on. It covers the very core of fabrics: actual weave patterns (above, below) made up of threads (made of plies, made of fibers). We have our own ideas about what can be improved, but we don't want to turn it into a half-scatter, half-pattern kind of thing. The current (other) tools give very good room for added detail and functionality (e.g. scatter for random microfibers, the pattern modifier for custom geo modules, etc.). One of the main points of this fabric material was to achieve the special kind of reflectivity and "anisotropy" of fabrics that is otherwise not achievable with other shaders. It is not so simple to control, but hopefully some good examples will be available soon.

I fully understand why this shader was made, and it will definitely make my work easier. I have gone through all the existing properties one by one, and they are all logical and easy to work with.

I work with fabrics all the time, and yes, a separate shader that lets me convey the complex visual properties of fabrics is exactly what I wanted. I'm just writing down my wishes.

In the current implementation, it mostly repeats what we can already achieve with textures and geopatterns. Of the whole range of fabrics, only fabrics with a dense, fine pile on top of a woven base remain very difficult to achieve with physically correct methods. Scatters and Ornatrix are hard to manage (especially mapping the rotation of fibers) and consume a lot of resources, so I'm asking for things that could also make our work easier.

And I am not talking about scatter and individual fibers at all; I am talking about the dense pile such fabrics consist of, which determines their overall appearance, so I believe this can be achieved with a shader and some physical simplifications. "Velvet properties" are something rarely found in the shaders of any rendering engine, something that will always be difficult for 3D artists to recreate, and you could close this need. I am ready to provide full assistance with this. Thanks!

3
I tested it a bit; so far all the new features seem to work fine. The spinners in CoronaPhysicalMtl are a bit twitchy, but once I turn on interactive rendering everything behaves normally.

I'm very glad that you took up my old request for a separate fabric shader.
A couple of comments/suggestions:
  • Add a separate bump slot that is superimposed on top of the parallax, with support for normal maps of course; working with fabrics is unthinkable without additional maps for folds and wrinkles.
  • And most importantly, the shader does not fully cover our need to create velour and velvet with a short pile; sheen performs a similar function, but only as a very rough approximation. It would be great to add a section to the same material that imitates small hairs and their BRDF, using the same parallax approach, for example.

    And give us 4 parameters (a rough sketch of how the last two could combine follows below):
    • Length.
    • Width. Together with the overall tiling, this would control the pile density.
    • Angle relative to the surface (mappable with a black-and-white map). 0.0 would mean the pile direction coincides with the surface normal; 1.0 would mean fully crushed pile, at 90 degrees to the normal. With this parameter we could control how much the surface shines.
    • Pile rotation around its own Z axis (a mappable parameter, by analogy with Anisotropy rotation in a regular CoronaPhysicalMtl). The initial rotation value could be taken from the object's UVs, as with anisotropy.
If you add this, it would be a game changer for fabrics in archviz.
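To make the last two parameters less ambiguous, here is a minimal sketch of what I have in mind (pure illustration in Python/numpy, not Corona code; the function name and the 0-1 mappings are my own assumptions): the rotation spins a tangent-space direction around the normal, and the angle then tilts the pile from the normal (0.0) down onto the surface plane (1.0).

    import math
    import numpy as np

    def pile_direction(normal, tangent, angle01, rotation01):
        # angle01: 0.0 = pile along the surface normal, 1.0 = crushed flat (90 deg to the normal)
        # rotation01: rotation around the normal, 0..1 mapped to 0..360 degrees
        n = np.asarray(normal, dtype=float)
        n /= np.linalg.norm(n)
        t = np.asarray(tangent, dtype=float)
        t = t - np.dot(t, n) * n          # project the tangent onto the surface plane
        t /= np.linalg.norm(t)
        b = np.cross(n, t)                # bitangent completes the local frame
        phi = rotation01 * 2.0 * math.pi
        theta = angle01 * 0.5 * math.pi
        in_plane = math.cos(phi) * t + math.sin(phi) * b
        return math.cos(theta) * n + math.sin(theta) * in_plane

    # Example: pile tilted halfway down, rotated a quarter turn around the normal
    print(pile_direction([0.0, 0.0, 1.0], [1.0, 0.0, 0.0], 0.5, 0.25))

Mapping both values with textures would then control the fiber direction per pixel, the same way anisotropy rotation does now.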

4
Hardware / Re: Low benchmarks on Ryzen 9950X
« on: 2025-08-14, 18:17:03 »
I've spent the last few months tuning various 9950Xs, including X3D versions, and I can say that your result is quite standard for stock settings.

The score in your case will be affected by:
  • The processor sample. The silicon lottery means some samples can hold higher frequencies at lower voltage and lower temperatures.
    It's a pure lottery; if you really want to and have spare funds, you can sell your sample on the secondary market and buy a new one in a store in the hope of getting lucky, or consider the X3D option. I believe AMD uses binned chips for those, so the result is better on average.
  • The speed and latency of the RAM. Whether 4 modules can run at their rated frequencies depends on:
    • the processor sample and its integrated memory controller
    • the motherboard sample and BIOS version
    • the sample of the RAM modules and the chips themselves (SK Hynix is preferred for dual-rank modules)
    • the number of PCB layers in your motherboard; below 8 it will be very difficult to run four 32+ GB modules at high frequencies.
    All this requires manual tuning and long testing, but going from 3600 MT/s with 90 ns latency to 6000 MT/s with 75 ns gives roughly 500,000 (or slightly fewer) additional points in Corona Benchmark 10.
    I think almost any memory can be made to run at 4800 MT/s by lowering the impedances, provided the motherboard is from the mid-range segment or above; I'd advise consulting a specialist about this. You can also check your motherboard's QVL for kits of the required capacity; those will most likely work out of the box.
  • Cooling system. A regular Asetek-based 360 AIO can dissipate about 220-230 W of heat from this processor, a 420 AIO about 240-250 W, options with 6 fans in a push-pull configuration 260+ W, and custom loops with several radiators up to about 320 W. Beyond that the limit is the processor's own heat spreader; you would have to remove it (delid) and apply liquid metal for direct-die cooling. Larger thermal headroom gives more room to raise the processor frequency via the PBO algorithm or manual overclocking. You can check how much your processor consumes while rendering with monitoring utilities, for example HWiNFO; look for CPU Package Power or a similar parameter.
    My recommendation would be the Arctic Liquid Freezer III 360/420 with 6 fans in a push-pull configuration: it's quiet, inexpensive, and gives great cooling headroom.
  • BIOS settings, the motherboard itself, and its VRM.
    When you enable PBO, set the scalar to 10x, and enable a boost override of +200 MHz, you unlock higher frequencies within the temperature limit: the processor will run as fast as it can and keep raising the voltage until it hits its thermal limit. (The behavior of this algorithm is also affected by many settings and factors, down to the BIOS version.) In simple terms, with good cooling the processor boosts higher. In your case it runs at 5000 MHz, but most of the "nice" results you see are obtained at 5200 MHz or more.
    At this stage Curve Optimizer (and Curve Shaper) comes into play: it reduces the voltage supplied to each core while maintaining the frequency, so the PBO algorithm can push your processor even further, because there is still headroom to the temperature limit. If you are lucky with your CPU sample (your cores do not need much voltage to run stably at high frequency), you can get a decent performance increase.
    You can find many guides on YouTube about how Curve Optimizer works and how to find optimal values.
  • Testing conditions.
    The ambient temperature greatly affects the result; there is a big difference between running the test at an air temperature of 30 °C or 21 °C. Those 9 degrees are added to your processor, because that is how much hotter the liquid in the cooling system will be.

    Background programs. The highest result comes from booting Windows in safe mode, where nothing interferes with your CPU (up to 1,000,000 additional points; however, in this mode the benchmark reports the minimum processor frequency, around 4300 MHz I believe, so you can always recognize such results in the list). Every additional process takes resources away from your computer and makes the result worse. Utilities that continuously poll sensor readings (HWiNFO, motherboard software such as Armoury Crate from Asus, iCUE from Corsair, etc.) have a particularly strong effect: they have to access the CPU every X ms to read the sensors, and that lowers the result.
    Run the test after a clean Windows startup, remove everything from the processes and the tray that was activated at startup, and set your air conditioner to the minimum temperature :D

So 14kk+ is normal for how a 9950X performs on average at 4.8-5 GHz with 3600 MT/s memory. Using the tips above you can get to around 16kk without problems and with acceptable temperatures; that is how I got my results on the first and second pages of the list. Going higher requires some luck with the CPU sample, and usually everything at the top of the list uses 2 memory modules at high frequency, manual overclocking with fixed frequency and voltage, and good cooling up to liquid nitrogen. That is more of a competitive thrill than a working mode for your tasks, so don't worry.

5
On the first, the idea has already been floated internally of having a checkbox for "include Corona data" for CXRs, on by default but able to be turned off. This would leave the CXR comparable in size to an EXR, by removing the extra data that Corona can use for denoising later, resuming renders, etc. That goes along with the idea of allowing 16-bit rather than 32-bit output (forcing 32-bit output only for those elements that still need it, but letting all the other data be 16-bit; V-Ray does something similar, for example). Between those two, that should give full control over "cxr file size vs. cxr functionality" to our Coronauts :) Nothing set for a development timeframe yet, just a heads up.

On the second, I'd have thought batch processing via Photoshop or another image or video editor would be the way to go there: load and resave while stripping the alpha from a non-Corona-specific format like .tif. You can pop the idea on the Ideas Portal if you like, but this is just a pointer to what may save you doing it manually in the meantime. There is a lot of powerful batch editing out there for generic image formats, so there is no need to wait for Corona to develop something that handles those.
Thanks for your reply, Tom. Nice to hear about the first point!

I would definitely gladly use batch processing in Photoshop or any other package if it made sense. Let me explain my second problem one more time. I needed to get 16-bit images from .cxr files, with the specific tone mapping I set up in the Corona VFB, and without an alpha channel. The whole point of .cxr is that I can get anything from them, since they contain "all" the data. The catch is that any format that supports 16 bit outputs an image with premultiplied alpha, and any format without premultiplied alpha outputs an image in 8 bit. Batch processing from the Corona Image Editor is the only way to get an image from a CXR with the tone mapping I need, and it gives me no way to uncheck "include alpha channel".
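In the meantime, the closest workaround I can think of (just a sketch, assuming the batch save keeps producing tone-mapped 16-bit TIFFs with premultiplied alpha, which is what I get today) is to un-premultiply and drop the alpha afterwards with a small script, for example with the Python tifffile and numpy libraries; the folder names here are placeholders:

    # Hypothetical post-processing step: take the 16-bit RGBA TIFFs saved from the
    # Corona Image Editor batch, undo the alpha premultiplication and drop the
    # alpha channel. Folder names below are placeholders.
    from pathlib import Path

    import numpy as np
    import tifffile

    src = Path("cxr_batch_out")   # 16-bit TIFFs with premultiplied alpha
    dst = Path("rgb_only")
    dst.mkdir(exist_ok=True)

    for tif in sorted(src.glob("*.tif")):
        img = tifffile.imread(tif)                     # expected shape (H, W, 4), uint16
        if img.ndim == 3 and img.shape[-1] == 4:
            rgb = img[..., :3].astype(np.float64)
            alpha = img[..., 3].astype(np.float64) / 65535.0
            # un-premultiply where alpha > 0, then clamp back to the 16-bit range
            rgb = np.where(alpha[..., None] > 0.0,
                           rgb / np.maximum(alpha[..., None], 1e-6),
                           rgb)
            img = np.clip(rgb, 0, 65535).astype(np.uint16)
        tifffile.imwrite(dst / tif.name, img)          # 16-bit RGB, no alpha channel

But this is exactly the kind of extra step I would rather not need; an "include alpha channel" checkbox in the batch output would solve it at the source.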

It would be great if batch processing were built right into the Corona Image Editor interface with all the options. I hope you will consider this for the newly announced version of the Corona Image Editor.

6
Recently I needed to render 90 frames for a video and output AOVs, including several Cryptomatte passes. My renders had an alpha channel, but I also needed the same sequence as 16-bit images without an alpha channel, with the tone mapping from the Corona VFB.
I rendered in 4K and did not expect the pain in the ass that awaited me.
Firstly, due to the very unfinished implementation of Cryptomatte, in order to hand the finished files to the compositing artist I had to send 200 GB of .cxr files over the internet, although I could have gotten by with a couple of GB if they had been separate render elements. And I won't even mention the questionable compatibility of .cxr with AE and the need to explain how to import them.

Secondly, when I tried to save several .tif files for a preview sequence using CoronaImageBatch, I realized that no matter what format I output the images in, the alpha is always there, and if I output to a format without an alpha channel I get 8 bit. I tried adding arguments to the .bat file, but none of them worked. I tried to find documentation for CoronaImageCmd.exe, but there simply isn't any, apart from a couple of paragraphs on docs.chaos.com. It doesn't even respond to --help.
As a result, I had to open each .cxr file manually and save it in the required format (maybe that's why my request sounds a little irritated).

I understand that Corona Renderer is not aimed at serious production and that V-Ray fills that niche, but this upsets me, because the features themselves exist; they just all feel unfinished.

Yes, you can consider this a feature request, but please fix it :'(

7
I would ask you to add a mappable "Amount" value for Thin Film, plus min/max thickness values, for better usability.

I understand that you wanted to simplify the approach, but this only complicates it.

Yes, an adjustable amount can be emulated by adding a map to the thickness/IOR slots, but such setups only get more complicated.
1) To simulate a mappable amount value, you need to use fairly contrasty maps, otherwise we get a rainbow transition at the boundaries instead of a smooth fade, even when we don't want one.
2) Even though the film thickness is set in nanometers, when I put a Corona Color with an HDR value of [1.0 1.0 1.0] into the slot, it corresponds to 5000 nm (while I expect 1 nm).
And when I use a Solid Color with a 1.0 output multiplier, the shader performs linear interpolation and white 1.0 corresponds to the nm value set in the spinner. Two completely different logics that make no sense to me as a user.

In this case, it would be logical if white 1.0 corresponded to a value of 1 nm; then I could use a Corona Color with an output multiplier to set the minimum and maximum thickness precisely (using two CoronaColor maps as the inputs for a Noise map, for example). As it is, I have to calculate which color in the [0 : 1] range corresponds to the value I need in nanometers, which does not make life easier.
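To spell it out as formulas (v is the map value, d the resulting film thickness; the first two lines are just my reading of the current behavior from these tests, not anything from the documentation):

\begin{aligned}
d_{\text{CoronaColor}} &= 5000\,\text{nm} \cdot v \\
d_{\text{SolidColor}} &= d_{\text{spinner}} \cdot v \\
d_{\text{proposed}} &= 1\,\text{nm} \cdot v
\end{aligned}

With the proposed mapping, a CoronaColor output multiplier of m would give a thickness of m nanometers directly, and two such maps feeding a Noise map would set the min and max thickness exactly.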

8
Hardware / Re: GPU - 3ds max
« on: 2025-04-01, 12:55:37 »
Also, the 5xxx series seem to be having more issues than we've seen in a generation of cards possibly ever :)
btw, the Intel GPU AI interactive denoiser doesn't work on an RTX 5090 with Corona 12 Update 1 Hotfix 1.
Juraj, if it's not too much trouble, please pass this on to the developers.

9
Hardware / Re: AMD 9950x
« on: 2025-03-20, 23:39:14 »
ps can you please provide a link to the reddit thread you were talking about?

It's this one: https://www.reddit.com/r/overclocking/comments/1i1cmwi/9950x_with_192gb_4x48gb_at_expo_6000cl30_stable/

64 GB per stick sounds crazy for AM5, but maybe it's worth waiting a couple of months to see what happens? But then again, we can always juuust wait a bit longer for something new and better. My old Xeon at home (which this build will replace) scores only 5 mil rays in the Corona benchmark, running on 64 GB of 3200 MHz RAM :D So no matter what, I will feel the upgrade. Last night I was tempted to just go with a 9900X and 2x48 GB and save a lot of money and probably headache. The 9900X is almost half the price of the 9950X3D, and I would still get over 2x the performance compared to my old system. With AI upscaling I don't see a huge need to render out more than 3-4K native, and for that I think the 9900X might be enough, with the saved money going toward a better GPU. At the end of the day it's only for the occasional freelance job, but I can feel the old PC at home is struggling, plus Win10 updates end this year and the machine is so old it isn't eligible to upgrade. Curious whether the 9000 series will have smoother IR and material editor performance than the 3990X Threadrippers we use at work.

I just looked and saw that I have exactly the memory described in that reddit post. I'll try to overclock it :D In any case, I'd advise getting at least a 9950X; even without X3D and heavy overclocking, it's a processor with exceptional performance for its power consumption.
I hope all the lags when working with it in Corona will be fixed soon.

By the way, I saw that in a neighboring topic you were choosing a case and a cooling system for a quiet build. I'm using the FD Meshify 2 XL with dark tempered glass (I'm also a fan of the "Big Black Box That Works"). There are no fancy RGB lights, it's very well ventilated, and there's enough room for two 420 radiators and any GPU.

I assume you're leaning toward air cooling, but I can recommend my setup. I'm using the Arctic Liquid Freezer III 420 without RGB lights, and it cost me under $100 on Amazon. I also bought 2 sets of 5 high-speed 140 mm Arctic P14 Max fans for $45 per set, and 2 Arctic fan controllers for $9 each to drive them. So 6 fans are on the AIO radiator in a push-pull configuration, 3 fans are in the front of the case for intake, and 1 more is in the back for exhaust.

So for ~$200 I got a cooling system that dissipates 260 watts of heat virtually silently. It would seem that 10 fans should make a lot of noise, and yes, at 3000 RPM it sounds like a jet engine. But the trick is that each fan is virtually silent up to 1000 RPM and even at those speeds moves quite a large volume of air (especially considering the number of fans), and the area of the large 420 x 38 mm radiator is enough to dissipate a lot of heat without really spinning up the fans. So under everyday load without rendering (web surfing, single-core applications, etc.) the fans are almost always off or stay within 600 RPM, and when rendering I raise the speed to 1000-1400 RPM and they remain very quiet.
Considering that I also have PBO enabled and the limits removed, I believe that at stock settings, within 200-220 W, it would run almost silently.

10
Hardware / Re: AMD 9950x
« on: 2025-03-19, 20:05:29 »
cheers marchik, good insight! Should (in theory) any 6000 MHz CL30 48 GB modules work in a 4x configuration? If you have to lower RAM speeds as you did, does that have a real-life performance impact in production work, or does it only matter for gamers? (I don't care about gaming performance.)

The reason I ask is that the modules I saw from the other reddit user, who managed to run 4x48 GB at 6000 MHz, are currently sold out.
I think that with timing selection and OC skills you can run a lot of RAM sticks at 6000 MT/s, but among "plug & play" solutions I only know that Biwin recently announced 192 GB DDR5-6000 and DDR5-6400 memory kits, though they are not yet on sale (and I'm not sure any motherboard can handle them). Personally, I use 4x48 GB VENGEANCE 6000 CL30 AMD EXPO CMK96GX5M2B6000Z30 (two different 2-stick kits) and it works quite well; I think with experience you could make them run at 6000, it would just take days or weeks of fine tuning.

Memory speed does not dramatically affect performance in our tasks, within 3% I would say. This is purely a race for numbers and ego.

For most users, a 4x48 GB set generally runs at 3600 or 4400, so there is no reason to complain. But almost all boards with 8+ PCB layers can now handle 5200 without problems, and that is already enough.

And by the way, I noticed that the latest BIOS updates for my motherboard claim support for 256 GB kits (4x64 GB), and I think I've already seen them on sale (I can't confirm for sure right now, I need to look into it in more detail), so soon we'll be chasing completely different DDR5 capacities and solving other problems :D

ps can you please provide a link to the reddit thread you were talking about?

11
Hardware / Re: AMD 9950x
« on: 2025-03-19, 02:38:06 »
thank you for the update, I think I will go for the 9950x3d too
In any case, I would recommend waiting for mass benchmark results. It is quite possible that reviewers currently just have selected chip samples and the entire advantage over the 9950X is explained by that (for the same reason, my own sample is just slightly worse than others). And I see no objective reason why it should score better than a 9950X. In the end, we don't know at what Curve Optimizer offsets and PBO limits, and with what cooling, the existing benchmarks were run, or what had to be done to reach frequencies of 5.45-5.5 GHz.

After all, this processor is mostly a marketing story: it is the most expensive in the line but offers gaming performance identical to the much cheaper 9800X3D, which means it will be in demand only among enthusiasts who need both productivity and gaming performance (in other words, those who simply want the best possible from the non-HEDT segment, since there is no objective reason for any other group of buyers to overpay for it), and AMD can create artificial hype around it for this reason (this certainly sounds like a conspiracy theory, but it definitely worked on me :D).

12
Hardware / Re: AMD 9950x
« on: 2025-03-17, 22:55:50 »
I recently bought my 9950x.

After some tests (testing with 4 of my scenes), the improvement over the 7950x is 16-18%.
(Scene 1: 16.63% improvement
Scene 2: 17.71% improvement
Scene 3: 22.32% improvement
Scene 4: 16.44% improvement)

BUT, I didn't have the 7950X overclocked because the temps were already high at stock.
AMD claims the 9950X is more efficient, so I enabled PBO, which led to a bigger difference.

Hope this helps.

How is life with the 9950x so far, Lupaz? I'm considering building a new system with that CPU. What amount of RAM and which motherboard are you using?

My short review: at first it worked great. I built it around an Asus ROG Strix X870E-E with 4x48 GB 6000 MT/s, which I was able to get working at 5200-5600 MT/s without errors.

The raw performance is amazing.
The only thing that bothered me was a slightly lower benchmark score than my friends get with the same CPU, even though my cooling system removes ~280 W of heat without a problem and I got a fairly good (Gold/Platinum) silicon sample with good curve offsets; I can't make it give a higher result no matter what I do.

I also ran into problems lately. I use Win11 23H2 with core parking disabled, and on the latest Corona updates my workflow with interactive rendering enabled became laggy (as it did for many other users of this CPU, as far as I know). In the end, only turning off the XMP profile and returning the RAM frequency to 3600 helped, which further reduced my benchmark result.

Now I'm looking at the 9950X3D; considering the tests appearing since the release (it shows even better results than the regular 9950X), I'll probably buy it for my main workstation and put my 9950X in the secondary one.

13
As marchik already said, you can bake procedural advanced wood to a texture, so it would be accessible to anyone in any app.

Yes, I know, but they are not high-quality textures. You cannot depend on Advanced Wood in interior production renderings.

Yes, they require some refinement after baking, using e.g. Substance Painter, but they offer a great start and I've used them a lot in interior production renderings. It all depends on the artist.

14
Any ideas on how to apply a texture to this leg without seams in 3ds Max, to achieve the same result as in the image, or close to it?


You can use the built-in Advanced Wood map or its OSL version with some modifications, and you can even bake it to a texture.

15
Hi Marchik, could you share this scene? (via PM/support portal/directly on the forum)
Thanks for the response, maru, of course! I edited the scene in such a way that the 2 problems are immediately visible.
