Author Topic: 10 Gigabit Ethernet Performance

2018-07-29, 00:31:36

Dalton Watts

Hello guys,

So, I was thinking about buying another dual-socket workstation and connecting it directly to my other dual Xeon workstation via 10 GbE PCIe cards to further improve speed.

What performance should I expect from this sort of setup? Will the slave computer react swiftly (as in, join the rendering) just like the main one when dealing with interactive rendering, for instance?

If so, what PCIe cards would you recommend? The ASUS XG-C100C maybe? Thanks!

2018-08-01, 23:35:51
Reply #1

Dalton Watts

Anyone...? :)

How does Corona handle this type of setup regarding interactive rendering?

Also, will the slave participate in the initial rendering pass as quickly as the main workstation when starting a production render?

2018-08-02, 12:36:08
Reply #2

maru

Hi, here are some answers:
1) Corona does not support any kind of interactive rendering + distributed rendering combo. One of the main reasons is that we want to keep IR as smooth as possible, and using DR with it would introduce lag. On the other hand, we know that some other renderers offer this, so it might be added in the future (no specific plans for now, though, so definitely the distant future).
2) DR is not instant, especially with heavy scenes. The scene has to be sent to the node computers and 3ds Max has to start, which takes some time. With an extremely simple scene I am guessing you can get multiple computers working on the same image within a few seconds (10 maybe?). With heavier scenes it can be much slower, depending for example on the amount of assets that have to be sent to the node PCs. (Note that assets are sent from the master to the nodes only when necessary.)
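To make "not instant" concrete, here is a rough back-of-envelope sketch of where node startup time can go. Every number is an illustrative guess, not a measured Corona figure; note that a faster link only shrinks the transfer term, not the launch or parsing terms:

```python
# Rough, illustrative model of DR startup overhead per render node.
# Every number below is an assumption for the example, not a measured
# Corona figure.

GOODPUT_GBIT = {"1GbE": 0.94, "10GbE": 9.4}  # realistic TCP goodput

def transfer_seconds(payload_gb: float, link: str) -> float:
    """Seconds to move payload_gb gigabytes over the given link."""
    return payload_gb * 8 / GOODPUT_GBIT[link]

assets_gb = 4.0       # textures/proxies that must reach the node (guess)
max_launch_s = 30.0   # 3ds Max start + scene open on the node (guess)
parsing_s = 60.0      # scene parsing on the node (guess)

for link in ("1GbE", "10GbE"):
    total = max_launch_s + transfer_seconds(assets_gb, link) + parsing_s
    print(f"{link}: ~{total:.0f}s before the node contributes passes")
```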

Other than that, I cannot really tell you how specific hardware with specific bandwidth will behave, as I do not have experience with this. Maybe someone else can share their findings.
Marcin Miodek | chaos-corona.com
3D Support Team Lead - Corona | contact us

2018-08-02, 21:10:49
Reply #3

Dalton Watts

Thank you for your clear answer, Marcin!

It would be great to hear from anybody with direct experience of a similar setup. Here I am thinking I could connect the upcoming Threadripper 2990WX to my dual 2696 v3 setup and gather 10k+ Cinebench power under one roof (kind of...) over 10 GbE.

Do you know if V-Ray IPR works in tandem with DR?

2018-08-23, 17:54:46
Reply #4

Juraj

I've been running a 10 GbE setup for two years now. It's a drastic improvement and, imho, an absolute must for distributed rendering.

With 1 GbE, distributed rendering would regularly grind to a complete stop because the large .exr files being sent are simply too big. This happens if you have more than one node and render something like 8K.
Making the update intervals longer or lowering the amount of pixels sent defeats the usability of distributed rendering, so I never opted for that.
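Back-of-envelope numbers make the saturation plausible. A minimal sketch, assuming uncompressed 32-bit float data (real EXR/CXR files compress, so treat this as an upper bound) and an assumed handful of render elements:

```python
# Why 8K distributed rendering chokes a 1 Gbit link: size of one
# full-image update, assuming uncompressed 32-bit float passes.
width, height = 7680, 4320   # "8K"
channels = 4                 # RGBA
bytes_per_channel = 4        # 32-bit float
passes = 6                   # beauty + a few render elements (guess)

update_bytes = width * height * channels * bytes_per_channel * passes
print(f"one update: ~{update_bytes / 1e9:.1f} GB")

for name, goodput_mb_s in (("1GbE", 117), ("10GbE", 1170)):
    seconds = update_bytes / (goodput_mb_s * 1e6)
    print(f"{name}: ~{seconds:.0f}s per node per update")
```

With several nodes each pushing an update like that every minute, a 1 Gbit link can easily be busy full-time, which fits the stalls described above.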

DR is also still too slow to start, and I don't know why. The scenes on slaves always take something like 10 times longer in the "pre-pass" stages than they do on the workstation. I always check Task Manager and see the damn slave using one core for about 2 minutes before it moves to the next part of the pre-pass and finally starts to render.
I basically gave up on using DR for test renders; it's uselessly slow, so I only use it for finals. I don't have the 5 minutes it usually takes before it starts sending passes.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2018-08-24, 16:06:51
Reply #5

Dalton Watts

Those were precisely my fears, Juraj! Thanks for chipping in!

I reckon my current setup is ideal for dealing with IR. With the recent addition of "denoise during render" in the new daily builds, I can't even justify buying a new GTX 1080 Ti because, to me, the Corona denoiser is pretty close to real-time feedback. I guess I'll just have to wait until CPU technology doubles in performance :) Or maybe GPUs will catch up quicker.

2018-08-24, 16:16:31
Reply #6

Juraj

Early next week I will complete the 32-core 2990WX build for Veronika and then slap in some heavy GPU: either a 1080 Ti, which can now be bought for 500 euros, or maybe wait for the overpriced 2080/2080 Ti. I am really interested in how fast this will make previews with the NVIDIA OptiX denoiser during interactive rendering; I have high hopes it will be quite responsive.

Anyway, 10 GbE is a great improvement for other workstation tasks, like saving 3ds Max scenes onto the fileserver, loading assets from Connecter, or multiple users accessing the fileserver at once. If you have a fileserver with SSDs, 10 GbE is an absolute must.

I didn't answer your last question: the Asus (made by Aquantia) is a fine card as long as you adjust one setting which I can never remember off the top of my head (under the IPv6 settings). The Intel is superior, though, but you need to order it from China to get a good price.

2018-08-24, 17:13:45
Reply #7

arqrenderz

We bought the Asus 10-gigabit cards and found that the Aquantia drivers work wonders :) ! Really cheap card, and running perfectly!

2018-08-24, 17:49:27
Reply #8

Juraj

From my testing, the Intel card maintained full speed on all three cable standards, Cat 5e/6/6a, across a 20-metre cable (no reason to test Cat 7, as it is a fake standard for audio).
The Asus/Aquantia required Cat 6a to maintain full top speed at that length, and I didn't get it to 10 GbE at all on Cat 5e.

If you don't want to upgrade all your cabling, I would strongly recommend the Intel cards. The price is identical (100 euros +/-); you just need to wait two weeks to get it, mostly from Hong Kong.

2018-08-24, 18:37:51
Reply #9

Dalton Watts

Which GTX 1080 Ti would you buy if you chose that route, Juraj? I've heard good things about the EVGA GTX 1080 Ti SC2 11GB.

I assume the 500€ is without tax, right?

2018-08-24, 18:55:06
Reply #10

Juraj

Absolutely any, I do not care :- ). None of their differences are interesting to me, not even blower vs. dual/triple fan. I might google whether a card has excessive coil whine, but that would be it.

I would just take the cheapest. The 500 euros is with VAT, but second-hand. I don't see the brand-new ones being heavily discounted yet, if they ever will be.

2018-08-25, 15:12:05
Reply #11

arqrenderz

Juraj, where do you find the Intel cards at ~$100? Thx

2018-08-25, 15:23:00
Reply #12

Juraj

I buy everything from eBay myself for the buyer's protection, but if you're adventurous you can get it cheaper by more direct means, i.e. AliExpress, etc.

119 euros, free shipping. https://www.ebay.de/itm/OEM-Intel-X540-T110-Gigabit-10GBe-10Gbit-Dual-Port-Converged-Server-Adapter-PCIe/142201958859?hash=item211be5b1cb:g:TKsAAOSwcUBYRX1R


2018-08-25, 20:02:40
Reply #13

arqrenderz

Thx Juraj!
I'm waiting on your verdict on Threadripper 2 (BSODs, memory compatibility and so on); I kind of need a new workstation or render node...

2018-08-25, 20:14:25
Reply #14

Juraj

I wanted to build it this weekend, but I am still waiting for the MSI board :- (.

I ordered from the one shop in Europe which claimed it would have a sample sooner, but it looks like they blatantly lied to me and will ship it on 30th August like everyone else.
So next weekend it will be (I hope!)


2018-08-27, 11:33:42
Reply #15

hrvojezg00

Quote from: Juraj on 2018-08-23, 17:54:46

In my experience, parsing on slaves depends heavily on the total (archive) size of the file being rendered. A lot of 8K textures and the like take time to read. Also, after the first render of a scene, every subsequent one starts sooner thanks to the assets already downloaded on each slave. Personally, I use all slaves on all drafts/finals, and it rarely takes more than 2 minutes for all slaves to start rendering. DDR3 memory fills up a lot slower than DDR4; Juraj, how many slaves with DDR3 do you have?

2018-08-27, 11:51:25
Reply #16

Juraj

All my slaves are DDR4 (2400 MHz CL13 ECC), SSD, 2698 v4.

Yeah, small scenes can take 2 minutes, but even that is too long. The scene transfer takes under a second and 3ds Max is already open, yet for some reason "downloading assets" takes rather long despite instant access to the 10 GbE fileserver (a mystery to me as well), and then the endless single-threaded parsing happens.
The whole thing is much faster if I just open the scene and hit render, so something is much slower when it's driven externally by DR. I am most baffled by the long period of single-thread usage.

((Why is it downloading assets at all? Does it mean "reading assets"? Because I don't have anything stored locally; everything is linked from the fileserver.))

So the fastest starts are 2 minutes, but the average before the first pass arrives is 4-5 minutes. Yes, my scenes are rather big, but that just highlights how inefficient the process is.

In 4-5 minutes I want to have a fully rendered preview, not to be waiting for passes to start coming. I'll see how much the 2990WX + 1080 Ti/2080 Ti helps with this process.


PS: Hrvoje, absolutely kickass NYC project (Kent)!! Top grade

2018-08-27, 12:56:20
Reply #17

hrvojezg00

Quote from: Juraj on 2018-08-27, 11:51:25

Agreed, I hope the dev team will work it out! Corona is very power-hungry, so flawless DR is a must!

Quote from: Juraj
PS: Hrvoje, absolutely kickass NYC project (Kent)!! Top grade

Thanks, much appreciated!

2021-05-04, 12:29:24
Reply #18

EmerSharif

Hi guys,

Sorry for waking up this thread!
Like you, I was tired of slow 3ds Max opening and saving of huge scenes, slow "downloading assets" on DR servers, etc.

Last week I tried a 10 Gb network with SFP+ cables instead of Ethernet.

Over the network in Windows everything is extremely fast; the 10 Gb bandwidth is fully used while transferring data (we keep working assets and the library on fast NVMe drives on the server).

BUT in 3ds Max (2020) it's like hell!!! Every action that needs network access is slower than anything I have ever seen (refreshing thumbnails in the material library, loading scenes, etc.). It's throttling down to a few Mbit/s during transfers?!

I tried a few different settings on the network cards and the switch, like jumbo frames at multiple sizes.
Nothing helps!
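One way to separate "the link is slow" from "3ds Max is slow" is a plain TCP throughput test between the two machines. A minimal sketch (the port and transfer size are arbitrary placeholders): if this reports close to line rate while 3ds Max still crawls, the NIC, cabling and jumbo-frame settings are fine and the bottleneck is the application layer.

```python
# Minimal raw-TCP throughput test. Run `python nettest.py recv` on one
# machine and `python nettest.py <receiver-ip>` on the other.
import socket
import sys
import time

PORT = 50007             # arbitrary free port
CHUNK = bytes(1 << 20)   # 1 MiB of zeros
TOTAL_BYTES = 5 * 10**9  # send ~5 GB

def receiver():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(1 << 20):   # drain until sender closes
                pass

def sender(host):
    sent = 0
    start = time.time()
    with socket.create_connection((host, PORT)) as s:
        while sent < TOTAL_BYTES:
            s.sendall(CHUNK)
            sent += len(CHUNK)
    elapsed = time.time() - start
    print(f"{sent / elapsed / 1e6:.0f} MB/s "
          f"({sent * 8 / elapsed / 1e9:.2f} Gbit/s)")

if __name__ == "__main__":
    receiver() if sys.argv[1] == "recv" else sender(sys.argv[1])
```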

I'm really tired of 3ds Max sometimes...

Any ideas?

2021-05-04, 13:47:51
Reply #19

alexyork

We also tried 10 GbE but found little to no real-world improvement in 3ds Max reading/writing, or even Photoshop for that matter. Some read-performance benefits here and there, but nothing anywhere near worthwhile compared to the cost. As you say, Max itself appears to be the main bottleneck. For things like DR or video editing that rely heavily on file transfer, it might be well worth it.

But maybe others have had more success with this lately?
Alex York
Partner
RECENT SPACES
recentspaces.com

2021-05-04, 14:09:35
Reply #20

Dalton Watts

I've given up on DR since 2016; that's when I bought my dual Xeon 2699 v3 workstation. But I only tried DR over 1 GbE, and it generally took too long for the slave(s) to kick in. For those with 10 GbE setups: how much time does a slave take to actively participate in rendering?

I've also given up on centralizing every file on a NAS (over 1 GbE). It was painfully slow for Max to read.

So my next best bet would be to hook up a 5950X workstation to my dual Xeon 2699 v3 over 10 GbE, with files stored on the main workstation, but I'm unsure how long the slave would take to join in.



2021-05-04, 15:20:36
Reply #21

EmerSharif

Quote from: alexyork on 2021-05-04, 13:47:51

Hi Alex,

exactly the same feeling here: much money for... almost nothing! 3ds Max and network access with huge scenes and assets is just hell.
The best thing we have tried here, technically, is to aggregate multiple gigabit Ethernet ports (4 × 1 Gbit) into a single 4 Gbit link, as sketched below. It works pretty nicely and is easy to set up under Windows Server 2019. It helps the server send data to the render slaves while they are computing DR or animation jobs.
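One caveat worth flagging with teaming: standard link aggregation (LACP-style) hashes each connection onto a single member link, so any one file transfer still tops out at about 1 Gbit/s; the aggregate gain only shows up with several concurrent streams, e.g. multiple render slaves pulling assets at once. A toy illustration, assuming a simple per-connection hash:

```python
# Toy model: LACP-style teaming pins each flow (connection) to one
# member link via a hash, so one transfer never exceeds a single
# link's speed; gains require multiple concurrent flows.
LINKS = 4        # 4 x 1 Gbit team
LINK_GBIT = 1.0

def link_for_flow(src_port: int, dst_port: int) -> int:
    return hash((src_port, dst_port)) % LINKS

flows = [(50000 + i, 445) for i in range(8)]  # e.g. 8 SMB transfers
per_link = [0] * LINKS
for f in flows:
    per_link[link_for_flow(*f)] += 1

for i, n in enumerate(per_link):
    print(f"link {i}: {n} flow(s) sharing {LINK_GBIT:.0f} Gbit")
print(f"single-flow ceiling: {LINK_GBIT:.0f} Gbit/s, regardless of team size")
```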

I'm going to send back all our 10 Gbit hardware.


Best,

M.

2021-05-05, 19:05:07
Reply #22

Vuk

I have the same experience. I am using a RAID 0 NVMe storage pool consisting of 4 × 2 TB 970 Evo Plus drives just for the PROJECTS pool, with all the workstations running 10 GbE NICs and 10 GbE cabling throughout the whole office.
The 10 GbE speeds are completely saturated during simple copy/paste file transfers, yet in real-world use, at least for us, there is absolutely no difference.

Now I could sit here and write five more pages on how I tried several configurations, from RAID 5 and 6 on standard HDDs to single SSDs and finally a RAID of NVMe M.2 SSDs, but this should be kept as simple as possible. I assume it would only be interesting to the few geeks among us; most people wouldn't follow, and I don't blame them. I lost a lot of time learning all this myself, and in the end it turned out to be a complete disappointment for both my performance and my wallet...

I tried every possible test that suits our workflow: scene loading times vs 1 GbE, render-node loading times vs 1 GbE, Photoshop, you name it :). As I wrote before, the most important thing is always the CPU; it comes ahead of everything else (network, SSD, RAM). The faster the CPU, the faster the loading times :).
Just by swapping the old Xeons in our farm for newer Threadrippers we saw a massive gain in performance. Those machines run at a much higher boost clock than the Intel Xeons and load heavy scenes (we had files around 3-4 GB in size) much faster. A quick real-world example: we used a dual Xeon Platinum 2 × 24-core machine as a Backburner server with DR enabled. Compared with the 3970X, this machine sometimes took up to 15 minutes just to load a scene and start rendering, while the 3970X loaded the same scene in 5 minutes. We had to deliver 18 images that day at 5K resolution; 18 images times 10 minutes of extra loading on the Xeon machine is 180 minutes spent just loading the same scene 18 times over on a different machine. In my book, 180 minutes is a lot of time lost in an 8-hour working day.

I haven't compared exporting the passes at the end of the render and saving those big CXR files. But at the end of the day, how many times do you do that during your whole working day? It's not even a tiny 1% of your time, so who cares if it takes a few seconds more, right?

I burned a lot of cash on this 10 GbE venture, and to all the people who plan on doing the same with the same software (99% of us on this forum use 3ds Max, Corona and Photoshop), mainly for 3D visualization, I say: save your money and invest in a high-end workstation or a render node instead of wasting it on something that won't bring any real benefit.

Now I suppose Juraj will write (as he has before :P) that I am completely wrong about this and that he sees some amazing performance gains. But from what I can tell I am not the only one complaining about this, and I have no real reason to lie either :). On the other hand, if you do video editing then this is definitely the route to go, since those tasks mainly use sequential read and write operations that fully benefit from 10 GbE speeds.





2021-05-31, 17:31:57
Reply #23

arqrenderz

Hi Vuk, thanks for all the details. In my case, scene parsing times got 10 to 15 seconds faster on a 10 Gb network vs 1 Gb. I think it's a good thing to have on a server that has to share its files with multiple users; with just a 1 Gb link the server will saturate in a second. One thing that clearly improved was 3ds Max saving and loading times on a saturated network.

2021-06-09, 19:07:13
Reply #24

Juraj

A 10G network will not increase performance anywhere unless there was previously a bottleneck.

Since for reading/writing assets from 3ds Max/Photoshop/etc. the main bottleneck is the software itself, you won't see much faster loading/saving.
For distributed rendering alone, 10G is a massive benefit. When streaming 8K renders, with each node sending some sizeable fraction of the megapixels every 60 seconds, even a single node can reach the limits of a 1 Gbit link, let alone multiple.
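A rough utilization check, reusing the earlier 8K estimate; the refresh fraction, interval and node count are illustrative guesses:

```python
# Link utilization when N nodes each stream part of an 8K multi-pass
# frame every update interval. All inputs are illustrative guesses.
frame_gb = 3.2     # full 8K multi-pass update (see earlier estimate)
fraction = 0.3     # portion of the frame refreshed per interval
interval_s = 60.0
nodes = 3

for name, goodput_gbit in (("1GbE", 0.94), ("10GbE", 9.4)):
    send_s = nodes * frame_gb * fraction * 8 / goodput_gbit
    print(f"{name}: link busy {send_s / interval_s:.0%} of each "
          f"{interval_s:.0f}s interval")
```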

The network is like memory: unless you previously ran into the wall, increasing it will not do anything. There is nothing magical about it.