Author Topic: Sharing drive in Windows vs NAS  (Read 3423 times)

2021-10-14, 10:56:21

Marcin Pabich

  • Active Users
  • **
  • Posts: 11
    • View Profile
I have been a user of NAS-type servers for many years, but now everything is full and I am looking for a new solution to store mainly the libraries I need every day. I am tired of dedicated NAS solutions and mechanical HDDs.

I wanted to build my own custom, SSD-only NAS, but while doing research I came to the conclusion that the best components for building a server that can take multiple PCIe, M.2 and SATA drives are also the most expensive ones: AMD Threadripper Pro CPUs, HPTX or XL-ATX motherboards, Super Tower or Ultra Tower cases. These are the same parts we use to build our workstations, and it is not worth doubling the expense just to build a separate server.

Hence my question: maybe it is better to buy several SSD drives for the current workstation, move the libraries there and share them from Windows 10 Pro with the other computer in the studio? What do you think about it?
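For what it's worth, the Windows sharing part itself seems simple enough; a minimal sketch of what I have in mind is below (the share name, path and permissions are just examples, and it has to run from an elevated prompt):

```python
# Minimal sketch: publish a library folder as an SMB share on Windows 10 Pro.
# The folder path and share name are hypothetical examples; run as Administrator.
import subprocess

SHARE_NAME = "Libraries"      # example share name
FOLDER = r"D:\Libraries"      # example folder on the new SSD

# "net share" ships with Windows; /GRANT restricts what other users can do.
subprocess.run(
    ["net", "share", f"{SHARE_NAME}={FOLDER}", "/GRANT:Everyone,READ"],
    check=True,
)

# Print the share details back as a quick sanity check.
subprocess.run(["net", "share", SHARE_NAME], check=True)
```

The other computer would then simply map \\<my-pc-name>\Libraries as a network drive.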

Thanks for your feedback and help.

2021-10-14, 18:00:08
Reply #1

Ink Visual

  • Active Users
  • **
  • Posts: 159
    • View Profile
I think the main downside of the solution you described, using a single machine for both production and file-server purposes, is that it is more prone to failures than a single-purpose server.

- 3ds Max crashes on a daily basis,
- renderings freeze,
- there is often a need to install new software, etc.
In every scenario above, as soon as your machine needs a restart, the files you are sharing with others are no longer available. That often causes scene/rendering errors for the other users.

Also, the 100% CPU and RAM usage during rendering may slow down file transfers and read/save speeds for other users on the network.

So in my opinion, the sole purpose of a file server should be to store, share and back up files.
If you happen to build it powerful enough to use it sporadically as a render node too, there is probably nothing wrong with that; this is what we sometimes do in our studio.

Perhaps tell us more about why NAS solutions and HDDs are not working for you anymore?

Our file server is a Windows machine built specifically for this purpose. It has an i9-9900K overclocked to 4.9 GHz on an ASRock Z390 motherboard, with 64 GB of RAM.
Probably a bit of overkill, but as I mentioned, we wanted the option to use it as a render node from time to time.
We keep only the asset library on a fast M.2 drive.
All the projects are stored on 7200 rpm HDDs, and we're rather happy with scene opening/saving and data transfer speeds.

2021-10-14, 19:36:39
Reply #2

Vuk

  • Active Users
  • **
  • Posts: 113
    • View Profile
@Ink Visual - everything you said seems spot on to me. We use a QNAP NAS in our office and, for now, it works fine.

We also use both HDDs in RAID 5 and NVMe SSDs in RAID 0 for our project and asset shares, and so far we haven't spotted any difference in speed between the HDD RAID and the SSD RAID in terms of scene loading, DR server machine loading, render pass saving, etc. The only speed difference is in the sequential "department", but pretty much all the work we do consists of random read and write tasks. We all have 10 GbE NICs and a 10 GbE switch.
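
If you want to sanity-check that on your own shares, a rough sketch like the one below is enough to see the difference between a sequential pass and scattered 1 MiB reads (the UNC path is just a placeholder, and OS/NAS caching means the numbers are only indicative):

```python
# Rough comparison of sequential vs random read speed on a network share.
# The path below is a placeholder; point it at a large file on your own NAS.
import os
import random
import time

PATH = r"\\nas\projects\big_test_file.max"   # placeholder, ideally a 10+ GB file
BLOCK = 1024 * 1024                          # 1 MiB per read

def sequential_read(path):
    start, total = time.perf_counter(), 0
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            total += len(chunk)
    return total / (time.perf_counter() - start) / 1e6   # MB/s

def random_read(path, samples=512):
    size = os.path.getsize(path)
    start, total = time.perf_counter(), 0
    with open(path, "rb") as f:
        for _ in range(samples):
            f.seek(random.randrange(0, max(1, size - BLOCK)))
            total += len(f.read(BLOCK))
    return total / (time.perf_counter() - start) / 1e6   # MB/s

print(f"sequential: {sequential_read(PATH):.0f} MB/s")
print(f"random:     {random_read(PATH):.0f} MB/s")
```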

@Ink Visual - Would you mind telling me how many people access the Windows machine you built as a server? Is it running Windows Server or a normal Windows edition? And what is your setup in terms of HDDs, SSDs and RAID?

2021-10-19, 11:11:27
Reply #3

dia.interactive

  • Active Users
  • **
  • Posts: 6
    • View Profile
We also use both HDDs in RAID 5 and NVMe SSDs in RAID 0

Hi Vuk, could you please describe your NAS setup in more detail? I have a QNAP TS-473 with HDDs plus a 256 GB M.2 SSD for cache acceleration, and I am not so happy with the random read/write speed of my system because of the HDD limitation...


2021-10-19, 16:51:07
Reply #4

Vuk

  • Active Users
  • **
  • Posts: 113
    • View Profile
My NAS setup is as follows:

1. 6x 10 TB IronWolf drives in RAID 5 (for models, assets, backups of old projects, Photoshop assets, etc.) - in the drive bays of the NAS

2. 2x 1 TB Samsung enterprise 2.5" SATA SSDs, no RAID (just for textures) - in the drive bays of the NAS

3. 4x 2 TB Samsung 970 EVO Plus M.2 NVMe SSDs in RAID 0 (for all current projects and project-related files) - all mounted on a QM2 quad NVMe SSD card in the PCIe Gen 3 x8 slot of the NAS

4. 1x 512 GB Samsung 860 EVO M.2 SATA SSD in RAID 1 for the NAS operating system (M.2 slots on the motherboard of the NAS)

I don't use caching; I haven't tried it, to be honest, but I know a few people who tried it and had problems with it, especially with a read-write cache once the SSDs get filled up. They tend to become super sluggish and start affecting the network badly.

2021-10-19, 19:20:02
Reply #5

dia.interactive

  • Active Users
  • **
  • Posts: 6
    • View Profile
My NAS setup is as follows:

1. 6x 10 TB IronWolf drives in RAID 5 (for models, assets, backups of old projects, Photoshop assets, etc.) - in the drive bays of the NAS

2. 2x 1 TB Samsung enterprise 2.5" SATA SSDs, no RAID (just for textures) - in the drive bays of the NAS

3. 4x 2 TB Samsung 970 EVO Plus M.2 NVMe SSDs in RAID 0 (for all current projects and project-related files) - all mounted on a QM2 quad NVMe SSD card in the PCIe Gen 3 x8 slot of the NAS

4. 1x 512 GB Samsung 860 EVO M.2 SATA SSD in RAID 1 for the NAS operating system (M.2 slots on the motherboard of the NAS)

I don't use caching; I haven't tried it, to be honest, but I know a few people who tried it and had problems with it, especially with a read-write cache once the SSDs get filled up. They tend to become super sluggish and start affecting the network badly.

Wow! Thank you so much for such a detailed description. So in fact you are using HDDs only for the stuff that is not constantly in use, like 3D models or assets, which are loaded into the scene only once and don't need to be accessed constantly, unlike textures or HDRIs. In my current setup I have everything on HDDs in RAID 1...

Which NAS model do you use?

2021-10-19, 19:39:32
Reply #6

Vuk

  • Active Users
  • **
  • Posts: 113
    • View Profile
QNAP TS-1277. The initial idea was to load everything from the NVMe RAID 0 (mainly proxies and textures), but since we have most of the models on the RAID 5 we usually keep our model/asset textures there too, so loading usually happens from both RAIDs (the NVMe and the HDD one) and the single SSDs.

2021-10-20, 17:52:36
Reply #7

dia.interactive

  • Active Users
  • **
  • Posts: 6
    • View Profile
QNAP TS-1277. The initial idea was to load everything from the NVMe RAID 0 (mainly proxies and textures), but since we have most of the models on the RAID 5 we usually keep our model/asset textures there too, so loading usually happens from both RAIDs (the NVMe and the HDD one) and the single SSDs.
I realized my mistake in keeping all files only on HDDs. I just ordered a QNAP QM2-2P-344 and two Samsung 970 EVO Plus 2 TB drives today. I am planning to use them in RAID 1 for both current project files and textures, HDRIs, proxies, etc. Is this a good or a bad idea?

2021-10-21, 11:05:25
Reply #8

Vuk

  • Active Users
  • **
  • Posts: 113
    • View Profile
@dia.interactive - It is not a mistake to keep files on HDDs, and if you are on a 1 GbE network there is no real point in investing in an all-SSD setup, since you will be limited by the network to 100-110 MB/s.

Be sure to upgrade your network with a faster switch (2.5, 5 or 10 GbE) in order to fully benefit from the setup you are about to install. Also use a NIC in your PC, or any other port, that is faster than 1 GbE (pretty much all new motherboards today offer 2.5 GbE by default). RAID 1 sounds OK, but keep in mind you will only have the capacity of a single SSD, so check whether that is enough for you.
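
To put rough numbers on it (back-of-envelope only; real SMB transfers depend on protocol overhead, drives and CPU):

```python
# Back-of-envelope throughput ceilings for common link speeds.
# The ~10% overhead figure is a rough assumption, not a measurement.
for gbit in (1, 2.5, 5, 10):
    raw_mb_s = gbit * 1000 / 8        # line rate converted to MB/s
    practical = raw_mb_s * 0.9        # assumed Ethernet/TCP/SMB overhead
    print(f"{gbit:>4} GbE: ~{raw_mb_s:.0f} MB/s raw, ~{practical:.0f} MB/s in practice")
```

That is why a 1 GbE office tops out around 110 MB/s no matter how fast the drives are, while 10 GbE lands around the 1,100 MB/s mark.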

I am using RAID 0 on the SSDs in order to get the maximum capacity, but I run two daily backups of the entire SSD RAID over to the HDD RAID 5, and also a nightly offsite backup to the NAS located at my house.
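
QNAP's own backup/sync apps can handle this on the NAS itself; if you prefer to script it from a Windows box instead, a mirror job along these lines works (the share paths are placeholders):

```python
# Sketch of a daily mirror from the NVMe project share to the HDD RAID 5.
# The source and destination paths are placeholders; schedule the script
# with Windows Task Scheduler (or use the NAS's own backup jobs instead).
import subprocess

SRC = r"\\nas\projects"          # placeholder: NVMe RAID 0 share
DST = r"\\nas\backup\projects"   # placeholder: folder on the HDD RAID 5

# robocopy ships with Windows. /MIR mirrors the tree (including deletions),
# /R:2 /W:5 keeps a locked file from stalling the whole job,
# /NP suppresses per-file progress, /LOG+ appends to a log file.
subprocess.run(
    ["robocopy", SRC, DST, "/MIR", "/R:2", "/W:5", "/NP", "/LOG+:backup.log"],
    check=False,   # robocopy exit codes 0-7 mean success or partial copy, not failure
)
```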

2021-10-24, 21:49:03
Reply #9

dia.interactive

  • Active Users
  • **
  • Posts: 6
    • View Profile
@Vuk

I had a QNAP 10 GbE PCIe card in both the NAS and my old PC, but I just upgraded my old PC (1950X, 32 GB, 960 PRO) to a new one (3970X, 128 GB CL15, 980 PRO), so on the PC side I now have 10 GbE by default on my ASUS ROG Zenith Extreme Alpha.

I just set up the new QNAP QM2-2P-344 with the two 970 EVO Plus 2 TB drives. I decided to go with RAID 0 plus a daily backup to HDD as well. I just tested copying a big .max file from the PC to the SSD RAID 0 and I am only getting between 800-900 MB per second. Is that normal? AFAIK PCIe Gen 3 x4 can do about 4 GB/s... What is your transfer speed between the QM2 SSDs and your PC?

2021-10-25, 13:46:13
Reply #10

Vuk

  • Active Users
  • **
  • Posts: 113
    • View Profile
@ dia.interactive

Try copying a bigger file, for example a single Blu-ray movie or any other single file that is at least 10-20 GB. I suppose your .max file was not even 3 GB, so the copy finished too quickly to reach the full ~1,100 MB/s.

Also, try setting Jumbo Packet to 9014 bytes: in Device Manager, open the properties of the 10 GbE network adapter and you will find Jumbo Packet under the Advanced tab. Make sure to set it on the NAS side as well.
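
After enabling it on both ends, a quick way to confirm that jumbo frames actually make it across the switch is a don't-fragment ping from the PC (the NAS address below is a placeholder):

```python
# Verify end-to-end jumbo frame support from a Windows PC.
# 8972 = 9000-byte MTU minus 28 bytes of IP + ICMP headers; the NAS IP is a placeholder.
import subprocess

NAS_IP = "192.168.1.50"   # placeholder address of the NAS

# Windows ping: -f sets the "don't fragment" flag, -l sets the payload size.
# If this reports that the packet needs to be fragmented even after enabling
# 9014-byte jumbo packets on both ends, something in the path (often the switch)
# is still running a 1500-byte MTU.
subprocess.run(["ping", "-f", "-l", "8972", NAS_IP], check=False)
```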

Lastly, the 10 GbE NIC you have is integrated into the motherboard rather than being a separate network card. 10 GbE NICs tend to overheat easily, which is why you see a lot of them with heatsinks and fans; maybe that speed is simply the maximum of your integrated card. Just for reference, I have had a few ASUS XG-C100C NICs and also a few QNAP NICs, and they both use the same Aquantia controller.
The QNAP cards seem to be better optimized and run cooler to the touch, whereas the ASUS cards would overheat easily; the QNAP card was always faster and would reach the speed limit, while the ASUS NIC was always a bit slower.

At the end of the day, 900 MB per second is quite a good result and not something you should ultimately worry about :).