Author Topic: Trying out an m.2 quad card with 4xNVME and Raid0  (Read 2501 times)

2020-10-16, 21:28:32

danio1011

  • Active Users
  • **
  • Posts: 361
    • View Profile
I'm planning on using four M.2 NVMe drives (1TB 970 Evo Plus) in RAID0 for some high-speed storage in Windows Server 2019, shared across 10GbE.  It will hold mostly well-backed-up assets and scenes, and I'm confident in the NVMe drives and my backup routines for this type of data.

The mobo is an ASRock X399 Taichi, which has 3 M.2 slots.  I also have a couple of actively cooled PCIe M.2 quad cards, both of which were included with some recent TRX40 boards.  They support up to PCIe 4.0, but the X399 board is PCIe 3.0.

Right now I've got all four drives on the M.2 quad card in a PCIe 3.0 slot, with the slot set to 4x4x4x4 bifurcation in the BIOS.  The array is a Windows Storage Spaces RAID0 pool (software RAID) formatted as ReFS, which I hear is better than NTFS.  The only downside is that you can't boot from it, but I don't care about that since I'm booting off a simple SSD on SATA3.  Obviously the 10GbE will be the bottleneck, but hey...why not go as fast as possible :)
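Here's a quick back-of-the-envelope sketch of the ceilings I'm expecting from this layout. The rated drive speed is an assumption from the spec sheet, not a measurement:

```python
# Sanity check on the 4x4x4x4 bifurcation: does each drive get enough lanes,
# and what is the aggregate RAID0 sequential ceiling?

PCIE3_LANE_MBPS = 985            # usable bandwidth per PCIe 3.0 lane
LANES_PER_DRIVE = 4              # x16 slot bifurcated into 4x4x4x4
DRIVE_SEQ_READ_MBPS = 3500       # assumed rated seq. read, 1TB 970 Evo Plus
DRIVES = 4

per_drive_link = PCIE3_LANE_MBPS * LANES_PER_DRIVE   # ~3940 MB/s per drive
raid0_ceiling = min(per_drive_link, DRIVE_SEQ_READ_MBPS) * DRIVES

print(f"Link per drive:    {per_drive_link} MB/s (x4 doesn't starve the drive)")
print(f"RAID0 seq ceiling: {raid0_ceiling} MB/s across the array")
print(f"10GbE ceiling:     {10_000 / 8:.0f} MB/s  <- the real limit over the network")
```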

I'm attaching the CrystalDiskMark results.  The Q8T1 sequential numbers seem great (!) to me.  The Q1T1 random result seems a bit low, though.  Do any of these numbers seem way out of range or screwed up?
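For reference, here's a rough sketch of why Q1T1 random 4K tends to look low: at queue depth 1 the drive serves one request at a time, so throughput is bounded by per-request latency. The 80 µs figure is just an illustrative assumption for a consumer NVMe drive:

```python
# QD1 random 4K throughput is latency-bound: throughput = request size / latency.

REQUEST_BYTES = 4 * 1024          # CrystalDiskMark's 4 KiB random test
LATENCY_S = 80e-6                 # assumed round-trip latency per request

iops = 1 / LATENCY_S                              # ~12,500 IOPS at QD1
throughput_mbps = iops * REQUEST_BYTES / 1e6      # ~51 MB/s

print(f"QD1 IOPS:       {iops:,.0f}")
print(f"QD1 throughput: {throughput_mbps:.0f} MB/s")
# Note that RAID0 striping doesn't help here: each 4 KiB request still lands
# on a single drive, so QD1 latency (and throughput) is unchanged by the array.
```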

Is putting all 4 drives on one PCIe card a mistake?  I have 3 on-board slots to use, which means I could put just 1 NVMe drive on the quad card instead.  That said, I like the simplicity of one storage pool on one PCIe card, and it is actively cooled with a fan and a large heatsink, which is nice.  The Taichi motherboard's M.2 slots have neither fans nor heatsinks.

Anyway, just trying to gut check this setup before I dive too deeply into it and build it into my workflow on my new file server. 

Thanks for any feedback.
Daniel
« Last Edit: 2020-10-16, 21:41:15 by danio1011 »

2020-10-17, 14:38:34
Reply #1

Juraj

  • Moderator
  • Active Users
  • ***
  • Posts: 4743
    • View Profile
    • studio website
RAIDing SSDs is simply wasted effort. You aren't increasing anything meaningful apart from sequential reads of large files.
It even increases latency.

10Gbit bottlenecks at about 1200 MB/s, which is a fraction of a single PCIe 3.0 SSD's speed. What is the point of building an array roughly 10x faster than that :- ) ?
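A rough sketch of those numbers (the single-SSD figure is an assumed rated speed for a typical PCIe 3.0 x4 drive):

```python
# The network, not the array, sets the ceiling for clients on 10GbE.

ten_gbe_mbps = 10_000 / 8        # 1250 MB/s line rate, before protocol overhead
single_ssd_mbps = 3500           # assumed seq. read of one PCIe 3.0 x4 NVMe SSD

print(f"10GbE line rate:     {ten_gbe_mbps:.0f} MB/s")
print(f"Fraction of one SSD: {ten_gbe_mbps / single_ssd_mbps:.0%}")
# One drive already saturates the link roughly 3x over; striping four of them
# multiplies bandwidth the 10GbE connection can never deliver.
```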

A single enterprise-grade SSD with a million IOPS will run circles around these setups for the same price (or less), with zero drawbacks and superb data security.
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2020-10-17, 16:34:10
Reply #2

danio1011

  • Active Users
  • **
  • Posts: 361
    • View Profile
Actually, the first reason I looked into RAID0 here was to pool these 1TB drives, which I already had, into one storage space with one drive letter.  Is there another way to do that besides RAID?  But yeah, otherwise I agree with you for sure that the speed is gratuitous.

Would enterprise SSDs give me better performance?  10GbE is still the limiting factor, right?