Hi there,
The Aquantia chip does the job, but it is far from an optimal solution: it tends to overheat because the passive cooling is not sufficient.
I personally ended up playing with the settings so much that I eventually got lost in all the variables, and every change gave different results. Make sure you restart the PC after each change.
The option you mentioned is turned on everywhere here - I have two Asus cards (Aquantia-based) and the main server has an integrated Intel X552/X557-AT 10GBASE-T.
What worked for me to maintain speeds over the network was to increase "Transmit Buffers" to the maximum supported value and to disable Jumbo Packets, everywhere.
Strangely, disabling the TCP/IPv6 protocol in the "Ethernet Properties" dialog in Windows 10 helped a bit as well.
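To compare runs properly after each change and reboot, something along these lines works as a rough check - a minimal raw-TCP throughput sketch in Python. The port and block sizes are arbitrary picks, and a dedicated tool like iperf3 is more rigorous, but this needs nothing installed:

```python
# Minimal one-direction TCP throughput test: run "recv" on one machine,
# then point the sender at it from the other. Port and sizes are arbitrary.
import socket, sys, time

PORT = 5201                  # arbitrary port, change it if something else uses it
CHUNK = 4 * 1024 * 1024      # 4 MiB per send
DURATION = 10                # seconds per run

def receiver():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _addr = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(1 << 20)
        if not data:
            break
        total += len(data)
    elapsed = time.time() - start
    print(f"received {total / 1e9:.2f} GB in {elapsed:.1f}s "
          f"= {total * 8 / elapsed / 1e9:.2f} Gbit/s")

def sender(host):
    sock = socket.create_connection((host, PORT))
    payload = b"\x00" * CHUNK
    total, start = 0, time.time()
    while time.time() - start < DURATION:
        sock.sendall(payload)
        total += len(payload)
    sock.close()
    elapsed = time.time() - start
    print(f"sent {total / 1e9:.2f} GB in {elapsed:.1f}s "
          f"= {total * 8 / elapsed / 1e9:.2f} Gbit/s")

if __name__ == "__main__":
    receiver() if sys.argv[1] == "recv" else sender(sys.argv[1])
```

Run it as `python nettest.py recv` on one machine and `python nettest.py <receiver-ip>` on the other, then swap directions to test the other way.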
A guy I found on Amazon suggested having the following options enabled/disabled (a small sketch for recording and applying them follows the list):
ARP off
Energy-Efficient Ethernet off
Flow control off
Jumbo Packets off
Large Send Offload (all 3) off
Maximum Number of RSS Queues - 8 queues (only set this to the number of threads of your processor or lower, as it puts more load on the CPU)
NS Offload off (only applicable to some Aquantia chipsets)
Transmit Buffers 8184
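For reference, here is a minimal sketch of how these can be dumped and changed from a script on Windows, using the standard Get/Set-NetAdapterAdvancedProperty cmdlets from Python. The adapter name and the exact DisplayName strings ("Transmit Buffers", "Jumbo Packet", ...) vary from driver to driver, so dump the list first and match what your driver actually exposes:

```python
# Dump (and optionally set) the advanced NIC properties so every tweak is
# written down. Needs Windows with the NetAdapter PowerShell module, and an
# elevated prompt for the Set call. The adapter name and DisplayName strings
# below are examples - your driver may use different ones.
import subprocess

ADAPTER = "Ethernet 2"   # example name - check Get-NetAdapter for yours

def ps(command: str) -> str:
    """Run a PowerShell command and return its text output."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True)
    return result.stdout

# 1. Record every advanced property before touching anything.
print(ps(f'Get-NetAdapterAdvancedProperty -Name "{ADAPTER}" | '
         'Format-Table DisplayName, DisplayValue -AutoSize'))

# 2. Example change (uncomment to apply): raise Transmit Buffers.
#    The value must be one the driver actually accepts.
# ps(f'Set-NetAdapterAdvancedProperty -Name "{ADAPTER}" '
#    '-DisplayName "Transmit Buffers" -DisplayValue "8184"')
```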
Regarding the NVMe drives, I cannot suggest much. I am still using old-school high-capacity SATA HDDs in RAID 10 here, as my PC does not support NVMe.
Do you by any chance have the "Trim" option enabled on any of the disks (if they are Samsung)?
Are any of the disks "Evo" models by chance? As a start, test the disk with a disk benchmarking program and then compare the results with a network benchmarking program.
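A very crude sequential write/read check along these lines is enough for a first comparison against the network numbers. The file path and sizes are just examples, and a proper tool (CrystalDiskMark, fio) is more trustworthy, especially since the read pass can be inflated by the OS cache:

```python
# Crude sequential write/read check, good enough for a first comparison with
# the network numbers. Path and sizes are examples; treat it as a sanity check.
import os, time

TEST_FILE = r"D:\bench.tmp"    # put this on the disk you want to test
BLOCK = 8 * 1024 * 1024        # 8 MiB blocks
SIZE = 4 * 1024**3             # 4 GiB total

def write_test():
    buf = os.urandom(BLOCK)
    start = time.time()
    with open(TEST_FILE, "wb", buffering=0) as f:
        for _ in range(SIZE // BLOCK):
            f.write(buf)
        os.fsync(f.fileno())          # make sure the data actually hit the disk
    return SIZE / (time.time() - start)

def read_test():
    start = time.time()
    with open(TEST_FILE, "rb", buffering=0) as f:
        while f.read(BLOCK):
            pass
    return SIZE / (time.time() - start)

print(f"write: {write_test() / 1e6:.0f} MB/s")
print(f"read:  {read_test() / 1e6:.0f} MB/s")   # may be inflated by the OS cache
os.remove(TEST_FILE)
```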
Last thing I can think of - a bit obvious, but check whether the disk is plugged into a PCIe Gen 3.0 x4 slot.
Good luck!
Couple of things to check:
- Drivers and firmware - updating to the latest versions from the "official" site does not always work. For instance, even though my LAN cards are branded Asus, and Asus does provide drivers, I am currently using the Aquantia ones from another website.
- Try other cables as well, Cat 6a or better. I have heard a lot of stories about bad cables that claim to sustain higher speeds but deliver poor performance (a quick way to spot one is to check the negotiated link speed - see the sketch after this list).
- Check your switch and LAN card settings; that's crucial. For instance, I had to manually adjust some settings on the Asus XG-C100C to boost performance and keep it somewhat stable - Jumbo Packet is disabled here and the speed boost is around 50%.
- Check your Belkin settings and play with some of them if the switch is manageable, or look for any complaints about lower speeds on the Belkin forums.
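The quick link-speed check mentioned above can be as simple as this (Windows only, using the built-in Get-NetAdapter cmdlet). A marginal cable often links up fine but only negotiates 1 or 5 Gbps, so if a "10 Gbps" run shows less here, suspect the cable or the port before any driver setting:

```python
# Print the negotiated link speed of every adapter (Windows, built-in cmdlet).
import subprocess

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-NetAdapter | Format-Table Name, InterfaceDescription, Status, LinkSpeed -AutoSize"],
    capture_output=True, text=True, check=True)
print(result.stdout)
```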
As a side story, a friend of mine who has access to a bigger market went the 10GbE route as well, but with a local QNAP NAS.
After exchanging a couple of devices - mostly LAN cards - and installing proper drivers, he managed to sustain almost the same speed on his 10Gb network over fiber optic, despite having a mixed software environment - Windows 7, Windows 10, Linux, etc.
Good luck!
Just a quick follow-up question: you mention jumbo packets are disabled? Normally I would think Jumbo is enabled and the MTU is set to 9000 for 10GbE, right? Have you had better luck with it disabled?
Edit: Right now I've had a massive jump in speed by disabling 'Recv Segment Coalescing (IPv4)' on my Aquantia chip which came with the Asus. It went from looking like it was dropping packets like crazy to actually behaving well. But now one computer sends at 10gbe and one sends at more or less 5gbe. If you flip to the other computer and run the same benchmark then you get the mirror image (write is 10, read is 5...the other is the opposite). So both are pretty fast, but one seems to be about half the speed it should be. Wondering if there might be a system variable in play like encryption or something...as in, perhaps the explanation is Windows is setup differently rather than the NIC is behaving differently. I've been moving cables around but am going to check that one more time.
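To narrow down whether Windows itself is configured differently on the two machines, a sketch like this dumps the same TCP/offload settings on each box and then diffs the two files. The commands (netsh int tcp show global, Get-NetAdapterRsc, Get-NetOffloadGlobalSetting) are standard Windows ones; the script name and file names are just examples:

```python
# Dump the same Windows TCP/offload settings on both machines, then diff the
# two files to see whether the OS - rather than the NIC - differs.
import difflib, pathlib, subprocess, sys

COMMANDS = [
    "netsh int tcp show global",
    "Get-NetAdapterRsc | Format-List",
    "Get-NetOffloadGlobalSetting | Format-List",
]

def dump(path: str) -> None:
    sections = []
    for cmd in COMMANDS:
        out = subprocess.run(["powershell", "-NoProfile", "-Command", cmd],
                             capture_output=True, text=True)
        sections.append(f"### {cmd}\n{out.stdout}")
    pathlib.Path(path).write_text("\n".join(sections))

def diff(path_a: str, path_b: str) -> None:
    a = pathlib.Path(path_a).read_text().splitlines()
    b = pathlib.Path(path_b).read_text().splitlines()
    for line in difflib.unified_diff(a, b, fromfile=path_a, tofile=path_b, lineterm=""):
        print(line)

if __name__ == "__main__":
    # usage: python settings_dump.py dump pc1.txt      (run on each machine)
    #        python settings_dump.py diff pc1.txt pc2.txt
    if sys.argv[1] == "dump":
        dump(sys.argv[2])
    else:
        diff(sys.argv[2], sys.argv[3])
```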