Sunday, January 9th 2022

QNAP Launches Dual-Port QM2 PCIe Cards with M.2 2280 NVMe SSD Slots and 10GbE Ports

QNAP Systems today launched the QM2-2P410G1T and QM2-2P410G2T PCIe Gen 4 cards, and the QM2-2P10G1TB PCIe Gen 3 card. All three PCIe cards add M.2 NVMe SSD slots and 10GbE connectivity to a QNAP NAS or a PC/server/workstation, with no driver installation required. NAS users can improve overall NAS performance by enabling SSD caching and can upgrade NAS storage capacity without occupying any 3.5-inch drive bays. PC/server/workstation users can increase their storage capacity while also boosting overall IOPS performance, offloading bandwidth-demanding tasks to SSDs to minimize application loading times.

The QM2-2P410GxT series and the QM2-2P10G1TB feature single or dual 10GBASE-T Multi-Gigabit (10G/5G/2.5G/1G/100M) network ports to accelerate bandwidth-demanding tasks. With QNAP 10GbE switches, users can easily upgrade to a high-speed network environment. M.2 SSD thermal sensors allow real-time temperature monitoring, and a quiet cooling module (heatsink and smart fan) keeps the SSDs running within optimal temperatures. A tool-less design also enables quick M.2 SSD installation and replacement.
Source: QNAP

9 Comments on QNAP Launches Dual-Port QM2 PCIe Cards with M.2 2280 NVMe SSD Slots and 10GbE Ports

#1
Berfs1
I just looked into this on their website. Unfortunately, it looks like there is a PCIe switch on the PCIe 4.0 x8 bus, so it will do either 1x 10GbE + 1x NVMe, 2x 10GbE, or 2x NVMe, but not all four at the same time, unless it negotiates them down to x2, which wouldn't make sense because that caps the bandwidth anyway.
#2
TheLostSwede
Berfs1 said:
I just looked into this on their website, unfortunately it looks like there is a PCIe switch for the PCIe 4.0 x8 bus, it will do either 1x 10GbE + 1x NVMe, 2x 10GbE, or 2x NVMe, not all 4 at the same time, unless it negotiates them to x2, which wouldn't make sense because that is capping the bandwidth anyways.
PCIe 4.0 x1 is enough for 10Gbps Ethernet. That's assuming they're using the new Marvell/Aquantia chips that can use PCIe 4.0 x1, which seems to be the case.
However, either their engineers didn't understand that, or they did something weird, as their diagram says everything gets four PCIe 4.0 lanes...
Asking an old colleague to see if it's correct or not.
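As a back-of-envelope check of the claim above (my own arithmetic, not from the post): PCIe Gen 4 signals at 16 GT/s per lane with 128b/130b encoding, so a single lane comfortably exceeds what a 10GbE port can push.

```python
# Sanity check: can one PCIe 4.0 lane carry a 10GbE port?
# PCIe Gen 4 runs at 16 GT/s per lane with 128b/130b line encoding.
pcie4_x1_gbs = 16 * (128 / 130) / 8   # usable GB/s per lane, per direction (~1.97)
tengbe_gbs = 10 / 8                   # 10 Gbit/s Ethernet = 1.25 GB/s

print(f"PCIe 4.0 x1: {pcie4_x1_gbs:.2f} GB/s")
print(f"10GbE:       {tengbe_gbs:.2f} GB/s")
print(pcie4_x1_gbs > tengbe_gbs)      # True: one Gen 4 lane has headroom
```

This ignores protocol overhead beyond line encoding (TLP headers, flow control), but the roughly 1.97 vs. 1.25 GB/s margin is wide enough that the conclusion holds.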
#3
Tigerfox
Finally, this product makes sense now. They had two iterations of this before, the QM2-2P10G1T and QM2-2P10G1TA, the only difference being the NIC: Tehuti TN9710P vs. Aquantia AQC107. Both were useless, in my opinion, because they only used a 12-lane PCIe 2.0 switch, so while the 10GbE NIC and the whole card ran at Gen2 x4, the two M.2 slots only got Gen2 x2 each, severely limiting speed. It made no sense to me to offer two slots capable of only a quarter of the speed most SSDs offered at the time of release instead of one slot offering at least Gen2 x4.

Now the QM2-2P10G1TB is as it should be, using a Gen3 switch with at least 20 lanes and offering two M.2 slots at Gen3 x4 each, while the QM2-2P410G1T and QM2-2P410G2T are even better, using a 24-lane Gen4 switch and offering two M.2 slots at Gen4 x4 each. All three seem to be the same design, though, using the newest AQC113C and just changing the switch or adding a second NIC. I wouldn't be surprised if both QM2-2P410GxT cards use the same 24-lane switch.
All three seem a bit of overkill to me, though, and I would prefer a combo card with just one 10GbE NIC and one M.2 Gen4 x4 slot, using only a Gen4 x4 interface, because those can be used on nearly every motherboard.
#4
TheLostSwede
Tigerfox said:
All three seem a bit of an Overkill for me, though, and I would prefer a combo-card with just one 10GbE-NIC and one M.2 Gen4x4 while only using a Gen4x4 interface, because those can be used on nearly every motherboard.
Not all boards support bifurcation though.
#5
Lianna
TheLostSwede said:
PCIe 4.0 x1 is enough for 10Gbps Ethernet. That's assuming they're using the new Marvell/Aquantia chips that can use PCIe 4.0 x1, which seems to be the case.
However, either their engineers didn't understand that, or they did something weird, as their diagram says everything gets four PCIe 4.0 lanes...
Asking an old colleague to see if it's correct or not.
That's a switch, not a bifurcation of lanes. It should be visible as a single device to the CPU/chipset. Think of older motherboard switches that turned 16 lanes into 2x16 lanes for graphics.

Upstream offers a theoretical ~15.7 GB/s in each direction; downstream maxes out at 2x ~7.9 GB/s + 2x ~1.25 GB/s = ~18.2 GB/s in each direction, so not that much (16%) overbooking - and that's purely theoretical.

PCIe Gen 4 x4 is too much for a single 10GbE port (one lane of Gen 4 would be enough), but they may use it for a PCIe Gen 3 (or Gen 2) x4 controller. Plus I'd guess an 8 lanes -> 4x4 lanes switch is a standard part.
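The overbooking figures above can be reproduced with a quick back-of-envelope calculation (my own arithmetic, using PCIe Gen 4's 16 GT/s per lane and 128b/130b encoding; the exact GB/s values are theoretical maxima, ignoring protocol overhead):

```python
# Reproduce the upstream vs. downstream bandwidth comparison for the card:
# a PCIe 4.0 x8 uplink feeding two Gen 4 x4 M.2 slots and two 10GbE ports.
lane_gbs = 16 * (128 / 130) / 8       # usable GB/s per Gen 4 lane (~1.97)

upstream = 8 * lane_gbs               # x8 uplink: ~15.75 GB/s per direction
ssd = 4 * lane_gbs                    # one Gen 4 x4 M.2 slot: ~7.88 GB/s
nic = 10 / 8                          # one 10GbE port: 1.25 GB/s
downstream = 2 * ssd + 2 * nic        # ~18.25 GB/s per direction

overbooking = downstream / upstream - 1
print(f"upstream:    {upstream:.2f} GB/s")
print(f"downstream:  {downstream:.2f} GB/s")
print(f"overbooking: {overbooking:.0%}")   # ~16%, matching the post
```

In practice the two NVMe drives and two NICs rarely saturate all their links in the same direction at once, which is why a modest 16% oversubscription is usually harmless.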
#6
TheLostSwede
Lianna said:
That's a switch, not a bifurcation of lanes. It should be visible as a single device to the CPU/chipset. Think older MB switches for 16 lanes -> 2x16 lanes for graphics.

Upstream offers theoretical ~15.7 GB/s in each direction, downstream maxes at 2x ~7.7 GB/s + 2x ~1.25 GB/s = ~18.2 GB/s in each direction, so not that much (16%) overbooking - and that's purely theoretical.

PCIe Gen 4 x4 is too much for single 10 Gbe (1 lane of Gen 4 would be enough), but they may use it for PCIe Gen 3 (or Gen 2) x4 controller. Plus I'd guess switch 8 lanes -> 4x4 lanes is a standard.
Yes, I'm aware of that. You might want to re-read what I specifically replied to.

The more lanes you switch, the more expensive the switch, so QNAP could've saved a chunk of money by going with a simpler switch.
#7
Lianna
TheLostSwede said:
Yes, I'm aware of that. You might want to re-read what I specifically replied to.

The more lanes you switch, the more expensive the switch, so QNAP could've saved a chunk of money by going with a simpler switch.
First part of my post was more an answer to the other posters (bifurcation / bandwidth).

AFAIK, at least up to Gen 3, PCIe switches step from 16 lanes to 24 lanes (total) with no (common) intermediate options. That's what I meant by "8 lanes -> 4x4 lanes is a standard".
I guess if they needed more than 8 lanes up and 2x 4 lanes down, going to 24 lanes total was the only common option. In the current market/supply-chain situation, going with a standard solution is probably a good idea, AND they could use any 10GbE controller, including PCIe Gen 3 or Gen 2, if they need to.
#8
TheLostSwede
Lianna said:
First part of my post was more an answer to the other posters (bifurcation / bandwidth).

AFAIK, at least up to Gen 3, PCIe switches step from 16 lanes to 24 lanes (total) with no (common) intermediate options. That's what I meant by "8 lanes -> 4x4 lanes is a standard".
I guess if they needed more than 8 lanes up and 2x 4 lanes down, going to 24 lanes total was the only common option. In the current market / supply chain situation going with standard solution is probably a good idea AND they could use any 10GbE controller including PCIe Gen 3 or Gen 2 if they need to.
Right.

You might be right; it seems like they mostly come in even eight-lane jumps, with a few exceptions above 24 lanes.
Even the new Marvell chip comes in PCIe 4.0 x1 and PCIe 3.0 x2/x4 variants. Not sure they bothered with a PCIe 2.0 version.
#9
TheLostSwede
Finally heard back, and apparently the diagram is correct, but they're working on a new version with the PCIe 4.0 x1 Marvell chips.