
QNAP Introduces QXG-10G2T-107, a Dual-port 5-Speed 10GBASE-T NIC

btarunr

Editor & Senior Moderator
QNAP Systems, Inc. today introduced the new QXG-10G2T-107, a dual-port PCI Express (PCIe) 10GBASE-T/NBASE-T NIC that supports five network speeds. It can be installed in a compatible QNAP NAS or a Windows/Linux PC with a PCIe 2.0 x4 slot, providing organizations and individuals with a flexible and economical 10 GbE network connectivity solution.

The QXG-10G2T-107 uses the Aquantia AQtion AQC107S Ethernet controller, which supports 10/5/2.5/1 Gbps and 100 Mbps network speeds. The RJ45 connector also lets users reuse existing cabling: transmission speeds can reach up to 5 Gbps over CAT 5e cables, or up to 10 Gbps over CAT 6 (or better) cables, unleashing the full potential of the QXG-10G2T-107.



"As 10 GbE network environment becomes more common, QNAP continues to deliver cost-effective 10 GbE solutions," said Dan Lin, Product Manager of QNAP, adding "Following QNAP's release of the single-port Multi-Gig QXG-10G1T NIC, the newly rolled out dual-port QXG-10G2T-107 NIC also leverages Aquantia Ethernet controller to offer Multi-Gig transfer rates, helping users to easily upgrade their PCs or NAS systems with 10 Gbps capability to accommodate intensive data transfer and boost productivity of team collaboration and personal workflows."

Windows and Linux users can download drivers from NIC manufacturer Aquantia's website. Using the QXG-10G2T-107 in a QNAP NAS requires QTS 4.3.6 or later.

Additionally, QNAP is offering a 15% discount on popular PCIe network cards featuring the Mellanox ConnectX-4 Lx SmartNIC: the 25 GbE QXG-25G2SF-CX4 and the 10 GbE QXG-10G2SF-CX4. Both cards can be installed in a NAS or PC, and support iSER (iSCSI Extensions for RDMA) to offload CPU workloads and optimize VMware virtualization performance.

For more information, visit this page.

View at TechPowerUp Main Site
 
PCIe x4 versus the trusty Intel X540-T2 at PCIe x8, but at a higher price than used X540s.
Good to have more options :)
PCIe 3.0 vs 2.0 in your example, so yeah, hardly an issue.

It's hardly fair to compare new to second-hand products in terms of cost, either.
 
A Mellanox ConnectX-2 costs ~30 EUR.
I have reached a max of 6 Gbps with copper cables so far.

Not sure what the problem is, but somehow it is very hungry for CPU, taking 100% of a single core.
 
Around $180.

Thanks, I hate it when the price isn't included. It's mind-numbingly stupid.

A Mellanox ConnectX-2 costs ~30 EUR.
I have reached a max of 6 Gbps with copper cables so far.

Not sure what the problem is, but somehow it is very hungry for CPU, taking 100% of a single core.

If this is your first foray into higher-than-gigabit speeds, I guess I could see why you would be confused. But it takes CPU power to pump packets. Wait until you start getting closer to 10 Gb speeds and start using multiple cards and optics.
 
10 Gbit is overpriced by today's tech standards.

It should have been standard in gaming years ago and come down to reasonable prices by now.

Gigabyte, among others, tried to push 10 Gbit by releasing motherboards with it, but failed when Intel inflated prices for its chipsets while there was no competition from AMD.

It seemed Intel was not interested in consumer 10 Gbit cutting into its profit, probably because it wanted to milk the ultra-cheap 1 Gbit standard to the end of days.

Hope that time is over now and we swiftly shift over to at least 10 Gbit as the new home standard.
 
I mean, you can buy something that can route 10 gig right now for like $130.

DAC cables, and even optics and fiber, aren't expensive. It's been like this for a few years. Consumers just aren't ready yet. If it doesn't say Nighthawk or Linksys and come with a pretty Apple-esque GUI, it isn't fast, or it's scary.
 
10 Gbit is overpriced by today's tech standards.

It should have been standard in gaming years ago and come down to reasonable prices by now.

Gigabyte, among others, tried to push 10 Gbit by releasing motherboards with it, but failed when Intel inflated prices for its chipsets while there was no competition from AMD.

It seemed Intel was not interested in consumer 10 Gbit cutting into its profit, probably because it wanted to milk the ultra-cheap 1 Gbit standard to the end of days.

Hope that time is over now and we swiftly shift over to at least 10 Gbit as the new home standard.

2.5 Gbps is the new "low cost" consumer standard.
 
10 Gbit is overpriced by today's tech standards.

It should have been standard in gaming years ago and come down to reasonable prices by now.

Gigabyte, among others, tried to push 10 Gbit by releasing motherboards with it, but failed when Intel inflated prices for its chipsets while there was no competition from AMD.

It seemed Intel was not interested in consumer 10 Gbit cutting into its profit, probably because it wanted to milk the ultra-cheap 1 Gbit standard to the end of days.

Hope that time is over now and we swiftly shift over to at least 10 Gbit as the new home standard.

10 Gbps is kind of dumb as a consumer standard. 10 Gbps costs $299 a month where I live. 2.5 Gbps is more realistic, and they should begin pushing that out before they lose the market to other OEMs.
 
I mean, you can buy something that can route 10 gig right now for like $130.

DAC cables, and even optics and fiber, aren't expensive. It's been like this for a few years. Consumers just aren't ready yet. If it doesn't say Nighthawk or Linksys and come with a pretty Apple-esque GUI, it isn't fast, or it's scary.
SFP+ is useless for most home users, as it requires either costly adapters or a fibre-based network. It's hard enough to make consumers understand the benefits of Ethernet. Wi-Fi is the standard consumer networking interface, as most consumers use mobile devices and only care about browsing the web.

10 Gbps is kind of dumb as a consumer standard. 10 Gbps costs $299 a month where I live. 2.5 Gbps is more realistic, and they should begin pushing that out before they lose the market to other OEMs.
We're talking local networks here, not internet access speeds...
I have a 10 Gbps card in this PC and one in my NAS, so I can quickly copy files between the two.
I only have a 200 Mbps internet connection.
 
We're talking local networks here, not internet access speeds...
I have a 10 Gbps card in this PC and one in my NAS, so I can quickly copy files between the two.
I only have a 200 Mbps internet connection.

I was just talking about internet speeds (down and up) and the availability of competitive Intel integrated products on the current motherboard market.

As for local network transfers, the problem isn't the cost of the cards but of the switches.
 
It's hard enough to make consumers understand the benefits of Ethernet. Wi-Fi is the standard consumer networking interface, as most consumers use mobile devices and only care about browsing the web.

That is fair; I let my profession get in my way. However, if you're talking from a techy perspective (which this thread isn't), I would still argue for it over buying an expensive pre-built router, especially if cost is a concern to begin with.
 
We still need those dirt-cheap switches that support 2.5/5/10 Gbps speeds!!!
A reasonably priced 8-port option would be a good start...
As in, something in the $200-300 range, rather than $400-500.
 
If this is your first foray into higher-than-gigabit speeds, I guess I could see why you would be confused. But it takes CPU power to pump packets. Wait until you start getting closer to 10 Gb speeds and start using multiple cards and optics.

It does not take those levels of CPU power to pump packets, because every >1 Gbit NIC has hardware offloading. Saturating my 10 Gbit link with iperf3 takes 7% of a single i7-2600 core. You can easily buy 200 Gbit/s Mellanox ConnectX-6 NICs nowadays, and they don't require huge CPU power either. The problem here is most likely misconfiguration: perhaps not using jumbo frames, or the wrong drivers (if on Windows) or firmware?
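As a quick sanity check on the jumbo-frame point, something like this minimal sketch (assuming the third-party psutil package is installed; 9000 is just a typical jumbo size, adjust to whatever you configured) lists each interface's MTU and link speed so you can confirm both ends actually negotiated a jumbo MTU:

```python
# Minimal sketch: list each interface's MTU and link speed to confirm
# jumbo frames are actually active. Requires: pip install psutil
import psutil

def check_jumbo(expected_mtu: int = 9000) -> None:
    # net_if_stats() returns per-interface stats including mtu and speed (Mb/s)
    for name, stats in psutil.net_if_stats().items():
        verdict = "jumbo OK" if stats.mtu >= expected_mtu else "standard MTU"
        print(f"{name:15s} up={stats.isup} speed={stats.speed} Mb/s "
              f"mtu={stats.mtu} ({verdict})")

if __name__ == "__main__":
    check_jumbo()
```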
 
It does not take those levels of CPU power to pump packets, because every >1 Gbit NIC has hardware offloading. Saturating my 10 Gbit link with iperf3 takes 7% of a single i7-2600 core. You can easily buy 200 Gbit/s Mellanox ConnectX-6 NICs nowadays, and they don't require huge CPU power either. The problem here is most likely misconfiguration: perhaps not using jumbo frames, or the wrong drivers (if on Windows) or firmware?

I have tried many Mellanox drivers and have been messing with settings a lot (by reading posts from other people who also had this problem).

My Windows system has a 9900K, my "server" has a Ryzen 3600. When I had Win10 Pro on my Ryzen system, speeds were up to 6 Gbps. Sending from the 9900K system yields higher speed.
When I installed ESXi 6.7 on my Ryzen system (default VMware built-in drivers) and uploaded ISO files, it only reached 1.2 Gbps.
The source and the destination were NVMe drives.

Some people say it may not reach max speed because of copper.
Fiber will cost me an additional 85 EUR. I might try that at some point.
But it is strange that it requires so much CPU. I mean, rack servers use Xeons, which are much weaker than desktop CPUs, and they have no problems with 10 Gbps speeds.
So yes, I do have a driver/configuration problem, but I have yet to find the right combo.
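Before blaming the NIC or drivers, it's worth ruling the storage side in or out. A rough sketch like the one below (the path is just a placeholder; point it at the actual ISO you're copying, and note the result will be inflated if the file is already in the OS cache) times a large sequential read on the source machine:

```python
# Rough sketch: time a large sequential read of the file you're about to copy,
# so local disk throughput can be ruled in or out as the bottleneck.
import time

def read_throughput(path: str, chunk: int = 1 << 20) -> float:
    total = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:   # unbuffered binary read
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    elapsed = time.monotonic() - start
    return total * 8 / elapsed / 1e9           # Gbit/s

if __name__ == "__main__":
    # Placeholder path for illustration only.
    print(f"{read_throughput('/path/to/large.iso'):.2f} Gbit/s sequential read")
```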
 
I have tried many Mellanox drivers and have been messing with settings a lot (by reading posts from other people who also had this problem).

My Windows system has a 9900K, my "server" has a Ryzen 3600. When I had Win10 Pro on my Ryzen system, speeds were up to 6 Gbps. Sending from the 9900K system yields higher speed.
When I installed ESXi 6.7 on my Ryzen system (default VMware built-in drivers) and uploaded ISO files, it only reached 1.2 Gbps.
The source and the destination were NVMe drives.

Wait... you're talking about transfers between filesystems? That's totally different from raw network performance and depends on many more factors. Try running pure iperf3 between the hosts to check if the NICs are the problem in the first place.

Some people say it may not reach max speed because of copper.

I am running a 7 m direct attach SFP copper cable between two Mellanox ConnectX-2 cards and can saturate the link at 9600 MTU with barely any CPU load.
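If you can't install iperf3 on both ends, a single-stream memory-to-memory test like the sketch below takes the disks out of the equation entirely. It won't match iperf3's numbers (plain Python, one TCP stream), and the port, buffer size, and duration are arbitrary illustration values; it needs Python 3.8+ for socket.create_server:

```python
# Minimal raw TCP throughput sketch: run "python tput.py server" on one host
# and "python tput.py client <server-ip>" on the other. No disk involved.
import socket
import sys
import time

PORT = 5201            # same default port iperf3 uses; any free port works
CHUNK = 1 << 20        # 1 MiB send/receive buffer
DURATION = 10          # seconds to transmit

def server() -> None:
    # Accept one connection and count bytes received until the peer closes.
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        print(f"connection from {addr}")
        total = 0
        start = time.monotonic()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
        elapsed = time.monotonic() - start
        print(f"received {total / 1e9:.2f} GB in {elapsed:.1f} s "
              f"= {total * 8 / elapsed / 1e9:.2f} Gbit/s")

def client(host: str) -> None:
    # Blast zero-filled buffers at the server for DURATION seconds.
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as sock:
        deadline = time.monotonic() + DURATION
        sent = 0
        while time.monotonic() < deadline:
            sock.sendall(payload)
            sent += len(payload)
    print(f"sent {sent * 8 / DURATION / 1e9:.2f} Gbit/s (approx)")

if __name__ == "__main__":
    if len(sys.argv) >= 2 and sys.argv[1] == "server":
        server()
    elif len(sys.argv) >= 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        print("usage: tput.py server | tput.py client <server-ip>")
```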
 