
Intel Gigabit Ethernet better than onboard RealTek RTL8111E Gigabit Ethernet?

Joined
Aug 29, 2005
Messages
7,082 (1.04/day)
Location
Asked my ISP.... 0.0
System Name Lynni PS \ Lenowo TwinkPad T480
Processor AMD Ryzen 7 7700 Raphael \ i7-8550U Kaby Lake-R
Motherboard ASRock B650M PG Riptide Bios v. 2.02 AMD AGESA 1.1.0.0 \ Lenowo 20L60036MX Bios 1.47
Cooling Noctua NH-D15 Chromax.Black (Only middle fan) \ Lenowo WN-2
Memory G.Skill Flare X5 2x16GB DDR5 6000MHZ CL36-36-36-96 AMD EXPO \ Willk Elektronik 2x16GB 2666MHZ CL17
Video Card(s) Asus GeForce RTX™ 4070 Dual OC GPU: 2325-2355 MEM: 1462| Nvidia GeForce MX™ 150 2GB GDDR5 Micron
Storage Gigabyte M30 1TB|Sabrent Rocket 2TB| HDD: 10TB|1TB \ SKHynix 256GB 2242 3x2 | WD SN700 1TB
Display(s) LG UltraGear 27GP850-B 1440p@165Hz | LG 48CX OLED 4K HDR | AUO 14" 1440p IPS
Case Asus Prime AP201 White Mesh | Lenowo T480 chassis
Audio Device(s) Steelseries Arctis Pro Wireless
Power Supply Be Quiet! Pure Power 12 M 750W Goldie | 65W
Mouse Logitech G305 Lightspeedy Wireless | Lenowo TouchPad & Logitech G305
Keyboard Akko 3108 DS Horizon V2 Cream Yellow | T480 UK Lumi
Software Win11 Pro 23H2 UK
Benchmark Scores 3DMARK: https://www.3dmark.com/3dm/89434432? GPU-Z: https://www.techpowerup.com/gpuz/details/v3zbr
Oh, and if you're really serious you'd drop a couple of these in your system:

Intel E10G42BT X520-T2 10Gigabit Ethernet Card 10G...

I set someone up with 2 of those per system a couple weeks back. So far, network connectivity hasn't been a bottleneck. ;-)

I would rather buy an Nvidia Ion board with gigabit Ethernet, put a one- or four-port card in it, and then get one of my friends to set it up as my router :roll:

I think that will be cheaper than $689.99 :laugh:
 
Joined
May 23, 2008
Messages
376 (0.06/day)
Location
South Jersey
Oh, and if you're really serious you'd drop a couple of these in your system:

Intel E10G42BT X520-T2 10Gigabit Ethernet Card 10G...

I set someone up with 2 of those per system a couple weeks back. So far, network connectivity hasn't been a bottleneck. ;-)

LMAO.

That is insane :)

To everyone else:

Gigabit is plenty UNLESS everyone in your house is streaming HD at the same time from different PCs. If it is all from the same PC, then you're going to run into storage issues unless you're running a multi-drive (not 2, not 3, but multi-drive) RAID.
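If anyone wants the back-of-envelope numbers, here is a quick sketch. The bitrates below are assumed typical figures, not measurements:

```python
# Back-of-envelope: how many HD streams fit through a gigabit link?
# The per-stream bitrates are assumed typical values, not measurements.

LINK_MBPS = 1000  # gigabit Ethernet line rate, in megabits per second

streams = {
    "1080p H.264 stream": 8,    # Mbit/s, typical streaming quality
    "Blu-ray-quality rip": 40,  # Mbit/s, near the Blu-ray maximum
}

for name, mbps in streams.items():
    print(f"{name} ({mbps} Mbit/s): ~{LINK_MBPS // mbps} concurrent streams")
```

Even at Blu-ray bitrates you can fit a couple dozen streams, so the storage side usually gives out first, as above.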

As for Intel NICs, my testing was done using an onboard Intel NIC of the same classification, apparently: it reads as the same Pro/1000, but the chip is different (I couldn't find any info on the IC used on the ASUS board I reviewed; hush hush and all that).

I am not sure where the comment that PCI is slower than PCIe came from. Someone is not understanding the difference between a capital and a lowercase B, I guess. A PCI card might come close to maxing out gigabit Ethernet. If there is any price difference, it is not worth it, though. $5 for future-proofing? Yes, maybe. But paying much more than that? Hell, go with PCI. You're going to use it for your own time frame; it will either die, get thrown out, or be resold. Going PCIe won't get you faster transfers and offers no future improvement, not at the cost of a 50% saving now.

Geez

Let's not even look into the top PCIe x1 slot sharing resources with USB 3... oh, and that can be an issue ;) Been there, done that. Plug in a USB 3 device and lose internet, because I'm running a PCIe x1 wireless card...



(**** PCI is slower than PCIe in bandwidth, but NOT in a way related to Ethernet, AFAIK. PCI has a max bandwidth of 1.06 Gb/s.)
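For anyone tripping over the capital-B/lowercase-b distinction, a quick sketch of the conversion (theoretical maxima only; real-world throughput is lower):

```python
# Classic 32-bit/33 MHz PCI vs gigabit Ethernet, theoretical maxima.
# Mind the units: MB/s = megabytes per second, Mbit/s = megabits per second
# (1 byte = 8 bits).

PCI_MB_S = 133        # shared-bus maximum for 32-bit/33 MHz PCI, in MB/s
GIGABIT_MBIT_S = 1000 # gigabit Ethernet line rate, in Mbit/s

gigabit_mb_s = GIGABIT_MBIT_S / 8   # 125 MB/s
pci_gbit_s = PCI_MB_S * 8 / 1000    # ~1.06 Gbit/s

print(f"Gigabit Ethernet: {gigabit_mb_s:.0f} MB/s")
print(f"PCI bus:          {PCI_MB_S} MB/s ({pci_gbit_s:.3f} Gbit/s)")
```

So on paper the two are almost exactly matched, which is why the lowercase-b figure of 1.06 Gb/s keeps coming up.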

EDIT: X58 and server setups with a HW RAID card are different, but the OP's system specs are socket 1155, i.e. limited bandwidth.
 
Last edited:
I would rather buy an Nvidia Ion board with gigabit Ethernet, put a one- or four-port card in it, and then get one of my friends to set it up as my router :roll:

I think that will be cheaper than $689.99 :laugh:


LOL, the Ion can't handle Flash; that's why it doesn't exist anymore. And you want to offload network duties to it?

A socket 775 setup... now we are talking a CPU-based router :) (since it won't be HW/SW optimized for the job)

Of course, if you are spending that kind of money, you might be one of those people with 20 drives RAIDed for a network share. In that case your systems might have SSDs and could handle 500 MB/s of input, so 10-gigabit would be AWESOME.

Home PCs have offloaded everything to the CPU to make them affordable (Google "winmodem" if you don't know what I am talking about; it's one device, but it explains everything). Servers don't: the CPU is left for server duties, and for the longest time everything else was given enough bandwidth and basically a separate processor of its own. I.e., your NIC had a CPU, your USB controller had a CPU, and your audio (not needed on a server, but on PCs back in the day) had a real processor...

CPUs have evolved so far that it is no longer necessary even for servers to offload everything, which is good, since the Windows network stack is CPU-driven anyway. But... people bitch about a $200 single-port gigabit NIC, let alone a $200 sound card.

I get it, I do... I am of the camp that the CPU drives everything, especially with OCing, but I also know that it takes its toll, and a BUDGET platform like socket 1155, no matter how powerful, is so bandwidth-limited that it is not meant to last. No matter how awesome it is, it has a P designation for a reason... remember socket 775 and its bandwidth issues with the FSB?
 

Completely Bonkers

New Member
Joined
Feb 6, 2007
Messages
2,576 (0.41/day)
Processor Mysterious Engineering Prototype
Motherboard Intel 865
Cooling Custom block made in workshop
Memory Corsair XMS 2GB
Video Card(s) FireGL X3-256
Display(s) 1600x1200 SyncMaster x 2 = 3200x1200
Software Windows 2003
A decent Intel NIC will handle error correction on the card itself, rather than in software requiring CPU interrupts and overhead. So for long cable runs, the Intel becomes an increasingly better option than an onboard NIC. Most home consumers are within 10 m of their router, though, and there it won't make any noticeable difference.

A decent Intel NIC has a hardware buffer and will handle the protocol overhead itself, rather than relying on software and requiring CPU interrupts and overhead. This is very useful when you are using a HUB rather than a SWITCH; but we all use switches nowadays, so the work of managing contention happens at the switch rather than at the NIC. However, IIRC, when you run concurrent multi-point connections, e.g. data from PC A to PC B and PC C simultaneously, a decent Intel NIC will not stall if B is slow or busy and will continue the transfer to C uninhibited. I believe an onboard NIC tends to stall more often due to how the network stack is managed.

A DECENT ROUTER/SWITCH is also needed. No point having the best NICs if the bottleneck is due to limited bandwidth or delays on the ROUTER/SWITCH.
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.19/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Sasmsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
A DECENT ROUTER/SWITCH is also needed. No point having the best NICs if the bottleneck is due to limited bandwidth or delays on the ROUTER/SWITCH.

That's a key point there. If you aren't running decent managed gigabit switches, it's pointless. Replace a NIC when it fails or becomes outdated (more often with Wi-Fi than wired); the performance benefits just aren't there for home users.
 
Joined
Apr 2, 2011
Messages
2,657 (0.56/day)
I am not sure where the comment that PCI is slower than PCIe came from. Someone is not understanding the difference between a capital and a lowercase B, I guess. A PCI card might come close to maxing out gigabit Ethernet. If there is any price difference, it is not worth it, though. $5 for future-proofing? Yes, maybe. But paying much more than that? Hell, go with PCI. You're going to use it for your own time frame; it will either die, get thrown out, or be resold. Going PCIe won't get you faster transfers and offers no future improvement, not at the cost of a 50% saving now.


(**** PCI is slower than PCIe in bandwidth, but NOT in a way related to Ethernet, AFAIK. PCI has a max bandwidth of 1.06 Gb/s.)

Perhaps my math is off:
1 Gb = 1000 Mb
1000 Mb = 125 MB
PCI runs at a maximum bandwidth of 133 MB/second, assuming zero overhead, nothing else on the bus, and an amazing chip that always performs at maximum speed.

Here in the real world, the 133 MB/s will rather quickly drop below the 125 MB/s threshold given regular system losses. Even assuming you could maintain 125 MB/second, you still have to schedule reads and writes to RAM and the HDD so the information has somewhere to go once it is interpreted by the NIC.

On the real planet Earth, and not the ideal one where the laws of physics can be ignored, the PCI bus will not always keep ahead of the Ethernet link.


On the other hand:
1 Gb = 1000 Mb
1000 Mb = 125 MB
PCI-e x1 runs at a maximum bandwidth of 1000 MB/second.

You lose some of the potential of PCI-e, but the bottleneck is the network, not the connection between the NIC and the CPU. This is why people suggested that the PCI bus is not fast enough.


While users who only connect to the internet will be far from saturating 133 MB/s, those with internal networks can see the difference. So, in short, not "future-proofing" with PCI-e is foolish, as the "future" where this bus bandwidth limitation is detrimental has already been here for the last four years.
 

Mussels

Freshwater Moderator
Staff member
lilhassel: all of that is right, except that PCI-E 1.0 x1 is 250 MB/s in each direction, with 2.0 being 500 MB/s.


It is correct that, in practice, PCI does not have enough bandwidth to saturate gigabit, which is why even onboard solutions use PCI-E nowadays.

Also... PCI is a shared bus: every PCI device in the system shares that 133 MB/s, making it even less likely to reach high speeds.
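The per-lane numbers above can be sketched against gigabit Ethernet's requirement (approximate figures after 8b/10b encoding overhead):

```python
# Per-lane PCIe bandwidth per direction (approximate, after 8b/10b encoding),
# compared against gigabit Ethernet's ~125 MB/s requirement.

PCIE_X1_MB_S = {"1.0": 250, "2.0": 500}  # MB/s per x1 lane, per direction
GIGABIT_MB_S = 125

for gen, bw in PCIE_X1_MB_S.items():
    verdict = "enough" if bw >= GIGABIT_MB_S else "too slow"
    print(f"PCIe {gen} x1: {bw} MB/s -> {verdict} for gigabit Ethernet")
```

Either generation of a single dedicated PCIe lane comfortably clears gigabit, while the shared 133 MB/s PCI bus sits right at the edge even before other devices take their cut.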
 