
2 Gbit/s RJ45 Link Aggregation for Windows 10/11 through Load Balancing/Failover (LBFO) re-implementation

Hello everyone,

I've recently started looking into link aggregation to try to boost my transfer speeds, since it's a possibility offered by most modems, switches and NAS systems (using 802.3ad / LACP). 2.5Gb switches are still expensive (not to mention 10Gb), but dual RJ45 cards are cheap ($40), some motherboards even offer two RJ45 ports, and you could potentially even aggregate an RJ45 connection with a Wi-Fi one (haven't tried) to get a 2Gb/s link instead of 1. Plus it's nice to have the option to boost your existing hardware.

To my surprise, I learned after a bit of research that link aggregation is no longer supported by Windows 11 since... 2020.

However, a brilliant guy named Graham Sutherland had a look into it and figured out a way to reimplement this feature by extracting it from the freely available Windows Server 2019. His tutorial is based on Windows 10, but it works on Windows 11 as well.

To make it even easier, another person, Maxim Kiselevich, bundled all the required files together in the related GitHub post with instructions, and after trying it for myself, I can confirm it works 100% on the latest version of Windows 11 Pro.

The reason I'm sharing all of this is that I was pretty upset to see this feature disappear. It lets most of us use the hardware we already have at home, for free, to boost transfer rates or add failover capabilities. I don't understand why Microsoft would deprive us of these benefits, so I'm spreading the word in the hope that more people become aware of it and someday maybe Microsoft decides to bring it back. All props go to Graham, Maxim and all the others on GitHub for the remarkable work they've accomplished. Hope this helps someone looking for this solution down the road.
 
Link aggregation doesn't work like that.
The only benefit is when multiple clients are connecting to a "server" with link aggregation, as a single client can't exceed the speed of a single network card due to limitations in how Ethernet and TCP/IP work.
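To illustrate the point with a toy sketch (this is not any vendor's exact algorithm, just the general idea): LACP-capable gear typically hashes a flow's addresses and ports to pick one member link, so every packet of a single TCP stream lands on the same 1 Gb port, while separate clients can spread across the links.

```python
import zlib

# Toy sketch of switch-dependent load balancing (NOT any vendor's exact
# algorithm): the team hashes a flow's 4-tuple and uses the result to pick
# one member link. Every packet of the same flow gets the same hash.
def pick_link(src_ip, src_port, dst_ip, dst_port, num_links):
    """Deterministically map one flow to one member link."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % num_links

# A single SMB transfer (one flow) rides one 1 Gb link for its whole life:
flow = ("192.168.1.10", 51000, "192.168.1.20", 445)
assert pick_link(*flow, 2) == pick_link(*flow, 2)

# Different clients are different flows, so with enough of them the traffic
# spreads across both links, which is why teaming shines on servers:
links = {pick_link(f"192.168.1.{c}", 51000, "192.168.1.20", 445, 2)
         for c in range(10, 40)}
print("links used:", sorted(links))
```

The hash keeps packet ordering intact within a flow, which is the reason the standard forbids striping one stream across members in the first place.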

I'm not saying this to wind you up. I tried over 10 years ago when I was reviewing a Thecus NAS and was lent a managed switch just to test it. My computer at the time had a pair of identical NICs with teaming support, and I gained exactly zero extra speed. I added a second client, and the NAS easily handled the extra data being shuffled.

It's no wonder Microsoft removed the feature, as it's useless in a client PC.
 
I appreciate the feedback, but there's so much discussion on it that I won't try to prove you wrong here. I'll leave you a link for more info, and I'll tell you that there's a reason why it's available in other versions of Windows. I've tried it myself, and I can absolutely say there's a difference in speeds when everything is configured accordingly. There's a lot of misconception around it, and I guess this might be why people haven't massively complained.

 

Yeah, please read my post: it was over a decade ago, so no Windows 10.
They weren't Intel-based NICs in that system, either.
I've also worked at QNAP. When they pushed out a new product with teaming, everyone's consensus there was that you need multiple clients to gain any extra performance from it; I tested it again there and got the same results.
You believe what you want, but it really has no benefit in single client environments.
 
2.5 Gbit NIC is about 25€
2.5 Gbit switch starts at 90€

Even lots of midrange mainboards come with 2.5 Gbit today.
So just get 2.5 Gbit hardware if you need it. Link aggregation is something different; if you don't understand that, don't try it.

It's not really expensive compared to 5 or 10 Gbit "real" server-grade hardware.
 
It's really difficult to comprehend why you would tell me not to try something I've explained I've already done, questioning whether I know what I'm doing and questioning results I have tested myself and that others have tested too. I doubt I could replace my 16-port 1Gb PoE switch with the 2.5Gb equivalent for less than $1000, but by all means, if you find something decent, feel free to share! And free stuff is always good to share, don't you think?

So if you don't find it useful, by all means, don't use it.
 
Well, I guess in most environments it's easier (cheaper) to upgrade the switch and the clients that really need more speed than to install new wiring in the building to get a second cable to those clients.

So you have 16 clients powered by PoE, and all of them could take dual 1 Gbit NICs instead of what they have now? And you say this would be more economical than upgrading to 2.5, 5 or 10 Gbit? What kind of clients are these, and what kind of "dual 1 Gbit PoE" devices are you talking about?
 
Not really, no.

Take a very simple case where you have a media server running Rockstor, or a Drobo, or any media server that supports LACP. Plug two cables instead of one into your switch, configure both, and you're done. Do the same for your computer, configure your switch accordingly, and there you have a real 2Gb link between your computer and your media server. See the screenshot attached. If I wanted to change to a 2.5Gb switch, I would need to replace my current gear with the equivalent, i.e. buy a new 16-port 2.5Gb PoE+ switch, and that's expensive, as discussed.
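For reference, on systems where the teaming feature is present (Windows Server, or a client restored as described above), the Windows side of such an LACP team is created with the built-in NetLbfo cmdlets. The team name and adapter names below are just examples, and the two switch ports must also be configured as an LACP LAG on the switch side:

```powershell
# Create an LACP (802.3ad) team from two example member NICs.
# Run in an elevated PowerShell; adapter names vary per machine.
New-NetLbfoTeam -Name "Team2G" `
    -TeamMembers "Ethernet", "Ethernet 2" `
    -TeamingMode Lacp `
    -LoadBalancingAlgorithm Dynamic

# Verify the team and its members came up:
Get-NetLbfoTeam
Get-NetLbfoTeamMember -Team "Team2G"
```

The team then shows up as a single virtual adapter that you assign your IP address to, just like in the screenshot.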

And it's really neat to be able to transfer big files at a faster, steady rate for free.

LBFO.jpg
 
Why would you need to replace your switch? Most devices won't benefit from more throughput. As you said, a NAS and some desktop clients would, but most clients would not.
Just get an additional switch with 5 or 8 ports and move the devices that would benefit from the additional speed onto it.

I have a 24-port gigabit switch for all the slow devices and an additional MS510 powering access points via PoE, serving power users at 2.5 and 5 Gbit, and giving the NAS and server the full 10 Gbit.
 
Graham's method absolutely works, and you don't need to follow Maxim's path, as it requires disabling the core driver signature check, which is a big security issue.
Just follow Graham's latest instructions (you can find them in the issues discussion) and you'll be able to restore LBFO natively (almost).
 