
Modern motherboards with 6+ usable PCIe slots?

As has been said several times above, the only way to get a meaningful number of working PCIe slots is workstation-level hardware. Consumer hardware is not built for expandability anymore, and the CPUs/chipsets do not have enough available PCIe lanes for this to make much sense anyway. Things like USB4 controllers, 10Gb (or faster) NICs and M.2 NVMe SSDs all require multiple PCIe lanes to work properly, so a miner motherboard with several x1 slots is not an option either.
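To make that concrete, here's a rough lane-budget sketch in Python; the per-device widths and the 24-lane CPU figure are typical assumed values, not the specs of any particular product:

```python
# Rough PCIe lane budget for a fairly ordinary enthusiast build.
# The per-device lane counts below are typical/assumed values, not
# specs for any particular product.
devices = {
    "GPU": 16,               # full x16 graphics slot
    "NVMe SSD #1": 4,        # M.2 x4
    "NVMe SSD #2": 4,        # M.2 x4
    "10GbE NIC": 4,          # many 10 Gb cards are x4
    "USB4 controller": 4,    # discrete USB4 host controllers are typically x4
}

cpu_lanes = 24               # usable lanes on a typical consumer CPU (assumed)

needed = sum(devices.values())
print(f"Lanes needed: {needed}, lanes available from CPU: {cpu_lanes}")
if needed > cpu_lanes:
    print(f"Short by {needed - cpu_lanes} lanes before the chipset even helps")
```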

You need to find a motherboard with built-in PCIe switch(es), which automatically increases the price by a good margin. See the MSI MEG X570S Unify-X MAX, which has one x16 and one x8 slot by default. But with numerous built-in switches it can provide up to 6 M.2 slots with x4 PCIe lanes each (1 M.2 is x4 from the CPU, 1 is x4 from the chipset, and the other 4 are split from the physical x16 slots via switches). With some M.2 to PCIe slot adapters you can expand this a lot. The price for that motherboard alone is, um, significant, but at least x4 M.2 slots have enough bandwidth for many potential upgrades.

The only board that is somewhat modern there is the ASRock X570S PG Riptide and even that is gimped... three x16 slots but the lower two are electrically x4 and the bottommost one is limited to x2, WHY EVEN BOTHER FFS. Some of the other boards have an x16 and x8... but no USB-C internal connector.

That search is legitimately amazing though.
Not much to work with when the CPUs/chipsets don't have more PCIe lanes to begin with. At least the slots can physically hold x16 cards :laugh:
 
The only board that is somewhat modern there is the ASRock X570S PG Riptide and even that is gimped... three x16 slots but the lower two are electrically x4 and the bottommost one is limited to x2, WHY EVEN BOTHER FFS. Some of the other boards have an x16 and x8... but no USB-C internal connector.
On the upside, x16 + x4 will operate as x16 + x4. On other Core or Ryzen boards, unless there's a PCIe switch onboard, x16 + x8 will never work; it will be reduced to x8 + x8.
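A toy sketch of the difference between the two wiring schemes; the behaviour shown is the general idea, not modelled on any specific board:

```python
# Toy illustration of the two wiring schemes mentioned above (not modelled on
# any specific board). With a CPU x16 slot plus an independent chipset x4 slot,
# the GPU keeps x16. When two slots share one CPU x16 link (no PCIe switch),
# populating the second slot drops both to x8.

def slot_widths(second_slot_populated: bool, slots_share_cpu_link: bool) -> tuple[int, int]:
    if not slots_share_cpu_link:
        # Independent links: the x16 slot is unaffected by the x4 slot.
        return 16, (4 if second_slot_populated else 0)
    # Shared CPU link: the lanes are split between the two slots when both are used.
    return (8, 8) if second_slot_populated else (16, 0)

print("x16 CPU + x4 chipset, both populated:", slot_widths(True, False))  # (16, 4)
print("x16 CPU link shared, both populated: ", slot_widths(True, True))   # (8, 8)
```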

That search is legitimately amazing though.
True. I often post links to it. Unfortunately the search is limited to what's currently on sale, so if someone is asking about a Skylake board with certain properties, it can't help much.

Do you see the page in English, by any chance? Up until a few years ago it was in German and English, I saw one on my home PC and the other at work, with no option to choose language. Now I only see it in German.
 
Do you see the page in English, by any chance? Up until a few years ago it was in German and English, I saw one on my home PC and the other at work, with no option to choose language. Now I only see it in German.
I only see German as well. But they link to Skinflint at the bottom of that page, which at first glance looks like the same site in English.
 
That is still a ton of bandwidth.
No it's not; X670E can put out 48 lanes of PCIe (28 CPU + 8 chipset 1 + 12 chipset 2). The problem is that because AMD and motherboard manufacturers are shitheels, they don't allow for those lanes to be allocated sensibly.

What are you doing that can even make use of it all?
Graphics card = 16 lanes
Quad M.2 NVMe card = 16 lanes

Completely reasonable to expect the above to work in any desktop motherboard, and yet there are ZERO boards (including AMD's "high-end" X670E) that allow it. I'd happily settle for all the onboard M.2 slots being disabled to allow the above configuration, but because motherboard and CPU manufacturers are useless, greedy shitheels, that's not possible either. And yet, on many boards with a PCIe x4 slot, using said slot disables one of the M.2 slots and vice versa... so why the hell can't they extend that to ALL M.2 slots?

Do you see the page in English, by any chance? Up until a few years ago it was in German and English, I saw one on my home PC and the other at work, with no option to choose language. Now I only see it in German.
I just use Chrome's built-in translate.
 
You need to find a motherboard with built-in PCIe switch(es), which automatically increases the price by a good margin. See the MSI MEG X570S Unify-X MAX, which has one x16 and one x8 slot by default. But with numerous built-in switches it can provide up to 6 M.2 slots with x4 PCIe lanes each (1 M.2 is x4 from the CPU, 1 is x4 from the chipset, and the other 4 are split from the physical x16 slots via switches). With some M.2 to PCIe slot adapters you can expand this a lot. The price for that motherboard alone is, um, significant, but at least x4 M.2 slots have enough bandwidth for many potential upgrades.
That board really has a lot of flexibility built in, but those aren't PCIe packet switches, which would enable simultaneous communication with both ports. They are simply switched to one of the two positions on boot and stay there. The manual does talk about "bandwidth sharing" but in reality it's lane stealing.

[Attached image: excerpt from the MSI manual]


I only see German as well. But they link to Skinflint at the bottom of that page, which at first glance looks like the same site in English.
Skinflint only lists products currently available in the UK, and by default only from sellers in the UK. You can choose to also include sellers from Germany, Austria and Poland (and maybe other countries) who ship across the Channel. I compared both sites' results a few times, with foreign shipping options included, and found some PC parts that you can buy in Germany but not in the UK - but never the opposite.
 
That board really has a lot of flexibility built in, but those aren't PCIe packet switches, which would enable simultaneous communication with both ports. They are simply switched to one of the two positions on boot and stay there. The manual does talk about "bandwidth sharing" but in reality it's lane stealing.
It's not "stealing" so much as "reallocation". And this implementation is exactly what I'd expect from all motherboards! Why is the ability to allocate lanes away from unused slots something that has to be restricted to a halo model? It's so simple:
  • Choose a PCIe x16 slot to participate in lane reallocation
  • Add 4 M.2 slots on the board and link them to the above PCIe slot
  • Write the following UEFI code:
    • If none of the linked M.2 slots are populated, the linked PCIe slot runs at x16
    • If one of the linked M.2 slots is populated, the linked PCIe slot runs at x8 (AFAIK x12 is not a valid step for PCIe, so you lose 4 lanes here)
    • If two of the linked M.2 slots are populated, the linked PCIe slot runs at x8
    • If three of the linked M.2 slots are populated, the linked PCIe slot runs at x4
    • If all four of the linked M.2 slots are populated, the linked PCIe slot runs at x0 i.e. is disabled
That way, if you never use a particular M.2 slot, you don't lose that bandwidth from the corresponding PCIe slot.
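A minimal sketch of that boot-time rule, assuming exactly the width steps listed above; the table and function names are made up for illustration:

```python
# Minimal sketch of the boot-time lane-reallocation rule described above.
# The names and the population check are purely illustrative.

# Link width for the shared x16 slot by number of populated linked M.2 slots
# (x12 is skipped, as noted above).
WIDTH_BY_POPULATED_M2 = {0: 16, 1: 8, 2: 8, 3: 4, 4: 0}

def shared_slot_width(populated_m2_slots: int) -> int:
    """Return the link width the linked PCIe x16 slot should train at."""
    return WIDTH_BY_POPULATED_M2[min(populated_m2_slots, 4)]

for n in range(5):
    w = shared_slot_width(n)
    state = f"x{w}" if w else "disabled"
    print(f"{n} linked M.2 slot(s) populated -> shared x16 slot runs at {state}")
```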

You don't even have to link it to M.2 slots. You could, for example, say that (just like on Promontory 21 for AM5) four SATA ports take four PCIe lanes, and then your x16 slot is linked to 3 M.2 slots and 4 SATA ports. Or to 2 M.2 slots and 8 SATA ports. Or...
 
Most AMD Threadripper boards only have 4 slots usable without a riser. A few have 5. I'm looking for 6. The price is a bit steep, but doable. Intel Sapphire Rapids costs more than my car.
Maybe consider the AMD WRX 80 Pro - AMD Threadripper Pro motherboard. That joker has seven PCIe slots (four PCIe 4.0 x16 + three PCIe 4.0 x8). I am not sure how far into overclocking you want to go with the memory or CPU.
 
Maybe consider the AMD WRX 80 Pro - AMD Threadripper Pro motherboard. That joker has seven PCIe slots (four PCIe 4.0 x16 + three PCIe 4.0 x8). I am not sure how far into overclocking you want to go with the memory or CPU.
A WRX80 system will cost about as much as a car.
 
It's not "stealing" so much as "reallocation". And this implementation is exactly what I'd expect from all motherboards! Why is the ability to allocate lanes away from unused slots something that has to be restricted to a halo model? It's so simple:
  • Choose a PCIe x16 slot to participate in lane reallocation
  • Add 4 M.2 slots on the board and link them to the above PCIe slot
  • Write the following UEFI code:
    • If none of the linked M.2 slots are populated, the linked PCIe slot runs at x16
    • If one of the linked M.2 slots is populated, the linked PCIe slot runs at x8 (AFAIK x12 is not a valid step for PCIe, so you lose 4 lanes here)
    • If two of the linked M.2 slots are populated, the linked PCIe slot runs at x8
    • If three of the linked M.2 slots are populated, the linked PCIe slot runs at x4
    • If all four of the linked M.2 slots are populated, the linked PCIe slot runs at x0 i.e. is disabled
That way, if you never use a particular M.2 slot, you don't lose that bandwidth from the corresponding PCIe slot.

You don't even have to link it to M.2 slots. You could, for example, say that (just like on Promontory 21 for AM5) four SATA ports take four PCIe lanes, and then your x16 slot is linked to 3 M.2 slots and 4 SATA ports. Or to 2 M.2 slots and 8 SATA ports. Or...
The joy of fast PCIe ... Up until 3.0, no switches were even necessary; two slots were simply wired in parallel. That's my assumption at least - I checked some info on Z490 boards (PCIe 3.0) and found the following for the MSI MEG Z490 Ace. The "Switch" here might be a real PLX one, but the other lanes just split without a switch, if the diagram is correct. 4.0 made everything costlier and less flexible.

[Attached image: MSI MEG Z490 Ace block diagram]


PCIe x12 was part of the original specification but no products ever existed. It's probably been abandoned by now, and even if it hasn't been, it will continue to not matter.
 
And that's what AMD wants you to spend on it.
NewEgg has a bundle with WRX80 + Threadripper 5955WX for only $2500. If you are actually making money with your PC and making good use of the slots this might not be a bad deal. In my case it's a bit more than double the TDP and overly expensive at such a low core count just to get more PCIe slots - although I am tempted.

Whenever I find myself looking at Threadripper and re-compare performance and wattage (at lower core counts), I'm reminded how awesome it is to even be able to get a good-performing 16-core CPU/motherboard combo for under $1000 on AM4/AM5.
 
It does, but really it's a niche and an extremely small one at that; even if you want 40+ PCIe lanes bifurcated nicely, I doubt you'd utilize all of them to the fullest. This is why there aren't any board makers doing that; on servers you probably have all these lanes being saturated from time to time.
 
That board really has a lot of flexibility built in, but those aren't PCIe packet switches, which would enable simultaneous communication with both ports. They are simply switched to one of the two positions on boot and stay there. The manual does talk about "bandwidth sharing" but in reality it's lane stealing.
I did not call them packet switches either. They are literal switches. They switch lanes from one connector to another. Which I thought I explained in the rest of that post. So no, they are not multiplexers. They give you the option to redirect lanes according to a strict A/B configuration. Which in this case is the choice between M.2 slots or other slots/features onboard.
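A toy model of the distinction being drawn here, purely illustrative: a boot-time lane switch routes the lanes to exactly one of two destinations and stays there, while a packet switch keeps both downstream devices connected and shares the upstream link between them:

```python
# Purely illustrative model of the two kinds of "switch" being discussed.
# A boot-time A/B lane switch routes each lane group to exactly one
# destination; a packet switch keeps all downstream devices connected and
# shares the upstream link's bandwidth between them at runtime.

def lane_switch(position: str) -> list[str]:
    """A/B lane switch: only one of the two destinations ever gets the lanes."""
    return ["M.2 slot x4"] if position == "A" else ["PCIe slot x4"]

def packet_switch() -> list[str]:
    """Packet switch: both destinations stay active, sharing upstream bandwidth."""
    return ["M.2 slot x4", "PCIe slot x4"]

print("Lane switch, set to A at boot:", lane_switch("A"))
print("Packet switch:               ", packet_switch())
```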
 
The biggest problem with Threadripper, apart from its price, is that it's always going to be at least a generation behind. So even if you buy WRX80 you only get PCIe 4.0, whereas AM5 will give you PCIe 5.0. And if I'm shelling out that much money I'd expect to be getting the latest and greatest for it...

Basically, HEDT is a scam.
 
NewEgg has a bundle with WRX80 + Threadripper 5955WX for only $2500. If you are actually making money with your PC and making good use of the slots this might not be a bad deal. In my case it's a bit more than double the TDP and overly expensive at such a low core count just to get more PCIe slots - although I am tempted.

Whenever I find myself looking at Threadripper and re-compare performance and wattage (at lower core counts), I'm reminded how awesome it is to even be able to get a good-performing 16-core CPU/motherboard combo for under $1000 on AM4/AM5.
That's a hard sell - you're basically just buying it for the PCIe lanes, as a 13700K, for example, will beat it comfortably in single-threaded work and trade blows with it in all-core usage for a much lower cost.
 
That's a hard sell - you're basically just buying it for the PCIe lanes, as a 13700K, for example, will beat it comfortably in single-threaded work and trade blows with it in all-core usage for a much lower cost.
Exactly. Unless of course you need the expansion and over 128GB of memory and over 16 cores - then you might really get your bang for the buck. If you only need one of these three, then you may not be getting as much as you're paying for while already entering server price territory. There was a similar combo with a 3955WX for about $1200, but that's still quite a premium for being 2 generations behind. This of course is just my opinion, and I admit in my heart I still want to get a Threadripper.
OP has left the chat lol

Darn. Actually I think I found the slot configuration he was looking for in the ASRock Master SLI/ac :roll:

[Attached image: ASRock Master SLI/ac slot layout]


However, the VRMs and OCP (or lack thereof) on that board really suck, as many here already know by now. I have noticed that on many boards with an open-ended x1 slot there are components in the way, but on this board it's like they actually thought about that and left some clearance for x4 cards to fit in the x1 slot. I still find myself browsing eBay from time to time to see if this board still exists.
 
Darn. Actually I think I found the slot configuration he was looking for in the ASRock Master SLI/ac
I think the one he was looking for is more like the one he listed in the OP, or like the Gigabyte Z390 D or UD (the first x1 slot sits above the first x16 slot, so you have empty space for a dual-slot GPU).
The only problem is that his ASRock B450 is a subpar overclocker, and the GB Z390 D/UD (at least from my experience) is utter garbage.
[Attached image: Gigabyte Z390 UD]
 
I think the one he was looking for is more like the one he listed in the OP, or like the Gigabyte Z390 D or UD (the first x1 slot sits above the first x16 slot, so you have empty space for a dual-slot GPU).
The only problem is that his ASRock B450 is a subpar overclocker, and the GB Z390 D/UD (at least from my experience) is utter garbage.
[Attached image: Gigabyte Z390 UD]
Oops, I missed that point (with the dual slot), but if you go water-cooled you get back down to a single-slot card.
 
Most AMD Threadripper boards only have 4 slots usable without a riser. A few have 5. I'm looking for 6. The price is a bit steep, but doable. Intel Sapphire Rapids costs more than my car.


- GPU (2 Slots)
- 10G NIC SFP+
- USB3 Controller (dedicated to mouse)
- Analog Capture card

That's just moving stuff from my current build. I know there's going to be yet another USB standard released at some point; that's another card. That's 6 slots used, since the GPU is 2 slots.

I feel like you want a riser cable or two.

A USB3 controller just for the mouse? That one seems odd: most USB mice update 100 times per second, and there are configuration options available to easily change that to 1000 times per second.

------------

Typical CPUs only have 20 or 24 PCIe lanes physically. IMO, it makes very little sense to make many, many PCIe connectors like you ask for unless you do some weird port bifurcation (turning an x4 slot into four x1 slots), which is what GPU miners do.
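For illustration, here are some common ways a link can be bifurcated; the exact options offered depend on the CPU and BIOS, so treat these splits as examples rather than a definitive list:

```python
# Illustrative only: common bifurcation options for splitting one CPU-attached
# link into smaller links. The exact choices offered depend on the CPU and BIOS.
BIFURCATION_OPTIONS = {
    "x16": [[16], [8, 8], [8, 4, 4], [4, 4, 4, 4]],
    "x4":  [[4], [1, 1, 1, 1]],   # the "mining board" style split mentioned above
}

def describe(link: str) -> None:
    for split in BIFURCATION_OPTIONS[link]:
        assert sum(split) <= int(link[1:])   # a split can never exceed the parent link
        print(f"{link} -> " + " + ".join(f"x{w}" for w in split))

describe("x16")
describe("x4")
```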

If you _REALLY_ need PCIe slots, then the Threadripper platform is for you.


[Attached image]


But honestly? I think you're wasting your money on such a platform. You're not "really" using those PCIe lanes to their full potential with this proposed setup you've got. This is a $1000+ motherboard for a reason; some people really do need a ton of PCIe lanes for what they do.

In your case, it'd be cheaper to just buy two computers. One for now, and then upgrade 5 years from now when a new USB standard comes out.

The biggest problem with Threadripper, apart from its price, is that it's always going to be at least a generation behind. So even if you buy WRX80 you only get PCIe 4.0, whereas AM5 will give you PCIe 5.0. And if I'm shelling out that much money I'd expect to be getting the latest and greatest for it...

Basically, HEDT is a scam.

I wouldn't quite call it a "scam", but it's a niche product unsuitable for the vast majority of computer users.

If you're going to get 4 GPUs and 16 NVMe SSDs in RAID 0, you'll need something like the Threadripper platform. Nothing else will work. But this is a niche within a niche; very few users would ever need such a beast. I've also seen talks on Netflix servers with NVMe SSDs plus multiple ganged SFP+ 10-gigabit NICs that would need this level of PCIe lanes to function at max speed. But these are very atypical uses of computers.
 
I wouldn't quite call it a "scam", but it's a niche product unsuitable for the vast majority of computer users.
I was more referring to the artificial market segmentation of "mainstream" and "HEDT", whereby getting a sane number of PCIe slots with a sane number of PCIe lanes available should be normal on the former, yet is only possible on the latter with all its other, mostly irrelevant bells and whistles.

In the Athlon64 days, before the scam of HEDT was dreamed up by the marketing assholes at Intel, the high-end (but very much mainstream) boards had twin PCIe x16 slots (mechanical and electrical) with some PCIe lanes left over to boot - all for barely more than a hundred quid. Inflation may be a thing but manufacturers are massively and artificially contributing to it by giving us less and charging more for it.

Fuck Intel for their bullshit, fuck AMD for collaborating with it, and fuck the motherboard manufacturers for enabling both.
 
I was more referring to the artificial market segmentation of "mainstream" and "HEDT", whereby getting a sane number of PCIe slots with a sane number of PCIe lanes available should be normal on the former, yet is only possible on the latter with all its other, mostly irrelevant bells and whistles.

In the Athlon64 days, before the scam of HEDT was dreamed up by the marketing assholes at Intel, the high-end (but very much mainstream) boards had twin PCIe x16 slots (mechanical and electrical) with some PCIe lanes left over to boot - all for barely more than a hundred quid. Inflation may be a thing but manufacturers are massively and artificially contributing to it by giving us less and charging more for it.

Fuck Intel for their bullshit, fuck AMD for collaborating with it, and fuck the motherboard manufacturers for enabling both.

Hmmm... I dunno. PCIe lanes, especially 4.0 and 5.0, are getting very expensive. The tolerances required to consistently pipe 32 GT/s per lane (64 GB/s for an x16 PCIe 5.0 link) are above and beyond RAM throughput.

So if you look at a typical 28-lane chip like the AMD 7950X3D, that's 112 GB/s of I/O throughput. In contrast, your DDR5-5600 RAM is only giving you 44.8 GB/s. If you run dual channel (like you should), that's 89.6 GB/s to RAM, but 112 GB/s to I/O.

If you literally have your CPU do nothing but pass I/O to and from RAM, your computer is RAM-bottlenecked (!!!!). You have more I/O bandwidth than RAM bandwidth on today's computers.
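A back-of-the-envelope check of those numbers, using the usual approximations (roughly 4 GB/s per PCIe 5.0 lane per direction and 8 bytes per transfer per DDR5 channel):

```python
# Back-of-the-envelope check of the numbers above, using the usual
# approximations: ~4 GB/s per PCIe 5.0 lane per direction (32 GT/s with
# 128b/130b encoding), and 8 bytes per transfer per DDR5 channel.
pcie5_gb_per_s_per_lane = 4.0
cpu_lanes = 28                                      # as quoted above for the 7950X3D
io_bandwidth = cpu_lanes * pcie5_gb_per_s_per_lane  # ~112 GB/s

ddr5_mt_per_s = 5600
ram_per_channel = ddr5_mt_per_s * 8 / 1000          # 44.8 GB/s
ram_dual_channel = ram_per_channel * 2              # 89.6 GB/s

print(f"I/O:  {io_bandwidth:.1f} GB/s")
print(f"RAM:  {ram_per_channel:.1f} GB/s single channel, "
      f"{ram_dual_channel:.1f} GB/s dual channel")
```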

----------

Who actually needs all this bandwidth? IMO, nobody. Even standard consumer platforms are overly thick with I/O bandwidth to absurd levels. The only way to effectively use all this I/O bandwidth is through technologies like DirectX DirectStorage (GPU to NVMe, bypassing the RAM bottleneck).
 
Hmmm... I dunno. PCIe lanes, especially 4.0 and 5.0, are getting very expensive. The tolerances required to consistently pipe 32 GT/s per lane (64 GB/s for an x16 PCIe 5.0 link) are above and beyond RAM throughput.

So if you look at a typical 28-lane chip like the AMD 7950X3D, that's 112 GB/s of I/O throughput. In contrast, your DDR5-5600 RAM is only giving you 44.8 GB/s. If you run dual channel (like you should), that's 89.6 GB/s to RAM, but 112 GB/s to I/O.

If you literally have your CPU do nothing but pass I/O to and from RAM, your computer is RAM-bottlenecked (!!!!). You have more I/O bandwidth than RAM bandwidth on today's computers.

----------

Who actually needs all this bandwidth? IMO, nobody. Even standard consumer platforms are overly thick with I/O bandwidth to absurd levels. The only way to effectively use all this I/O bandwidth is through technologies like DirectX DirectStorage (GPU to NVMe, bypassing the RAM bottleneck).
Which is why PCIe 5.0 makes no sense in the desktop space. Instead of giving us 28 lanes of PCIe 5.0, give us 40 lanes of PCIe 4.0.
 
In short: buy older HEDT or server stuff if you need A LOT of PCIe lanes, or get used to whatever the manufacturer integrates onto the PCB.

Longer version:
In the Athlon64 days, before the scam of HEDT was dreamed up by the marketing assholes at Intel, the high-end (but very much mainstream) boards had twin PCIe x16 slots (mechanical and electrical) with some PCIe lanes left over to boot - all for barely more than a hundred quid. Inflation may be a thing but manufacturers are massively and artificially contributing to it by giving us less and charging more for it.
Not sure why so salty on Intel here...
PCIe lanes, up until 2009/2013 (Intel/AMD), were basically a purely chipset thing.
You had a separate die dedicated to PCIe, which meant you could have made "HEDT" on the cheap with simply a somewhat better chipset and the same socket/CPU/RAM.

Then LGA 1156 came along, with PCIe integrated into the CPU.
Intel made LGA 1366 X58 as HEDT (but that platform still has its PCIe inside the X58 chipset, not in the CPU).
X79 was the first Intel HEDT platform with actual PCIe lanes from the CPU - 40 of them.
The result was LGA 2011 - 2011 pins were needed to do that (and that's PCIe 3.0).
So, how much of a difference in cost is there between LGA 115x and 20xx?
I don't know, but it's big enough for Intel and the motherboard guys to ask for more.

Consumers are meant to pay more for less in a capitalist world (because otherwise there is a hard limit on how much money someone can earn).
If enough people don't care about the above rule, it becomes the norm that everyone else has to live by.

In the end, you may not like it, BUT that ship sailed long ago (and unless consumers stop buying boards with few PCIe slots, this isn't going to change).
 