
Few PCIe lanes on home motherboards for how long?

You then ignored the large number of cards in the similar-products section that don't require it.

No offense, but I think you misunderstood my post.

The point was to explain how a mainboard works.

There are not only mechanical limitations. There are also constraints in the mainboard firmware, and constraints in the operating system - mainly the kernel. In the Windows world you would most likely call these Windows drivers.

I would not want to give the impression that every plug-in card will always work on every mainboard.

e.g. ASUS 90MC0CE0-M0EAY0


Just because my ASUS X670-P mainboard has the right connectors does not mean a card will work. The 3014 UEFI version was only released "just" recently.
An MSI or Gigabyte plug-in card will not work, although they have similar connectors for getting USB 4 working.
The mainboard manual does not mention that! I think the connector is called a Thunderbolt or USB 4 header. When I bought that mainboard in May 2023, I also assumed: nice, I can plug in any available future card. That is not the case.



All that nonsense repeats for other ASUS mainboards and other plug-in cards with fancy names like "Thunderbolt 3" / "Thunderbolt 4" / "USB 4". These extra fancy plug-in cards were usually around 100€ according to the geizhals.at price search, last time I checked. (As I have an ASUS mainboard, I mostly check ASUS plug-in cards, or whether those work with my existing hardware.)


--

I have the limitation of throughput (some call it bandwidth) from mainboard chip one, to mainboard chip two, to the processor on my X670 mainboard. So regardless of what's connected to those two peripheral chips, as far as I know they all share 4 lanes of PCIe 4.0 bandwidth in total to the processor.

USB - I read a book about USB 1.1 years ago - is also a shared medium most of the time. You sometimes share the bandwidth with other USB peripherals.
See for example
Code:
Sienna_Cichlid /home/roman # lsusb -t
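The shared-uplink point can be put into napkin math. This is a hypothetical illustration, not a measurement: it assumes 16 GT/s per Gen4 lane with 128b/130b encoding and a naive even split between simultaneously busy devices.

```python
# Rough per-device bandwidth when peripherals share the chipset's
# PCIe 4.0 x4 uplink to the CPU (the X670 situation described above).
# Assumption: 16 GT/s per lane, 128b/130b encoding -> ~1.97 GB/s usable per lane.

GBPS_PER_GEN4_LANE = 16e9 * (128 / 130) / 8 / 1e9  # ~1.969 GB/s per lane

def shared_bandwidth(uplink_lanes: int, active_devices: int) -> float:
    """Naive even split of the uplink across devices that are busy at once."""
    total = uplink_lanes * GBPS_PER_GEN4_LANE
    return total / max(active_devices, 1)

uplink_total = 4 * GBPS_PER_GEN4_LANE   # ~7.9 GB/s for everything behind the chipsets
per_device = shared_bandwidth(4, 3)     # three devices hammering the uplink at once
```

Real chipsets arbitrate more cleverly than an even split, but the ceiling is the same: no matter how many devices hang off the two chipset hops, they cannot exceed the 4-lane uplink together.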

--

I do not want to write about bifurcation now. I would most likely have to look it up or find a link for it, though I have a rough idea of what it's about.

Any questions? Do you see why I wrote "no" earlier?

There are *pauses and counts* 6 NVMe drives in my X570 build.


May I ask something?

Which operating system do you use? Do you need any extra software to get it to work?




Why did you not go with a USB bridge case? Why not a NAS?
It's obvious you are also sharing the bandwidth.

If you need more lanes, then you have an option. For AMD it's called Threadripper, or Threadripper Pro.

Or just USB.
 

May I ask something?

Which operating system do you use? Do you need any extra software to get it to work?
Ran with both Windows 10 x64 IoT Enterprise LTSC 2021 and Windows 11 Enterprise LTSC. No extra software, driver, or even UEFI/BIOS* config required.

It's *just* a (PLX brand, IIRC) PCIe switch, physically configured to interface with 4x M.2 M-Key slots.
UEFI and Windows 'see' the connected PCIe Devices/NVMe drives the same as they would connected through the chipset's or SoC's lanes.
*Yes, switched NVMe expanders work on 'legacy' systems. A bud of mine has 2x 960GB PM963s running on an ASMedia Gen2 switch card in a Socket 939 machine (non-boot).
I've also used cheap-arse 'mining surplus' ASMedia Gen2 x1 switches to add in over a dozen 16GB Optane M10s (as experiments in both Ivy Bridge and Haswell builds).

PCIe Switches are really fun, ngl. :laugh:

Why did you not go with a USB bridge case? Why not a NAS?
Sure, both would work fine. I didn't want the caveats.

USB<->NVMe bridges add extra translation, processing, and latency. My P41+s (and other DRAMless NVMes) cannot use HMB in a bridged enclosure.
While not my goal or config, 'removable disks' do not play nicely w/ 'Windows RAID', either. Internally over PCIe, it's easy to 'software array' drives.
My QM2-4P-384 expander was less than $100 used (disclosed bad fan). USB 3.2+ NVMe-USB bridging adapters are typ. more than $25/ea., and I do not have more than 1-3 USB 3.x >10Gbps ports.
Also, only recently did 'cheapie' NVMe bridges become decently reliable (I've had 2 'early' bridges die, out of the blue).

I don't have a NAS.
It was more affordable (and more desirable) to piecemeal the PCIe switched NVMe expander card and affordable matching NVMe drives (That, I'd already tested/trusted), over a couple-few months.

Even with 10GbE LAN, Optane/NVMe cached HDDarrays, etc. there's more latency 'off a NAS and over LAN' than a PCIe-local NVMe.
Sustained bandwidth could be higher w/ a great NAS and LAN, though. I have considered putting my 4xP41+s and their Gen3x8 NVMe expander into my prospective housemate's NAS, instead.
IoW, "It's not a bad idea" -even with the qualitative differences vs. Local PCIe NVMe drives.


It's obvious you are also sharing the bandwidth.
Each drive gets Gen3x4, the same as the switch uplink's (currently configured) bandwidth.

The drives are not (currently) in any kind of RAID; they are the E:, F:, G:, and H: drives.
The 'supported uplink' of the QM2-4P-384 is Gen3x8, and my X570's chipset-connected Gen4x4 slot runs it @ Gen3x4.

Aside from sustained and heavy E-F-G-H inter-drive transfers, I don't really see any loss in bandwidth/performance.
Even though the P41+ is a Gen4 drive, it performs 'well' in a Gen3 slot (being amongst the better 'low-end' Gen4 QLC DRAMless NVMes).
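For the curious, the oversubscription in this setup can be sketched numerically. Napkin math only, assuming ~0.985 GB/s usable per Gen3 lane (8 GT/s, 128b/130b encoding):

```python
# Oversubscription of a PCIe switch: four Gen3 x4 NVMe drives behind a
# Gen3 x4 uplink (the QM2-4P-384 running in the X570's chipset slot).
GEN3_LANE = 8e9 * (128 / 130) / 8 / 1e9   # ~0.985 GB/s usable per lane

drives = 4
per_drive_link = 4 * GEN3_LANE            # each drive negotiates Gen3 x4
uplink = 4 * GEN3_LANE                    # switch-to-host link, also Gen3 x4

# 4.0: all four drives together could ask for 4x what the uplink carries,
# which only matters during simultaneous heavy transfers.
oversubscription = (drives * per_drive_link) / uplink

# Worst case, all drives saturated at once: each still gets ~1 GB/s.
worst_case_per_drive = uplink / drives
```

Which matches the observation above: a single drive at a time sees full Gen3 x4 speed, and only sustained multi-drive transfers hit the uplink ceiling.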


TL;DR and answer to the inevitable "But, why?":
-ePeen of an all X3D+Optane+NVMe build
and
-practical cost+usecase considerations.
 
No.

You need a special mainboard with special, extra-cost software.

Quote from your Amazon suggestion.


I really dislike those expansion cards which rely on special mainboard features.

Call it whatever you want:

Fake RAID / software RAID / PCIe bifurcation / ...

--

At the end of the day: one slot, one expansion card, no special expansion card.

Special cards are e.g. those graphics cards with an add-on M.2 NVMe slot -> they need a UEFI extension.
My ASUS mainboard has an ASUS USB 4 expansion card -> it needs a UEFI extension -> the same applies to MSI and Gigabyte.

Expansion cards should not rely on special UEFI software for booting, or even just for usage.
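For reference, the difference is that bifurcation just splits a CPU-attached slot's lanes into fixed sub-links that the UEFI has to know about, while a switch card presents itself as one normal device. A rough sketch of the idea; the split table below is a typical example, not any specific board's actual firmware menu:

```python
# Bifurcation splits one slot into fixed sub-links, e.g. x16 -> x4/x4/x4/x4.
# A "dumb" 4x M.2 card only works if the UEFI exposes the matching split;
# a PCIe switch card needs none of this, which is the whole complaint above.
# These split tables are illustrative placeholders, not a real board's options.
VALID_SPLITS = {
    16: [(16,), (8, 8), (8, 4, 4), (4, 4, 4, 4)],
    8:  [(8,), (4, 4)],
}

def can_bifurcate(slot_lanes: int, wanted: tuple) -> bool:
    """Check a requested split against the splits a board might offer."""
    return wanted in VALID_SPLITS.get(slot_lanes, [])

can_bifurcate(16, (4, 4, 4, 4))   # True only on boards whose UEFI offers it
```

If the firmware never exposes the split, three of the four M.2 slots on a passive card simply never enumerate, no matter what the card's connectors promise.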

Indeed. I love posts that do not apply cost to the equation. Is there a TRX50 CPU that you can buy for $200 Canadian? Do you think after adopting X399 I would not have stayed there if prices were not obscene? That is the issue that some of the community have. You cannot give the MB vendors a pass when there is so much variation that you need to do real research to get what you want. So on AM4 it was the X370 Crosshair. Most X470 boards were great for PCIe flexibility, and some X570S boards like the Ace Max are great. There was a point about 1.5 years ago where NVMe storage was cheap and filling out your board felt like 2012 with inexpensive drives.

Wow you make it seem like if PC is hobby you must be an idiot for demanding more for your money.

Ok, so what about the boards with six SATA ports and two M.2 slots, where if you have two M.2 drives, two SATA ports will drop? Do keep in mind you can get M.2 drives very cheap these days, or you can just have them lying around. I just learned this was a thing a few months ago, and I too am of the age where I have dealt with IRQ settings. "Oh cool, I can get this cheapo board, just plop in this M.2 drive I scavenged from that dead laptop, and run both optical drives as well as my hard drives!" is literally what I thought when I went looking for a new AM4 motherboard, just a few months ago. I don't even remember how I learned I couldn't do that, but I was mightily pissed off when it turned out that I couldn't, and it was just luck that I found out before I bought the motherboard. Unless you think "lazy" means "slavishly following PC component development", I can absolutely see how people learn the hard way that the modern day sucks.

And I can't get over how you called plug and play nonsense.


The thing is, not too long ago I could use all of the slots on my motherboard at the same time. If I see that I have six SATA ports, two M.2 ports and three PCIe slots, I'm going to assume that is what I can use, because until just a few years ago that was the case. Sure, I get why I can't do that anymore (as you said, cost), but it's still annoying. And this is where we lament the lack of decently priced "professional hardware" (what we used to call HEDT). You only get Threadripper, and a modern Threadripper is what, €1.5k for just the CPU? I understand all the business side of things, but see this as a space for us to vent. You can go do whatever you do (playing with old hardware, which... can be fun, honestly) and leave us in peace to long for a time when I could look at a motherboard and, based on that alone, tell how many drives I could connect to it.

And again ... you called plug and play nonsense. I can't get over it.

This lilhasselhoffer is on my ignore list. Don't waste your time with him.

Let me take this all in one go...because this goes toward a single point that the OP made so elegantly. "I don't want to talk to you, so I set you on ignore." I therefore don't have to ever justify what I want...and I can be entitled and act like things should be what I say....period.


The reason you have PCIe 5.0 is not because you need that interconnect speed, but because you might in a few years. Cool. Right now most cards use maybe a quarter of that bandwidth, and running at x8 is a problem for only the best cards. Despite this, our hardware uses 5.0 because insanely expensive drives can come close to saturation, so you are looking at one real use case for the extra bandwidth. The thing is, as was pointed out in a very long explanation, you could install a splitter chip and get boards with 2x the PCIe lanes. You don't see that because...

Let me just bullet point the because:
1) Most people only really need a GPU.
2) Most people who need more than a GPU need a lot more...and thus need a professional platform.
3) The people in group 1 want the most performance for their money...so less interconnect and more performance is a better deal.
4) The people in group 2 want the most performance...so they can have less speed for greater core quantities and more interconnect.

You are inevitably asking for a niche of a niche group to be served, and "showing your ass" as it's referred to here when somebody calls you out. Cool. You don't understand why the compromise was made, but you want to moan about it. You also don't want to purchase an even more expensive board using expensive specialty silicon, because the market doesn't tolerate a $600 consumer board. Whine all you want, but the reason what you want doesn't exist is because it's not a saleable item in enough volume at a rational cost to come to market...and companies that do this sort of thing either sell their stuff for insane prices or go belly-up.

Regarding your comments @Frick... please read. The nonsense was me commenting on your claims about what I said. It's nonsense that I hate plug-and-play. It's nonsense that anybody would look back on those days and want all of the crappy busy work. It's nonsense that you'd want to foist that opinion on me, when I never shared it. To clarify, I am happy that 99% of the work is gone, and I'm depressed that people moan about that last 1% of things that are still required. I hated setting up Windows XP x64. It was a slog through getting drivers working together, and it was still well past the days of provisioning. It's silly when people claim that their memory running at 6400 speeds doesn't work, when the motherboard is only rated for 5800. You have to read that... so why not just take a quick peek at the storage configuration? It's one of the last things left in the PC world that requires a brain. All the rest is so easy... but that 1% is what everyone here is really moaning about...
Let me not stop there. Allow me to show when I was an idiot. I bought into Sandybridge's enthusiast platform. It had initially been projected to be all sorts of SATAIII awesome, and I bought in based on that. What I got was a lot of SATAII...and I had to read the manual to figure out where to plug my shiny new SSDs into the board. If I plugged them in wrong it'd be a slog at way lower speeds. I didn't bemoan the problem...because a $300 board in 2011 was a large expense. It was silly when great boards and CPUs could be bought for under the $600 price tag of just the Sandybridge CPU....but I paid more to be able to run 64 GB of RAM in 8 slots. I paid more...because I perceived that I needed more. I was wrong. DVD image processing rapidly switched to iGPU acceleration, and I'd have been better buying two rigs over 10 years rather than one. Again, way less PCI-e lanes would be fine for way less dollars...


Wrapping up with the last of this, let me make my message clear. I believe the OP is entitled, and acting like a whiney child. I base this on their last few grasps at claiming things "should be illegal." Cool. The law shouldn't be used to make business conform to wishes, but to rectify issues. This is why we have the law, and the court of public opinion. MS shouldn't be forced to support Windows 10 forever... but their customers hammering them for being displeased should be tolerated. That's why I'm tired of stuff like this, whining about not wanting to spend the money on a professional piece of hardware, while complaining that the consumer stuff isn't good enough. The consumer-level hardware is meant to be good enough... and if it isn't, you can literally go buy an expansion card for almost anything, at as exorbitant a price as you want to pay. That isn't good enough for some people... who instead of simply buying a motherboard and voting on features with their wallet decide to bemoan the fact that things are too expensive. Yeah... it sucks. Get a helmet, or retire from living life in the way you are now. (Note that's not the edge-lord "kill yourself," but intended as a "go change what it is you do, and make yourself happy" sentiment.)
 
1) Most people only really need a GPU.
2) Most people who need more than a GPU need a lot more...and thus need a professional platform.
3) The people in group 1 want the most performance for their money...so less interconnect and more performance is a better deal.
4) The people in group 2 want the most performance...so they can have less speed for greater core quantities and more interconnect.
Disingenuous and you know it.
The bare minimum has tacticool levels of interconnect at a middle price because we are in a market where pick-and-place components are mass produced and sold at discount for convenient mass manufacturing just to get product made and moved out. That's the template for unfinished product, not final sale. The feature push for this and that is on the production side and it's rarely everything the customer needs. It's basic production and ubiquity: "A chicken in every pot, an XYZ for every home" sort of deal. When price discovery isn't completely on its head, we see loadouts that make sense but space remains a high $$$ premium and features are just whatever because people are compelled to buy whatever they can afford. Most people need the bare minimum after the socket:

Dual channel memory support
Full size PCI-E x16 slot for some kind of accelerator, usually GPU
2 M.2 for compact footprint asymmetric JBOD or fast af RAID
2 sata for inexpensive storage or data migration
USB 20-pin for case headers or DOM
USB 9-pin because we all need more USB
Open PCI-E x1 or x4 slot for whatever prolongs the life of the machine (READ: Cutting eWaste)
Attractive design

That's literally small form factor and very recently some of embedded.
When people need more they opt for a better board or older machine.
We all know how to arrange that but it doesn't mean it's convenient.
Normies favor convenience over performance, which is how and why they get stuck with junk.
Enthusiasts are after something way more capable and the pros want it all. They get it too.
Fatter wallets = fatter boards. More PCI-E, assloads of cores, memory, USB, etc.
Many of us are not looking to slot in 8x 4090 cards or whatever you think defines a workstation.

At the non-prosumer level we don't have extreme numbers in mind, we just want a little more than what we have now and less headache.
 
I don't like spending the money it takes to get nice equipment too...

Doesn't change much.
 