
Few PCIe lanes on home motherboards for how long?

The thing is, the 2nd PCIe slot is no longer supported for SLI or Crossfire, and that problem has been around for years. When X399 launched you could get a 1900X for $200. That was 64 lanes of PCIe. Some of us (myself included) went all out on those. You could buy a 4x M.2 PCIe card (Asus called theirs the Hyper M.2) for $50. I still have 2 from my X399 days. Then there is the other thing: PLX chips are no longer used. Those allowed boards like the 990FX Sabertooth to have some serious PCIe lanes. Fast forward to Ryzen, and there were boards that made up for it. What I mean is that 2x x16 was no longer viable, so some boards gave you an adapter card for 2 M.2s in the 2nd PCIe slot. There is an illusion that x8 instead of x16 will slow down the GPU, but that is patently false, as it has been proven that 8 lanes are the equivalent of 16 lanes from the previous generation. That means that when you see 99% GPU usage, it is fine.
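
If you want to sanity-check that x8-equals-last-gen-x16 claim, here's a quick back-of-the-envelope sketch in Python; the per-lane figures are the usual approximations after encoding overhead, not spec-exact numbers:

```python
# Rough effective bandwidth per PCIe lane by generation, in GB/s
# (approximate figures after encoding overhead).
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Approximate one-direction bandwidth of a PCIe link in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

print(f"Gen3 x16: {link_bandwidth(3, 16):.1f} GB/s")  # ~15.8
print(f"Gen4 x8:  {link_bandwidth(4, 8):.1f} GB/s")   # ~15.8 -> same pipe
print(f"Gen4 x16: {link_bandwidth(4, 16):.1f} GB/s")  # ~31.5
print(f"Gen5 x8:  {link_bandwidth(5, 8):.1f} GB/s")   # ~31.5 -> same pipe
```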

There is also the fact that the industry as a whole is reactive. Thunderbolt was introduced by Intel, and then AMD responded with USB 4. My issue with that is that it's not something only needed on the high end; it should be budgeted for even on a cheap MB. Is there anything that justifies the X870E Godlike having 7 USB-C rear ports? Then we have the X870E Carbon, which looks exactly like the X670E, but no: they made it seem like both PCIe x16 slots were fully supported, as the shielding is the same, but the 2nd slot only runs at x4. Now what we have is only 3 boards from the entire X870E lineup that support x8/x8: the Taichi, one of the highest-end Asus boards, and the X870E Godlike. Of course the Godlike is the only one that gives you an adapter card. I too was fooled by Asus. Instead of the X670E Carbon I got the X670E Strix, but the joke was on me, as the 2nd slot is wired at x4. I miss the days of X570 and X570S boards like the Unify and Ace Max. Those boards even had the 3rd slot wired at x8. No lane splitting, but you could do a lot with that 3rd slot, including a GPU; I mean, you have 8 electrical lanes. Too bad that is reserved for uber-expensive platforms now.
 
To be fair, anyone could think that if a board has 4 M.2 slots, all 4 are usable under any condition.

On the other hand, you might not even need to read the manual, as the specs page would tell you about that as well.
Unfortunately, wording such as "shares bandwidth with" is all too easy to misunderstand ... actually, it's outright wrong when one of the two connected devices is simply always disabled.
 
The reason you shouldn't worry about it is that it's been tested. Even with an RTX 4090, the difference between x16 and x8 PCIe 4.0 is only 2%: 143 fps vs. 140 fps at 4K.

 
In my ASRock X299 Taichi XE, all M.2 slots are used (3x4 lanes) and I can still use 2 slots at x16 or 3 at x8 for video cards. That is 44 lanes (+ x4 from the chipset?). But that is an old HEDT Skylake platform.
 
I think it's called lane sharing.

I think some AMD processors have various multi-purpose input/output interfaces which can each be used for only one feature at a time.

I think somewhere else someone wrote that you can buy an adapter which converts an M.2 PCIe slot into a PCIe card slot.

edit: I'm kinda surprised that my M-AUDIO Air 192|6 audio interface works well over USB 2.0. That's another word for an external sound card with balanced inputs/outputs.
Some of these peripherals should just hang off USB / docking stations / keyboards / monitors ...

Reminds me of the other topic where someone asked whether anyone had used an add-on card in their computer recently.
 
Then there's typically an errant, useless X1 slot pinned the furthest away. Why?
What purpose does this have when the main slots are full?
We need to get rid of that and start borrowing from older and SFF designs, where an X1 slot is shared with the top M.2 or something.

Sound cards, SATA controllers, RS232, whatever. The PCIe x1 is very useful. In this scenario we'd end up with people asking why they can't use both anyway.
 
Standard response: more interconnection means more money.

Realistically, they push for higher and higher PCI-e standards, which puts the same bandwidth onto fewer and fewer lanes. Realistically, you can go out and buy a PCI-e card to install multiple M.2 drives into: Amazon product link. You buy that, and suddenly you've got 1 on the board and 4 on a card...so where's the issue?


Regarding all of this other nonsense about plug and play...you do have to have a brain to build a PC. This is why outfits like Dell exist building white-box systems...because that half an ounce of thought applied to an 800-pound gorilla of a PC is something that people pay a premium for other people to do. I...don't support the lazy ignorance required to demand that everything should "just work."

Now that I've gotten down off my soapbox...I'd like to ask what it is, exactly, that you expect. We used to have a bunch of expansion slots because everything required a card. Sound, video, NIC, etc... All of that then started to get baked onto the boards themselves, because 99% of customers could be served well enough by onboard components, and NICs became required. Now we've got all of the features which use communication lanes baked in, and you're crying about how you don't want compromises on connectivity...because if it exists it should be able to run at 100% without any thought. I...don't understand why you believe that you're entitled to that, why you don't just buy an enthusiast-grade platform if you genuinely need the connectivity, and most frustratingly why you believe that anyone with two brain cells even half functional should be able to spend hundreds of dollars without thought and have things just work.


Old man moment, but I remember when you had to provision for any installed card, instead of just slotting it in while the computer is off and having it auto-install drivers. I remember installations taking hours while your floppy disk drive screeched and tracked to try and find drivers that would work for your unique setup. I remember spending a boatload of money (for the time) on a connector card that required I set baud rates because it wouldn't ever "just work." Now people are angry when they install 4 M.2 drives and have a slot drop from Gen 5 x16 to Gen 5 x8...which is still not saturating the interface unless you've got some silly high-end hardware...connected to a consumer-grade CPU and motherboard. It's...baffling that some people can complain about nothing and expect that they'll change something in this world. It is especially concerning when less than $100 will get you more slots without any issues...and is often much cheaper than the difference between high-end and mid-range motherboards.
 
Not many, but these are meant to be top-of-the-line boards, and they only support 2-3 before dropping to x8 on the main slot. I use 3 M.2s at the moment, but I used to use 4.

Got 4 M.2s, all x4, plus 2 SATA drives, and my video card is still running at x16. Gotta remember that the 'E' adds more lanes, and it depends on the CPU and how the board is set up. However, I believe if I put anything in the other PCIe slot it might drop then, but that's not going to happen for me any time soon. The 'E' makes all the difference in what you can do.
 
I have run 5x M.2 with my GPU at x8 on X570 with no problem :confused:

Edit:

Whoops, I meant 4... no space for 5.
 
Sound cards, SATA controllers, RS232, whatever. The PCIe x1 is very useful. In this scenario we'd end up with people asking why they can't use both anyway.
Not useful when you can't use it, which is the point. I get that some people may opt for the x1 so as not to "starve" their GPU of airflow, but too often I end up using some HHHL x8 card that doesn't cause or distribute any problem. The issue is this idea that the thing in slot A commits to full speed and power while the thing in slot C shuts off communication to whatever is in slot B.

I see a full-size board with M.2, X16, blank, X1, X4, M.2, X1 and think to populate:
M.2_1 with storage, X16 with GPU, X4 with network card, M.2 with storage, X1 with a capture card.
Can't do it. Doesn't matter how many lanes I juggle between the GPU and the network card. 24 lanes is exactly that, and they're allocated accordingly.
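
To make the arithmetic explicit, here's a minimal sketch (slot widths taken from the layout above; the 24-lane budget is assumed as the typical consumer figure):

```python
# Tally the slot layout above against a typical 24-lane consumer CPU budget
# (illustrative; in reality some slots hang off the chipset uplink, which
# means sharing, not extra lanes).
slots = {"M.2_1": 4, "X16": 16, "X1_a": 1, "X4": 4, "M.2_2": 4, "X1_b": 1}
budget = 24

wanted = sum(slots.values())
print(f"Lanes wanted at full width: {wanted}")  # 30
print(f"Lanes in the budget: {budget}")         # 24
print(f"Shortfall: {wanted - budget} -> something shares or shuts off")
```
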
Standard response: more interconnection means more money.
Boils down to this really. Wanting to commit to ONE machine for doing all the heavy lifting immediately means dropping $$$$ on some shit board that nobody can afford or support because nothing modern is considered by the vendor. Gain connectivity, lose every feature along the way. Pass.
Now that I've gotten down off my soapbox...I'd like to ask what it is, exactly, that you expect. We used to have a bunch of expansion slots because everything required a card. Sound, video, NIC, etc... All of that then started to get baked onto the boards themselves, because 99% of customers could be served well enough by onboard components, and NICs became required. Now we've got all of the features which use communication lanes baked in, and you're crying about how you don't want compromises on connectivity...because if it exists it should be able to run at 100% without any thought. I...don't understand why you believe that you're entitled to that, why you don't just buy an enthusiast-grade platform if you genuinely need the connectivity, and most frustratingly why you believe that anyone with two brain cells even half functional should be able to spend hundreds of dollars without thought and have things just work.
I don't need onboard video, onboard ethernet or onboard wifi.
At Zen 3 launch there was at least a choice to grab an entry CPU without iGPU, so I did that.
I don't use onboard audio or TosLink since I'm not a desktopper and would default to some kind of external audio interface if needed.
GPUs started shipping with their own onboard audio interface a long time ago and most gamers use that anyway.
The network card I plug into the next full-size slot is a consequence of not having 10GbE SFP on the board, which I'm not going to get from any AM4 board at any price.
That's the bare minimum for setting up my workflow, and it already maxed out all the PCI-E lanes and bandwidth options.
I would like to add a dedicated X1 capture card with its own features but at this point it goes right into another computer.
This means hauling around a big steel coffin of old hardware (Read: LIABILITY) when I need to take it anywhere. Lame.

These guys get upset over TONS of super fast storage not getting detected. I'm just trying to make it to Friday on a single M.2 and iSCSI.
 
Not useful when you can't use it, which is the point. I get that some people may opt for the x1 so as not to "starve" their GPU of airflow, but too often I end up using some HHHL x8 card that doesn't cause or distribute any problem. The issue is this idea that the thing in slot A commits to full speed and power while the thing in slot C shuts off communication to whatever is in slot B.

I see a full-size board with M.2, X16, blank, X1, X4, M.2, X1 and think to populate:
M.2_1 with storage, X16 with GPU, X4 with network card, M.2 with storage, X1 with a capture card.
Can't do it. Doesn't matter how many lanes I juggle between the GPU and the network card. 24 lanes is exactly that, and they're allocated accordingly.

I might have misunderstood you; didn't you want boards that disable the X1 slot if you use the second M.2 slot? Plus I think literally any X670E motherboard can do what you list without dropping anything. ASUS has 6 SATA ports instead of 4, so if you fully populate the M.2s you'll lose two SATA ports. MSI (who makes the cheapest of those boards) drops the PCIe X4 slot to X2 if you populate the M2_4 slot, but your example only had two M.2 drives, so you're fine.

But yeah, having to read the specification sheets instead of just looking at the board and knowing what you can use is shit, but at least with higher-end boards (the ones with dual chipsets) you can usually use everything you see at the same time.
Regarding all of this other nonsense about plug and play...you do have to have a brain to build a PC. This is why outfits like Dell exist building white-box systems...because that half an ounce of thought applied to an 800-pound gorilla of a PC is something that people pay a premium for other people to do. I...don't support the lazy ignorance required to demand that everything should "just work."

Plug and play is nonsense? Do you seriously long for the days when you had to assign IRQs? Do you want to spend hours instead of minutes installing hardware? Am I lazy if I want a USB drive to just work? Am I lazy if I think that if I buy a motherboard with six SATA ports and two M.2 NVMe slots, I should be able to populate all of them at the same time?
Old man moment, but I remember when you had to provision for any installed card, instead of just slotting it in while the computer is off and having it auto-install drivers. I remember installations taking hours while your floppy disk drive screeched and tracked to try and find drivers that would work for your unique setup. I remember spending a boatload of money (for the time) on a connector card that required I set baud rates because it wouldn't ever "just work." Now people are angry when they install 4 M.2 drives and have a slot drop from Gen 5 x16 to Gen 5 x8...which is still not saturating the interface unless you've got some silly high-end hardware...connected to a consumer-grade CPU and motherboard. It's...baffling that some people can complain about nothing and expect that they'll change something in this world. It is especially concerning when less than $100 will get you more slots without any issues...and is often much cheaper than the difference between high-end and mid-range motherboards.

Oh you do? I've encountered some weirdos online but that's just straight up perverted.
 
Standard response: more interconnection means more money.

Realistically, they push for higher and higher PCI-e standards, which puts the same bandwidth onto fewer and fewer lanes. Realistically, you can go out and buy a PCI-e card to install multiple M.2 drives into: Amazon product link. You buy that, and suddenly you've got 1 on the board and 4 on a card...so where's the issue?
Where does the GPU go in that system?
 
PCIe lane counts on consumer platforms have been increasing, not decreasing, over the years. The issue is that those lanes all get allocated to x4 M.2 slots, and PCIe switches have been jacked up in price to the point that they cost more than most high-end boards. At the same time, TPU has to mention at the end of every SSD review that you don't see much real-world performance benefit from drives supporting the latest interface speeds. So, we really need more M.2 slots sharing their links and bandwidth with each other, and not with our PCIe slots.
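
For reference, here are the commonly cited CPU lane budgets on recent AMD consumer sockets; a sketch with assumed typical figures, since exact splits vary by CPU and board:

```python
# Commonly cited CPU lane budgets for recent AMD consumer sockets
# (assumed typical figures; exact splits vary by CPU and board).
lane_budget = {
    "AM4 (Zen 2/3)": {"gpu": 16, "nvme": 4, "chipset_uplink": 4},  # 24 total
    "AM5 (Zen 4/5)": {"gpu": 16, "nvme": 8, "chipset_uplink": 4},  # 28 total
}
for socket, split in lane_budget.items():
    print(f"{socket}: {sum(split.values())} lanes -> {split}")
```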
 
Nothing new going on here. Consumer product lines from Intel and AMD have been this way for a number of generations now.

Sure there is something new here, and that is storage that eats up 4 lanes for each drive.

NVMe wasn't around when AMD and Intel settled on this low number.

As for the board in the OP, this scheme could be a good thing, since this way you use more CPU-bound lanes for more NVMe drives. The alternative would be to go through the chipset, which gives you the graphics slots back but makes the NVMe drives slower.
 
PCIe lane counts on consumer platforms have been increasing, not decreasing, over the years. The issue is that those lanes all get allocated to x4 M.2 slots, and PCIe switches have been jacked up in price to the point that they cost more than most high-end boards. At the same time, TPU has to mention at the end of every SSD review that you don't see much real-world performance benefit from drives supporting the latest interface speeds. So, we really need more M.2 slots sharing their links and bandwidth with each other, and not with our PCIe slots.
We know the vendors are heavily influenced by reviewers; in a recent live stream, an Intel rep even referred to reviewers as their community.

When the trend of spending most lanes on M.2 started, I don't recall any reviewers criticising it, so the vendors saw it as a free pass to keep going in that direction, and now that it has become the norm, it still doesn't get criticised.

Think of how often things get changed/fixed when a reviewer raises an issue vs. when a typical paying customer raises it; we see where the priorities lie.

Reviewers tend to look at the VRM often as well, which leads to huge amounts of a board's budget going into beefing up VRMs to an excessive degree.

I would absolutely love to be a board reviewer and was so tempted to apply at TPU, but I am not sure if I would keep my job, as I would likely be making a lot of criticisms. My reviews would definitely be different from the norm: I would be looking at overall connectivity, not just M.2, going over the BIOS in detail, looking for bugs, etc.

Sure there is something new here, and that is storage that eats up 4 lanes for each drive.

NVMe wasn't around when AMD and Intel settled on this low number.

As for the board in the OP, this scheme could be a good thing, since this way you use more CPU-bound lanes for more NVMe drives. The alternative would be to go through the chipset, which gives you the graphics slots back but makes the NVMe drives slower.
My SN850X isn't slowed down going via the chipset. There is shared bandwidth on the chipset vs. dedicated CPU lanes, but the shared bandwidth is only going to slow you down if you're actually overloading it, which for consumer use is going to be extremely rare, especially on 8-lane Intel chipset uplinks. On an Intel chipset you would need to be pegging 3 NVMe drives all at once on sequential transfers; I can't think what workload would be doing that.
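
A rough check of that claim, using assumed round numbers (~1.97 GB/s per Gen4 lane for a DMI 4.0 x8 uplink, ~7 GB/s sequential for a fast Gen4 SSD):

```python
# How many Gen4 NVMe drives does it take to saturate an Intel DMI 4.0 x8
# chipset uplink? (Both figures below are approximate assumptions.)
uplink_gbs = 1.969 * 8  # DMI 4.0 x8 ~ 15.8 GB/s
drive_gbs = 7.0         # per drive, flat-out sequential

drives = 1
while drives * drive_gbs <= uplink_gbs:
    drives += 1
print(f"Uplink ~{uplink_gbs:.1f} GB/s; it takes {drives} drives "
      f"at {drive_gbs} GB/s each to exceed it")  # -> 3 drives
```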
 
My SN850X isn't slowed down going via the chipset. There is shared bandwidth on the chipset vs. dedicated CPU lanes, but the shared bandwidth is only going to slow you down if you're actually overloading it, which for consumer use is going to be extremely rare, especially on 8-lane Intel chipset uplinks. On an Intel chipset you would need to be pegging 3 NVMe drives all at once on sequential transfers; I can't think what workload would be doing that.

Writing in RAID1 or RAID5.
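
To put rough numbers on why that pegs several drives at once, a minimal sketch (full-stripe writes assumed; the 5 GB/s incoming stream is a made-up example rate):

```python
# Each logical RAID write fans out to multiple members, multiplying the
# traffic that crosses the chipset uplink to the drives.
def raid_write_traffic(logical_gbs: float, level: str, members: int) -> float:
    """Total drive-side write traffic for a given logical write rate."""
    if level == "RAID1":
        return logical_gbs * members                   # full copy per mirror
    if level == "RAID5":
        return logical_gbs * members / (members - 1)   # data plus parity
    raise ValueError(level)

print(raid_write_traffic(5.0, "RAID1", 2))  # 10.0 GB/s hits the drives
print(raid_write_traffic(5.0, "RAID5", 3))  # 7.5 GB/s hits the drives
```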
 
Writing in RAID1 or RAID5.

I'd say if you really need that kind of bandwidth you're definitely HEDT clientele. And yeah, there should be a HEDT system that's somewhat sensibly priced.
 
I'd say if you really need that kind of bandwidth you're definitely HEDT clientele. And yeah, there should be a HEDT system that's somewhat sensibly priced.

Yeah. I'm just saying, it can make sense to divert more CPU lanes to NVMes like the mainboard in the OP does.

As for more lanes, old Xeons and EPYCs are what we are feeding off. Lots of cores, but per-core speed? Nah.
 
Realistically, they push for higher and higher PCI-e standards, which puts the same bandwidth onto fewer and fewer lanes. Realistically, you can go out and buy a PCI-e card to install multiple M.2 drives into: Amazon product link. You buy that, and suddenly you've got 1 on the board and 4 on a card...so where's the issue?


No

You need a special mainboard with special, extra-cost software.

Quote from your Amazon suggestion:

Quad NVMe PCIe Adapter, RIITOP 4-Port NVMe to PCI-e 4.0/3.0 x16 Expand Controller Card with Heatsink for 2280/2260/2242/2230 M.2 NVMe SSD (PCI-e Bifurcation Required)​


I really dislike those expansion cards which rely on special mainboard features.

Call it whatever you want

Fake RAID / software RAID / PCI-e bifurcation / ...

--

At the end of the day - one slot - one expansion card - no special expansion card.

Special cards are those graphics cards with an add-on M.2 NVMe slot -> they need a UEFI extension.
My Asus mainboard has an Asus USB 4 expansion card -> it needs a UEFI extension -> the same applies to MSI and Gigabyte.

Expansion cards should not rely on special UEFI software for booting, or even just for usage.
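
For what it's worth, whatever a board or adapter claims, you can at least check what each slot actually negotiated, with no vendor software involved; a minimal sketch for Linux, assuming the standard sysfs attributes:

```python
# Print each PCIe device's negotiated link width and speed straight from
# sysfs (standard attributes on modern Linux kernels).
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    width = dev / "current_link_width"
    speed = dev / "current_link_speed"
    if width.exists() and speed.exists():
        print(f"{dev.name}: x{width.read_text().strip()} "
              f"@ {speed.read_text().strip()}")
```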
 
I have run 5x M.2 with my GPU at x8 on X570 with no problem :confused:

Edit:

Whoops, I meant 4... no space for 5.
Who needs M.2 slots for NVMe drives? :laugh:
There are *pauses and counts* 6 NVMe drives in my X570 build.
CPU-M.2: Optane P1600X 118GB
X570-M.2: Rocket 4.0 1TB
X570-4.0x4: QM2-4P-384 w/ 4x P41+ 2TB (PCIe switch-equipped)

Consumer AMD platform expansion is 'tight' at the moment. It doesn't help that motherboard manufacturers are eschewing AIC expansion slots for NVMe slots.
That said, those that have 'extra-ordinary' needs can still find a config that suits them.
 
I might have misunderstood you; didn't you want boards that disable the X1 slot if you use the second M.2 slot? Plus I think literally any X670E motherboard can do what you list without dropping anything. ASUS has 6 SATA ports instead of 4, so if you fully populate the M.2s you'll lose two SATA ports. MSI (who makes the cheapest of those boards) drops the PCIe X4 slot to X2 if you populate the M2_4 slot, but your example only had two M.2 drives, so you're fine.

But yeah, having to read the specification sheets instead of just looking at the board and knowing what you can use is shit, but at least with higher-end boards (the ones with dual chipsets) you can usually use everything you see at the same time.


Plug and play is nonsense? Do you seriously long for the days when you had to assign IRQs? Do you want to spend hours instead of minutes installing hardware? Am I lazy if I want a USB drive to just work? Am I lazy if I think that if I buy a motherboard with six SATA ports and two M.2 NVMe slots, I should be able to populate all of them at the same time?


Oh you do? I've encountered some weirdos online but that's just straight up perverted.

Where does the GPU go in that system?

I really enjoy that you guys are wincing and whinging about someone not happy that their x16 slot goes down to x8 when they install a fourth M.2 drive...and they are acting like this is a crime because the information was easily available but they didn't get slapped in the face by it. That's just really stupid and lazy.

Regarding remembering provisioning, you'll note I didn't say I wanted or liked it. I said that I remember when you didn't simply stick a card in the slot and have 99% of the work done for you. It's amazing that today people bemoan that 1%, when setting up a PC used to require you to do the work for the other 99%. Wah...wah...I ONLY have 3 M.2 ports, which would require nearly a thousand dollars in hardware to saturate the interface, and I don't want to buy an M.2 to PCI-e adapter that would let me mount 4 more drives without ever having the mobo (silently) configure itself to use the lanes it has to deliver the best performance I purchased. Wah...wah...I want a premium experience. Do you really want to be that level of annoyingly whiny?


Regarding all of this, the OP has opened like the fourth b**** thread in as many weeks. MS should be legally obligated to support their OS; it should be criminal to do this thing that causes me a minor annoyance; I hate motherboards autonegotiating connectivity for me, so my consumer-level hardware having a niche 4-M.2-drive configuration should be accounted for with magic lanes, but at no additional cost to me. Get a helmet, and stop acting like a victim.



If you need more lanes, then you have an option. For AMD it's called Threadripper, or Threadripper Pro, or a permutation thereof. Up to 148 lanes of PCI-e 5.0. Choke on all of that bandwidth, and stop moaning when the (much cheaper, but less feature-rich) consumer model isn't magic. If not, what I hear is somebody buying a Honda Civic and spending the effort to go online to a Honda forum and complain about why it doesn't have a big-block V8 option... Utter entitled madness.
 
I really enjoy that you guys are wincing and whinging about someone not happy that their x16 slot goes down to x8 when they install a fourth M.2 drive...and they are acting like this is a crime because the information was easily available but they didn't get slapped in the face by it. That's just really stupid and lazy.

Regarding remembering provisioning, you'll note I didn't say I wanted or liked it. I said that I remember when you didn't simply stick a card in the slot and have 99% of the work done for you. It's amazing that today people bemoan that 1%, when setting up a PC used to require you to do the work for the other 99%. Wah...wah...I ONLY have 3 M.2 ports, which would require nearly a thousand dollars in hardware to saturate the interface, and I don't want to buy an M.2 to PCI-e adapter that would let me mount 4 more drives without ever having the mobo (silently) configure itself to use the lanes it has to deliver the best performance I purchased. Wah...wah...I want a premium experience. Do you really want to be that level of annoyingly whiny?


Regarding all of this, the OP has opened like the fourth b**** thread in as many weeks. MS should be legally obligated to support their OS; it should be criminal to do this thing that causes me a minor annoyance; I hate motherboards autonegotiating connectivity for me, so my consumer-level hardware having a niche 4-M.2-drive configuration should be accounted for with magic lanes, but at no additional cost to me. Get a helmet, and stop acting like a victim.



If you need more lanes, then you have an option. For AMD it's called Threadripper, or Threadripper Pro, or a permutation thereof. Up to 148 lanes of PCI-e 5.0. Choke on all of that bandwidth, and stop moaning when the (much cheaper, but less feature-rich) consumer model isn't magic. If not, what I hear is somebody buying a Honda Civic and spending the effort to go online to a Honda forum and complain about why it doesn't have a big-block V8 option... Utter entitled madness.
Indeed. I love posts that do not apply cost to the equation. Is there a TRX50 CPU that you can buy for $200 Canadian? Do you think, after adopting X399, I would not have stayed there if prices were not obscene? That is the issue that some of the community have. You cannot give the MB vendors a pass when there is so much variation that you need to do real research to get what you want. So on AM4 it was the X370 Crosshair. Most X470 boards were great for PCIe flexibility, and some X570S boards like the Ace Max are great. There was a point about 1.5 years ago where NVMe storage was cheap, and filling out your board felt like 2012 with inexpensive drives.
 
No

You need a special mainboard with special, extra-cost software.

Quote from your Amazon suggestion:


I really dislike those expansion cards which rely on special mainboard features.

Call it whatever you want

Fake RAID / software RAID / PCI-e bifurcation / ...

--

At the end of the day - one slot - one expansion card - no special expansion card.

Special cards are those graphics cards with an add-on M.2 NVMe slot -> they need a UEFI extension.
My Asus mainboard has an Asus USB 4 expansion card -> it needs a UEFI extension -> the same applies to MSI and Gigabyte.

Expansion cards should not rely on special UEFI software for booting, or even just for usage.

You...went to the trouble to view the page and find that a card which took me less than 10 seconds to get to on Amazon might not work for you. You then ignored the huge selection of ones that don't require it in the similar-products section. The ones that turn an M.2 slot into multiple SATA ports. You also completely ignored that complaining about an extra sub-$50 purchase while adding literally 10x that much in drive hardware makes zero sense...or you could just buy Threadripper and literally drown in PCI-e...


What entitles you people? If you genuinely need all of that interconnect then it's a business or professional thing...so BUY THE PROFESSIONAL HARDWARE. I like my consumer-grade stuff affordable, which means fewer lanes, or I might just have to buy another special thing to go along with my special hardware.

M.2 to SATA
x8 PCI-e to M.2 card...no bifurcation needed.



Maybe now you guys can stop being completely insufferable about things? I mean, can't it be enough to just power down your PC, slot in a card, and have Windows auto-install drivers that just work? It's not like reading a two-line note in your motherboard's documentation before buying is asking for much here.
 
You...went to the trouble to view the page and find that a card which took me less than 10 seconds to get to on Amazon might not work for you. You then ignored the huge selection of ones that don't require it in the similar-products section. The ones that turn an M.2 slot into multiple SATA ports. You also completely ignored that complaining about an extra sub-$50 purchase while adding literally 10x that much in drive hardware makes zero sense...or you could just buy Threadripper and literally drown in PCI-e...


What entitles you people? If you genuinely need all of that interconnect then it's a business or professional thing...so BUY THE PROFESSIONAL HARDWARE. I like my consumer-grade stuff affordable, which means fewer lanes, or I might just have to buy another special thing to go along with my special hardware.

M.2 to SATA
x8 PCI-e to M.2 card...no bifurcation needed.



Maybe now you guys can stop being completely insufferable about things? I mean, can't it be enough to just power down your PC, slot in a card, and have Windows auto-install drivers that just work? It's not like reading a two-line note in your motherboard's documentation before buying is asking for much here.

Wow, you make it seem like if PC building is your hobby, you must be an idiot for demanding more for your money.
 
Regarding remembering provisioning, you'll note I didn't say I wanted or liked it. I said that I remember when you didn't simply stick a card in the slot and have 99% of the work done for you. It's amazing that today people bemoan that 1%, when setting up a PC used to require you to do the work for the other 99%. Wah...wah...I ONLY have 3 M.2 ports, which would require nearly a thousand dollars in hardware to saturate the interface, and I don't want to buy an M.2 to PCI-e adapter that would let me mount 4 more drives without ever having the mobo (silently) configure itself to use the lanes it has to deliver the best performance I purchased. Wah...wah...I want a premium experience. Do you really want to be that level of annoyingly whiny?

OK, so what about the boards with six SATA ports and two M.2 slots, where if you have two M.2 drives, two SATA ports will drop? Do keep in mind you can get M.2 drives very cheap these days, or you may just have them lying around. I just learned this was a thing a few months ago, and I too am of the age where I have dealt with IRQ settings. "Oh cool, I can get this cheapo board, just plop in this M.2 drive I scavenged from that dead laptop, and run both optical drives as well as my hard drives!" is literally what I thought when I went looking for a new AM4 motherboard, just a few months ago. I don't even remember how I learned I couldn't do that, but I was mightily pissed off when it turned out that I couldn't, and it was just luck that I found out before I bought the motherboard. Unless you think "lazy" means "slavishly following PC component development", I can absolutely see how people learn the hard way that the modern day sucks.

And I can't get over how you called plug and play nonsense.
What entitles you people? If you genuinely need all of that interconnect then it's a business or professional thing...so BUY THE PROFESSIONAL HARDWARE. I like my consumer-grade stuff affordable, which means fewer lanes, or I might just have to buy another special thing to go along with my special hardware.

The thing is, not too long ago I could use all of the slots on my motherboard at the same time. If I see that I have six SATA ports, two M.2 slots and three PCIe slots, I'm going to assume that is what I can use, because until just a few years ago that was the case. Sure, I get why I can't do that anymore (as you said, cost), but it's still annoying. And this is where we lament the lack of decently priced "professional hardware" (what we used to call HEDT). You only get Threadripper, and a modern Threadripper is what, €1.5k for just the CPU? I understand all the business side of things, but see this as a space for us to vent. You can go do whatever you do (playing with old hardware, which... can be fun, honestly) and just let us long in peace for a time when I could look at a motherboard and, based on that alone, tell how many hard drives I could connect to it.

And again ... you called plug and play nonsense. I can't get over it.
 