| Component | Spec |
|---|---|
| Processor | 265K (running stock until more Intel updates land) |
| Motherboard | MPG Z890 Carbon WIFI |
| Cooling | Peerless Assassin 140 |
| Memory | 48GB DDR5-7200 CL34 |
| Video Card(s) | RTX 3080 12GB FTW3 Ultra Hybrid |
| Storage | 1.5TB 905P and 2x 2TB P44 Pro |
| Display(s) | AW3425DW and U2724D |
| Case | Dark Base Pro 901 |
| Audio Device(s) | Sound Blaster X4 |
| Power Supply | Toughpower PF3 850 |
| Mouse | G502 HERO/G700s |
| Keyboard | Ducky One 3 Pro Nazca |
Nice idea, but this isn't actually viable without more work than one would think. ATX boards theoretically have seven "slots" for cards. The first is usually a PCIe x1 or not used at all. You can easily do e.g. x4 + x16 + blank + blank + x4, which leaves the two bottom slots for all the chipset lanes, e.g. x8 + x8.
The first thing lost is the ability to split off (bifurcate) the CPU's PCIe lanes. That may not matter to a majority of users, but it's still capability lost.
The biggest problem, though, comes from the chipset lanes. No current chipset supports more than four lanes in a single downstream connection, and in AMD's case the chipset's uplink doesn't even have enough bandwidth to feed more than four lanes at full speed.
To do what you're suggesting, AMD would have to increase the bandwidth to their chipset as well as increase the chipset's complexity. Intel wouldn't have to add bandwidth, but there would still be added complexity. This is of course assuming we're talking about PCIe 4.0 connectivity, not 5.0.
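To put rough numbers on the bandwidth point, here is a back-of-the-envelope sketch. It only uses the raw per-lane transfer rate and 128b/130b line encoding for PCIe 3.0 and later, ignoring packet/protocol overhead, so the figures are theoretical ceilings rather than real-world throughput:

```python
def pcie_bandwidth_gbs(transfer_rate_gt: float, lanes: int) -> float:
    """Theoretical one-direction bandwidth in GB/s for a PCIe link.

    transfer_rate_gt: per-lane rate in GT/s (16.0 for Gen 4, 32.0 for Gen 5).
    Uses 128b/130b encoding efficiency; 8 bits per byte.
    """
    return transfer_rate_gt * (128 / 130) * lanes / 8

# A Gen 4 x4 uplink (the width used between CPU and chipset on
# current AMD platforms) tops out just under 8 GB/s:
gen4_x4 = pcie_bandwidth_gbs(16.0, 4)    # ~7.9 GB/s

# A single Gen 4 x8 slot hung off the chipset would want about
# double what that uplink can deliver, before any other chipset
# devices (USB, SATA, NICs) take their share:
gen4_x8 = pcie_bandwidth_gbs(16.0, 8)    # ~15.8 GB/s
```

So an x8 + x8 pair of chipset slots would be oversubscribed roughly 4:1 against the uplink, which is the "not enough bandwidth" problem in concrete terms.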
Then of course there's the added cost to buyers, who would now need adapter cards for storage. In the end this comes out as just as awful an idea as the use of M.2 on desktop already is.