
ASUS ProArt X870E-Creator Wi-Fi

Top M.2 slot lacks full-length thermal pads
Say it with me now: "NAND does not require cooling, only the controller does."
 
Why is the MSI X870E Carbon motherboard so bad at gaming?

According to TechPowerUp's tests, this motherboard shows a drop in performance compared to other motherboards, especially on Cyberpunk. How can a motherboard affect gaming performance?
 
Short answer: no.
Long answer: not on any AM5 motherboard, no matter how expensive it is.
The only way to get enough PCIe lanes to do it is to buy a Threadripper mobo.
Thank you,

my goal is to have a VGA that runs at PCIe 5.0 x16
1 NVMe that runs at PCIe 5.0 x4
1 NVMe that runs at PCIe 4.0 x16

So all those devices run at their best/max performance.
 
my goal is to have a VGA that runs at PCIe 5.0 x16
Not needed. Gen5 x8 is enough for a few generations. This website measured that the 5090 "loses" only 1% of performance in a Gen5 x8 slot.
[Attachment: NVIDIA GeForce RTX 5090 PCI-Express Scaling - Relative Performance chart]

1 NVMe that runs at PCIe 5.0 x4
OK, we have this.
1 NVMe that runs at PCIe 4.0 x16
???
So all those devices run at their best/max performance.
Look at the measurements I gave you. All that glitters is not gold...
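For context on why a Gen5 x8 slot holds up so well: its raw bandwidth matches a Gen4 x16 slot, which is easy to check with back-of-the-envelope math. A minimal sketch (the helper name is made up, not a real API):

```python
# Back-of-the-envelope PCIe bandwidth check (my own sketch, not a real API).
# Per-lane raw rate doubles each generation; Gen3+ use 128b/130b encoding.
GT_PER_S = {3: 8, 4: 16, 5: 32}  # raw transfer rate per lane, GT/s

def slot_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Approximate usable slot bandwidth in GB/s after encoding overhead."""
    return GT_PER_S[gen] * lanes * (128 / 130) / 8  # bits -> bytes

print(round(slot_bandwidth_gbs(5, 8), 1))   # Gen5 x8  -> ~31.5 GB/s
print(round(slot_bandwidth_gbs(4, 16), 1))  # Gen4 x16 -> same ~31.5 GB/s
```

That equal bandwidth is why a GPU dropping from Gen5 x16 to Gen5 x8 still has as much link bandwidth as a full Gen4 x16 card ever had.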
 
Unless you must have USB4 on a desktop, this board is inferior to the Gigabyte B850 AI TOP, which is $130 less expensive and comes with dual AQC113C 10GbE.

On the B850 AI TOP you also get two CPU-connected M.2 slots because the USB4 controller isn't consuming one of them. This makes your second M.2 drive much faster than if it were connected to the secondary chipset.

IMHO there is very little use for USB4 on a desktop; it's much more relevant to laptops or Mac desktops, which have no internally expandable storage.
 
On the B850 AI TOP you also get two CPU-connected M.2 slots
By loading both of those slots you'll lose x8 PCIe lanes on the GPU.
He's better off doing what he said: PCIe 5.0 x16 GPU, PCIe 5.0 x4 NVMe, and use the chipset for the other NVMe at Gen4 x4.
That way he keeps the x16 PCIe 5.0 for his GPU and doesn't drop to x8.
 
Why is the MSI X870E Carbon motherboard so bad at gaming?

According to TechPowerUp's tests, this motherboard shows a drop in performance compared to other motherboards, especially on Cyberpunk. How can a motherboard affect gaming performance?

Usually the BCLK is off compared to the rest. Some boards drive it at 100.8-something MHz, which is usually the reason why some boards do better than others.
 
Intel demonstrated with the i225 and i226 that they have absolutely no idea how to make a reliable Ethernet controller.
I mean, their enterprise parts for 10 Gbps are some of the best, actually. They seem to have just completely failed at translating that to a consumer part.
 
By loading both of those slots you'll lose x8 PCIe lanes on the GPU.
He's better off doing what he said: PCIe 5.0 x16 GPU, PCIe 5.0 x4 NVMe, and use the chipset for the other NVMe at Gen4 x4.
That way he keeps the x16 PCIe 5.0 for his GPU and doesn't drop to x8.
No, Zen 5 has 16+4+4 lanes from the CPU, so on the B850 AI TOP you get both M.2 slots CPU-connected with zero downsides (except not having USB4).

On the X870/X870E boards, they're required to have the ASMedia ASM4242 USB4 controller onboard, which consumes 4 CPU lanes, leaving only one Gen5 M.2 slot available without cutting the GPU down to x8. However, on a few X870 boards, notably the Tomahawk, you have the option to disable the ASM4242 USB4 in BIOS, which then gives you two CPU-connected M.2 slots. If you're going to disable the ASM4242, that raises the question: why buy an X870 instead of a B850?
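The lane bookkeeping above can be sketched as a toy model (my own simplification; the function and its logic are made up for illustration, not how any board firmware actually works):

```python
# Toy model of the AM5 CPU lane budget described above: one x16 group
# (bifurcatable to x8/x8) plus two general-purpose x4 groups.
# Hypothetical bookkeeping, not a real tool.
def lane_plan(usb4_onboard: bool, cpu_m2_drives: int) -> int:
    """Return the GPU's remaining lane count for a given config."""
    groups = {"x16": 16, "x4_a": 4, "x4_b": 4}
    if usb4_onboard:            # ASM4242 USB4 controller eats one x4 group
        groups.pop("x4_b")
    free_x4 = [g for g in groups if g != "x16"]
    gpu_lanes = 16
    for _ in range(cpu_m2_drives):
        if free_x4:
            free_x4.pop()       # CPU-connected M.2 takes a free x4 group
        else:
            gpu_lanes = 8       # otherwise the x16 slot bifurcates to x8/x8
    return gpu_lanes

print(lane_plan(usb4_onboard=False, cpu_m2_drives=2))  # B850 AI TOP: 16
print(lane_plan(usb4_onboard=True,  cpu_m2_drives=2))  # X870E: 8
```

In practice a second drive on an X870E board usually goes to the chipset at Gen4 instead of bifurcating the GPU slot, but the sketch shows why the USB4 mandate costs you one CPU-connected M.2 slot.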
 
I mean their enterprise parts for 10gbps are some of the best, actually. They seem to have just completely failed at translating that to a consumer part.
Yep, I'm quite happy with the X520 and X540 adapters. No issues running dozens of those in multiple datacenters for years now.

Intel's consumer division is the one that's completely losing the plot these days.
 
Not needed. Gen5 x8 is enough for a few generations. This website measured that the 5090 "loses" only 1% of performance in a Gen5 x8 slot.

ok, we have this.

???

Look at the measurements I gave you. All that glitters is not gold...
Thank you for the reply. I know it won't change much/anything, but I like the idea of having a system that makes all the components work at their top/best specification.
It's not mandatory of course, I'm just looking for a board that would allow me to make it happen ^^

About your "???": the system in my mind should have these 3 components:

1 VGA (a 5080 maybe) that supports and runs at PCIe 5.0 x16
1 (main) NVMe drive that supports and runs at PCIe 5.0 x4
1 (secondary) NVMe drive that supports and runs at PCIe 4.0 x16

I don't know if this motherboard (ProArt X870E) is able to do it or not.

I hope it's more clear this time.
 
1 VGA (a 5080 maybe) that supports and runs at PCIe 5.0 x16
1 (main) NVMe drive that supports and runs at PCIe 5.0 x4
1 (secondary) NVMe drive that supports and runs at PCIe 4.0 x16

I don't know if this motherboard (ProArt X870E) is able to do it or not.
For this configuration you need a board with two Gen5 x16 slots capable of running in x8/x8 mode, which this board does.
You can run a 5080 or other modern card in the first slot in Gen5 x8 mode without losing any significant performance.
In the second PCIe slot (Gen5 x8), you can run an AIC with four NVMe Gen4 drives at their full speed.
Primary NVMe runs in Gen5 x4 M.2 slot.
 
For this configuration you need a board with two Gen5 x16 slots capable of running in x8/x8 mode, which this board does.
You can run a 5080 or other modern card in the first slot in Gen5 x8 mode without losing any significant performance.
In the second PCIe slot (Gen5 x8), you can run an AIC with four NVMe Gen4 drives at their full speed.
Primary NVMe runs in Gen5 x4 M.2 slot.
Are you aware of a card that is PCIe Gen5 x8 and supports four PCIe Gen4 x4 NVMe drives? I've never heard of such a card. I've seen quad M.2 to x8 cards but the last ones I'm aware of were Gen3.
 
Like, what else do you need from a board?
I personally want the possibility of x8/x8 slots for mGPU, and a display-in for video passthrough on USB-C for my KVM switch, so that only leaves the ProArt models as viable options, be it x670e, x870e, b850 or even b650.
With that board you can populate it with up to 4 5.0 drives, but as I said before, X670E does not have USB 4.0 to rob lanes
The x670e model also has USB4, using the JHL8540 controller, which is PCIe 3.0 x4 (limited to 32Gbps for both ports). Fun fact: it shares lanes with the first M.2 5.0 slot, so you can run into issues while using both, and it's not noted in the manual. Reference:

The x870e uses an ASM4242 instead, which is 4.0 x4, so it should support the full 40Gbps bandwidth, but it takes x4 out of the x16 lanes meant for the main PCIe slots.

Given the needs I stated above, both the x670e and x870e would've worked fine for me; I'd just need to sacrifice one NVMe slot with either option (since I do need x8/x8 AND USB4). I went with the x670e since it was cheaper (paid ~$700, while the x870e version was close to $800) and had better availability in my region.
 
Are you aware of a card that is PCIe Gen5 x8 and supports four PCIe Gen4 x4 NVMe drives? I've never heard of such a card. I've seen quad M.2 to x8 cards but the last ones I'm aware of were Gen3.
 
A slightly different question, but is that still uncontested lanes?

My understanding of PCIe generations and lanes is that a 5.0 x8 slot has the same bandwidth as a 4.0 x16 slot, but it can't change generation; four drives sharing eight lanes is still only two lanes per drive, and at Gen 4.0 that's only half the lanes each drive needs, because a Gen 4.0 drive cannot magically upgrade itself to a Gen 5 drive.

Unless that ASUS Hyper M.2 has some very fancy PCIe switch that can translate 8 gen 5.0 lanes into 16 gen 4.0 lanes (I've never heard of such a thing) then those four M.2 drives are fighting each other for lanes and will only have half the lanes they want when striped or RAIDed in such a way that all drives are active simultaneously.
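The interesting part of that argument is that the raw bandwidth actually does match; what's missing without a switch is lane translation. My own demand-vs-supply arithmetic (assuming 128b/130b encoding on both generations):

```python
# Demand vs. supply arithmetic for the scenario above (my own numbers,
# assuming 128b/130b encoding on Gen4 and Gen5 links).
GEN4_LANE_GBS = 16 * (128 / 130) / 8  # ~1.97 GB/s per Gen4 lane
GEN5_LANE_GBS = 32 * (128 / 130) / 8  # ~3.94 GB/s per Gen5 lane

demand = 4 * 4 * GEN4_LANE_GBS  # four Gen4 x4 drives going flat out
supply = 8 * GEN5_LANE_GBS      # the Gen5 x8 upstream link

print(demand, supply)  # both ~31.5 GB/s: a switch with enough downstream
                       # lanes could keep up, but a passive bifurcation
                       # card cannot retime Gen5 lanes into more Gen4 lanes
```

So a proper Gen5 switch could in principle feed all four drives at full speed through x8 upstream; a dumb bifurcation card can only hand out the physical lanes it has.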
 
AFAIK that's just a dummy board without any kind of proper PCIe switch on it, so all it does is make use of the bifurcation on the motherboard to split a x16 slot into x4/x4/x4/x4 and connect each NVMe into that.

If you only have x8 lanes available, you'll only be able to connect 2 NVMes. Only x4 lanes? Then it's just a single NVMe.
(I've never heard of such a thing)
It does exist, but it's hella expensive:

PCIe 5.0 x16 upstream, 2x x16 + 1x x4 PCIe 4.0 downstream.
 
My understanding of PCIe generations and lanes is that a 5.0 x8 slot has the same bandwidth as a 4.0 x16 slot, but it can't change generation; four drives sharing eight lanes is still only two lanes per drive, and at Gen 4.0 that's only half the lanes each drive needs, because a Gen 4.0 drive cannot magically upgrade itself to a Gen 5 drive.
Unless that ASUS Hyper M.2 has some very fancy PCIe switch that can translate 8 gen 5.0 lanes into 16 gen 4.0 lanes (I've never heard of such a thing) then those four M.2 drives are fighting each other for lanes and will only have half the lanes they want when striped or RAIDed in such a way that all drives are active simultaneously.
On AM5 boards that support x8/x8 bifurcation, this AIC can run two NVMe Gen5 drives, as there are no redrivers or a separate switch chip; the M.2 slots connect directly to four groups of four PCIe lanes. This device is more for Threadripper systems that have more lanes, but it still allows two extra NVMe drives on AM5 boards.

If you only have x8 lanes available, you'll only be able to connect 2 NVMes. Only x4 lanes? Then it's just a single NVMe.
True. Still, an affordable solution for NVMe expansion without fancy and super expensive AICs with switches.
 
Why is the MSI X870E Carbon motherboard so bad at gaming?

According to TechPowerUp's tests, this motherboard shows a drop in performance compared to other motherboards, especially on Cyberpunk. How can a motherboard affect gaming performance?
It could be MSI doesn't enable all those latency "killer" features by default.
 
$480 for limited I/O and limited PCIe, neeeext.
 
A slightly different question, but is that still uncontested lanes?

My understanding of PCIe generations and lanes is that a 5.0 x8 slot has the same bandwidth as a 4.0 x16 slot, but it can't change generation; Four drives sharing eight lanes is still only two lanes per drive, and at gen 4.0, that's only half the lanes each drive needs because a gen 4.0 drive cannot magically upgrade itself to a gen 5 drive.

Unless that ASUS Hyper M.2 has some very fancy PCIe switch that can translate 8 gen 5.0 lanes into 16 gen 4.0 lanes (I've never heard of such a thing) then those four M.2 drives are fighting each other for lanes and will only have half the lanes they want when striped or RAIDed in such a way that all drives are active simultaneously.
On AM5 this board supports a maximum of 2 drives. That is not the end though: where it shines is in keeping NVMe drives cool, and it even has a 4-pin PWM header. It is also slim. I use mine with 2 Crucial T600s in RAID 0. It does fully support 4 M.2 drives, but you need TR for that kind of support. You used to get these adapters with some boards; the ASRock X870E Taichi supports lane splitting but does not give you an adapter.

 
On AM5 this board supports a maximum of 2 drives. That is not the end though: where it shines is in keeping NVMe drives cool, and it even has a 4-pin PWM header. It is also slim. I use mine with 2 Crucial T600s in RAID 0. It does fully support 4 M.2 drives, but you need TR for that kind of support. You used to get these adapters with some boards; the ASRock X870E Taichi supports lane splitting but does not give you an adapter.

Have you benchmarked the RAID0 against a single drive?
What you gain in sequential performance, you lose in IOPS and access latency.

I'd very much expect your RAID0 setup to be slower in almost all use-cases than a single drive, with the sole exception of sequential copying of large datasets into and out of RAM. I'm saying that as an enterprise storage architect who's dailying two dozen SSD arrays in servers, enterprise storage appliances from HPE/Pure/EMC, as well as more consumer stuff like homegrown renderfarms, TrueNAS servers, and off-the-peg rackmount solutions from Synology/QNAP.

Like, no matter how fast your CPU is, RAID adds overhead that increases access latency. SSD's biggest strength is the low access latency and high IOPS. I can get tens of GB/s out of spinning rust appliances with enough striping but I'll never get nanosecond access latencies.
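The tradeoff can be put in a toy model. All numbers here (60 µs access latency, 12 GB/s per drive, 10 µs striping overhead) are illustrative assumptions of mine, not measurements of any real drive or RAID stack:

```python
# Toy latency/throughput model for the RAID0 argument above. Numbers are
# illustrative assumptions, not benchmarks of any specific hardware.
def io_time_us(size_mb: float, drives: int = 1,
               bw_gbs: float = 12.0, latency_us: float = 60.0,
               stripe_overhead_us: float = 10.0) -> float:
    """Time for one IO in microseconds: latency + striping cost + transfer."""
    overhead = stripe_overhead_us if drives > 1 else 0.0
    transfer_us = size_mb / (bw_gbs * 1000 * drives) * 1e6
    return latency_us + overhead + transfer_us

# 4 KB random read: RAID0 is slower, the fixed costs dominate
print(io_time_us(0.004, drives=1), io_time_us(0.004, drives=2))
# 1 GB sequential read: RAID0 is nearly 2x faster, bandwidth dominates
print(io_time_us(1000, drives=1), io_time_us(1000, drives=2))
```

Small random IO is all latency, so striping only adds cost there; large sequential transfers are all bandwidth, which is the one place striping pays off.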
 
Have you benchmarked the RAID0 against a single drive?
What you gain in sequential performance, you lose in IOPS and access latency.

I'd very much expect your RAID0 setup to be slower in almost all use-cases than a single drive, with the sole exception of sequential copying of large datasets into and out of RAM. I'm saying that as an enterprise storage architect who's dailying two dozen SSD arrays in servers, enterprise storage appliances from HPE/Pure/EMC, as well as more consumer stuff like homegrown renderfarms, TrueNAS servers, and off-the-peg rackmount solutions from Synology/QNAP.

Like, no matter how fast your CPU is, RAID adds overhead that increases access latency. SSD's biggest strength is the low access latency and high IOPS. I can get tens of GB/s out of spinning rust appliances with enough striping but I'll never get nanosecond access latencies.
Latency? How fast is 1 nanosecond, maybe 50 nanoseconds? How fast is the average eye blink? This is not an HDD. RAID 0 over M.2 does not add any perceived latency in my experience. My C: drive is no faster in loading things than my RAID 0. I will tell you that Cities: Skylines 2 also loves that drive; I get about 10 more FPS. Yes, that game is the true measure of PC performance: it will even scale with the number of cores, VRAM allocation and, yes, storage speed. The control in my testing is the Corsair MP700 vs 2 Crucial T600s in RAID 0. Vs that drive you get a theoretical 8 GB/s more sequential. Best of all, my RAID 0 card keeps those babies nice and cool.

As much as I have said all that, I do agree that 2 drives is the sweet spot, as adding more gives quickly diminishing returns. I would not use RAID 0 for speed beyond 3 drives at most in one array.
 