
NVIDIA GeForce RTX 4090 PCI-Express Scaling with Core i9-13900K

Hah, no.
Those cost savings for the GPU manufacturer typically don't get passed on to us, but the price hikes of Gen5 and Gen6 motherboards absolutely do.
Check this out :) PCIe Gen4 x8 on the upcoming NVIDIA GPUs:)
 
Check this out :) PCIe Gen4 x8 on the upcoming NVIDIA GPUs:)
Yeah, the x8 isn't great for older Intel platforms; IIRC, mainstream Intel was still Gen3 up until 12th gen. Technically, select motherboards had PCIe 4.0 for 11th gen, but those were mostly Z-series flagships with price tags that disqualify them from most scenarios where I'd use the word "mainstream".

PCIe x8 doesn't really matter to me at this point, because all new 8GB cards are stillborn IMO; having a shitty PCIe interface width is just making a dead card even deader. I wouldn't be surprised to see the 12GB 3060 outperforming the RTX 4060 in a few VRAM-heavy games at launch. We're already witnessing the 4060 laptop GPU fail to pull very far ahead of the 3060 6GB laptop GPU, and even though the desktop chips may not have the same core configurations or bus widths, the fact that the 3060 has more cores, more memory bandwidth, more VRAM, and now more PCIe bandwidth than the rumoured 4060 means that the 3060 12GB is looking like the better buy already. You can pick them up brand new for under $300, whereas I suspect the 4060 will be a $399 card at the absolute minimum, with the base model hard to actually find on shelves.

I'd like to be wrong, but I think the mid-range xx60-class buyers are getting screwed by Nvidia yet again. I guess that's why they're all still hanging onto their (excellent) GTX 1060 cards!
 
Obviously this means the cards will be cheaper to manufacture, which means lower prices for consumers. ;)
 
If the rumor is true, then I predicted correctly when I said that the cards might go x8 in the future :) Let's see if the price will drop or if it's just more profit for vendors.
I think the bandwidth will be enough for lower-end cards, but it's bad for older motherboards with only PCIe 3.0. I think they will have some performance hit; basically, the card will run at PCIe 3.0 x8, right?
 
I still don't get it, and I also read the comments:
In the article it's said "16 5.0 lanes of which are 4 4.0 for M.2". Isn't this worded wrong or something? Maybe my English isn't that good.
Because Intel says in their Z790 block diagram "16 5.0 AND additionally 4 4.0".
(attached: Intel's Z790 block diagram)

Intel forgot to label the blue lines going to the boxes; didn't they do that in the past?

And what about, for example, the Z790 Tomahawk? It has only 4.0 M.2 slots: three on the chipset, one attached to the CPU.
And maybe a test like letting Shadowplay write its videos to the CPU-attached M.2 vs. an M.2 attached to the chipset.
 
In the article it's said "16 5.0 lanes of which are 4 4.0 for M.2".

Where do you see this quote?

I see this:
"The processor puts out a total of 28 PCI-Express lanes. 16 of these are PCI-Express Gen 5 lanes, meant for the main x16 PCI-Express Graphics (PEG) slot. 4 of these are PCI-Express Gen 4 lanes, meant for the motherboard's sole CPU-attached M.2 NVMe slot."

And here are the facts:
16 Gen5 lanes for the GPU (or they can be split into 8+8 on some motherboards, so that a Gen5 SSD can be used)
4 Gen4 lanes for the M.2 slot
8 Gen4 lanes for the chipset
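Since the phrasing in the article tripped a few of us up, here's a tiny sketch of that lane budget (the labels are mine, purely illustrative):

```python
# CPU lane budget as listed above (illustrative labels, not official names)
cpu_lanes = {
    "PEG slot, Gen5": 16,          # can split into x8+x8 on some boards
    "CPU-attached M.2, Gen4": 4,
    "Chipset link (DMI), Gen4": 8,
}

print(sum(cpu_lanes.values()))  # 28 -> the "4 of these" counts against 28, not 16
```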
 
16 of these are PCI-Express Gen 5 lanes, meant for the main x16 PCI-Express Graphics (PEG) slot. 4 of these are PCI-Express Gen 4 lanes…

This was the misleading part for me, assuming the 4 was referring to the 16 instead of the 28.

So the CPU has 20 lanes for directly attached devices and 8 go to the chipset. Yo, OK :)
 
This was the misleading part for me, assuming the 4 was referring to the 16 instead of the 28.

So the CPU has 20 lanes for directly attached devices and 8 go to the chipset. Yo, OK :)
Using the PCIe 5.0 NVMe slot takes lanes away from the x16 GPU slot.
The first 4.0 slot has dedicated bandwidth, not taken from the GPU slot.

It's changed from previous generations in that you cannot use the 5.0 NVMe slot at all if you want the full x16 to the GPU.
 
I think the bandwidth will be enough for lower-end cards, but it's bad for older motherboards with only PCIe 3.0. I think they will have some performance hit; basically, the card will run at PCIe 3.0 x8, right?
It still does not matter for 60-class cards and lower. PCIe 3.0 x8 provides ~8 GB/s of transfer, which is enough for those cards.
Even the 4090 loses only 2% in a Gen4 x8 slot, as tested by TPU. This means that PCIe 3.0 x16 is only just about saturated by the 4090.
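If anyone wants to sanity-check that ~8 GB/s figure, here's a rough sketch using the usual approximate per-lane throughput numbers (post-encoding, one direction):

```python
# Approximate usable throughput per lane in GB/s, after encoding overhead
GBPS_PER_LANE = {2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

def pcie_bandwidth(gen: int, lanes: int) -> float:
    """Rough one-direction bandwidth of a PCIe link in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

print(f"Gen3 x8 : {pcie_bandwidth(3, 8):.1f} GB/s")   # ~7.9 -> the "~8 GB/s" above
print(f"Gen3 x16: {pcie_bandwidth(3, 16):.1f} GB/s")  # ~15.8
print(f"Gen4 x8 : {pcie_bandwidth(4, 8):.1f} GB/s")   # ~15.8, same ballpark as Gen3 x16
```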

Intel forgot to label the blue lines going to the boxes; didn't they do that in the past?
They didn't forget. You cut out the rest of the image to the right, which does have the PCIe information. Do you see those four blue lines on the right?
 
What does PCIe 4.0 x4 mean for power consumption? As it acts as a kind of FPS limiter, does it limit the power consumption to a specific wattage?
 
It still does not matter for 60-class cards and lower. PCIe 3.0 x8 provides ~8 GB/s of transfer, which is enough for those cards.
Even the 4090 loses only 2% in a Gen4 x8 slot, as tested by TPU. This means that PCIe 3.0 x16 is only just about saturated by the 4090.


They didn't forget. You cut out the rest of the image to the right, which does have the PCIe information. Do you see those four blue lines on the right?
That's what I thought too, but it's not exactly like that. I think it depends on the game; I saw this video a while ago, and in CS:GO, for example, you get 100 FPS less ^^ the 1% lows are closer, though.
 
Interesting that this test was done by reducing lanes; my BIOS lets you drop the gen speed, but gives no control over lanes.

So I could drop to 4x16 instead of 5x16, and likewise 3x16. But there's no way to use x8; I could drop to x4 by moving the card to my 4x4 slot.
 
Interesting that this test was done by reducing lanes; my BIOS lets you drop the gen speed, but gives no control over lanes.

So I could drop to 4x16 instead of 5x16, and likewise 3x16. But there's no way to use x8; I could drop to x4 by moving the card to my 4x4 slot.
The second PCIe slot should be x8 only, I think, if you have two full PCIe slots.
 
The second PCIe slot should be x8 only, I think, if you have two full PCIe slots.
With the death of SLI, I think this is a less common setup now?

On my board I have the following PCIe slots:

full 5x16
full 4x4
full 3x4
finally two short 3x1
 
With the death of SLI, I think this is a less common setup now?

On my board I have the following PCIe slots:

full 5x16
full 4x4
full 3x4
finally two short 3x1
I mean that if you use your GPU in the second PCIe slot, it should work at only x8; it will split the lanes with the main PCIe slot. But I see you have a top-end motherboard, and I just checked and it's different, no mention of x8 in the specs.
• 3 x PCIe x16 Slots (PCIE2/PCIE3/PCIE5: single at Gen5x16 (PCIE2); dual at Gen5x16 (PCIE2) / Gen4x4 (PCIE3); triple at Gen5x16 (PCIE2) / Gen4x4 (PCIE3) / Gen3x4 (PCIE5))*
 
...I could drop to x4 by moving the card to my 4x4 slot.

That wouldn't be representative of actual performance since that slot is connected to the chipset, not the CPU. All the graphics data would have to go through the DMI with a substantial latency penalty.

AMD used to allow CrossFire on such configurations and it was not a good experience. NVIDIA never allowed that for SLI, for a good reason.
 
That's what I thought too, but it's not exactly like that. I think it depends on the game; I saw this video a while ago, and in CS:GO, for example, you get 100 FPS less ^^ the 1% lows are closer, though.
It's still very negligible: 4% at 1080p on average, 2% at 1440p. It matters, as Steve said, in 2-3 specific games, so those who play those games with new cards simply need a newer motherboard with a Gen4 GPU slot. For others, it's perfectly fine to keep older Gen3 boards and play with new cards. Nothing to worry about.
 
It's still very negligible: 4% at 1080p on average, 2% at 1440p. It matters, as Steve said, in 2-3 specific games, so those who play those games with new cards simply need a newer motherboard with a Gen4 GPU slot. For others, it's perfectly fine to keep older Gen3 boards and play with new cards. Nothing to worry about.
Maybe you just get an x16 GPU from AMD rather than upgrading your motherboard (going from B450 to B550, for example, makes no sense). I don't think a 100-frame difference in CS:GO, for example, is nothing, and there are a few games there that lose a decent amount of FPS (who knows about future games), and xx60 cards are all about 1080p.
 
That wouldn't be representative of actual performance since that slot is connected to the chipset, not the CPU. All the graphics data would have to go through the DMI with a substantial latency penalty.

AMD used to allow CrossFire on such configurations and it was not a good experience. NVIDIA never allowed that for SLI, for a good reason.

Indeed, I would be testing it by dropping the gen setting in the BIOS whilst using the 5x16 slot. I posted because I am curious what W1zzard is doing; I've never seen a BIOS let you set the lane count, just the gen version. But maybe his motherboard lets you drop the lane count.
 
Indeed, I would be testing it by dropping the gen setting in the BIOS whilst using the 5x16 slot. I posted because I am curious what W1zzard is doing; I've never seen a BIOS let you set the lane count, just the gen version. But maybe his motherboard lets you drop the lane count.

It is explained in the article. Some Intel motherboards have a Gen5 M.2 slot, but the CPU only has 16 Gen5 lanes. So if you install any SSD in that slot (no matter which generation), it will automatically split lanes into x8+x8 (it can't do x12+x4). Those motherboards have a second CPU-attached M.2 slot that uses the 4 dedicated Gen4 lanes.

My motherboard doesn't have a Gen5 slot, but it still has the bifurcation option in the BIOS. I can set it to x8+x8, even though there's no reason for it. Although it would allow me to use one of those 4060 graphics cards that has an M.2 slot on it. Bifurcation support is required for that to work.
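If it helps to picture that bifurcation behaviour, here's a toy sketch; the list of supported split modes is my assumption about typical boards, not something from the article, so check your own manual:

```python
# Toy model of the PEG bifurcation choice described above.
# Assumed split modes for the 16 CPU Gen5 lanes; real support is board-specific.
SUPPORTED_SPLITS = {(16,), (8, 8), (8, 4, 4)}

def peg_split(gen5_m2_populated: bool) -> tuple:
    """Pick the lane split: any SSD in the CPU Gen5 M.2 slot forces x8+x8."""
    split = (8, 8) if gen5_m2_populated else (16,)
    assert split in SUPPORTED_SPLITS  # x12+x4 is simply not a supported mode
    return split

print(peg_split(False))  # (16,)  -> GPU keeps the full x16
print(peg_split(True))   # (8, 8) -> GPU drops to x8; the M.2 hangs off the other group
```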
 
It is explained in the article. Some Intel motherboards have a Gen5 M.2 slot, but the CPU only has 16 Gen5 lanes. So if you install any SSD in that slot (no matter which generation), it will automatically split lanes into x8+x8 (it can't do x12+x4). Those motherboards have a second CPU-attached M.2 slot that uses the 4 dedicated Gen4 lanes.

My motherboard doesn't have a Gen5 slot, but it still has the bifurcation option in the BIOS. I can set it to x8+x8, even though there's no reason for it. Although it would allow me to use one of those 4060 graphics cards that has an M.2 slot on it. Bifurcation support is required for that to work.
OK, then how did he do x4 for the GPU? :)

On my AMD system, if I set x8/x8 it still runs at x16, and only drops to x8 if I put something in the PCIe slot that shares the lanes. However, that board had a ton of options that didn't do anything; ASRock has a habit of adding options that don't do anything.
 
OK, then how did he do x4 for the GPU? :)

He didn't. He did x8 and limited the speed to Gen3 and Gen2.

It is an approximation. The bandwidth for Gen4 x4 and Gen3 x8 is the same, but Gen4 might offer slightly lower latency. The bandwidth for x8 Gen2 is actually slightly higher than x4 Gen3, because different signaling was used resulting in different overhead.

You could probably physically limit lanes to x4 by taping off the pins, just like Digital Foundry did for their NVMe drive tests on the PS5. But that's more advanced stuff.
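To put rough numbers on the overhead point (assuming the standard encoding schemes: 8b/10b for Gen2, 128b/130b for Gen3 and later):

```python
# (raw rate in GT/s, encoding efficiency) per PCIe generation
LINK = {2: (5.0, 8 / 10), 3: (8.0, 128 / 130), 4: (16.0, 128 / 130)}

def usable_gbps(gen: int, lanes: int) -> float:
    """Approximate one-direction bandwidth in GB/s (1 GT/s ~ 1 Gb/s per lane)."""
    rate, eff = LINK[gen]
    return rate * eff * lanes / 8  # divide by 8: bits -> bytes

print(f"Gen2 x8: {usable_gbps(2, 8):.2f} GB/s")  # 4.00 -- a touch more than...
print(f"Gen3 x4: {usable_gbps(3, 4):.2f} GB/s")  # 3.94 -- ...this
print(f"Gen3 x8: {usable_gbps(3, 8):.2f} GB/s")  # 7.88 -- matches...
print(f"Gen4 x4: {usable_gbps(4, 4):.2f} GB/s")  # 7.88 -- ...this
```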
 
He didn't. He did x8 and limited the speed to Gen3 and Gen2.

It is an approximation. The bandwidth for Gen4 x4 and Gen3 x8 is the same, but Gen4 might offer slightly lower latency. The bandwidth for x8 Gen2 is actually slightly higher than x4 Gen3, because different signaling was used resulting in different overhead.

You could probably physically limit lanes to x4 by taping off the pins, just like Digital Foundry did for their NVMe drive tests on the PS5. But that's more advanced stuff.
Ok thanks for the explanation.
 
He didn't. He did x8 and limited the speed to Gen3 and Gen2.
I don't remember you being here to help during testing?
 
Wow, you finally responded to a post of mine. A pity it wasn't one where I asked a question. At least I know you can see my posts. ;)

The article says "We also did some extra runs for even more bandwidth constrained setups like x8 3.0, and x8 2.0 (for science).", so...
 