
NVIDIA GeForce RTX 5060 Ti PCI-Express x8 Scaling

There are caveats to that; evidently it CAN make a huge difference under some circumstances. When HUB deliberately used settings to find the 8GB card's limits, it absolutely choked in situations where the 16GB card was unaffected. This happened at both PCIe 3.0 and 4.0. That was the 4060 Ti, but I can't see why the 5060 Ti would be any better, especially on 3.0 motherboards where it will only run at 3.0 x8.
These were very niche scenarios (1440p, RT & DLSS), and if you still have a motherboard with only PCIe 3.0 support, you should upgrade your CPU + motherboard instead of the GPU. (I still wish it had 12 GB.)
 
Would still be less hassle to just get a higher-end GPU which can use all 16 PCIe 3.0 lanes, like the 5070 or 9070. Or simply a 16GB 5060 Ti, which I presume, like the 4060 Ti, will be only minimally handicapped by 3.0 x8: maybe up to a 5% loss, sometimes less. I've compared my 4060 Ti 3DMark and Geekbench GPU scores at 3.0 x8 to systems running on 4.0 boards, and there seems to be minimal difference in the results.
 
Well, yes, the Hardware Unboxed video would imply the same. 16 GB has no problem even on PCIe 2.0 (in most cases). I don't think the card's problem is that it only supports 8 PCIe lanes, but that it struggles in niche/edge-case scenarios because of VRAM limits, and that's why I wished it had 12 GB for peace of mind (16 GB, as you can see from the video, is clearly overkill for the performance tier the 60 Ti class cards currently provide).
 
I've come here because I'm using a 9070 XT on PCIe 3.0 (for me everything works perfectly :) ), and this topic is great :D Thanks for that ^^
 
So that means the PCIe 3.0 x16 results are the same as PCIe 4.0 x8, correct? I mean, why would anyone run their GPU at x8 instead of x16?
The card is only wired for x8, not for x16.
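For anyone wanting to sanity-check that equivalence, the bandwidth math can be sketched quickly. This is a rough sketch using the commonly quoted encoding-adjusted per-lane rates (8b/10b for Gen 1-2, 128b/130b for Gen 3+); real-world throughput is somewhat lower:

```python
# Approximate usable one-direction bandwidth per PCIe lane, in GB/s,
# after link-encoding overhead. Nominal figures, not measured throughput.
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Nominal one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# The 5060 Ti is wired for x8 only, so on a Gen 3 board it gets
# Gen 3 x8, while a full x16 card on the same board gets Gen 3 x16:
print(link_bandwidth(3, 8))   # Gen 3 x8  -> ~7.9 GB/s
print(link_bandwidth(4, 8))   # Gen 4 x8  -> ~15.8 GB/s
print(link_bandwidth(3, 16))  # Gen 3 x16 -> ~15.8 GB/s, same as Gen 4 x8
```

So Gen 3 x16 and Gen 4 x8 offer the same nominal bandwidth, which is why an x8-only card loses the most on a Gen 3 board.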
 
Kinda wish W1zzard would buy a PCIe extender or two and cut one down to physical x8 (or use a motherboard with an x8 slot, and another at x4).
Reason being that downgrading PCIe to a lower speed still leaves the full 16 lanes, which gives lower latency under heavy load than an x8 link at the same bandwidth.
(This can be seen in the older scaling articles: NVIDIA 1080, 2080 or older.)

Still like seeing the scaling articles :)
 
The older cards (1080, 2080, etc.) still physically had 16 lanes.

Newer-generation midrange cards (4060 and 5060) don't; they are physically only 8 lanes. So no extender is needed: they are x8, and x8 only, and it doesn't matter what kind of motherboard you use, they will only ever be x8.

This was a change starting with the 40## generation and seems to continue into the 50##.
 
All 8GB cards will be bad in 2025 with more demanding and popular games. There's no question about it.
The benchmarks haven't been shown yet, but they are expected to mimic the differences seen with the 4060 series, so your theory will likely be proven wrong.

They never should have been made.
Then don't buy one.
 
There are a number of well-documented examples where the 8GB card is a stuttering mess, more so on older systems limited to PCIe 3.0 speeds.
So gamers will need to look into the specific games they play to avoid unpleasant surprises. The devil is always in the details.

In addition, preliminary tests on the 5060 Ti 8GB model show further problems with 1% lows and Frame Gen.
 
You can't quote HUB and expect to be taken seriously. They are known untrustworthy sell-outs.
Let's see what W1z's and a few others' numbers show. Notebookcheck is just quoting that same Chinese leak that was shown in the other thread. They didn't do any testing themselves.

If you're going to bother with citations, cite something credible, not something as flimsy as a wet paper bag.
 
You can't quote HUB and expect to be taken seriously. They are known untrustworthy sell-outs.
They are not known for that. Nonsense.
If you're going to bother with citations, cite something credible, not flimsy as a wet paper bag
It's enough to look into the framerates and combinations of games in the HUB video to judge for yourself. Even if you don't believe a single word from Steve's mouth, you have your own eyes and mind to see and process what was shown.
 
Would have been nice if you could have included results with Resizable BAR off.

Intel Arc GPUs drop in performance sharply when Resizable BAR is turned off or unsupported.

 
5060 Ti 8GB model: a pretty damning verdict, as expected, especially when attempting to play with features Nvidia wants to sell so dearly, such as RT and Frame Gen. System integrators will be selling a lot of those to unaware buyers.
 
I'm looking at these benchmarks and see someone finally used the 6800 XT, which I am using, and it still hits good FPS in these newer AAA games. I am using the Gigabyte Aorus Master 6800 XT with a 9800X3D, and it will not be upgraded for a couple more years; I am happy with it. These newer cards just are not getting the increase they should be. It is sad to see my card even beating out the 5070 in some games. Nvidia is just terrible and probably will be for another generation or two.
 
No worries, I just ordered an 8 GB.
It would be interesting to see how PCIe scaling compares between 8GB and 16GB in Ratchet & Clank: Rift Apart.

This specific game performs terribly on a 3060 Ti 8GB with an eGPU setup (both with PCIe 4.0 x4 and 3.0 x4), while it runs flawlessly on a 3060 12GB.
 
How does the 8GB version behave with PCIe 4.0 and Indiana Jones RT enabled?
 
Almost the same as PCIe 5.0 and PCIe 3.0. See the graphs:
 
Almost true:
- you show 16 GB version graphs, not 8 GB;
- you show average for many games, not Indiana Jones;
- you show raster average, not RT.
So, almost the same... not.

@W1zzard
We tested PCI-Express scaling of the RTX 5060 Ti 16 GB here. I will rerun the same tests with the 8 GB model, to see what effect limited bandwidth has when running out of VRAM, and system memory is used as overflow.
I'm still waiting patiently, this should be entertaining.
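While we wait, here is a back-of-the-envelope sketch of what that overflow test is probing. All numbers below are illustrative assumptions, not measurements: the per-lane rates are the nominal encoding-adjusted PCIe figures, and the 100 MB overflow size per frame is an invented example.

```python
# Rough sketch: when a game needs more assets than fit in VRAM, the
# overflow spills to system RAM and must cross the PCIe link when it is
# touched. How much of a 60 fps frame budget would that transfer eat?
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}  # nominal usable GB/s per lane

def transfer_ms(size_gb: float, gen: int, lanes: int = 8) -> float:
    """Milliseconds to move size_gb across a PCIe link once."""
    return size_gb / (PER_LANE_GBPS[gen] * lanes) * 1000.0

FRAME_BUDGET_MS = 1000.0 / 60  # ~16.7 ms per frame at 60 fps

# Suppose a frame has to pull 100 MB of spilled assets from system RAM:
for gen in sorted(PER_LANE_GBPS):
    cost = transfer_ms(0.1, gen)
    print(f"Gen {gen} x8: {cost:5.1f} ms of a {FRAME_BUDGET_MS:.1f} ms frame budget")
```

On Gen 3 x8, that single transfer eats most of the frame budget, while on Gen 5 x8 it costs a quarter as much, which is consistent with the expectation that the 8 GB card's 1% lows degrade hardest on older boards once VRAM overflows.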
 
you show 16 GB version graphs, not 8 GB
The 8GB graphs don't exist, and PCIe scaling has never been a huge problem. One could run a 5060 Ti on a PCIe 2.0 board and still get solid performance. I don't think anyone with as much experience as some of us here have is expecting much of a difference between the 8GB and 16GB cards for PCIe bandwidth scaling.
you show average for many games, not Indiana Jones
IJ&TGC is an outlier and does not represent the majority of games.
you show raster average, not RT
Oh, right, let's fix that.
https://www.techpowerup.com/review/nvidia-geforce-rtx-5060-ti-pci-express-x8-scaling/29.html
As you can see, the differences scale more with resolution than with PCIe version, 6% being the worst case seen. It's really not a big deal.
So, almost the same... not.
6% at worst and most cases less than 3%? That's as "almost the same" as one can get without being spot-on the same.
 