
NVIDIA GeForce RTX 3080 PCI-Express Scaling

But the level data still has to move from the CPU to the GPU over the PCIe lanes. Isn't that when there's a chance to saturate PCIe bandwidth, if the NVMe SSD is PCIe 4.0 compliant?
The GPU plays little role in game level load times. Perhaps loading is faster because of the extra bandwidth, but that isn't the GPU's doing. Otherwise, I'm not sure what you're saying.
 
He means Microsoft DirectStorage, whenever that's coming (with NVIDIA branding for NVIDIA cards..)
That's still quite far away; maybe wait for the first few games that support it before jumping to conclusions or making assumptions about how much extra PCIe bandwidth it will require.
 
I load games in under 10 seconds from SATA SSDs in RAID 0, about a fifth of the time it took on a legacy 7,200 rpm HDD.
You do realise RAID 0ing SSDs does not help your (game, Windows) performance in any way? A reasonable and cheap SATA SSD will have 20 or even 40 times the 4K random read (Q1/T1) performance of a random old HDD.
For NVMe, RAID 0 would if anything lose performance due to the added latency, since NVMe is already fast compared to SATA; with SATA, the RAID latency shouldn't matter as much.
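The 4K-random-read argument can be put into rough numbers. Below is a minimal back-of-the-envelope Python sketch; all the throughput figures and the workload split are invented for illustration (a single SATA SSD doing ~550 MB/s sequential and ~40 MB/s at 4K QD1, a striped pair doubling only the sequential part):

```python
# Back-of-the-envelope model (illustrative numbers, not benchmarks):
# game loads are dominated by small random reads, where RAID 0 striping
# adds little, rather than by sequential throughput, where it roughly doubles.

def load_time(seq_mb, rand_mb, seq_mbps, rand_mbps):
    """Seconds to read seq_mb sequentially plus rand_mb of 4K random I/O."""
    return seq_mb / seq_mbps + rand_mb / rand_mbps

# Hypothetical workload: 2 GB of sequential assets + 500 MB of scattered 4K reads
hdd        = load_time(2000, 500, seq_mbps=220,  rand_mbps=1)   # 7200 rpm HDD, ~1 MB/s at 4K QD1
single_ssd = load_time(2000, 500, seq_mbps=550,  rand_mbps=40)  # one SATA SSD
raid0_ssd  = load_time(2000, 500, seq_mbps=1100, rand_mbps=40)  # RAID 0 doubles only the sequential part

print(f"HDD:        {hdd:.0f} s")        # ~509 s
print(f"Single SSD: {single_ssd:.1f} s") # ~16.1 s
print(f"RAID 0 SSD: {raid0_ssd:.1f} s")  # ~14.3 s
```

Even with these made-up numbers, the HDD-to-SSD jump is enormous while the RAID 0 gain is marginal, because striping leaves 4K QD1 performance essentially unchanged.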
 
You do realise RAID 0ing SSDs does not help your (game, Windows) performance in any way?
Not true at all. When doing data copy operations, should a user hit the cache threshold (which is not uncommon), the RAID function does keep things flowing better. SSD RAID 5 is even better for mitigating cache-threshold problems.
 
Not true at all. When doing data copy operations, should a user hit the cache threshold (which is not uncommon), the RAID function does keep things flowing better. SSD RAID 5 is even better for mitigating cache-threshold problems.
RAID 0 performance is fantastic compared to a single SATA SSD, let alone the HDDs of the old days: much faster loading times, by a large multiple. Anyone who says otherwise obviously hasn't experienced it. Putting game folders or other large files on an SSD is worth every penny, and it's even better in a RAID config: nearly double the speed, and 5 to 10 times faster than even a good 220 MB/s Toshiba HDD.
 
Maybe I'm understanding it wrong, but why would the 1% lows of FPS use more bandwidth?

PCIe bandwidth requirements vary on a scene-by-scene basis; it isn't fully linear. It's like CPU performance in a sense: a longer benchmark that only reports the average will smooth out the drops. Of course, it's not as important as the CPU itself, so it will never be as bad as, say, a 4-core/4-thread CPU running a modern game during an intense action scene, but still.

IDK how people don't know these things. I guess they stare at benches and don't actually play the games.
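The average-vs-1%-low point can be sketched numerically. Here is a toy Python example with made-up frame times (not measured data), showing how a handful of heavy frames barely move the average FPS but collapse the 1% low:

```python
# Toy data: 990 fast frames at 10 ms plus 10 spikes at 40 ms (invented numbers).
frame_times_ms = [10.0] * 990 + [40.0] * 10

# Average FPS over the whole run.
avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)

# "1% low" FPS: the average frame rate over the slowest 1% of frames.
slowest = sorted(frame_times_ms)[-len(frame_times_ms) // 100:]
low_1pct_fps = 1000 * len(slowest) / sum(slowest)

print(f"average FPS: {avg_fps:.1f}")       # ~97.1: barely dented by the spikes
print(f"1% low FPS:  {low_1pct_fps:.1f}")  # 25.0: the spikes dominate
```

Review sites define 1% lows in slightly different ways; this sketch uses the slowest-1%-of-frames convention, but the averaging effect is the same under any of them.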
 
I am transferring this remark from the article for further analysis.
... the bottleneck remains with the "CPU processing power".

CPU processing power is wasted in gaming only on whatever runs in software, the best-known such software being Microsoft DirectX.
I will make the wild assumption that when most DirectX commands are executed by the GPU, the CPU enjoys less stress.

And now I have to ask: whose responsibility is it that the CPU gets stressed unimaginably instead of the GPU? Someone made a mistake here.
From the presented test results, the comparison of Battlefield V vs Project CARS 3 works as proof to me that game developers are 100% responsible for any CPU-caused bottleneck.

Some people forget that a CPU is two engines in one; the second engine is the mathematical processor.
For example, in a game scene with a forest, if you add 1,000 trees to the landscape and you decide (as the game developer) to have the math processor assist the GPU, then you are responsible for your own choices.
The CPU-caused bottleneck will be there, and the cost of this mistake is transferred to the consumer's pocket: someone must pay and buy a more powerful CPU.

Therefore what we have is a pack of badly written games and a pack of better-written titles, and we should reward the game developers who respect the consumer's wallet by making fewer mistakes.

Regarding raw GPU power, NVIDIA made more noise about the 3000 series than it delivered in performance.
I predict that when lithography gets down to 6 nm, cards will deliver above 120 FPS at 4K and 300 FPS at 1920x1080.
Personally, I would happily buy such a dream card and grow old with it.

In my eyes, the 3000 series is another hot pan that uses too much electrical energy.
PCIe bandwidth load increases when someone decides to share GPU load with the CPU.
 
Rubbish & nonsense. PCIe bandwidth is perfectly linear, literally by design.

Bandwidth is linear. The requirements per scene are not linear.

IDK if I worded it wrongly, but what I am saying *is* true. Not every scene has the same geometry/raster/bandwidth/shader requirements. Not every scene is as heavy on PCIe bandwidth. I hope this makes it clearer.

EDIT:
You can even see this here: the RT benchmark (a custom Vulkan RT path by NVIDIA), despite the lower FPS, has higher bandwidth requirements on the PCIe bus than the other benches. And different games react differently too.
 
My suspicion that 3.0 will still be good enough for a while has been confirmed. Once RTX IO or DirectStorage becomes an actual thing in games, we will see a need for 4.0, I think.

Horizon is a terrible port. Stutters like hell.
Funny, I don't get those stutters. :rolleyes: Must not be that bad of a port... or it would apply to everyone.
 
Thank you for correcting yourself.


This. Just because one person has a problem doesn't automatically mean everyone will have that problem.

The original comment was easy to understand anyway. Only pedants would have a problem with it, but it is corrected :)
 
My suspicion that 3.0 will still be good enough for a while has been confirmed. Once RTX IO or DirectStorage becomes an actual thing in games, we will see a need for 4.0, I think.


Funny, I don't get those stutters. :rolleyes: Must not be that bad of a port... or it would apply to everyone.
Even then, RTX IO is supported by Turing, which is PCIe 3.0.

About Horizon, I have zero issues as well, but I've watched many people streaming it (including someone with your CPU, an 8700K, and 1080 Ti in SLI) and they had weird jittery/stuttery gameplay later in the playthrough. It's really strange and I haven't seen anything like it before.

Jittery is the word, not stutter; sorry, it didn't come to mind at the time. :)
 
Can you guys just confirm whether or not I'd be able to run a 3080 on my system?

I am aware I'm running quite old tech, but it's efficient, streamlined, overclocked tech that's still providing excellent playability on a 144 Hz monitor.
I just could do with the extra power of the 3080 over my humble 1080 Ti for VR.

My main concern is my PCIe 2.0 slot.


Thanks.
 
Thank you for correcting yourself.


This. Just because one person has a problem doesn't automatically mean everyone will have that problem.
I use the FH4 benchmark as my daily driver because it's super consistent, sufficiently loads all components, and gives very granular results.
 
Can you guys just confirm whether or not I'd be able to run a 3080 on my system?

I am aware I'm running quite old tech, but it's efficient, streamlined, overclocked tech that's still providing excellent playability on a 144 Hz monitor.
I just could do with the extra power of the 3080 over my humble 1080 Ti for VR.

My main concern is my PCIe 2.0 slot.


Thanks.
If it's the system in your specs, it'll run, but you will experience some CPU bottlenecking. Not in all games or at all resolutions, but it's going to happen.
 
Can you guys just confirm whether or not I'd be able to run a 3080 on my system?

I am aware I'm running quite old tech, but it's efficient, streamlined, overclocked tech that's still providing excellent playability on a 144 Hz monitor.
I just could do with the extra power of the 3080 over my humble 1080 Ti for VR.

My main concern is my PCIe 2.0 slot.


Thanks.
You'll lose a few percent, maybe up to double digits, at 1080p, sure. Between the couple of percent from PCIe 2.0 and the notably lower IPC of the Sandy Bridge CPU (even overclocked), it will make a difference in most titles. That said, I'd imagine you'll still reach 144 Hz/FPS.
 
Going to plug my 3090 into my old socket 775 system when it gets here. I'll let you know how it works out.
I'm sure it would work fine, actually. If you want to make use of the GPU's potential, just be sure to shift the bottleneck toward the GPU by pushing the graphics resolution; DSR is perfect for this, since even at 1080p or 1440p it heavily slants the load toward GPU performance and away from any CPU bottleneck. I don't think anyone pairing that type of GPU with socket 775 would realistically expect esports-style high-refresh-rate gaming to go over well; that's much more latency-sensitive and much less GPU-dependent. Testing that GPU at 480p with a more modern CPU and a high refresh rate isn't any better either: you're bottlenecking the CPU unrealistically and throwing most of the perks of a GPU like that out the window.
 
Can you guys just confirm whether or not I'd be able to run a 3080 on my system?

I am aware I'm running quite old tech, but it's efficient, streamlined, overclocked tech that's still providing excellent playability on a 144 Hz monitor.
I just could do with the extra power of the 3080 over my humble 1080 Ti for VR.

My main concern is my PCIe 2.0 slot.


Thanks.

It depends on the game; some games are just fine with an old 2600K but many are not. Even if the average FPS on old Intel quad cores looks okay, the minimum framerates are really bad compared to newer higher-core-count CPUs with more cache and faster RAM. I moved one of my machines from an i7-3770K on DDR3-1600 to an R7 3700X on DDR4-3200 when they came out over a year ago, and games that I thought were running okay suddenly ran much more smoothly when the action started getting busy.

We are seeing more games run badly on quad-core CPUs than ever before, and with the PS5 and Xbox Series X both having 8-core, 16-thread CPUs in them, that is a trend that will definitely continue.

If you can afford an RTX 3080 you can also afford to change your CPU/RAM/Motherboard. You should really wait a couple of months to see how Big Navi, Ryzen 5000-series, and the RTX 3070 perform but a 3700X, B550, and 16GB of DDR4-3600 is going to cost you $250 less than a 3080 right now and it'll probably improve your gaming experience more than a 3080 by improving the minimum fps rather than the average. By November, the 3700X is probably going to be heavily discounted on sale, and the replacement is likely to be worth it over the old 3700X even at the expected $350 price point.

Steve has the answer:

(attached image: benchmark screenshot)
 
PCIe 2.0 is not the main limiting factor...
...Yet the CPU and RAM in his system will be the bigger bottleneck
I didn't say it was. I said, "between this (PCIe 2.0) and that (SB IPC), it will be bottlenecked."

4% to be exact. It's one part of the big picture. I even said "a couple" percent, minimizing that value. Nothing there implies that I thought it was the main concern. SB IPC comes into play (5 GHz Sandy Bridge isn't close to Comet Lake at 5 GHz), as well as memory speeds, as you added. And that doesn't count any games that are capped by a 4c/8t part...

If I went wrong anywhere, it was in underestimating how slow the CPU is and its ability to reach 144 FPS (in some titles, like SOTTR).
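For what it's worth, the "one part of the big picture" framing can be sketched with a crude model: treating the ~4% PCIe penalty from the article and a hypothetical CPU/IPC penalty as independent fractional losses, they compose roughly multiplicatively. A Python sketch; the 20% CPU figure is invented purely for illustration, and in practice whichever bottleneck binds harder tends to dominate rather than stack cleanly:

```python
# Crude composition of independent slowdowns (illustrative only).
pcie_penalty = 0.04  # ~4% loss from PCIe 2.0 x16, per the scaling article
cpu_penalty = 0.20   # hypothetical loss from Sandy Bridge IPC + DDR3 in a CPU-heavy title

remaining = (1 - pcie_penalty) * (1 - cpu_penalty)
print(f"combined: {remaining:.1%} of an unconstrained system's FPS")  # 76.8%
```

The point of the sketch is only that a small PCIe loss doesn't disappear when a bigger CPU loss is present; it compounds on top of it.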
 