
The Last of Us Part I Benchmark Test & Performance Analysis

Damn, you now need a 4090 to get comfortably over 60FPS at 4K on a console game released 10 years ago.

No wonder I nab cards second hand these days.
 
Valve already does this with the Steam Deck... and every other hardware/software combo in existence that has the pre-caching feature activated on Linux. If you have it enabled, you compile the shaders if they aren't in the database, or you download them if they already exist. I wonder how big that database is; it must be massive.
Yeah, that's how this conversation string started:
Unavoidable? It's 12GB of shaders that will be the same for any given graphics architecture. My Steam Deck downloads pre-compiled shaders for Deck-verified games hosted by Valve. Why does my GeForce or Radeon need to waste half an hour creating its own when they could be pre-compiled by the developer and downloaded as additional data during install?
You've also got to keep in mind that the compiled shaders have to match the driver version. So it's not only per architecture, but also per driver version and possibly per OS (kernel).
Drivers, no - I only have to recompile shaders if I DDU or manually clear the shader cache, so no, driver upgrades don't invoke that. Kernel/OS, I can believe needing a different shader, though I'm not entirely sure why, as a DX12 shader or a Vulkan shader is the same API regardless of host OS.
 
Last edited:
Have we reached a point where games are getting so sophisticated that shader compilation will not be done "quietly" anymore?
Yes, especially if you expect that to run on an i5-2500K and GTX 1060 as well as on an R9 7950X3D and RTX 4090

There aren't that many architectures that meet the minimum specs. RDNA1, RDNA2, Turing, Ampere, Arc. That's it, just precompile those and host them on the digital download service.

If you want to game on a GPU that isn't covered then sure, compiling shaders yourself is the fallback, but the game was originally designed for RDNA2 and according to the Steam hardware survey, over 50% of the market is running Turing or Ampere.
Or you can use 10-30 minutes of your machine's time to compile them instead of wasting thousands of terabytes of bandwidth to download every one of them and terabytes of drive space to host them...
Also, how many combinations would they need to host? You'd basically need every GPU/CPU combo possible; even for those 4 architectures it's a nightmare.
Come on guys, it is a one-off thing. Horizon Zero Dawn does the same, and I didn't see people freak out online over that.
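Rough back-of-envelope on that, using the 12GB figure quoted earlier in the thread; the architecture count, driver-build count and download count below are made-up assumptions just to show the scale, not real numbers:

/* Back-of-envelope: what hosting precompiled shader packages would cost.
   12 GB per package comes from earlier in the thread; the number of
   driver builds and downloads are assumptions for illustration only. */
#include <stdio.h>

int main(void) {
    const double package_gb    = 12.0;      /* shader package size (thread figure) */
    const int    architectures = 5;         /* RDNA1, RDNA2, Turing, Ampere, Arc */
    const int    driver_builds = 10;        /* assumed driver versions kept around */
    const double downloads     = 1000000.0; /* assumed players pulling one package each */

    double storage_gb  = package_gb * architectures * driver_builds;
    double transfer_tb = package_gb * downloads / 1024.0;

    printf("Storage to host all packages: %.0f GB\n", storage_gb);   /* 600 GB */
    printf("Bandwidth for the downloads : %.0f TB\n", transfer_tb);  /* ~11719 TB */
    return 0;
}

Storage is peanuts; the bandwidth is where the "thousands of terabytes" comes from, though each player would only ever download the one package matching their GPU and driver.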
 
Drivers, no - I only have to recompile shaders if I DDU or manually clear the shader cache, so no, driver upgrades don't invoke that. Kernel/OS, I can believe needing a different shader, though I'm not entirely sure why, as a DX12 shader or a Vulkan shader is the same API regardless of host OS.
Meaningful driver updates, not incremental. If the core of the driver doesn't change you don't need to recompile.
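For anyone curious how "has to match the driver" shows up in practice: on the Vulkan side the driver reports a pipelineCacheUUID, and cached or precompiled pipelines are only reusable while that UUID matches. A minimal sketch of reading it (error handling omitted; assumes the Vulkan loader and headers are installed, build with -lvulkan):

/* Print the GPU's pipeline-cache UUID. A saved pipeline cache is only
   valid while this UUID matches, which is why a meaningful driver update
   invalidates precompiled shaders. Error handling omitted for brevity. */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    VkApplicationInfo app = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                              .apiVersion = VK_API_VERSION_1_0 };
    VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                                 .pApplicationInfo = &app };
    VkInstance instance;
    if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 1;
    VkPhysicalDevice gpu;
    vkEnumeratePhysicalDevices(instance, &count, &gpu);   /* just grab the first GPU */

    VkPhysicalDeviceProperties props;
    vkGetPhysicalDeviceProperties(gpu, &props);

    printf("%s (driver %u), pipelineCacheUUID: ", props.deviceName,
           (unsigned)props.driverVersion);
    for (int i = 0; i < VK_UUID_SIZE; i++) printf("%02x", props.pipelineCacheUUID[i]);
    printf("\n");

    vkDestroyInstance(instance, NULL);
    return 0;
}

DX12 does the equivalent with its pipeline library / cached PSO blobs, which are likewise rejected when the driver no longer matches.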
 
A 1080 Ti should be in the charts. It would show the difference between the generations when VRAM is not an issue.

No, not really needed anymore. Just look at the 2080 Ti; they are not that far off.

It could even be useful for me, but my card's performance never matched W1z's anyway - I was always under water and shunt-modded. So what's the point? The 2080 Ti is enough to get an idea about that gen.
 
40+ minutes for shader compilation plus 10.5GB of VRAM at 900p in a decade-old game? Excellent job, Sony, you managed to make the buggiest port of your most popular game. I would give each person involved in creating this port a 3060 Ti or 3070 and make them complete the game twice at 1440p on ultra settings.
 
Last edited:
Anybody notice how almost every console port uses a ton of VRAM? It's not like AMD is doing this intentionally, right?

It's not just a remake
It's a remake of a remake of a remake :roll:
Original on PS3, remaster on PS4 (no original release on PS4), and now this PS5 remade version.

The GPU situation doesn't help for these sorts of games. This is quite a bit more demanding than most games out there.
 
Seriously, at least for me, the game runs great at 2560 x 1440 with medium texture settings, everything else maxed, and FSR 2 Quality on an old 5700 XT 8GB. The initial shader compile was long, but I have no stutters, pop-in or crashes. The mouse issues are/were there, but I use my DualSense anyway. It looks better than the majority of games on high/very high, and this is coming from someone who used to play at 4K very high/high regularly with a 3080 12GB.

Edit - Also convinced the missus that a 7900 XTX would be a good 'investment' too so I'm not fussed as much :)
 
I mean, why is this acceptable with today's gaming tech?
It's "acceptable" because an endless stream of moronic "Real Gamers (tm)" constantly lower their expectations into the gutter whilst simultaneously have all the "don't pre-order self-control" of Steven Seagal at a burger stand... Until PC gamers regrow their spines and start calling out sh*tty console ports for what they are again, PC gamers will get to continue to 'enjoy' a new wave of bad console ports whilst the PC tech sector quietly cheers on 12GB VRAM usage for 900p gaming as a reason for 'needing' a $1200 GPU as the new 'budget gamer' option...
 
Just like the RTX 4070 12GB & 4070 Ti 12GB become obsolete at 2160p in this game.....

The RTX 3090 24GB & RTX 3090 Ti 24GB still rule, the way it's meant to be played......
Something noteworthy is that the 4080 is faster in this title than the 3090, even with 16 GB of VRAM versus 24 GB.

Edit: As well, the 4070 Ti is faster than the 3090 Ti at 1440p. So it does seem like the 4070 Ti is a 1440p king, unless you're using DLSS 3, in which case you can get some good 4K performance too.
 
Last edited:
It's crazy to play new games upon release these days :laugh: I always give it 3-4 months to let the developers and GPU manufacturers iron out the majority of issues.
Heck, I have just started playing Horizon Zero Dawn and Cyberpunk. They were released years ago. The bugs (well, mostly for Cyberpunk) have been resolved.

Although that's a long time to compile shaders, yes, it's worth it. The constant shader-loading stutter *ahem Dead Space Remake ahem* is super annoying, even on the 4090 build.
 
No, not really needed anymore. Just look at the 2080 Ti; they are not that far off.
You're being rather optimistic. A stock 1080 Ti should be around the 3060 or 5700XT in newer games.
 
Something noteworthy is that the 4080 is faster in this title than the 3090, even with 16 GB of VRAM versus 24 GB.

No, it's really not noteworthy that a game that uses 14 GB of VRAM doesn't choke on a 16 GB card.

Edit: As well, the 4070 Ti is faster than the 3090 Ti at 1440p. So it does seem like the 4070 Ti is a 1440p king, unless you're using DLSS 3, in which case you can get some good 4K performance too.

No, the 3090 Ti is actually faster at 1440p:

[attached chart: 1440p benchmark results]


Pretty clear from the above chart that the 4070 Ti is slower at 1440p. This isn't the only game where the 3090 Ti is faster either, and given the limited memory and memory bandwidth of the 4070 Ti, I only expect the 3090 Ti to look better as time goes on.

If you are going to comment please read the article before posting.
 
Something noteworthy is that the 4080 is faster in this title than the 3090, even with 16 GB of VRAM versus 24 GB.

Edit: As well, the 4070 Ti is faster than the 3090 Ti at 1440p. So it does seem like the 4070 Ti is a 1440p king, unless you're using DLSS 3, in which case you can get some good 4K performance too.

A 1440p king that drops like a stone at 4k is an utter disappointment at $800.
 
Seems like a prime candidate for Direct Storage.
 
A 1440p king that drops like a stone at 4k is an utter disappointment at $800.

Ironically not even a 1440p king; it loses to the 3090 Ti, opposite to what he said. It's trash - an $800 GPU that yet again has too little VRAM and way too little memory bandwidth.
 
No, it's really not noteworthy that a game that uses 14 GB of VRAM doesn't choke on a 16 GB card.



No, the 3090 Ti is actually faster at 1440p:

[attached chart: 1440p benchmark results]

Pretty clear from the above chart that the 4070 Ti is slower at 1440p. This isn't the only game where the 3090 Ti is faster either, and given the limited memory and memory bandwidth of the 4070 Ti, I only expect the 3090 Ti to look better as time goes on.

If you are going to comment please read the article before posting.
Sorry, I meant 3090, not 3090 Ti. That was a typo. Thanks for correcting that.

A 1440p king that drops like a stone at 4k is an utter disappointment at $800.
But if you only have a 1440p monitor it's faster than the 3090, and especially with DLSS 3 on you'll be WAYYY ahead of the 3090.

Ironically not even a 1440p king; it loses to the 3090 Ti, opposite to what he said. It's trash - an $800 GPU that yet again has too little VRAM and way too little memory bandwidth.
The 3090 and 3090 Ti cost a lot more than $800, but if you're playing at 4K those are faster natively, of course. But I would probably recommend an AMD card over a 3090 Ti for non-DLSS 3 performance at 4K.
 
Last edited:
Sorry, I meant 3090, not 3090 Ti. That was a typo. Thanks for correcting that.


But if you only have a 1440p monitor it's faster than the 3090, and especially with DLSS 3 on you'll be WAYYY ahead of the 3090.


The 3090 and 3090 Ti cost a lot more than $800, but if you're playing at 4K those are faster natively, of course. But I would probably recommend an AMD card over a 3090 Ti for non-DLSS 3 performance at 4K.

Why would I want to use upscaling at 1440p with an $800 GPU already?

DLSS at lower resolutions is not as good as at 4K, and frame gen is worse at low frame rates because the difference between each frame is larger, so there's more room for artifacts. Let alone the fact that the main advantage of high FPS is reducing input latency.
 
Why would I want to use upscaling at 1440p with an $800 GPU already?

DLSS at lower resolutions is not as good as at 4K, and frame gen is worse at low frame rates because the difference between each frame is larger, so there's more room for artifacts. Let alone the fact that the main advantage of high FPS is reducing input latency.
DLSS 3.0 is actually generating entirely new frames between the native frames. It's not a form of upscaling. I often turn off DLSS 2.0 (upscaling) and turn on DLSS 3.0 (frame generation) because it provides the best visuals. And if you're already hitting 120 too, turning on FG doubles your 1% lows, which will do even more than upgrading to DDR5 RAM. Check out that FG in person sometime, I think you might be impressed.
 
DLSS 3.0 is actually generating entirely new frames between the native frames. It's not a form of upscaling. I often turn off DLSS 2.0 (upscaling) and turn on DLSS 3.0 (frame generation) because it provides the best visuals. And if you're already hitting 120 too, turning on FG doubles your 1% lows, which will do even more than upgrading to DDR5 RAM. Check out that FG in person sometime, I think you might be impressed.

Would rather stick to native and get lower input latency without the same artifacts.

DLAA is the best piece of NV software tech, just needs implementing in more titles.
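To put rough numbers on the latency point - a toy model only, assuming the interpolator has to hold back one native frame before it can insert a generated one, and ignoring Reflex and the rest of the render/display pipeline (so these are not NVIDIA's actual figures):

/* Toy model of frame-interpolation latency, not measured data.
   Assumption: to show a generated frame between native frames N and N+1,
   frame N can only be presented once N+1 exists, so output is delayed by
   roughly one native frame time. Everything else is ignored. */
#include <stdio.h>

int main(void) {
    const double native_fps[] = { 40.0, 60.0, 120.0 };
    for (int i = 0; i < 3; i++) {
        double frame_ms  = 1000.0 / native_fps[i];  /* native frame time */
        double shown_fps = native_fps[i] * 2.0;     /* what the FPS counter reports */
        printf("native %3.0f fps -> counter shows %3.0f fps, ~%.1f ms of added delay\n",
               native_fps[i], shown_fps, frame_ms);
    }
    return 0;
}

The added delay is biggest exactly where frame gen is most tempting (low native FPS), which is the artifact/latency trade-off being argued about above.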
 
Would rather stick to native and get lower input latency without the same artifacts.

DLAA is the best piece of NV software tech, just needs implementing in more titles.
Okay yeah, everyone has their own preferences.

I've noticed on the 4090 DLSS 3.0 looks better than turning on 2.0; maybe it's my eyes or something lol. Of course, you can run both together too if you want. DLAA is a bit tough to run, but it can yield some amazing AA.

Luckily it seems like we have slightly better choices this gen than last. Next gen will be like "HAS TECHNOLOGY GONE TOO FAR?" Going to be FULL path tracing
 