
The Last Of Us Part 2 Performance Benchmark

To be honest, I hope Nvidia (and AMD, but I don't trust them to be smart enough to do this) radically re-engineers its allocation of die space with the next generation. I would honestly be totally fine if Nvidia stopped adding CUDA cores in the next generation and just devoted 10x more space to RT cores. That's really what's needed. Rasterization has obviously hit its limit. Nvidia's architecture is still fundamentally stuck at Turing: CUDA cores take up 90% of the space, with RT and Tensor cores "bolted on." We need far more RT cores.
100% agree. For now I only hope that TSMC 2N with GAA-FETs will let the 6090 pack more of a punch than the 4090->5090 jump managed with its node improvements.


All the people who have been in the IT or PC gaming scene for many years knew that ray tracing, and especially path tracing, was the Holy Grail of graphics. But we didn't have the hardware for it (and still barely have it today... a 5090 getting 30 fps at native 4K is not really great). I was expecting Blackwell to have much better RT/PT performance due to the supposed "new architecture", but I guess we'll have to wait for Rubin or whatever architecture they use next for the RTX 60 series.
It's true, but to be honest I don't consider native 4K a good measure; a game doesn't necessarily have to run at native 4K@60 to be good or even great. I get that it might not be convincing, but I consider native 4K an extreme waste of performance at the moment. DLSS Quality or even DLSS Performance at 4K with DLSS 4 is enough for me if it allows eliminating light leaking and running dynamic GI. Not saying it's a good tradeoff for everyone, but I expect ML upscalers, spatial and temporal alike, to get even better. Personally, I'd probably have a really hard time with a blind test like that. Of course, by native I don't mean a pixelated frame without any form of TAA.
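For reference, here is a minimal sketch of what those presets mean in pixels at a 4K output, using the commonly published per-axis DLSS scale factors (assumption: DLSS 4 keeps the same preset ratios as earlier versions, and individual games can override them):

```python
# Internal render resolutions for DLSS presets at a 4K (3840x2160) output.
# The scale factors below are the widely published per-axis ratios; treat
# them as assumptions, since presets can be tweaked per title.
DLSS_SCALE = {
    "DLAA": 1.0,               # anti-aliasing only, no upscaling
    "Quality": 2 / 3,          # ~66.7% per axis
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

for mode, s in DLSS_SCALE.items():
    w, h = round(3840 * s), round(2160 * s)
    print(f"{mode:>17}: {w}x{h} internal")
```

Quality works out to 2560x1440 internal and Performance to 1920x1080, which is the context for the "rendered at 1080p" point below.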
 
To be honest, I hope Nvidia (and AMD, but I don't trust them to be smart enough to do this) radically re-engineers its allocation of die space with the next generation. I would honestly be totally fine if Nvidia stopped adding CUDA cores in the next generation and just devoted 10x more space to RT cores. That's really what's needed. Rasterization has obviously hit its limit. Nvidia's architecture is still fundamentally stuck at Turing: CUDA cores take up 90% of the space, with RT and Tensor cores "bolted on." We need far more RT cores.
The problem is that Blackwell did not provide an RT/PT performance boost like Ampere did over Turing and then Lovelace over Ampere. The IPC is still the same; they just increased the max ray count to 8, but the performance is the same...

I was expecting Blackwell IPC to be at least 40% better in RT/PT (i.e., without adding CUDA cores), but no... I guess we'll have to wait for the RTX 60 series.

It's true, but to be honest I don't consider native 4K a good measure; a game doesn't necessarily have to run at native 4K@60 to be good or even great. I get that it might not be convincing, but I consider native 4K an extreme waste of performance at the moment. DLSS Quality or even DLSS Performance at 4K with DLSS 4 is enough for me if it allows eliminating light leaking and running dynamic GI. Not saying it's a good tradeoff for everyone, but I expect ML upscalers, spatial and temporal alike, to get even better. Personally, I'd probably have a really hard time with a blind test like that. Of course, by native I don't mean a pixelated frame without any form of TAA.

Sure, upscalers can do a great job. DLSS Quality looks pretty neat (especially DLSS 4 Transformer), but 4K DLAA is a lot sharper/cleaner!
The problem is that Nvidia (and even AMD) recommend at least 60 fps as a base framerate to use Frame Generation, and you can only get that with DLSS Performance, but then it's rendered at 1080p and doesn't look that great anymore, at least on my 4K QD-OLED monitor. And MFG seems to need about 90-100 fps to work effectively, as Hardware Unboxed mentioned in their DLSS 4 review.
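To put a number on that recommendation, here is a rough sketch (my own simplification, not Nvidia's model) of why the base framerate matters: frame generation multiplies displayed frames, but responsiveness still tracks the rendered base rate.

```python
# Displayed fps vs. time between real (rendered) frames for FG and MFG.
# Simplified model: ignores the interpolation/queueing overhead that
# frame generation itself adds.
def fg_numbers(base_fps: float, multiplier: int) -> tuple[float, float]:
    displayed_fps = base_fps * multiplier
    real_frame_time_ms = 1000.0 / base_fps
    return displayed_fps, real_frame_time_ms

for base in (30, 60, 90):
    for mult in (2, 4):  # FG 2x, MFG 4x
        shown, ft = fg_numbers(base, mult)
        print(f"base {base:>2} fps, {mult}x -> {shown:3.0f} fps shown, "
              f"{ft:4.1f} ms between real frames")
```

A 30 fps base still means ~33 ms between real frames no matter how many generated ones are shown, which is in line with the 60 fps (FG) and 90-100 fps (MFG) guidance above.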
 
Looks like Tensor cores are slowly taking up more die space in each RTX generation.

Are Tensor cores also used to render raster and ray-traced graphics along with the conventional shading cores?
 
To be honest, I hope Nvidia (and AMD, but I don't trust them to be smart enough to do this) radically re-engineers its allocation of die space with the next generation. I would honestly be totally fine if Nvidia stopped adding CUDA cores in the next generation and just devoted 10x more space to RT cores. That's really what's needed. Rasterization has obviously hit its limit. Nvidia's architecture is still fundamentally stuck at Turing: CUDA cores take up 90% of the space, with RT and Tensor cores "bolted on." We need far more RT cores.
AMD fans would lose their minds over any kind of decrease in rasterization performance.
 
We can all agree that if The Last of Us had ray tracing, the minimum bar for max graphics would be very high and the improvements hard to see.
Do we all agree? Personally I think some optional settings could have gone a long way; that way, people who want it can have it, and those who don't can just leave it disabled.
 
They definitely are taking more and more space...

For rasterization the Tensor cores should be idle, unless they're used for other things like denoising.
 

It's funny, because this game is so CPU intensive that it makes high-end GPUs under-utilized; that leaves a lot of performance on tap for better-looking effects, but this is just a cash grab, so Sony won't put that much effort into it.

Sony ports are pretty much in the dumpster right after release too; they could have implemented PT/RT to make them a little more future-proof.
 
So, are they only for AI data centers, useless for gaming graphics, and a waste of die space?
 
50 series laying the smackdown, I almost said ATi... AMD coming in a tight second, pretty good for a midrange part.

My 4070Ti getting reamed as usual :D
 
Sorry dude.
It's all good man, just a cut-down 4080 lol.

I got a good deal at the time; I really can't complain. I paid nowhere near what they were retailing for towards the end.
 
So, are they only for AI data centers, useless for gaming graphics, and a waste of die space?
Tensor Cores in games are used for DLSS, Frame Generation/Multi Frame Generation, denoising, Ray Reconstruction, etc. But they're used for a lot more than this in the professional world.
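For the curious, "denoising" here means filtering the noisy low-sample-count ray-traced signal, largely by reusing data across frames. Below is a toy sketch of the temporal accumulation idea only; it is not Nvidia's actual Ray Reconstruction, which replaces hand-tuned filters with a trained network:

```python
import numpy as np

# Toy temporal accumulation: blend each new noisy RT frame into a running
# history buffer. Real denoisers add motion reprojection and variance
# clamping on top of this, but frame-to-frame reuse is the core idea.
def accumulate(history: np.ndarray, noisy: np.ndarray,
               alpha: float = 0.1) -> np.ndarray:
    """alpha weights the new frame; lower = smoother but more ghosting."""
    return (1.0 - alpha) * history + alpha * noisy

rng = np.random.default_rng(0)
truth = np.full((4, 4), 0.5)        # the "converged" image we want
history = rng.random((4, 4))        # start from pure noise
for _ in range(60):                 # one second's worth of 60 fps frames
    noisy = truth + rng.normal(0.0, 0.2, truth.shape)  # 1-spp-style noise
    history = accumulate(history, noisy)
print(f"mean abs error after 60 frames: {np.abs(history - truth).mean():.3f}")
```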
 
New patch 1.1
 
The game was already running super smooth. How's v1.1?
4K capture is better, less of a hit at the lows, the UI is better, and BT (Bluetooth) as well. Haven't had a chance to play much, so stay tuned.

In the Theatre district, one thing I notice is the CPU gets a good workout. In other parts CPU power is around 115 W; in the Theatre district it's up to 156 W or more. It loves threads. 5950X plus 4266 RAM at CL18-20-20-20, 4x 16 GB; CPU temps up to 60°C easily, GPU at 58°C and 99% usage, 400 W at the highs or a bit less, around 385 W.
 
I notice cutscenes and the UI (looking at the map or upgrades), the movie theater, playing the instrument, and the dream scene all show fps drops; it still needs work. In one upload, at the end where she has the dream, fps dips into the 80s; in the next upload the dream is the opening scene and runs at high 90s to 112 fps, and this is with the new patch 1.1.
 
2 hours into the game: amazing story, beautiful graphics, done smartly so as not to need $1000 GPUs.
The production quality is on another level for this game. I googled who made it and found that the game's "70-month development peaked at 200 full-time employees and cost around US$220 million".
Why do people still ask for ray tracing for a perfect game that runs on a potato GPU and looks incredible? Are they Nvidia employees, or sponsored publications like Digital Foundry?
Strange!
 
Wait till the theater, the story takes off!! My 4090 at 4K native max; some spots love the CPU, and my 5950X with 4266 RAM, 4x 16 GB, works really well. FG will help with a 60 fps base.
 
Patch 3.1: nice boost for my 4090.
 
More performance again? v1.0 was already running great, so with 1.1, 1.2 and 1.3 having performance upgrades too, it must be perfect now lol. I need to test it again, but it was already super smooth on release day!
 
FG is where you notice it most. Steam Input seems to be fixed, whether wired or Bluetooth.
 
What did they fix with FG? Did they update the game to use DLSS 4 natively too?
 
Patch release notes are on Steam; you might want to read them if you have more questions.
 