
GPU Test System Update for 2025

I think what's missing is Need For Speed Unbound, DiRT Rally 2.0 and EA Sports WRC.
 
That's really not the case lol. These were all reference cards, for sure; AMD AIB cards are running faster than this. The average FPS of the 7900 XTX reference card is 142.9 and the 4080 Super is at 145.8, a whole 3 FPS at 1440p, so calling that slippage is a stretch. Especially when you look at their OC capabilities, the 4080 Super trails behind the AIB cards: ASUS GeForce RTX 4080 Super STRIX OC Review - Overclocking & Power Limits | TechPowerUp; ASRock Radeon RX 7900 XTX Taichi White Review - o_O Sexy - Overclocking | TechPowerUp
There is indeed slippage. Look at cards like the 6600 XT and 6700 XT vs. their NVIDIA contemporaries: a near 10% change. Where the Super lineup was generally at the same speed or slower for Ada, it is now faster than AMD. The RTX 4070 Super is now 15% faster than a 7800 XT. Hardware Unboxed is showing a similar drop in their latest benchmarks too.
 
This is because AMD has abandoned software support - now they release quarterly driver updates, which is obviously not enough.
The first sign that AMD is about to exit the business altogether.
 
Great line-up and game choices.
I'd think about two things:
1) Swap the RTX 3090 for the RTX 3080 Ti (to get a high-end Ampere card with 12 GB, and see when it runs out of VRAM vs. the RTX 3090 Ti :D).
2) Swap out the RX 6800 XT, or add a new entry, for the RX 6950 XT (to get the top RDNA2 card on the charts; the RX 6800 XT will always be ~10% slower).
 
The one thing I would add to future GPU reviews at this point is 1% lows. If you use a "stacked" bar chart, you wouldn't need to increase the number of bars on each chart; there'd just be a marker / different colour on each "average FPS" bar indicating where the 1% low is. Also, it's actually a good thing to include 1-2 older games for the benefit of those who are upgrading their GPU for higher frame rates, rather than filling the suite up with UE5 stutter-fests. Why? Obligatory reality check for what PC gamers are actually playing in the real world along with the related Best of 2024 page...
 
AMD is definitely the platform of choice right now. Good selection of components.

Really looking forward to new cards coming out. The line-up should be hilarious. The 4090 is already over three times faster than the 4060, and looking at the leaks, the 5090 will probably be over four times faster than the 5060. Ray tracing will never become the standard with such gigantic differences between low-end and top-end.

MachineGames made the right choice in forcing hardware RT in Indiana Jones, because that engine runs like a dream. But this will not work with other games, especially those on Unreal Engine. And it's a shame, because when the devs only have to focus on creating one lighting system, the results are great.
 
In my opinion, at the moment there is no replacement for Doom Eternal as a test of the brute force of a GPU.
 
There is indeed slippage. Look at cards like the 6600 XT and 6700 XT vs. their NVIDIA contemporaries: a near 10% change. Where the Super lineup was generally at the same speed or slower for Ada, it is now faster than AMD. The RTX 4070 Super is now 15% faster than a 7800 XT. Hardware Unboxed is showing a similar drop in their latest benchmarks too.
Again, these are more than likely reference cards. The majority of the games in this testing are RTX titles as well, so I'm not surprised the bench is a bit skewed, but this is margin of error. HW Unboxed showed no such thing in their recent video lol. There's no slippage of any kind; like I said, TechPowerUp likes to test the reference cards, and there's very little benefit with NVIDIA AIB cards compared to AMD.
 
Again, these are more than likely reference cards. The majority of the games in this testing are RTX titles as well, so I'm not surprised the bench is a bit skewed, but this is margin of error. HW Unboxed showed no such thing in their recent video lol. There's no slippage of any kind; like I said, TechPowerUp likes to test the reference cards, and there's very little benefit with NVIDIA AIB cards compared to AMD.
HWU is often an outlier, though. They're the only place where a 5700 XT ever matched a 2070 Super. I'm really curious which games they're testing here, because if I look at the most recent 5 games they tested, the 4080 wins 4 out of 5 by up to 10%. Space Marine is the only one where the 7900 XTX wins, so if you actually take the most recent games from their channel, the results are in line with TPU's. Maybe they're still using older games for their test suite?
 
Again, these are more than likely reference cards. The majority of the games in this testing are RTX titles as well, so I'm not surprised the bench is a bit skewed, but this is margin of error. HW Unboxed showed no such thing in their recent video lol. There's no slippage of any kind; like I said, TechPowerUp likes to test the reference cards, and there's very little benefit with NVIDIA AIB cards compared to AMD.

These are their December results from the B580 review. Look at the 6700 XT vs. the RTX 3070: in November's results there was an 8% difference in performance at 1440p; in the December results, there is a 20% difference. Look at the RTX 3060 results as well: at 1440p the RTX 3060 is 30% faster than an RX 6600 and is now faster than an RX 6650 XT, where it was previously losing. The RTX 4060 gains notable performance vs. the RX 7600 and RX 7600 XT as well.
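For anyone checking the math, those percentages are just relative average-FPS deltas. A minimal Python sketch - the FPS values below are made-up placeholders, not Hardware Unboxed's or TPU's actual results:

```python
def delta_pct(card_a_fps: float, card_b_fps: float) -> float:
    """How much faster card A is than card B, in percent."""
    return (card_a_fps / card_b_fps - 1.0) * 100.0

# November-style gap: RTX 3070 roughly 8% ahead of the RX 6700 XT at 1440p
print(f"{delta_pct(97.2, 90.0):.1f}%")   # 8.0%
# December-style gap: the same pair, now about 20% apart
print(f"{delta_pct(108.0, 90.0):.1f}%")  # 20.0%
```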
 

These are their December results from the B580 review. Look at the 6700 XT vs. the RTX 3070: in November's results there was an 8% difference in performance at 1440p; in the December results, there is a 20% difference. Look at the RTX 3060 results as well: at 1440p the RTX 3060 is 30% faster than an RX 6600 and is now faster than an RX 6650 XT, where it was previously losing. The RTX 4060 gains notable performance vs. the RX 7600 and RX 7600 XT as well.
It's because the newer AMD drivers (from the beginning of 2024) are noticeably downgrading the performance of RDNA2 cards, as well as their stability - probably by being optimized exclusively for RDNA3. When there's cost cutting, the optimization of a given software suite is always the first victim. It's painfully obvious even when comparing the same games in older benchmarks vs. the new ones. If in doubt, RDNA2 owners, try 23.12.1 after DDU-ing your new Adrenalin, and see for yourself. Bye.
 
It's because the newer AMD drivers (from the beginning of 2024) are noticeably downgrading the performance of RDNA2 cards, as well as their stability - probably by being optimized exclusively for RDNA3. When there's cost cutting, the optimization of a given software suite is always the first victim. It's painfully obvious even when comparing the same games in older benchmarks vs. the new ones. If in doubt, RDNA2 owners, try 23.12.1 after DDU-ing your new Adrenalin, and see for yourself. Bye.
It will help AMD move RDNA2 owners over to RDNA4 next year, if that is the strategy, when reviews show a 10-15% greater difference than there would normally be. A cunning strategy.
 
It will help AMD move RDNA2 owners over to RDNA4 next year, if that is the strategy, when reviews show a 10-15% greater difference than there would normally be. A cunning strategy.
I wouldn't say it's explicitly on purpose. It might be, but I believe it's mostly due to the lack of funds to optimize for two completely different architectures at the same time.
 
It's because the newer AMD drivers (from the beginning of 2024) are noticeably downgrading the performance of RDNA2 cards, as well as their stability - probably by being optimized exclusively for RDNA3. When there's cost cutting, the optimization of a given software suite is always the first victim. It's painfully obvious even when comparing the same games in older benchmarks vs. the new ones. If in doubt, RDNA2 owners, try 23.12.1 after DDU-ing your new Adrenalin, and see for yourself. Bye.
If nvidia did this, there'd be mobs with pitchforks burning all the leather jackets in the land....

The one thing I would add to future GPU reviews at this point is 1% lows. If you use a "stacked" bar chart, you wouldn't need to increase the number of bars on each chart; there'd just be a marker / different colour on each "average FPS" bar indicating where the 1% low is. Also, it's actually a good thing to include 1-2 older games for the benefit of those who are upgrading their GPU for higher frame rates, rather than filling the suite up with UE5 stutter-fests. Why? Obligatory reality check for what PC gamers are actually playing in the real world along with the related Best of 2024 page...
I'll add: I'd love to see frame times. Just because two GPUs both do 60 FPS doesn't mean they perform the same.
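As a rough illustration (a minimal Python sketch, not TPU's actual tooling): a "1% low" is commonly derived from the worst frame times, which is why two runs with similar averages can feel completely different:

```python
def summarize(frame_times_ms: list[float]) -> tuple[float, float]:
    """Average FPS plus a '1% low' (99th-percentile frame time as FPS).
    Note: tools differ in how exactly they define the 1% low."""
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)    # average FPS over the run
    by_time = sorted(frame_times_ms)              # ascending frame times
    p99 = by_time[min(n - 1, int(0.99 * n))]      # 99th-percentile frame time
    return avg_fps, 1000.0 / p99                  # (average FPS, 1% low FPS)

smooth  = [16.7] * 1000                           # steady ~60 FPS
stutter = [15.0] * 990 + [100.0] * 10             # 1% of frames hitch badly
print(summarize(smooth))   # ~ (59.9, 59.9)
print(summarize(stutter))  # ~ (63.1, 10.0): higher average, far worse 1% low
```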
 
Ray tracing, that very important feature with such a huge impact after 6 years, and yet all that changed in the TPU suite is that F1 23 was replaced with F1 24 and SH was added.
Congratulations to Nvidia for manipulating the market with this useless feature.
LOOOL
 
1. I hope you will link or post the settings used in games, so we can replicate them. 2. RTX 5000 is most likely PCIe 5.0; from past reviews, PCIe versions won't make a difference. I hope users won't require 5.0 next year, so you won't need to do a PCIe scaling test.
 
Why? Obligatory reality check for what PC gamers are actually playing in the real world along with the related Best of 2024 page...

Concurrent player counts checked at any arbitrary time are almost always going to be completely filled with major esports/online titles with huge long-term player bases. Most of these titles are not graphically intensive and are easy to run on even lower-end hardware. Counter-Strike 2 is in there as a pretty good modern representative of that class of game. Most of those major multiplayer experiences are extremely difficult to design and run a reproducible test for anyway.

Major AAA releases that achieve huge player counts only tend to do so for a short period after launch as everybody jumps in to play - take Elden Ring, Wukong, STALKER 2, Cyberpunk 2077, etc. from the Best of 2024 charts, for example. Those are all still modern, popular single-player titles that plenty of people are purchasing and playing, but by their nature they aren't building up a continually returning player base over many years, enough to regularly stay in the concurrent-player charts. If you're buying anything above a 60-class GPU, are you not doing so primarily to enjoy better performance in modern eye-candy-heavy titles?

TechPowerUp likes to test the reference cards, and there's very little benefit with NVIDIA AIB cards compared to AMD.
Using the reference designs provides the most accurate and reproducible representation of the products, and in particular most accurately reflects what most buyers are actually getting. People like us having a discussion in the comment thread of a post about a GPU test setup update on an enthusiast tech website are in the extreme minority. Most people are going to figure out which tier of card they want to buy and are going to pick the cheapest option. They're buying the cheaper MSI Ventus over an Asus TUF or whatever else because they don't have the knowledge or experience to know what the difference is or why they'd want to pay more for the more premium card.

It's easy to take that consistent reference baseline and get a good general idea of how much difference a given AIB card might make, by consulting a dedicated review of that AIB card and adding on the performance delta.
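A hypothetical sketch of that estimation in Python - the +3% delta and the FPS figure are just illustrative placeholders, not measured values:

```python
def estimate_aib_fps(reference_fps: float, aib_delta_pct: float) -> float:
    """Scale a reference-card chart result by the out-of-the-box
    performance delta measured in that AIB card's dedicated review."""
    return reference_fps * (1.0 + aib_delta_pct / 100.0)

# e.g. a reference 7900 XTX averaging 142.9 FPS, with an AIB review
# showing the card +3% out of the box:
print(f"{estimate_aib_fps(142.9, 3.0):.1f} FPS")   # 147.2 FPS
```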
 
Wow, fantastic build for GPU testing. What caught my eye, besides the beast 9800X3D, is that DDR5-6200 CL28 RAM; very fast and low latency with 1:1 is sweet. 24H2 with VBS enabled is an odd choice, though. I did a clean Win11 24H2 install and VBS wasn't enabled, and I didn't change anything. MS even recommends having it off for gaming in one of their posts.
I was under the assumption that VBS has been the default for all new installations for over a year. MS recommended that at some point way in the past; even the most stubborn CPU vendors are switching to VBS on as their default in their benchmark guidance.

I fresh installed 24H2 and never looked at VBS, so it's at default whatever that value might be :)

I suggest another page: Path Tracing Performance.
I wanted to add Wukong path tracing, but failed at the configuration, and about half my results are broken. Will probably retest soon and add it, because it is important for GeForce 50 and the market it addresses.

Seeing as most / all games these days allow some form of upscaling and most gamers use some form of upscaling, I don’t think it unreasonable to include some upscaling benchmark numbers (and would love to see them myself), but realize it’d be quite a bit more work.
This will be included for GeForce 50

Excuse me if I missed something, but where are the games based on the Unity engine?

May I suggest a poll with, let's say 5-10 Unity games to choose at least 2-3 from, to be included in a test suite? Or at least one? Any suggestions are welcome, and again criticism if I missed something. Cheers.
No Rest for the Wicked. What other Unity games are you aware of that aren't completely CPU limited on a midrange GPU?

"I picked the RTX 4070 Ti as the 100% baseline for no specific reason."

Well, if you need a reason, you could say it's the ~$1000 baseline.
I don't need a reason, but I felt it was important to explain that it isn't some sort of standard, a must-buy, or what I expect people to have, etc.

and @cerulliber still thought it was special

Will you do BIOS updates to the rig? For AMD they often change stuff.

Not until I do a full retest of every single card, otherwise that would mess up the scores. Imagine testing AMD Navi 48 with BIOS version 1, and then upgrading to BIOS version 2, which boosts performance, but only for NVIDIA Blackwell.

Do you consider Linux benchmarks? At least for one kind of instance?
No plans for any Linux testing; other publications do a great job and I'd rather focus on what's relevant to my readership.

That case is fine, but what happens when a GPU doesn't fit? :D
The case was selected to be long enough; otherwise, I'll just cut it up.

The one thing I would add to future GPU reviews at this point is 1% lows.
Uh, 1% lows have been included for years, on a separate page. Charts get too busy otherwise for a significant part of the audience; for you experts it's easy to look them up on the other page (at least IMO).
 
No Rest for the Wicked.
My bad, missed that one. Looks like a nice game, not sure how it performs though, as I haven't tried it yet.
What other Unity games are you aware of that aren't completely CPU limited on a midrange GPU?
That's a tough question. Maybe someone else may give an answer to it? :p
 
It is always funny to see that GPU prices are completely different in my country.
Some are more, some are less, some aren't even available anymore.
(Except for some leftover stock with prices higher than MSRP - for example, 4x EVGA GeForce RTX 3090 Ti FTW3 sold by Amazon for 2773€ each.)

And please don't stone me, but I really want to see how much the 5090 dwarfs the other bars in the chart, and whether it can beat the 4090 in the PPD chart.
There are people saying ~+17.5% at worst and ~+39% at best. My guess is more like ~+50-70%.
 
@Wizzard Can you elaborate on how you conduct the multi-monitor power consumption test? I have had multiple GPUs over the years, and my idle GPU power consumption is always about double your test results for the same GPU. I suspect it has to do with monitor resolution - I'm using three 4K monitors and your test bench probably has lower-resolution displays - but since you never explained your setup, I have never been able to find out whether it is my display resolution or something else that is at fault for the higher idle power.
 
• Multi-monitor: Two monitors are connected to the tested card, and both use different display timings. One monitor runs 2560x1440 over DisplayPort, and the other runs 3840x2160 over HDMI. The refresh rate is set to 60 Hz for both screens. Windows 11 is sitting at the desktop with all windows closed and drivers installed. The card is left to warm up in idle mode until power draw is stable. When using two identical monitors with the same timings and resolution, power consumption can be lower; we intentionally test mismatched monitors. When using high-refresh-rate monitors, power consumption can be higher than in this test.
You can find a description of the different scenarios in every GPU review.
 
I'm using 3 4K monitors
See @rainzor's reply. You are at the extreme end of the range; I tried to pick something more common, and intentionally mismatched the settings. With three identical monitors running the same timings you should see some power improvement. CRU might help.

Why is the 3080 Ti missing from these tests?
Very similar to the 3080, and very little interest generally. After GeForce 50 releases I'll drop most GeForce 30 cards anyway. Current gen + last gen + the fastest from two gens ago is what I usually include.
 