
Official AMD Radeon RX 7800 XT & RX 7700 XT Performance Figures Leaked

T0@st

News Editor
Argentina's HD Tecnología site has obtained and published AMD's official data outlining the performance prowess of the soon-to-be-released Radeon RX 7800 XT & RX 7700 XT GPUs when stacked up against their closest rivals, the NVIDIA GeForce RTX 4060 Ti 16 GB and RTX 4070 12 GB. Team Red could have "cherry-picked" some of this information and presented the resultant performance charts during the grand unveiling of its mid-range RDNA 3 cards at last month's Gamescom press event. HD Tecnología claims that the fuzzy batch of screengrabs was obtained from an official review guide; the site chose not to share pages containing precise details of system specifications. An embargo imposed on media outlets is set to be lifted tomorrow, coinciding with the launch of AMD's Navi 32-based contenders.

The test system was running games in a DirectX 12 environment, possibly at maximum settings. General hardware specs included an unspecified AMD Ryzen 7000-series CPU coupled with DDR5 memory on an unidentified AM5 motherboard. VideoCardz's abbreviated analysis of the numbers stated: "In summary, without ray tracing, the Radeon RX 7800 XT outperforms the GeForce RTX 4070 by almost 7% on average, while with ray tracing enabled, it maintains a slight 0.5% lead. Conversely, the RX 7700 XT exhibits 16% higher performance over the RTX 4060 Ti 16 GB. However, the presence of ray tracing can tip the scales slightly in NVIDIA's favor, resulting in an 8.5% lead over the AMD GPU."




Here is a VideoCardz-produced summation of these figures:

Radeon RX 7800 XT 16 GB versus RTX 4070 12 GB
  • Raster: +6.9%
  • Ray tracing: -11.6%
  • Average: +0.5%

Radeon RX 7700 XT 12 GB versus RTX 4060 Ti 16 GB
  • Raster: +15.9%
  • Ray tracing: -5.4%
  • Average: +8.5%
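For reference, averages like the ones above are typically derived from per-game FPS ratios, often as a geometric mean; the review guide doesn't say which method AMD used, so this is just a minimal sketch of that approach with hypothetical FPS numbers (not AMD's actual per-game data):

```python
from math import prod

def relative_perf(fps_a, fps_b):
    """Geometric mean of per-game FPS ratios (card A vs. card B),
    expressed as a percentage lead or deficit for card A."""
    ratios = [a / b for a, b in zip(fps_a, fps_b)]
    geo_mean = prod(ratios) ** (1 / len(ratios))
    return (geo_mean - 1) * 100

# Hypothetical per-game FPS figures, for illustration only
card_a = [94, 120, 88, 143]
card_b = [88, 112, 82, 134]
print(f"{relative_perf(card_a, card_b):+.1f}%")  # a small single-digit lead
```

A simple arithmetic mean of the ratios would give slightly different numbers, which is one reason outlet-computed averages rarely match vendor slides exactly.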

They also received an additional slide (as supplied by HD Tecnología); it shows: "3DMark performance data for the RX 7800 XT and RX 7700 XT. Notably, this list excludes the Speed Way benchmark, which is UL's latest high-end system test. In terms of overall performance, the RX 7800 XT and RX 7700 XT appear to outperform their NVIDIA counterparts in most benchmarks, with one noteworthy exception. In the Port Royal benchmark, which incorporates ray tracing capabilities, the RTX 4070 appears to achieve higher scores when compared to the RX 7800 XT. However, it's worth highlighting that even in this scenario, the RX 7700 XT emerges as a clear winner when pitted against the RTX 4060 Ti."



View at TechPowerUp Main Site | Source
 
Boundary? What is that? lol

Interesting to see how performance degrades as RT is used more intensively.
 
Looks solid enough performance-wise; as per usual, pricing will decide if it's a failure or not. The pricing will be "okay", not great though.
 
But a 4070 is 860 Canadian,
and an RX 7800 XT ($500 US) should be around 700 Canadian…

so if you need artificial intelligence, you pay 160 extra… lol
 
The price tag is attractive, to say the least, but unless the benchmarks reflect what actual performance turns out to be, I doubt that many are buying it. Hope not.
 
Interesting to see how performance degrades as RT is used more intensively.
Imagine your average RDNA2 GPU.

Voilà! You know how RT affects RDNA3 performance with at least 99% accuracy.
 
Interesting results. Even in ray tracing, if you disregard the (designed for Nvidia) Cyberpunk numbers, then you get competitive results.

These cards should sell well. I just wonder why AMD is launching them so late.
 
Wrong, RT perf with RDNA 3 is way better.
Double wrong. Both generations lose the same ~54% in Control, ~67% in Cyberpunk, ~31% in Deathloop, ~70% in Formula 1, yet it's worse for RDNA3 in Far Cry 6 and it's worse for RDNA2 in Doom: Eternal.
[Attached: TPU relative performance charts for Control, Cyberpunk 2077, Deathloop, Formula 1, Far Cry 6 and Doom Eternal]


RDNA3 offers no RT uplift. The only reason these cards perform better at RT is that they have more horsepower in general.
 
These cards should sell well. I just wonder why AMD is launching them so late.
RDNA 3 allegedly had some power issues ("bugs"); I guess they needed time to optimize it, since this chip is way smaller than Navi 31, so that the performance is good enough.
RDNA3 offers no RT uplift. The only reason they perform better at RT is they have more horsepowers in general.
Absolutely wrong; as W1zzard notes himself, RT performance is improved. It loses the same % as before, but raw RT performance is 50-100% higher compared to RDNA 2. This is a well-known fact, go and read more reviews (or try to understand every part of them).
 
This is well-known bullshit.

It's like saying "RTX 3090 has improved RT performance compared to RTX 3070."

Superficially, you're correct because the RTX 3090 doesn't fall as short.
Actually, you're incorrect because the reason the RTX 3090 has more RT proficiency is more stuff under the bonnet, not higher quality of said stuff.

RDNA3 has more compute units, faster VRAM, more VRAM bandwidth, higher clocks, higher cache speed etc. compared to RDNA2, yet if RDNA2 and RDNA3 GPUs show identical results without RT, they will also show identical results with RT (or RDNA3 will even fall behind due to the pre-alpha stage of its chiplet architecture, much like 1st-gen Ryzen CPUs, which had disastrous latencies).

It can only be considered a real deal if the FPS loss decreased, which is not the case for the majority of games.

The RX 7800 XT will post results within give or take ten percent of the 6800 XT's in RT, unless Navi 32 has better RT units than the rest of the RDNA 3 lineup.
 
It can only be considered a real deal if the FPS loss decreased, which is not the case for the majority of games.
No, more performance is more performance, and every tech expert/reviewer agrees that RDNA 3 has higher RT performance compared to RDNA 2, and pretty comparable performance to Ampere, which makes it way, way better than what RDNA 2 had.


Just one example for an in-depth read. But you can also read the architecture page in the 7900 XTX review here, for example: 50% more rays in flight, up to 80% with other improvements combined.
 
and every tech expert / reviewer agrees on the fact that RDNA 3 has higher RT performance compared to 2
But RX 7800 XT disagrees.
[Attached: RT benchmark screenshots]


27 FPS on the 7800 XT (unknown testing method) vs. 23.7 FPS on the 6800 XT (TPU's testing method). 23.7 is about 88% of 27.

These are tests without RT:
[Attached: non-RT benchmark screenshots]


94 FPS on RDNA3, tested who knows how, and 76 FPS on RDNA2, tested by W1zzard himself.

94 is about 124% of 76.
27 is about 114% of 23.7.

So we have a GPU with a 24% uplift RT off and only a sorry 14% uplift RT on. This, if you didn't know, is called regression.
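The ratio arithmetic in this post can be sanity-checked in a couple of lines, using the FPS figures quoted from the screenshots above (with the caveat, noted in the post itself, that the two cards were not tested by the same method):

```python
def uplift(new_fps, old_fps):
    """Percentage gain of new_fps over old_fps."""
    return (new_fps / old_fps - 1) * 100

raster_gain = uplift(94, 76)    # 7800 XT vs. 6800 XT, RT off
rt_gain     = uplift(27, 23.7)  # 7800 XT vs. 6800 XT, RT on
print(f"RT off: +{raster_gain:.1f}%, RT on: +{rt_gain:.1f}%")
# RT off: +23.7%, RT on: +13.9%
```

The gap between the two uplifts is the whole point of contention in this thread: whether a smaller gain with RT on counts as a regression or just reflects the heavier workload.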
 
I want to add: your argument generally makes no sense. It would if RDNA 3 = RDNA 2, but that's not the case: IPC is about 17.5% higher and clocks are higher as well, so RT performance obviously needs to keep up, otherwise the % disparity would further increase. You're wrong on all counts.

In your latest post you're of course using the one game that is optimised for Nvidia and runs terribly on RDNA 3 to try and make your moot point. But obviously you're still wrong. Go and look at other games.
 
argument generally makes no sense
Because I failed to explain it simply enough.
IPC is about 17.5% higher
Maybe it is, but latency is also a lot higher, which makes this IPC way less useful.
clocks are higher as well
Why is this relevant?
RT perf obviously need to keep up otherwise % disparity would further increase
This is how you see things. For you, RT performance is a completely different entity. For me, it's part of the whole performance entity. And if your RT performance is 3 times worse than non-RT performance on both generations, I consider it no RT-specific uplift at all. RDNA3 isn't better at RT. It's better at everything, RT included. Do you get it now?
using the one game that is optimised for Nvidia and runs terrible on RDNA 3
Try reading reviews beforehand next time. It is one of a very limited number of games which runs WAY better on RDNA3 compared to RDNA2: the RX 7600, which fails to achieve anything significant in other games versus the RX 6650 XT (give or take 1% behind the 7600 everywhere), draws with the RX 6700 XT in Cyberpunk; higher-tier RDNA3 cards are even further ahead of RDNA2 in CP2077, and yes, EVEN IF WE DOWNCLOCK THEM TO RUN ALL OTHER GAMES 1% WORSE THAN RDNA2.
make your moot point
But obviously you're still wrong.
You're wrong on all accounts.
"The more you tell a lie the more they believe it."
Go and look at other games.
OK. RDNA2 with the same RT speed has the same non-RT speed in 90% of them. What next? The 10% counts and the 90% doesn't, because I'm always wrong?

I'm outta here.
 
IPC is about 17.5% higher

Impossible.
(shaders | game clock | boost clock | memory clock | relative performance)
RX 6650 XT: 2048 shaders | 2410 MHz | 2635 MHz | 2190 MHz | 100%
RX 7600: 2048 shaders | 2250 MHz | 2655 MHz | 2250 MHz | 101%

Game clock looks reduced by 7%, but memory clock is increased by 3%. Overall you get the same performance, within the statistical error.

This is in no way 17.5% higher performance with everything else the same.
 
I'm outta here.
You can go; it doesn't matter to me. I didn't read the post either; it's really not worth my time to endlessly debate facts that all tech reviewers and experts agree on, so I never needed your confirmation. It's obvious you have a bias and an anti-AMD agenda, so it's really a waste of time to try and educate you; no, as we already saw, it is even impossible. So goodbye, and please don't start other arguments with me again. I have better things to do than debate with anti-AMD haters who can't accept simple facts and ignore all websites and data that don't support their baseless and nonsensical claims.
This is in no way 17.5% higher performance with everything else the same.
This has nothing to do with practical real-life performance; try harder at trolling next time.
 
This is in no way 17.5% higher performance with everything else the same.
This extra performance gain from RDNA3 requires specific optimizations in the game's code, so the tendency is for RDNA3 to run the latest games better.
 
By this guy's logic, the RTX 40 generation also does not have better RT performance than the RTX 30 generation, since the percentages of lost performance are almost the same:


Too bad that raw RT performance is still higher and thus his claims still make zero sense.
 
Use the average graphs:

4K, +2.3%:

[Attached: average performance chart]

1440p, +1.8%:

[Attached: average performance chart]

1080p, +3.2%:

[Attached: average performance chart]



This extra performance gain from RDNA3 requires specific optimizations in the game's code, so the tendency is for RDNA3 to run the latest games better.

This is cherry picking.
 
This is cherry picking.
No, his claim is absolutely true. It's typical for new architectures to need optimizations to scale; that doesn't change the fact that the newer architectures have higher capabilities.
 
No, his claim is absolutely true. It's typical for new architectures to need optimizations to scale; that doesn't change the fact that the newer architectures have higher capabilities.

They might, but these new games are 1% of the total base of all games ever launched.
I am not going to play only the newest games because someone from AMD claims that it has specific optimisations in them alone.
It's ridiculous cherry-picking to support your own agenda.
 