
AMD Radeon RX 9070 XT Benchmarked in 3DMark Time Spy Extreme and Speed Way

Although it has only been a few days since the RDNA 4-based GPUs from Team Red hit the scene, it appears that we have already been granted a first look at the 3DMark performance of the highest-end Radeon RX 9070 XT GPU, and to be perfectly honest, the scores largely live up to our expectations, albeit with disappointing ray tracing performance. Unsurprisingly, the thread has since been deleted over at Chiphell, but folks managed to take screenshots in the nick of time.

The specifics reveal that the Radeon RX 9070 XT will arrive with a massive TBP of around 330 watts, as revealed by a FurMark snap, which is substantially higher than previous estimates. With 16 GB of GDDR6 memory, along with base and boost clocks of 2520 and 3060 MHz, the Radeon RX 9070 XT managed to rake in an impressive 14,591 points in Time Spy Extreme, and around 6,345 points in Speed Way. Needless to say, the drivers are likely far from mature, so it is not outlandish to expect a few more points to be squeezed out of the RDNA 4 GPU.




Regarding the scores we currently have, it appears that the Radeon RX 9070 XT fails to match the Radeon RX 7900 XTX in both tests, although it easily exceeds the GeForce RTX 4080 Super in the non-ray-traced Time Spy Extreme test. Considering that it is expected to cost less than half the price of the RTX 4080 Super, this is no small feat. In the ray-traced Speed Way test, however, the RX 9070 XT fails to match the RTX 4080 Super, falling noticeably short. Interestingly, an admin at Chiphell commented that those planning on grabbing an RTX 50 card should wait, further hinting that the GPU world has "completely changed". Considering the lack of context, the interpretation of the statement is debatable, but it does seem RDNA 4 might pack impressive price-to-performance that could give mid-range Blackwell a run for its money.



View at TechPowerUp Main Site | Source
 
I mean, it's looking promising, especially if the pricing I've seen is correct. It will make all the older cards' value shoot way down.
 
I'm very curious to read about arch changes, seeing as how they managed to approach a 7900 XTX with half the shaders and significantly less bandwidth.

PSA - don’t hype yourself up over leaks like this, you’re going to end up disappointed.
 
Maybe a comparison to 7900 XT would be more useful :confused:.
 
I'm very curious to read about arch changes, seeing as how they managed to approach a 7900 XTX with half the shaders and significantly less bandwidth.

PSA - don’t hype yourself up over leaks like this, you’re going to end up disappointed.
I've jokingly hypothesized that the over-promising leaks that seem to precede every single Radeon release are just misinformation from Nvidia fanboys... I don't really believe that, but it is strange that every Radeon release seems to be preceded by "leaks" that always over-promise on performance, and I can't help but feel that takes a heavy toll on how these releases are received by consumers (at least the ones who have seen the leaks).

If it happened a few times sporadically, I would say it's a coincidence and think nothing of it, but it is strange how it happens every single time...

Obviously, AMD wouldn't put false, over-promising leaks out, and AMD employees, if they are leaking information, would presumably be leaking correct information (which wouldn't over-promise) or purposely false information. However, if AMD were purposely feeding false information to employees (in a sting operation, for example), I would have to assume those "leaks" would under-promise. So the question remains: why do leaks always over-promise with Radeon, and who's behind it?
 
A quick summary of leaks/rumors for the 9070XT:

64 CUs (4096 SPs)
330W
3.0-3.1 GHz boost clock
16 GB 20000 MT/s GDDR6
256-bit memory bus
Raster performance +/- 5% of a 4080/4080S
Ray tracing performance +/- 5% of a 4070 Ti/4070 Ti Super
$479
Release Jan 22
4 nm
PCIe 5.0 x16

If any of this is close to true, it would be prudent to wait as long as you can to see where everything lands. I would say by the end of February we would know the performance of the 5070/5070Ti/5080/5090 and the 9070/9070XT.
 
I've jokingly hypothesized that the over-promising leaks that seem to precede every single Radeon release are just misinformation from Nvidia fanboys... I don't really believe that, but it is strange that every Radeon release seems to be preceded by "leaks" that always over-promise on performance, and I can't help but feel that takes a heavy toll on how these releases are received by consumers (at least the ones who have seen the leaks).

If it happened a few times sporadically, I would say it's a coincidence, but it is strange how it happens every single time...

Always possible, and I’ve thought about this too.

Considering it's still ~8% slower in raster and matches RT performance in 3DMark synthetics compared to a 7900 XTX, I don't think that's over-promising, if true this time around.

It will live or die based on its price and performance position against the 5070/5070 Ti.
 
RT performance is a joke, but so is the 5090's 50-something FPS at 4K RT Ultra in Alan Wake 2, as shown in NVIDIA's presentation.

It's still a marketing gimmick, with negligible visual improvements and a huge hit to performance.
 
I'm very curious to read about arch changes, seeing as how they managed to approach a 7900 XTX with half the shaders and significantly less bandwidth.

PSA - don’t hype yourself up over leaks like this, you’re going to end up disappointed.
+1 to this; the raw hardware specs of the card don't suggest it should be able to hit the performance levels AMD is claiming.
 
Well well! Call me slightly surprised. Does AMD have an actual ace up its sleeve?
 
If the performance is good and if the price of $479 is also real (of course this will be about 550-600 euros in EU) I am considering it. I don't need it, I definitely DON'T need it, but I am considering it.
 
99 FPS at native 4K with no upscaling in Black Ops 6, vs. 149 for a 4080 Super with upscaling
 

[Attachment: IMG_6604.jpeg]
RT performance is a joke, but so is the 5090's 50-something FPS at 4K RT Ultra in Alan Wake 2, as shown in NVIDIA's presentation.

It's still a marketing gimmick, with negligible visual improvements and a huge hit to performance.
+1. IMO it's way worse than conventional lighting in appearance. Most implementations are grainy, with a lot of blooming for no reason; reminds me of early attempts to render water by stippling out pixels.
 
+1 to this; the raw hardware specs of the card don't suggest it should be able to hit the performance levels AMD is claiming.

I can't remember where I saw it, but the Infinity Cache ratio might be going up with Navi 4x. Between that, architectural improvements, and better-working dual-issue shaders, it might be possible.
 
I can't remember where I saw it, but the Infinity Cache ratio might be going up with Navi 4x. Between that, architectural improvements, and better-working dual-issue shaders, it might be possible.
SRAM is hard to shrink, and they are only going from 5 nm to 4 nm over the 7000 series.

If Infinity Cache were made 4 times larger, the chip would have to become a heck of a lot larger.
 
I'm very curious to read about arch changes, seeing as how they managed to approach a 7900 XTX with half the shaders and significantly less bandwidth.

PSA - don’t hype yourself up over leaks like this, you’re going to end up disappointed.
The shaders are only 50% less (4096 vs 6144) but the clock speed is 25-30% higher. There was mention of optimized SPs in the announcement deck so maybe there is a 10-20% increase in IPC.
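Treating performance as roughly shaders × clock × IPC, the post above can be sanity-checked with a quick back-of-envelope calculation. All of the ratios here are rumored or assumed (the IPC uplift in particular is pure speculation), not confirmed specs:

```python
# Back-of-envelope throughput estimate from the rumored figures above.
rx9070xt_shaders = 4096   # rumored 9070 XT shader count
rx7900xtx_shaders = 6144  # known 7900 XTX shader count

clock_ratio = 1.25  # assumed ~25% higher boost clock
ipc_ratio = 1.15    # assumed ~15% IPC uplift (speculative)

relative = (rx9070xt_shaders / rx7900xtx_shaders) * clock_ratio * ipc_ratio
print(f"estimated throughput vs 7900 XTX: {relative:.2f}")  # ~0.96
```

Under those assumptions the card would land within about 5% of the 7900 XTX, which is at least consistent with the leaked scores.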
 
SRAM is hard to shrink, and they are only going from 5 nm to 4 nm over the 7000 series.

If Infinity Cache were made 4 times larger, the chip would have to become a heck of a lot larger.

Not four times larger; it was an abbreviation for Navi 48 or Navi 44. Infinity Cache was going to remain similar to the 7900 XTX but feeding fewer shaders.
 
RT performance is a joke, but so is the 5090's 50-something FPS at 4K RT Ultra in Alan Wake 2, as shown in NVIDIA's presentation.

It's still a marketing gimmick, with negligible visual improvements and a huge hit to performance.

+1. IMO it's way worse than conventional lighting in appearance. Most implementations are grainy, with a lot of blooming for no reason; reminds me of early attempts to render water by stippling out pixels.

Have you checked Cyberpunk, Indiana Jones or Alan Wake II with Full RT? It's a huge tanking in performance, yes, but it's beautiful.
 
Have you checked Cyberpunk, Indiana Jones or Alan Wake II with Full RT? It's a huge tanking in performance, yes, but it's beautiful.
I'm talking about Cyberpunk specifically. I've poured 600+ hours into that game -- the RT is one of the best implementations I've seen, and it still looks like crap (IMO).

[Image: 1736449628397.png]



It's grainier, blurrier:

[Image: 1736449765772.png]


Pick the RT shot -- it's the one on the left.
 
The shaders are only 50% less (4096 vs 6144) but the clock speed is 25-30% higher. There was mention of optimized SPs in the announcement deck so maybe there is a 10-20% increase in IPC.
You can say 33% less or 50% more. 50% less is wrong, simple math.
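For anyone following along, the arithmetic behind that correction is just the asymmetry of percentage change (the base of the comparison differs in each direction):

```python
# Percent-difference check for the shader counts quoted above
fewer, more = 4096, 6144
decrease = (more - fewer) / more * 100   # going from 6144 down to 4096
increase = (more - fewer) / fewer * 100  # going from 4096 up to 6144
print(f"{decrease:.1f}% less, or {increase:.1f}% more")  # 33.3% less, or 50.0% more
```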
 
The shaders are only 50% less (4096 vs 6144) but the clock speed is 25-30% higher. There was mention of optimized SPs in the announcement deck so maybe there is a 10-20% increase in IPC.

I'm mis-remembering the clock speed differences, apparently; AIB 7900 XTX cards averaged around 2800-2900 MHz while reference was around 2600 MHz, so there is a larger frequency boost.
 
Only 64 ROPs for 64 CUs, but with 256 TMUs? Paper-spec wise it would sit between the 7800 XT and the 7900 GRE, but add in architectural (IPC) improvements, higher stock clocks, and slightly faster GDDR6 RAM at around 330 W TGP, and I think it's possible.

Those ROPs are wider now, able to hold a total of 256 TMUs, so 4 TMUs per unit.
 