
AMD Radeon RX 8800 XT Reportedly Features 220 W TDP, RDNA 4 Efficiency

Radeon RX 8000 series GPUs

Can the TPU team review the image quality of videos encoded in AV1 with the new VGAs from Intel, AMD and Nvidia? Please...

Is it possible to encode videos in 2 passes using the GPU? If so, which app does it?
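(Not an authoritative answer, just a sketch: NVENC exposes a built-in two-pass rate-control mode that ffmpeg surfaces as the -multipass option, which is the closest GPU equivalent of a classic two-pass encode. Everything below is an assumption for illustration: it presumes an AV1-capable Nvidia card, a recent ffmpeg build with av1_nvenc, and placeholder file names and bitrate.)

Code:
# Hypothetical sketch: drive ffmpeg's NVENC AV1 encoder from Python with its
# internal two-pass mode. Option support varies by GPU and ffmpeg build.
import subprocess

subprocess.run([
    "ffmpeg", "-y", "-i", "input.mkv",
    "-c:v", "av1_nvenc",          # hardware AV1 encode (e.g. RTX 40 series)
    "-multipass", "fullres",      # NVENC's built-in full-resolution two-pass
    "-b:v", "8M",                 # target bitrate the two passes optimize for
    "-c:a", "copy",
    "output.mkv",
], check=True)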
 
Well, that is your opinion. I enjoyed Crossfire support so much that most of the games I bought at that time supported Crossfire. Multi-GPU is not the same thing as Crossfire and has no impact on games. Ashes of the Singularity is the only game I know of that supports multi-GPU natively. The thing with Polaris was that Crossfire was at the driver level, so if the game supported it, it worked, and if not, the other card would basically be turned off.
That 'thing' was bog standard for every Crossfire- and SLI-capable GPU, which meant that most of the time you would clearly notice you were actually running on one card, and when you weren't, it was clearly audible too because of all the extra heat and noise :)

The Nvidia driver even lets me pick SFR or AFR. Not that it matters, though; no game support = you are looking at something arcane that doesn't work or literally renders half the screen.
 
Rumors about AMD GPUs almost never materialize, so this sounds more like a dream card than a real one. Even the rumor about dual 8-pin connectors doesn't fit this vision; it could mean a GPU with a TGP around 300 W instead... better to wait instead of overhyping this GPU like the many previous miracle AMD GPUs that never materialized in the end.
 
Can the TPU team review the image quality of videos encoded in AV1 with the new VGAs from Intel, AMD and Nvidia? Please...

Is it possible to encode videos in 2 passes using the GPU? If so, which app does it?

That, but also standard image quality testing in order to see which brand of cards cheats with the textures and which does not (yeah, I'm looking at you, Nvidia :D)


Rumors about AMD GPUs almost never materialize, so this sounds more like a dream card than a real one. Even the rumor about dual 8-pin connectors doesn't fit this vision; it could mean a GPU with a TGP around 300 W instead... better to wait instead of overhyping this GPU like the many previous miracle AMD GPUs that never materialized in the end.

It is about smart engineering and AI. I know there is no smart engineering at AMD, but maybe this will be the first time they implement it.
It's called undervolting; it is pretty simple and straightforward and can easily be done at the factory. The trade-off: you lose about 2% of performance, but your card's TDP drops from 300 W to a sane 180 W.
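(A back-of-the-envelope illustration of why undervolting cuts power so much: to first order, dynamic power scales with clock times voltage squared. The clocks, voltages and wattages below are made-up example numbers, not RDNA 4 data.)

Code:
# Rough sketch assuming P_dynamic ~ f * V^2; ignores leakage, memory and VRM losses.
def scaled_power(base_w, clock_ratio, voltage_ratio):
    return base_w * clock_ratio * voltage_ratio ** 2

stock_w = 300.0                                  # hypothetical stock board power
print(scaled_power(stock_w, 0.98, 0.85))         # ~2% lower clock, 15% undervolt -> ~212 W
print(scaled_power(stock_w, 0.90, 0.80))         # deeper cuts get closer to 180 W -> ~173 W

By this crude model, a factory 180 W figure would likely need a lower power limit on top of the undervolt, not voltage tuning alone.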
 
Rumors about AMD GPUs almost never materialize, so this sounds more like a dream card than a real one. Even the rumor about dual 8-pin connectors doesn't fit this vision; it could mean a GPU with a TGP around 300 W instead... better to wait instead of overhyping this GPU like the many previous miracle AMD GPUs that never materialized in the end.
Two 8-pin connectors are used on the 7700 XT as well, which isn't even a 250 W card.
 
That 'thing' was bog standard for every Crossfire- and SLI-capable GPU, which meant that most of the time you would clearly notice you were actually running on one card, and when you weren't, it was clearly audible too because of all the extra heat and noise :)

The Nvidia driver even lets me pick SFR or AFR. Not that it matters, though; no game support = you are looking at something arcane that doesn't work or literally renders half the screen.
There were about 4 settings for Crossfire. I used it for the life of Total War from Medieval 2 to Three Kingdoms, when they changed the engine. You could still enable Crossfire in the script, but it did nothing for the engine. At that point I started giving up on multi-GPU and started focusing on other things to put in my other PCIe slots. I guess the heat you are talking about is true if you don't have a case that is up to snuff, but we are talking about two RX 570/580 cards that might have pulled 150 watts each. Plus they were inexpensive and popular.
 
I have a 7600 XT from ASRock. It is not a power-hungry card, but it comes with two 8-pin connectors. I do believe that is going to be the standard for everyone else.
 
I have a 7600 XT from ASRock. It is not a power-hungry card, but it comes with two 8-pin connectors. I do believe that is going to be the standard for everyone else.

The RX 7600 comes with either a single 6-pin or a single 8-pin.
 
That is a design mistake, because you shouldn't need 375 W worth of connectors (2 × 150 W plus 75 W from the PCIe slot) on a 245 W card.
Why is it a mistake? It doesn't make much difference in space utilization compared to an 8+6-pin combination, and it makes keeping track of inventory and assembly way easier for them, not having to keep tabs on two different parts.
 
That is a design mistake, because you shouldn't need 375 W worth of connectors (2 × 150 W plus 75 W from the PCIe slot) on a 245 W card.
Surely, not every AIB miscalculated. It's simple: 228 W would require 153 W from the 8-pin connector and 75 W from the PCIe slot. In practice, many GPUs draw minuscule amounts from the PCIe slot. Given how many people use third-rate PSUs, it's prudent to avoid drawing more than 150 W from the 8-pin connector. Two 8-pin connectors make sense when you look at it from that perspective.
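(For reference, the arithmetic both posts are working from, using the nominal limits of 75 W for the PCIe slot, 150 W per 8-pin and 75 W per 6-pin:)

Code:
# Nominal power-delivery ceilings behind the connector discussion above.
SLOT_W, EIGHT_PIN_W, SIX_PIN_W = 75, 150, 75

def board_power_ceiling(n_8pin=0, n_6pin=0):
    return SLOT_W + n_8pin * EIGHT_PIN_W + n_6pin * SIX_PIN_W

print(board_power_ceiling(n_8pin=1))             # 225 W: a 228 W card already exceeds this
print(board_power_ceiling(n_8pin=1, n_6pin=1))   # 300 W with an 8+6-pin combination
print(board_power_ceiling(n_8pin=2))             # 375 W: the "over-provisioned" dual 8-pin case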
 
Why is it a mistake? It doesn't make much difference in space utilization compared to an 8+6-pin combination, and it makes keeping track of inventory and assembly way easier for them, not having to keep tabs on two different parts.

It is a tough PSU requirement. Not all PSUs have two of those PCIe power connectors, which are ugly, space-consuming, and can't be hidden inside the case.
 
@3valatzy
Pretty much every decent-ish PSU will come with at least two 6+2 pin PCI-e cables. Not really an issue.
I mean, if you want easy, simple, one solution to use on any card, well, 12V-2x6 is there to solve that, but I thought nobody liked it because it burns down your dog and kicks your house or something. Even though the revised connector is totally fine.
 
Given the rumoured specifications, 4080 performance is very unlikely. Going by the numbers in the latest GPU review, the 4080 is 42% faster than the 7800 XT at 1440p and 49% faster at 4K. That is too great a gap to be overcome by a 6.7% increase in Compute Units.
I even have doubts whether it can reach 4070 Ti Super/7900 XT-level raster, because if it only has 64 CUs (TPU's placeholder page even says 56) it will be difficult to close the ~30% gap to an 84-CU card (the difference between the 7800 XT and the 7900 XT).
RT is harder to pin down, as here AMD could reap the low-hanging fruit and massively increase RT performance without increasing the number of RT cores (same number as CUs). Here I can actually believe 4080S-level performance.
If it's 45% faster in RT vs the 7900 XTX, that makes it basically a 4080/4080S. Since the raster is also similar, I'm calling it: $799 MSRP.
Raster is not similar. Raster is ~4070 Ti Super level, though the reported specs don't support that.
I agree. This is how the hype train starts rolling, and then the inevitable derailment leads to bashing of the actual product, even if it's undeserved. The Compute Unit count and rumoured clock speeds suggest performance in the ballpark of the 7900 XT, not the 4080, and certainly not the 7900 XTX, which is 20% faster than the 7900 XT at 4K.
Glad someone gets it. Already I see people starting to make unrealistic claims. Let's temper our expectations.
AMD hasn't won against Nvidia in over 15 years. The only other small "win" they had was with the R9 290X, and that was very temporary: they were a bit faster than the 780 and Titan, and Nvidia's answer, the 780 Ti, came quickly. I don't count that very temporary win as a W for AMD.
They didn't, and the 290X was temporary?
You need to check your timeline and prices.

Yes, the 290X launched in October 2013, and while Nvidia answered with the 780 Ti shortly after, both it and the already-released Titan were more expensive while not being a whole lot faster. The Titan was only a minuscule 3% faster while costing an obscene (for a gaming card at the time) $999, while the 780 Ti was a more reasonable $699 but still only 4% faster.

The 290X at $549 remained the bang-for-buck choice until Nvidia released the GTX 980 in September 2014, also at $549, which beat the 290X by a more convincing 13%.
It wasn't until the middle of 2015, when Nvidia released the 980 Ti for $649, that the 290X was convincingly beaten, by 28% (and the 390X by 21%), at much lower power consumption. So essentially the 290X had at least 12 months of being the best-value high-end card.
 
I even have doubts whether it can reach 4070 Ti Super/7900 XT-level raster, because if it only has 64 CUs (TPU's placeholder page even says 56) it will be difficult to close the ~30% gap to an 84-CU card (the difference between the 7800 XT and the 7900 XT).
RT is harder to pin down, as here AMD could reap the low-hanging fruit and massively increase RT performance without increasing the number of RT cores (same number as CUs). Here I can actually believe 4080S-level performance.

Raster is not similar. Raster is ~4070 Ti Super level, though the reported specs don't support that.

Glad someone gets it. Already I see people starting to make unrealistic claims. Let's temper our expectations.

They didn't, and the 290X was temporary?
You need to check your timeline and prices.

Yes, the 290X launched in October 2013, and while Nvidia answered with the 780 Ti shortly after, both it and the already-released Titan were more expensive while not being a whole lot faster. The Titan was only a minuscule 3% faster while costing an obscene (for a gaming card at the time) $999, while the 780 Ti was a more reasonable $699 but still only 4% faster.

The 290X at $549 remained the bang-for-buck choice until Nvidia released the GTX 980 in September 2014, also at $549, which beat the 290X by a more convincing 13%.
It wasn't until the middle of 2015, when Nvidia released the 980 Ti for $649, that the 290X was convincingly beaten, by 28% (and the 390X by 21%), at much lower power consumption. So essentially the 290X had at least 12 months of being the best-value high-end card.
Yes, matching the 7900 XT's rasterization performance, in the absence of any increase in the performance of a single compute unit, would require high clocks: 3 GHz would be enough (a rough scaling check is sketched below), but that's rather unlikely with a 220 W TDP. We know that RDNA 3.5 has doubled the number of texture samplers per compute unit, and that may allow a greater-than-expected performance increase in some cases. In any case, at least the rumours about 7900 XTX-level rasterization performance seem ridiculous. I'm also uncertain whether they can match Nvidia for ray tracing performance after being behind for so long; the most likely case would be a big improvement over RDNA 3, but a smaller gap to Ada.

As for the 290X, it was leading the 780 Ti in TPU's suite before the sun had set on 28 nm being the latest node for GPUs.
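(The "3 GHz" figure above is just naive compute-unit-times-clock scaling; a sketch using the 7900 XT's listed boost clock and the rumoured 64 CUs, assuming per-CU performance stays the same:)

Code:
# Naive scaling: raster throughput ~ CU count * clock, ignoring IPC, bandwidth
# and cache differences. The 64 CU figure is the rumour discussed in this thread.
ref_cus, ref_boost_ghz = 84, 2.4       # Radeon RX 7900 XT
rumoured_cus = 64                      # rumoured RX 8800 XT

parity_clock = ref_cus * ref_boost_ghz / rumoured_cus
print(f"Clock needed for 7900 XT parity at equal per-CU throughput: {parity_clock:.2f} GHz")
# ~3.15 GHz, i.e. roughly the 3 GHz ballpark, unless per-CU throughput also improves.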
 
That is a design mistake, because you shouldn't need 375 W worth of connectors (2 × 150 W plus 75 W from the PCIe slot) on a 245 W card.
An OC can push those GPUs past 300 W, so this configuration is the safest with respect to the connectors' current ratings.
 
I even have doubts whether it can reach 4070 Ti Super/7900 XT-level raster, because if it only has 64 CUs (TPU's placeholder page even says 56) it will be difficult to close the ~30% gap to an 84-CU card (the difference between the 7800 XT and the 7900 XT).
It will have at least 64 CUs, maybe more; the 7800 XT already has 60 CUs. Stay realistic.
Yes, the 290X launched in October 2013, and while Nvidia answered with the 780 Ti shortly after, both it and the already-released Titan were more expensive while not being a whole lot faster. The Titan was only a minuscule 3% faster while costing an obscene (for a gaming card at the time) $999, while the 780 Ti was a more reasonable $699 but still only 4% faster.
4% faster is 4% faster; that's far from a W for AMD. If you want a W you must be clearly faster, not 4% slower. The 780 Ti was solely released to beat the 290X, which it did, and prices never matter for enthusiast cards, we all know that: 500, 700, tomato, tomato. Most people bought the 780 Ti over it. And also, the vanilla 290 easily outsold the 290X; AMD usually undercut themselves back then by releasing a card that was $100 less with just 256 shaders shaved off. They made some weird decisions back then, which they stopped doing around the RX 7000 era. The 6800 XT had only 512 fewer shaders than the 6900 XT and also a $300 lower MSRP, another mistake by AMD. But the 7900 XT is 20% slower than the XTX because they also cut the bus down by 64 bits and perhaps reduced clocks as well. That's how long AMD needed to learn proper "product segmentation", but then again the 7900 XT was overpriced at launch and it took months for them to correct the price.
The 290X at $549 remained the bang-for-buck choice until Nvidia released the GTX 980 in September 2014, also at $549, which beat the 290X by a more convincing 13%.
Not for the vast majority of people; due to Nvidia's mindshare most people still bought the 780 Ti over it, and even cards like the vanilla 780, which was slower and had less VRAM. Lastly, the 290X didn't even compete well with its own brother, the vanilla 290, which had nearly the same performance for $100 less.
It wasn't until the middle of 2015, when Nvidia released the 980 Ti for $649, that the 290X was convincingly beaten, by 28% (and the 390X by 21%), at much lower power consumption. So essentially the 290X had at least 12 months of being the best-value high-end card.
No, the 980 Ti was released to compete with the Fury X; that is already a different generation and doesn't have much to do with the 290X.
 
It will have at least 64 CUs, maybe more; the 7800 XT already has 60 CUs. Stay realistic.
Unless AMD changed the CU design, it can't be more than 64. That is the limit for the die size they're going with. This has been the case since Vega.
The higher-end variant was canned. Presumably that would have been the 80+ CU die.
4% faster is 4% faster; that's far from a W for AMD. If you want a W you must be clearly faster, not 4% slower. The 780 Ti was solely released to beat the 290X, which it did, and prices never matter for enthusiast cards, we all know that: 500, 700, tomato, tomato. Most people bought the 780 Ti over it.
4% is so little it may as well be a tie. And I disagree on prices: those who did not care about price bought the $999 Titan, which had double the VRAM of the 780 Ti.
Not for the vast majority of people; due to Nvidia's mindshare most people still bought the 780 Ti over it, and even cards like the vanilla 780, which was slower and had less VRAM. Lastly, the 290X didn't even compete well with its own brother, the vanilla 290, which had nearly the same performance for $100 less.
Not arguing that. Even back then Nvidia had the mindshare. The 290 was clearly the smart buy.
No, the 980 Ti was released to compete with the Fury X; that is already a different generation and doesn't have much to do with the 290X.
Let me guess: another W for Nvidia because the 980 Ti was 2% faster than the Fury X?
Not quite sure how it was supposed to compete with the Fury X when the Fury X released after the 980 Ti...
Yes, in terms of performance and price they were very close, but I don't consider under 5% anything but a tie, and under 15% anything but underwhelming. I only consider a card soundly beaten when the gap is 30% or more.
 
Unless AMD changed the CU design, it can't be more than 64. That is the limit for the die size they're going with. This has been the case since Vega.
That's very old info, and that was a shader engine count limit: that it "only" topped out at 64 CUs was a side effect of not being able to use more shader engines (or clusters). Since RDNA, AMD does not have that hard limit anymore; Big Navi already had 80 CUs and RDNA 3 topped out at 96.

Edit: you can see it here: https://www.techpowerup.com/gpu-specs/amd-fiji.g774#gallery-3
The maximum number of engines was 4. Seeing that GCN already had problems feeding 2,816 shaders, and even more so 4,096, it was fine that it topped out at 64 CUs. More was never needed back then, and by the time it was, RDNA had already lifted that limit.
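(The numbers line up with the Fiji page linked above: GCN topped out at 4 shader engines of up to 16 CUs each, and with 64 stream processors per CU that is exactly Fiji's shader count. A quick check:)

Code:
# GCN's commonly cited structural ceiling, using Fiji as the example.
shader_engines = 4
cus_per_engine = 16                    # maximum per engine, as on Fiji
stream_processors_per_cu = 64

max_cus = shader_engines * cus_per_engine              # 64 CUs
max_shaders = max_cus * stream_processors_per_cu       # 4096 stream processors (Fiji)
print(max_cus, max_shaders)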
4% is so little it may as well be a tie. And I disagree on prices: those who did not care about price bought the $999 Titan, which had double the VRAM of the 780 Ti.
Yes, and? You were talking about a W for AMD; that's not the case when you're 4% slower. Being the "budget king" is nothing special, they did that most of the time. Toyota is also better in price-to-performance than Mercedes (though that analogy sucks because Toyota still sells a lot of cars, whereas AMD doesn't sell many GPUs compared to Nvidia).
Let me guess: another W for Nvidia because the 980 Ti was 2% faster than the Fury X?
Tied when you compare reference vs. reference, yes, but the 980 Ti custom models were far ahead, so it was more like 10%, or 20% with an OC. The 980 Ti was simply better, especially if you didn't play at 4K; the Fury X had utilization issues at lower resolutions due to having too many shaders and suboptimal DX11 drivers.
Not quite sure how it was supposed to compete with the Fury X when the Fury X released after the 980 Ti...
The Fury X and 980 Ti released at about the same time. Maybe you forgot that the Fury X was part of the R9 300 generation, and Maxwell, the GTX 900 series, was the competitor to that. Those were the GPUs of 2014/2015.
Yes, in terms of performance and price they were very close, but I don't consider under 5% anything but a tie, and under 15% anything but underwhelming.
A tie, but we were talking about "Ws for AMD", and AMD did not have one that generation. Ties or winning "budget king" awards don't help much, and Nvidia sold much more, so it's basically a W for Nvidia. But they sold less than AMD in the HD 5000 days; that's one of the rare "true" Ws AMD (back then ATI) had against NV.
 
That's very old info. Since RDNA, AMD does not have that hard limit anymore; Big Navi already had 80 CUs and RDNA 3 topped out at 96.
Like I said, RDNA 4 high end was canned. There will be no 80-96 CU die this time. I was not talking about a hard limit overall; if they make a massive die for UDNA 1 it could have 128 CUs for all we know. I was talking about the die size they're going with having 64 CUs max. It's a math thing. I heard they might have doubled RT units per CU. Before it was 1:1 and now it may be 2:1, which would explain the reported massive RT performance increase if the RT unit count goes from, say, 64 to 128.
Yes, and? You were talking about a W for AMD; that's not the case when you're 4% slower. Being the "budget king" is nothing special, they did that most of the time. Toyota is also better in price-to-performance than Mercedes (though that analogy sucks because Toyota still sells a lot of cars, whereas AMD doesn't sell many GPUs compared to Nvidia).
Still a better deal at $150 cheaper, or even $250 cheaper if we account for the non-X 290. None of it mattered, though, because people still bought Nvidia. I'm not arguing against that.
Tied when you compare reference vs. reference, yes, but the 980 Ti custom models were far ahead, so it was more like 10%, or 20% with an OC. The 980 Ti was simply better, especially if you didn't play at 4K; the Fury X had utilization issues at lower resolutions due to having too many shaders and suboptimal DX11 drivers.
The Fury X was maxed out from the get-go, for sure. Yes, I remember the 980 Ti having good OC headroom, back when Nvidia still allowed vBIOS modding. They locked it down with the 10 series.
The Fury X and 980 Ti released at about the same time. Maybe you forgot that the Fury X was part of the R9 300 generation, and Maxwell, the GTX 900 series, was the competitor to that. Those were the GPUs of 2014/2015.
Unless Nvidia had inside knowledge of Fury X performance (the 980 Ti launched nearly a month before), I don't see how that's the case. Yes, series vs. series, but SKU vs. SKU AMD had the advantage of launching later, and thus they likely adjusted their price to match the 980 Ti, for better or for worse.
 
I was talking about the die size they're going with having 64 CUs max. It's a math thing.
I didn't challenge that; I only challenged your notion that it would have fewer CUs than the predecessor, which will 100% not be the case. 60-64 is realistic, yes, I didn't say otherwise.
I heard they might have doubled RT units per CU. Before it was 1:1 and now it may be 2:1, which would explain the reported massive RT performance increase if the RT unit count goes from, say, 64 to 128.
It's a case of a Ray Accelerator vs. a real RT unit. The RA was only an additional part of the TMU; the RT unit in RDNA 4 will probably be more like Nvidia's, its own unit, not shared with the TMUs, and way bigger. It's not about the amount, it's about size and capability. That's 1 RT core per dual compute unit, I guess, so around 32 RT cores max.
Unless Nvidia had inside knowledge of Fury X performance (the 980 Ti launched nearly a month before), I don't see how that's the case.
22 days later is still about the same time; I never said they launched in the exact same nanosecond. ;) And yes, they always have insider info; they always know, even months ahead, how fast the competition will be. The only thing they don't know is pricing; I've heard many times that pricing is always a last-second thing, whereas performance is the exact opposite.

And because AMD saw exactly how fast the 980 Ti was, they pushed the Fury X to 1050 MHz, to the absolute limit; I think 1000 MHz would've been the regular clock for it, and 250 W max rather than 275 W. Even like that it was underwhelming, and 4 GB of VRAM didn't help either. They tried to cushion this with "HBM is better" marketing, but nobody fell for that. The only good thing about HBM was that it didn't eat a lot of power in multi-monitor, idle and video-playback usage (things AMD had problems with back then, and partially still today).
 
The stated performance seems too good to be true; the TDP is also variously stated as 220 W and 265 W.

Let's hope the RX 8800 can match RX 7900 XT raster and RTX 4070 Ti RT performance for $499-549.

What's really bad is that the current-generation RTX 4000 series holds its value very well after two years. The worst price/performance-ratio generation ever released (also a consumer IQ test).
 
What's really bad is that the current-generation RTX 4000 series holds its value very well after two years. The worst price/performance-ratio generation ever released (also a consumer IQ test).
The 30 series was worse, but that was due to mining and scalping.
 