Friday, June 23rd 2023

Radeon RX 7800 XT Based on New ASIC with Navi 31 GCD on Navi 32 Package?

AMD Radeon RX 7800 XT will be a much-needed performance-segment addition to the company's Radeon RX 7000-series, which has a massive performance gap between the enthusiast-class RX 7900 series and the mainstream RX 7600. A report by "Moore's Law is Dead" makes a sensational claim that it is based on a whole new ASIC that's neither the "Navi 31" powering the RX 7900 series, nor the "Navi 32" designed for lower performance tiers, but something in between. This GPU will be AMD's answer to the "AD103." Apparently, the GPU features the exact same 350 mm² graphics compute die (GCD) as the "Navi 31," but on a smaller package resembling that of the "Navi 32." This large GCD is surrounded by four MCDs (memory cache dies), which together amount to a 256-bit wide GDDR6 memory interface and 64 MB of 2nd Gen Infinity Cache memory.

The GCD physically features 96 RDNA3 compute units, but AMD's product managers now have the ability to give the RX 7800 XT a much higher CU count than that of the "Navi 32," while staying below that of the RX 7900 XT (which is configured with 84). It's rumored that the smaller "Navi 32" GCD tops out at 60 CU (3,840 stream processors), so the new ASIC would enable the RX 7800 XT to have a CU count anywhere between 60 and 84. The resulting RX 7800 XT could have an ASIC with a lower manufacturing cost than that of a theoretical Navi 31 with two disabled MCDs (>60 mm² of wasted 6 nm dies), and even if it ends up performing within 10% of the RX 7900 XT (and matching the GeForce RTX 4070 Ti in the process), it would do so with better pricing headroom. The same ASIC could even power the mobile RX 7900 series, where the smaller package and narrower memory bus would conserve precious PCB footprint.
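As a sanity check on the rumor's arithmetic, the CU and MCD numbers above are easy to reproduce. A minimal sketch in Python: the per-CU and per-MCD figures are RDNA 3's published building blocks, while the "RX 7800 XT candidate" entry is a purely hypothetical mid-point within the 60-84 window the report describes, not a confirmed spec:

```python
# RDNA 3 building blocks (per AMD's published specs).
SP_PER_CU = 64       # stream processors per compute unit
MCD_BUS_BITS = 64    # GDDR6 bus width contributed by each MCD
MCD_CACHE_MB = 16    # Infinity Cache per MCD

# Four MCDs, as on the rumored Navi 32-style package.
mcds = 4
print(f"{mcds} MCDs -> {mcds * MCD_BUS_BITS}-bit bus, "
      f"{mcds * MCD_CACHE_MB} MB Infinity Cache")        # 256-bit, 64 MB

# CU counts quoted above; the RX 7800 XT entry is a hypothetical mid-point.
for name, cu in [("Navi 31 full", 96), ("RX 7900 XT", 84),
                 ("Navi 32 max (rumored)", 60), ("RX 7800 XT candidate", 70)]:
    print(f"{name:24s} {cu:2d} CU = {cu * SP_PER_CU:5d} stream processors")
```

The 60 CU line reproduces the 3,840 stream processors quoted for the rumored Navi 32 ceiling.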
Source: Moore's Law is Dead (YouTube)

169 Comments on Radeon RX 7800 XT Based on New ASIC with Navi 31 GCD on Navi 32 Package?

#51
nguyen
Bomby569: On par with the 4070 Ti, so 10% slower than the 7900 XT. How many cards do they plan on releasing to stack them every 10%?!

I guess no 7800 XTX, or it would just be hilarious :roll:
The 7800 XT will more than likely just tie with the 6900 XT. AMD has excess volume of Navi 31, so they have no choice but to sell cut-down Navi 31 as the 7800 XT, or even the 7700 XT :roll:
#52
N/A
There is no 10%; RDNA scales pretty linearly. 33% fewer units means 33% less performance, slightly better than a 4070, but it could force Nvidia to release the 16 GB variant. As for power savings, most people surely set the driver to prefer performance mode, where it's probably using the higher 3D clock, as opposed to power-saving mode, where the low clock is applied instead.
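To put the linear-scaling argument in numbers, a back-of-the-envelope sketch (Python; it assumes perfectly linear scaling at equal clocks, which is optimistic, and the cut-down CU count is hypothetical):

```python
# If performance tracks CU count linearly at equal clocks:
xtx_cu = 96   # RX 7900 XTX, full Navi 31
cut_cu = 64   # hypothetical cut-down part with "33% less" hardware

rel = cut_cu / xtx_cu
print(f"Expected performance vs. 7900 XTX: {rel:.0%}")  # ~67%
print(f"Deficit: {1 - rel:.0%}")                        # ~33%
```

Whether roughly two-thirds of a 7900 XTX lands "slightly better than a 4070" depends on which review data you pick.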
#53
Pumper
I won't even consider any AMD GPU until they fix the low-load power draw.
#54
Vya Domus
john_: Obviously we are exchanging opinions. And as Harry Callahan said... :p

Don't know what a GPU needs to do to keep feeding a high-refresh-rate monitor or a dual-monitor setup, but close to 100 W power consumption in video playback is unacceptable in 2023. While that is an opinion, I also believe it is logical to say. Your smartphone in your pocket or on your desk can probably play back the same video files at 5-10 W. In fact, even AMD's own APUs play video while keeping power consumption for the whole SoC below 15 W. So, something has gone wrong with RDNA 3 here. And even if we consider this still an opinion, there is this chart (AMD Radeon RX 7900 XTX Review - Disrupting the RTX 4080 - Power Consumption | TechPowerUp) where a 6900 XT is at half the power consumption.
I own a 7900 XT and I never see it using more than 30-35 W during video playback, with a ton of other things open in the background, and I use two monitors as well.
#55
Vayra86
nguyen: The 7800 XT will more than likely just tie with the 6900 XT. AMD has excess volume of Navi 31, so they have no choice but to sell cut-down Navi 31 as the 7800 XT, or even the 7700 XT :roll:
Yep...
#56
Bomby569
88W to watch a video is insane. :kookoo:
#57
EatingDirt
john_: The 3 GHz, it's not. This power consumption in multi-monitor and video playback is unacceptable in 2023.

This is from TechPowerUp's original review, but I think it remains a problem even today.

Probably.

In my opinion it is. Looking at the 6650 XT and 7600 specs side by side, and then seeing no performance change in games, that's a problem.
Why are you referencing the original review on TPU when you can look at the most recent GPU review and see that those power consumption numbers for multi-monitor and video playback are now around half of what they were at release?

There's nothing wrong with the 7600, aside from the price (it should be in the $200-225 range, max). Even its naming scheme left out the "XT" to differentiate it from the previous generation. Sure, it's basically on par with the 6650 XT in terms of performance, but it's also around 20% more efficient with that performance.
#58
R_Type
Dr. Dro: Can't blame anyone who chose a 4090 over any AMD GPU; it's faster, more feature-complete, better maintained. It's simply a product that AMD currently can't hope to match. They have neither the engineering nor the software development required to release a product of its caliber at this time.



I thought the same until I saw the RX 7600, and nope, RDNA 3 is practically irredeemable at this point. It's just bad beyond belief if you stack it against Ada Lovelace. And from my standpoint as someone with a high-end Ampere card, it's not for me either. Its marginal gains in raster and equal RT performance, at the cost of losing the whole feature set of the RTX ecosystem and Nvidia's superior driver software, make it not worth even considering as a replacement. This generation has failed, and short of massive price cuts, I don't think any of these cards are worth buying.

Unfortunately no heads will roll. Radeon needs new blood, innovative people to develop products and the brand. The boomers who developed these GPUs in the 2000s and linger in upper management need to go. They can't keep up with the industry anymore.
You don't think chiplet GPUs are innovative?
#59
Garrus
The RTX 4090 is a status symbol for someone who can't afford a Tesla or a Mercedes. If you want to spend 70 percent more money for 20 percent more performance, go brag to your mommy.

Meanwhile, the rest of us are waiting for further improved performance per dollar, and AMD is always the leader there. Hopefully the 7800 XT can impress. It's almost fall 2023; time for refreshed products, never mind the first release.
Vya Domus: I own a 7900 XT and I never see it using more than 30-35 W during video playback, with a ton of other things open in the background, and I use two monitors as well.
You have to spend most of your time responding to the imaginary product in their heads, not the one you actually own.

----------

Sad to realize that the most hated GPU release from Nvidia, the 4060 Ti, is the only card with decent perf/dollar: building on the last gen but lacking VRAM. EVERY other Nvidia GPU is much worse. Oh well.
#60
Vya Domus
R_Type: You don't think chiplet GPUs are innovative?
Another thing people are overlooking is that AD102 has 30% more transistors, and Navi 31 dedicates more of its transistor budget to cache. GPUs scale very close to linearly, so once you account for that, there is absolutely nothing impressive about it.

More transistors = more compute units = more performance. Performance per compute unit is comparable between Ada and RDNA3.
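The transistor figures behind this argument can be checked against public die specs. A quick comparison (Python; counts in billions, rounded from publicly listed figures such as the TechPowerUp GPU database):

```python
# Approximate transistor counts, in billions (rounded public figures).
ad102 = 76.3        # monolithic die, includes L2 cache and memory PHYs
navi31_gcd = 45.7   # graphics compute die
navi31_mcd = 2.05   # per memory cache die; the 7900 XTX carries six

navi31 = navi31_gcd + 6 * navi31_mcd
print(f"Navi 31 total: {navi31:.1f}B transistors")   # ~58.0B
print(f"AD102 advantage: {ad102 / navi31 - 1:.0%}")  # ~32%, roughly the 30% claimed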
#61
Bomby569
nguyen: The 7800 XT will more than likely just tie with the 6900 XT. AMD has excess volume of Navi 31, so they have no choice but to sell cut-down Navi 31 as the 7800 XT, or even the 7700 XT :roll:
But will people buy more just because it's called the 7800 XT? I don't think so. Unless, of course, they leave a big price gap, but in that case no one will buy the 7900 series, obviously. So I think they'd have the same problem, only now making even less money.
#62
londiste
Vayra86: The problem isn't RDNA3, it's the competitive edge. The whole RDNA3 "stack" (three SKUs, lol) just doesn't lead and also fails to cover a relevant stack position in price/perf. Ada realistically has it beat: it's ever so slightly more power efficient, has a bigger feature set, and its sacrificed raster perf/$ compared to RDNA3 is compensated for by DLSS3, if that's what you're willing to wait for. All aspects that command a premium. It should have competed on price. The only advantage it has is VRAM, but RDNA2 has that too.
I would actually argue that RDNA3 was not intended to have a VRAM advantage. AMD successfully failed into that.

OK, the RTX 4090 is Nvidia's successful halo product at the top, but the RTX 4080 should not be competing with the 7900 XT, much less the 7900 XTX. It should be competing with the upcoming 7800 XT:
76 SM vs. 84 CU, 256-bit vs. 320-bit memory bus, 64 MB vs. 80 MB of LLC.
For whatever reason, AMD had to move the product stack down a notch and as a result ended up with more VRAM at the same product stack/price/performance levels.
Vayra86: It's high. But there are other examples with strange gaps. Look at the 3090 Ti compared to a 3090: 13 W at the same VRAM capacity. Mkay?
Double the GDDR6X chips on the 3090: 24x1 GB vs. 12x2 GB on the 3090 Ti. That comes with a nasty tradeoff in power consumption.
#63
Bomby569
londiste: I would actually argue that RDNA3 was not intended to have a VRAM advantage. AMD successfully failed into that.

OK, the RTX 4090 is Nvidia's successful halo product at the top, but the RTX 4080 should not be competing with the 7900 XT, much less the 7900 XTX. It should be competing with the upcoming 7800 XT:
76 SM vs. 84 CU, 256-bit vs. 320-bit memory bus, 64 MB vs. 80 MB of LLC.
For whatever reason, AMD had to move the product stack down a notch and as a result ended up with more VRAM at the same product stack/price/performance levels.
AMD has been using higher VRAM as a selling point for years; it was like that in the Polaris days almost a decade ago, and they have kept it ever since. It's not a new move or a miscalculation of RDNA3.
#64
Dr. Dro
R_Type: You don't think chiplet GPUs are innovative?
Since they don't measure up in performance, and AMD currently offers me fewer features and worse software for my money than Nvidia, I don't care if it's monolithic, chiplet, 3D-stacked, whatever really.

I paid about the same price an RX 6750 XT costs nowadays for my Radeon VII, which I bought brand new in box (on sale). GPU prices are high because the companies realized that they sell at those inflated prices regardless. And it's interesting to note that the VII was considered a "poor value" choice at $699 when you could have the 5700 XT with the RDNA architecture (but less memory and a sliver less performance at the time, though it's faster now) for less money...
#65
john_
Vya Domus: I own a 7900 XT and I never see it using more than 30-35 W during video playback, with a ton of other things open in the background, and I use two monitors as well.
That's nice.
EatingDirt: Why are you referencing the original review on TPU when you can look at the most recent GPU review and see that those power consumption numbers for multi-monitor and video playback are now around half of what they were at release?

There's nothing wrong with the 7600, aside from the price (it should be in the $200-225 range, max). Even its naming scheme left out the "XT" to differentiate it from the previous generation. Sure, it's basically on par with the 6650 XT in terms of performance, but it's also around 20% more efficient with that performance.
Another person already pointed at the newer numbers, but you probably didn't read the later posts. That being said, the over-40 W power consumption in TPU's review is still pretty high. Please read the rest of the thread before replying; I'm not going to repeat what is already written.

As for the 7600, you are missing the point about performance. More efficient? What do you base this on?
Garrus: You have to spend most of your time responding to the imaginary product in their heads, not the one you actually own.
We are using those... imaginary numbers from TPU reviews. Maybe the reviews are flawed? What do you say?
#66
londiste
Bomby569: AMD has been using higher VRAM as a selling point for years; it was like that in the Polaris days almost a decade ago, and they have kept it ever since. It's not a new move or a miscalculation of RDNA3.
Yes, but they genuinely added more VRAM in RDNA2 vs. Ampere. That was an interesting comparison because both made different technical tradeoffs in search of a faster memory system:
- AMD went heavily into LLC and as a result got the chance to cut memory bus widths without a significant loss in performance.
- Nvidia wanted faster VRAM and went for GDDR6X, with the primary problem that it was only available (probably against predictions) in 1 GB chips. This led to VRAM sizes on Ampere being lower than on RDNA2 despite wider memory buses.

In that comparison, Nvidia either failed or maybe just had less luck this time around.
This generation (RDNA3 vs. Ada), Nvidia followed AMD's example of adding a large cache and cutting the memory bus width.
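The tradeoff described here can be captured in a first-order model: every LLC hit avoids a DRAM access, so a hit rate h stretches raw DRAM bandwidth by roughly 1/(1-h). A sketch (Python; the hit rates here are illustrative assumptions, not measured figures):

```python
def effective_bw(bus_bits: int, gbps_per_pin: float, hit_rate: float) -> float:
    """First-order model: raw DRAM bandwidth divided by the fraction
    of traffic that misses the LLC and actually reaches DRAM."""
    raw = bus_bits * gbps_per_pin / 8  # GB/s
    return raw / (1.0 - hit_rate)

# RDNA2-style: narrow bus plus large Infinity Cache (assumed ~50% hits).
print(f"256-bit @ 16.0 Gbps, 50% hits: {effective_bw(256, 16.0, 0.50):6.0f} GB/s")
# Ampere-style: wide GDDR6X bus, small cache (assumed ~10% hits).
print(f"384-bit @ 19.5 Gbps, 10% hits: {effective_bw(384, 19.5, 0.10):6.0f} GB/s")
```

Under these assumed hit rates the two designs land in the same effective-bandwidth ballpark (~1,024 vs. ~1,040 GB/s), which is the point: cache and bus width are interchangeable currencies, just spent differently.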
#67
Beginner Macro Device
QuietBob: Would you call a 33% difference "marginal"?
I would. When the newer card is more expensive (at current prices), getting +33% performance is marginal, and nobody should care how small the price difference is. 33% can't justify anything.

RDOA3 has its name for a very solid reason. More of the same is almost the complete opposite of what AMD should have been doing if they're aiming at gains and not losses. The 7900 XTX is a complete mess driver-wise and RT-wise. The 7900 XT is even more of a nonsense because it shares the 7900 XTX's problems and is even worse money-wise (even if you'd never have expected that to be possible). And... the 7600 is a marginally overclocked 6650 XT marketed as something new. Since the whole line-up is a 10-outta-10 failure, how do you expect these 33% to justify anything? Those who already have a 3090 will at least buy a 4090 or, more likely, wait till something can really beat it effortlessly, i.e. 2.5x the performance. 2.5x, not 1.33x.

Those whose best card is at most a 3070/6700 XT will still upgrade (if they've decided to do it now, which is senseless, but I'll take it) to a 4070 Ti or 4090, because the former has DLSS and better RT, and the latter really provides massive performance gains over an old GPU.

Nothing, not even big discounts, can help RDOA3. The x700 and x800 area is doomed because almost everyone who wanted a card of that performance already has one. And the leather-jacket boy will make even worse products in his RTX 5000 line-up: an 8-PCIe-lane 5080 and an $800 5060, just because nothing competes.
#68
kapone32
Yep, this is definitely an AMD-titled GPU post. You would think these cards are an absolute failure. There is one thing the 7900 series cards can do that no 6000 card can: run 4K 144 Hz panels, no problem. There is no game, not even TWWH3 (not the benchmark), that cannot be enjoyed at 4K with those cards. The idle monitor and video playback issues are so small that people forget the message before these cards launched was to get a nice beefy PSU to avoid any power issues, and I hope none of the people whining about power draw own an OLED panel. There is no foreknowledge of this card, so saying it will be X, Y, or Z is just conjecture. I will say that the last time AMD showed confidence, the Internet barbecued them, and some of the negative hype comes directly from the propaganda campaign that is Nvidia's "advantage". This card should absolutely kick butt at 1440p regardless. Let's keep in mind that the 6700 XT is not far from a 6800 at 1440p.
#69
Beginner Macro Device
kapone32: Yep, this is definitely an AMD-titled GPU post. You would think these cards are an absolute failure. There is one thing the 7900 series cards can do that no 6000 card can: run 4K 144 Hz panels, no problem. There is no game, not even TWWH3 (not the benchmark), that cannot be enjoyed at 4K with those cards. The idle monitor and video playback issues are so small that people forget the message before these cards launched was to get a nice beefy PSU to avoid any power issues, and I hope none of the people whining about power draw own an OLED panel. There is no foreknowledge of this card, so saying it will be X, Y, or Z is just conjecture. I will say that the last time AMD showed confidence, the Internet barbecued them, and some of the negative hype comes directly from the propaganda campaign that is Nvidia's "advantage". This card should absolutely kick butt at 1440p regardless. Let's keep in mind that the 6700 XT is not far from a 6800 at 1440p.
You don't get it. Nvidia has put in a negative effort, yet their Ada products are still better than RDOA3. That is enough to ignore every other detail.
#70
londiste
Dr. Dro: If the 7900 XTX hadn't failed, it would have been matching the 4090, just as the 6900 XT once matched the 3090.
No. Simply, no. GPUs are not so complicated that you cannot estimate their performance.

The 6900 XT and 3090 were roughly equal (SKU juggling aside, where AMD seems to have reacted with the 6900 XT):
- 80 CU vs. 82 SM: roughly the same number of transistors and shader units. Nvidia had a slight disadvantage from being half a node behind.
- AMD bet on LLC to make up for a 256-bit memory bus vs. 384-bit on the 3090. A successful bet, in hindsight.

This is simply not the case for the 4090 vs. the 7900 XTX: 128 SM vs. 96 CU on the same process node, the same memory bus width, and similar enough LLC.
There are definitely cases where the 7900 XTX can get close, mostly when power or memory becomes the limiting factor.
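The unit-count comparison in this post reduces to simple ratios. A sketch (Python; it assumes comparable per-unit throughput and clocks across vendors, which is exactly the premise being argued):

```python
# SM vs. CU counts for the two matchups discussed above.
matchups = [
    ("RTX 3090 vs. RX 6900 XT", 82, 80),    # near parity in unit count
    ("RTX 4090 vs. RX 7900 XTX", 128, 96),  # same node, big unit gap
]

for name, sm, cu in matchups:
    print(f"{name}: Nvidia fields {sm / cu:.2f}x the shader units")
# 82/80  -> 1.03x: rough parity last generation was unsurprising.
# 128/96 -> 1.33x: a 7900 XTX matching a 4090 would defy the unit math.
```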
#71
kapone32
Beginner Micro Device: I would. When the newer card is more expensive (at current prices), getting +33% performance is marginal, and nobody should care how small the price difference is. 33% can't justify anything.

RDOA3 has its name for a very solid reason. More of the same is almost the complete opposite of what AMD should have been doing if they're aiming at gains and not losses. The 7900 XTX is a complete mess driver-wise and RT-wise. The 7900 XT is even more of a nonsense because it shares the 7900 XTX's problems and is even worse money-wise (even if you'd never have expected that to be possible). And... the 7600 is a marginally overclocked 6650 XT marketed as something new. Since the whole line-up is a 10-outta-10 failure, how do you expect these 33% to justify anything? Those who already have a 3090 will at least buy a 4090 or, more likely, wait till something can really beat it effortlessly, i.e. 2.5x the performance. 2.5x, not 1.33x.

Those whose best card is at most a 3070/6700 XT will still upgrade (if they've decided to do it now, which is senseless, but I'll take it) to a 4070 Ti or 4090, because the former has DLSS and better RT, and the latter really provides massive performance gains over an old GPU.

Nothing, not even big discounts, can help RDOA3. The x700 and x800 area is doomed because almost everyone who wanted a card of that performance already has one. And the leather-jacket boy will make even worse products in his RTX 5000 line-up: an 8-PCIe-lane 5080 and an $800 5060, just because nothing competes.
Yep, that's why the 6700 XT is the best-selling card on Newegg Canada. All this card has to beat is the 6800 XT; it is not a 7900 XT. And what driver issue are you talking about? You must mean the 3 months AMD spent making sure console ports sing with RDNA3. I guess you would have to own one to appreciate it. Just read some of the posts in the 7000 Owners Club and you will understand. It is all about pricing.
Beginner Micro Device: You don't get it. Nvidia has put in a negative effort, yet their Ada products are still better than RDOA3. That is enough to ignore every other detail.
Yep, the 4070 Ti is the same price in Canada as the 7900 XT:

www.newegg.ca/msi-geforce-rtx-4070-ti-rtx-4070-ti-gaming-x-trio-12g/p/N82E16814137771?Description=4070TI&cm_re=4070TI-_-14-137-771-_-Product

www.newegg.ca/msi-radeon-rx-7900-xt-rx-7900-xt-gaming-trio-classic-20g/p/N82E16814137782?Description=7900XT&cm_re=7900XT-_-14-137-782-_-Product

So which would a knowledgeable gamer buy in a world of 4K benchmarks?

Nvidia has gone all-in for greed and is paying the price. In some ways it is the same as Intel. The issue is the hubris of Nvidia fanboys who quote high power draw in a world of burning connectors and use derogatory words to describe something they have no real experience with.

#72
Beginner Macro Device
kapone32: What driver issue are you talking about?
I'm talking about the aforementioned absurdly high wattage during video playback and multi-monitor usage. It doesn't matter that it's fixed; the only thing that matters is they launched a product unable to perform efficiently. You can afford to take money from customers for the privilege of pre-alpha testing when you're miles ahead of your competition, not when you're a stone age behind and not competitive at all.

The 7900 XTX launched marginally cheaper than the 4080 and has nothing to brag about. +8 GB of VRAM does nothing when the 4080 can turn DLSS3 on and run away from the 7900 XTX. The "sooper dooper mega chiplet arch" also does nothing when the 4080 can hold 60-ish framerates with RT on whilst the 7900 XTX manages a shambly 30-ish FPS with permanent stuttering. More raw power per buck? WHO CARES?
#73
londiste
Vya Domus: Another thing people are overlooking is that AD102 has 30% more transistors, and Navi 31 dedicates more of its transistor budget to cache. GPUs scale very close to linearly, so once you account for that, there is absolutely nothing impressive about it.

More transistors = more compute units = more performance. Performance per compute unit is comparable between Ada and RDNA3.
In both RDNA3 and Ada, the purpose of the LLC is not to augment the compute units; it is to augment the memory controllers and increase effective bandwidth. The memory bus width is the same for the 4090 and the 7900 XTX, so yes, AD102 dedicates less of its budget to LLC. On the other hand, the transistor budget that the 7900 XTX dedicates to LLC is the part that does not matter here: that cache sits next to the memory controllers on the MCDs.
#74
kapone32
Beginner Micro Device: I'm talking about the aforementioned absurdly high wattage during video playback and multi-monitor usage. It doesn't matter that it's fixed; the only thing that matters is they launched a product unable to perform efficiently. You can afford to take money from customers for the privilege of pre-alpha testing when you're miles ahead of your competition, not when you're a stone age behind and not competitive at all.

The 7900 XTX launched marginally cheaper than the 4080 and has nothing to brag about. +8 GB of VRAM does nothing when the 4080 can turn DLSS3 on and run away from the 7900 XTX. The "sooper dooper mega chiplet arch" also does nothing when the 4080 can hold 60-ish framerates with RT on whilst the 7900 XTX manages a shambly 30-ish FPS with permanent stuttering. More raw power per buck? WHO CARES?
That's the thing: with a 7900 XT you don't need any of those. Just turn the colour and contrast up, enable FreeSync, and you are good. Please tell me that is not the case.
#75
Dr. Dro
londiste: No. Simply, no. GPUs are not so complicated that you cannot estimate their performance.

The 6900 XT and 3090 were roughly equal (SKU juggling aside, where AMD seems to have reacted with the 6900 XT):
- 80 CU vs. 82 SM: roughly the same number of transistors and shader units. Nvidia had a slight disadvantage from being half a node behind.
- AMD bet on LLC to make up for a 256-bit memory bus vs. 384-bit on the 3090. A successful bet, in hindsight.

This is simply not the case for the 4090 vs. the 7900 XTX: 128 SM vs. 96 CU on the same process node, the same memory bus width, and similar enough LLC.
There are definitely cases where the 7900 XTX can get close, mostly when power or memory becomes the limiting factor.
It's an invalid comparison because they aren't the same architecture and don't work in a similar way. Remember, back in the Fermi vs. TeraScale days, the GF100/GTX 480 GPU had 480 shaders (512 really, but that config never shipped) while a Cypress XT/HD 5870 had 1,600. Nor can you go by the transistor count estimate, because the Nvidia chip has several features that consume die area, such as tensor cores, an integrated memory controller, and on-die cache, that the Navi 31 design does not (the L3 and IMCs are offloaded onto the MCDs, with the GCD focusing strictly on graphics and the other SIP blocks). Each company has taken a radically different approach to GPU design this time around, so I don't think it's "excusable" that the Radeon has fewer compute units, because that's an arbitrary number (to some extent).

If you ask me, I would make the case for the N31 GCD being technically a more complex design than the portion of AD102 responsible for graphics. And of course, the 7900 XTX can never get close unless you pump double the wattage into it.