Friday, June 23rd 2023
Radeon RX 7800 XT Based on New ASIC with Navi 31 GCD on Navi 32 Package?
AMD Radeon RX 7800 XT will be a much-needed performance-segment addition to the company's Radeon RX 7000-series, which has a massive performance gap between the enthusiast-class RX 7900 series, and the mainstream RX 7600. A report by "Moore's Law is Dead" makes a sensational claim that it is based on a whole new ASIC that's neither the "Navi 31" powering the RX 7900 series, nor the "Navi 32" designed for lower performance tiers, but something in between. This GPU will be AMD's answer to the "AD103." Apparently, the GPU features the same exact 350 mm² graphics compute die (GCD) as the "Navi 31," but on a smaller package resembling that of the "Navi 32." This large GCD is surrounded by four MCDs (memory cache dies), which amount to a 256-bit wide GDDR6 memory interface, and 64 MB of 2nd Gen Infinity Cache memory.
The GCD physically features 96 RDNA3 compute units, but AMD's product managers now have the ability to give the RX 7800 XT a much higher CU count than that of the "Navi 32," while keeping it lower than that of the RX 7900 XT (which is configured with 84). It's rumored that the smaller "Navi 32" GCD tops out at 60 CU (3,840 stream processors), so the new ASIC would enable the RX 7800 XT to have a CU count anywhere between 60 and 84. The resulting RX 7800 XT could have an ASIC with a lower manufacturing cost than that of a theoretical Navi 31 with two disabled MCDs (over 60 mm² of wasted 6 nm dies), and even if it ends up performing within 10% of the RX 7900 XT (and matching the GeForce RTX 4070 Ti in the process), it would do so with better pricing headroom. The same ASIC could even power the mobile RX 7900 series, where the smaller package and narrower memory bus would conserve precious PCB footprint.
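The CU-to-shader arithmetic above can be sanity-checked with a quick sketch. The 64 stream-processors-per-CU figure follows directly from the rumored 60 CU = 3,840 SP "Navi 32" configuration; the CU counts below simply bracket the range discussed in the report, not leaked SKUs:

```python
# RDNA3 exposes 64 stream processors per compute unit
# (3,840 SP / 60 CU = 64, per the rumored Navi 32 cap).
SP_PER_CU = 3840 // 60

# Bracket the rumored RX 7800 XT range: Navi 32's 60 CU cap up to
# the RX 7900 XT's 84 CU, with the full 96 CU Navi 31 die for reference.
for cus in (60, 84, 96):
    print(f"{cus} CU -> {cus * SP_PER_CU:,} stream processors")
```

For a 60-84 CU configuration this works out to anywhere from 3,840 to 5,376 stream processors, against 6,144 on the full die.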
Source:
Moore's Law is Dead (YouTube)
There's nothing wrong with the 7600, aside from price (it should be in the $200-225 range, max). Even its naming scheme left out the "XT" to differentiate it from the previous generation. Sure, it's basically on par with the 6650 XT in terms of performance, but it also delivers that performance around 20% more efficiently.
Meanwhile we are all waiting for further improved performance per dollar, and AMD is always the leader there. Hopefully the 7800 XT can impress. It's almost fall 2023; time for refreshed products, never mind the first release. You end up spending most of your time responding to the imaginary product in people's heads, not the one you actually own.
----------
Sad to realize that the most hated GPU release from NVIDIA, the 4060 Ti, is the only card with decent perf/dollar. Building on the last gen but lacking VRAM. EVERY other NVIDIA GPU is much worse. Oh well.
More transistors = more compute units = more performance. Performance per compute unit is comparable between Ada and RDNA3.
OK, the RTX 4090 is Nvidia's successful halo product at the top, but the RTX 4080 should not be competing with the 7900 XT, much less the 7900 XTX. It should be competing with the upcoming 7800 XT.
76 SM vs 84 CU, 256-bit vs 320-bit memory bus, 64 MB vs 80 MB of LLC.
For whatever reason AMD had to move its product stack down a notch, and as a result ended up with more VRAM at the same product stack/price/performance levels. Nvidia's alternative was doubling the GDDR6X chip count: 24x 1 GB instead of 12x 2 GB, which comes with a nasty tradeoff in power consumption.
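As a hedged illustration of that chip-count tradeoff: the 384-bit bus below is an assumption for the example (matching a card like the RTX 3090), and each 32-bit memory channel can drive either one 2 GB chip or two 1 GB chips in clamshell mode, with chips on both sides of the PCB:

```python
# One 32-bit memory channel drives either a single 2 GB chip or
# two 1 GB chips in clamshell mode (mounted on both PCB sides).
BUS_WIDTH_BITS = 384              # assumed bus width for this example
CHANNELS = BUS_WIDTH_BITS // 32   # 12 independent 32-bit channels

single_sided_gb = CHANNELS * 1 * 2   # 12 chips x 2 GB each
clamshell_gb    = CHANNELS * 2 * 1   # 24 chips x 1 GB each

# Same 24 GB capacity either way, but double the chips to power and cool.
print(f"12x 2 GB = {single_sided_gb} GB, 24x 1 GB = {clamshell_gb} GB")
```

Capacity comes out identical, so the only things the 24-chip layout buys are availability of lower-density dies, at the cost of board space and power.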
I paid about the same price an RX 6750 XT costs nowadays for my Radeon VII, which I bought brand new in box (on sale). GPU prices are high because the companies realized that they sell at those inflated prices regardless - and it's interesting to note that the VII was considered a "poor value" choice at $699 when you could have the 5700 XT with RDNA architecture (but less memory and a sliver less performance at the time - though it's faster now) for less money indeed...
As for the 7600, you are missing the point about performance. More efficient? What do you base this on? We are using those... imaginary numbers from TPU reviews. Maybe the reviews are flawed? What do you say?
- AMD went heavily into LLC and got the chance to cut memory bus widths as a result without significant loss in performance.
- Nvidia wanted faster VRAM and went for GDDR6X - with the primary problem that it was only available (probably against predictions) in 1GB chips. This led to VRAM sizes on Ampere being lower than on RDNA2 despite wider memory buses.
In that comparison Nvidia either failed or maybe just had less luck this time around.
This generation - RDNA3 vs Ada - Nvidia followed AMD's example of adding a large cache and cutting the memory bus width.
RDOA3 has its name for a solid reason. More of the same is almost the complete opposite of what AMD should have been doing if they're aiming at gains and not losses. The 7900 XTX is a complete mess driver-wise and RT-wise. The 7900 XT is even more of a nonsense product because it shares the 7900 XTX's problems and is even worse money-wise (even though you'd never have expected that to be possible). And... the 7600 is a marginally overclocked 6650 XT marketed as something new. Since the whole line-up is a 10-outta-10 failure, how do you expect these 33% to justify anything? Those who already have a 3090 will at least buy a 4090 or, more likely, wait until something can really beat it effortlessly, aka 2.5x the performance. 2.5x, not 1.33x.
Those whose best card is at most a 3070/6700 XT will still upgrade (if they decide to do it now, which is senseless, but I take it) to a 4070 Ti or 4090, because the former has DLSS and better RT and the latter really provides massive performance gains over an old GPU.
Nothing, even big discounts, can help RDOA3. The x700 and x800 area is doomed because almost everyone who wanted a card with such performance has already got one. And the leather jacket boy will make even worse products in his RTX 5000 line-up: an 8-lane PCIe 5080 and an $800 5060, just because nothing competes.
The 6900 XT vs 3090 were roughly equal (SKU positioning aside, where AMD seems to have reacted with the 6900 XT).
- 80 CU vs 82 SM, roughly the same amount of transistors and shader units. Nvidia had a slight disadvantage from being half a node behind.
- AMD bet on LLC to make up for 256-bit memory bus vs 384-bit on 3090. A successful bet, in hindsight.
This is simply not the case for the 4090 vs 7900 XTX: 128 SM vs 96 CU on the same process node, same memory bus width, similar enough LLC.
There are definitely cases where the 7900 XTX can get close, mostly when power or memory becomes the limiting factor.
www.newegg.ca/msi-geforce-rtx-4070-ti-rtx-4070-ti-gaming-x-trio-12g/p/N82E16814137771?Description=4070TI&cm_re=4070TI-_-14-137-771-_-Product
www.newegg.ca/msi-radeon-rx-7900-xt-rx-7900-xt-gaming-trio-classic-20g/p/N82E16814137782?Description=7900XT&cm_re=7900XT-_-14-137-782-_-Product
So which would a knowledgeable gamer buy in a world of 4K benchmarks?
Nvidia has gone all-in on greed and is paying the price. In some ways it is the same as Intel. The issue is the hubris of Nvidia fanboys, who quote high power draw in a world of burning connectors and use desultory words to describe something they have no real experience with.
The 7900 XTX was launched marginally cheaper than the 4080 and has nothing to brag about. +8 GB of VRAM does nothing when the 4080 can turn DLSS 3 on and run away from the 7900 XTX. The "sooper dooper mega chiplet arch" also does nothing when the 4080 can hold 60ish framerates with RT on whilst the 7900 XTX goes for a shambolic 30ish FPS with permanent stuttering inbound. More raw power per buck? WHO CARES?
If you ask me, I would make a case for the N31 GCD being technically a more complex design than the portion responsible for graphics in AD102. And of course, the 7900 XTX can never get close unless you pump double the wattage into it.