Monday, May 2nd 2022

New Specs of AMD RDNA3 GPUs Emerge

A new list of specifications for AMD's next-generation "Navi 3x" GPUs based on the RDNA3 graphics architecture has emerged, with lower CU counts than previously reported. The large "Navi 31" GPU reportedly comes with 12,288 stream processors across 48 WGPs (workgroup processors), 12 SAs (shader arrays), and 6 SEs (shader engines). This still amounts to a 140% increase in stream processors over the Navi 21. This chip will power SKUs that succeed the Radeon RX 6800 series and RX 6900 series.

The second largest silicon from the series is the Navi 32, with two-thirds the number-crunching machinery of the Navi 31. That's 8,192 stream processors across 32 WGPs, 8 SAs, and 4 SEs. The Navi 32 silicon powers successors of the RX 6700 series. The third largest chip is the Navi 33, with half the muscle of the Navi 32, and one-third that of the Navi 31. This means 4,096 stream processors spread across 16 WGPs, 4 SAs, and 2 SEs. There's no word on other specs such as memory bus width, but we've heard rumors of AMD doubling down on its Infinity Cache memory technology, giving these chips even larger on-die caches. RDNA3 is also expected to improve ray tracing performance, as more of the ray tracing pipeline is handled by fixed-function hardware.
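The rumored counts scale cleanly against one another; a quick sketch (using the figures above, with the Navi 21's 5,120 stream processors as the RDNA2 baseline) confirms the ratios:

```python
# Rumored RDNA3 stream-processor counts from the leak above.
navi31 = 12_288
navi32 = 8_192
navi33 = 4_096
navi21 = 5_120  # RDNA2 baseline (RX 6800/6900 series)

# Navi 32 is two-thirds of Navi 31; Navi 33 is half of Navi 32
# and one-third of Navi 31.
assert navi32 == navi31 * 2 // 3
assert navi33 == navi32 // 2 == navi31 // 3

# Increase over Navi 21: (12,288 - 5,120) / 5,120 = 1.4, i.e. 140%.
print(f"{(navi31 - navi21) / navi21:.0%}")  # 140%
```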
Sources: Redfire75369 (Twitter), VideoCardz

74 Comments on New Specs of AMD RDNA3 GPUs Emerge

#1
wolf
Performance Enthusiast
Very excited for RDNA3.
Posted on Reply
#2
Lionheart


These are the first two things that came to mind, even though it's all rumours and speculation.
Posted on Reply
#3
Chrispy_
I thought RDNA3 was going to be MCM, so we'd see fewer unique dies and a range of GPUs tiered more like Zen 2 when it first launched MCM on desktop:

Single harvested die (3600/3600X)
Single fully-enabled die (3700X/3800X)
Dual harvested dies (3900X)
Dual fully-enabled die (3950X)

If AMD was going MCM it wouldn't need three different sizes, would it? Perhaps just a performance-class die to scale up with multiple modules and a tiny one for the entry-level and lower midrange.
Posted on Reply
#4
Durvelle27
Chrispy_I thought RDNA3 was going to be MCM, so we'd see fewer unique dies and a range of GPUs tiered more like Zen 2 when it first launched MCM on desktop:

Single harvested die (3600/3600X)
Single fully-enabled die (3700X/3800X)
Dual harvested dies (3900X)
Dual fully-enabled die (3950X)

If AMD was going MCM it wouldn't need three different sizes, would it? Perhaps just a performance-class die to scale up with multiple modules and a tiny one for the entry-level and lower midrange.
MCM wouldn't make sense on a GPU
Posted on Reply
#5
Aldain
Chrispy_I thought RDNA3 was going to be MCM, so we'd see fewer unique dies and a range of GPUs tiered more like Zen 2 when it first launched MCM on desktop:

Single harvested die (3600/3600X)
Single fully-enabled die (3700X/3800X)
Dual harvested dies (3900X)
Dual fully-enabled die (3950X)

If AMD was going MCM it wouldn't need three different sizes, would it? Perhaps just a performance-class die to scale up with multiple modules and a tiny one for the entry-level and lower midrange.
Um, N31 and N32 are MCM; the N33 is monolithic.
Posted on Reply
#6
Aquinus
Resident Wat-man
Durvelle27MCM wouldn't make sense on a GPU
Could you elaborate on that a bit? I have the opposite impression given the advantages of MCM designs. Huge monolithic dies do have drawbacks and ultimately have an upper limit to how big they can be made along with the typical issues with yields.
Posted on Reply
#7
Chrispy_
Durvelle27MCM wouldn't make sense on a GPU
Oh, they definitely make sense and they're almost certainly coming with RDNA3:

AldainUm, N31 and N32 are MCM; the N33 is monolithic.
So I guessed right, then: just a performance-class die to scale up with multiple modules and a tiny one for the entry-level and lower midrange.

It doesn't explain why N31 and N32 aren't multiples of each other, unless 4,096 SPs is the size of one chiplet and N31 is 3x chiplets while N32 is 2x chiplets.
Posted on Reply
#8
Jism
Durvelle27MCM wouldn't make sense on a GPU
Both AMD and Nvidia have their own approach. At the end of the day what matters is getting out a better product.

Nvidia's top-end model will consume roughly 900W for a big monolithic chip.

An MCM approach can cut that consumption in half if done right.
Posted on Reply
#9
Steevo
AMD has historically taken the route of testing new tech on smaller chips/lower end products. Maybe we will get a hint from product leaks.
Posted on Reply
#10
Chrispy_
JismBoth AMD and Nvidia have their own approach. At the end of the day what matters is getting out a better product.

Nvidia's top-end model will consume roughly 900W for a big monolithic chip.

An MCM approach can cut that consumption in half if done right.
If I'm right in my wild guess that a single chiplet contains 4,096 SPs, then we're looking at roughly 3x the power consumption of an RX 6800, which has a core-only draw of about 200W under full load (and an additional 50W for VRM dissipation, GDDR6, and fan power).

If you assume that AMD will use the full power savings of the new process node for 40% reduced power at the same clocks as N7FF, then that's a drop from 600W for three chiplets to more like 360W but then you still need to add maybe 50-80W for VRM/VRAM/fans. That's a 400-450W card range right there, just based on some extremely flaky and optimistic guesswork.

The chances are good that AMD want to clock the thing as high as is feasible so expect either the new 16-pin power connector or at least 525W (450 from 3x8-pin and 75W slot power).
Posted on Reply
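For reference, the back-of-envelope estimate in the comment above does work out; a minimal sketch, using only the commenter's guessed inputs (three 4,096-SP chiplets at an RX 6800-like ~200W each, a 40% node power saving, and 50-80W of board overhead), none of which are confirmed specs:

```python
# All figures are the commenter's guesses, not confirmed specs.
chiplets = 3
core_w_per_chiplet = 200   # RX 6800-like core-only draw, in watts
node_power_saving = 40     # percent, claimed saving at iso-clocks vs. N7FF

core_n7 = chiplets * core_w_per_chiplet                  # 600 W at N7FF-like power
core_next = core_n7 * (100 - node_power_saving) // 100   # 360 W after the node saving

# Add the guessed VRM/VRAM/fan overhead for total board power.
low, high = core_next + 50, core_next + 80
print(core_n7, core_next, low, high)  # 600 360 410 440
```

That 410-440W span is where the "400-450W card range" in the comment comes from.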
#11
TheLostSwede
News Editor
Durvelle27MCM wouldn't make sense on a GPU
Well, I guess we'll have to wait and see.

Posted on Reply
#12
Aquinus
Resident Wat-man
Chrispy_three chiplets
Considering historical precedent, I doubt that we'll see odd numbers of chiplets. I would expect to see the same kind of thing we see with their CPUs. Something like 1, 2, and 4 make sense.
Posted on Reply
#13
spnidel
Chrispy_If I'm right in my wild guess that a single chiplet contains 4,096 SPs, then we're looking at roughly 3x the power consumption of an RX 6800, which has a core-only draw of about 200W under full load (and an additional 50W for VRM dissipation, GDDR6, and fan power).

If you assume that AMD will use the full power savings of the new process node for 40% reduced power at the same clocks as N7FF, then that's a drop from 600W for three chiplets to more like 360W but then you still need to add maybe 50-80W for VRM/VRAM/fans. That's a 400-450W card range right there, just based on some extremely flaky and optimistic guesswork.

The chances are good that AMD want to clock the thing as high as is feasible so expect either the new 16-pin power connector or at least 525W (450 from 3x8-pin and 75W slot power).
reading this comment makes me laugh, because I remember before 6000 series launched people speculated that there's NO WAY the 6800 xt would be faster than a 2080 ti and there's NO WAY it would use less power than the 3000 series
turns out it did both - was faster than a 2080 ti and used less power than the 3080
Posted on Reply
#14
Chrispy_
AquinusConsidering historical precedent, I doubt that we'll see odd numbers of chiplets. I would expect to see the same kind of thing we see with their CPUs. Something like 1, 2, and 4 make sense.
I could be wrong, but when AMD were presenting their slides, they laid the MCMs out in a line with the Infinity Cache acting as a linear fabric, connecting the dies together with the memory controller on the left and 1, 2, 3 dies off to the right, etc.

I'm not sure it has the same restrictions around an I/O die as the CPUs do, since the connecting fabric is shared cache in a GPU rather than a shared interconnect with the PHY as in a CPU. But this isn't definite at all; I'm going on nothing more than some months-old AMD slides that were light on technical details.
Posted on Reply
#15
Aquinus
Resident Wat-man
Chrispy_I could be wrong, but when AMD were presenting their slides, they laid the MCMs out in a line with the Infinity Cache acting as a linear fabric, connecting the dies together with the memory controller on the left and 1, 2, 3 dies off to the right, etc.

I'm not sure it has the same restrictions around an I/O die as the CPUs do, since the connecting fabric is shared cache in a GPU rather than a shared interconnect with the PHY as in a CPU. But this isn't definite at all; I'm going on nothing more than some months-old AMD slides that were light on technical details.
Ehhh, slides are for marketing and investors.
Posted on Reply
#16
gasolina
JismBoth AMD and Nvidia have their own approach. At the end of the day what matters is getting out a better product.

Nvidia's top-end model will consume roughly 900W for a big monolithic chip.

An MCM approach can cut that consumption in half if done right.
I don't think Nvidia's 900W part is a GeForce product but a workstation or server-class GPU; it features 48GB of VRAM, so definitely not a GeForce one. A 550W GeForce may be coming, while the red camp is around 450W, and I think 100W is not much. By undervolting both the 6800 XT and 3080 (UC + UV), both gave me the same FPS and power consumption, so I guess they're kinda close in performance per watt. Plus, Nvidia offering more features than AMD is also a big plus.
Facts show us that Navi's ray tracing is even worse than RTX 2000. My best bet is the next Navi's ray tracing level lands between RTX 2000 and 3000, still too weak. I wish AMD would just remove ray tracing, reduce the price by 30%, and make a big comeback 10 years later with real ray tracing.
Posted on Reply
#17
Chrispy_
spnidelreading this comment makes me laugh, because I remember before 6000 series launched people speculated that there's NO WAY the 6800 xt would be faster than a 2080 ti and there's NO WAY it would use less power than the 3000 series
turns out it did both - was faster than a 2080 ti and used less power than the 3080
I'm optimistic that AMD don't join Nvidia in raising the power consumption higher than 3x8-pin can provide. TSMC have said that you can either get the same clocks as N7FF at 40% reduced power consumption, OR 20% faster. If AMD choose the latter, we're looking at a monstrously fast 800W card that likely requires a new case and a new PSU in addition to however many thousand dollars the GPU itself costs....

Honestly, for a lot of people, cooling a 300W card is bad enough. Not all of us live in cool climates and not every form-factor can handle a 300mm-long 4-slot GPU that requires an additional two slots to breathe!
Posted on Reply
#18
gasolina
spnidelreading this comment makes me laugh, because I remember before 6000 series launched people speculated that there's NO WAY the 6800 xt would be faster than a 2080 ti and there's NO WAY it would use less power than the 3000 series
turns out it did both - was faster than a 2080 ti and used less power than the 3080
These people don't know how to read specs, though. The 5700 XT's 2,560 SPs put it at 1080 Ti / 2070 Super level, about 35% below the 2080 Ti, while the 6800 XT offers almost double the SPs, plus higher bandwidth and 30-40% higher clock speeds.
Posted on Reply
#19
DeathtoGnomes
AquinusConsidering historical precedent, I doubt that we'll see odd numbers of chiplets. I would expect to see the same kind of thing we see with their CPUs. Something like 1, 2, and 4 make sense.
If this were the case, N31 would have the 16.5k CUs originally predicted.
Posted on Reply
#20
gasolina
Chrispy_I'm optimistic that AMD don't join Nvidia in raising the power consumption higher than 3x8-pin can provide. TSMC have said that you can either get the same clocks as N7FF at 40% reduced power consumption, OR 20% faster. If AMD choose the latter, we're looking at a monstrously fast 800W card that likely requires a new case and a new PSU in addition to however many thousand dollars the GPU itself costs....

Honestly, for a lot of people, cooling a 300W card is bad enough. Not all of us live in cool climates and not every form-factor can handle a 300mm-long 4-slot GPU that requires an additional two slots to breathe!
They've been joined together for a long time, I think since the Fury X in 2015. They both want to increase power consumption, which means we have to buy bigger PSUs plus spend more money on coolers or waterblocks, which boosts the industry more. Why can't they make GPUs with the same power consumption that provide 20-30% higher performance, instead of just pumping in more transistors => more power => more heat? This happens on the CPU side too, particularly with Intel. Remember, back then the Core 2 arch had lower clock speeds and lower power consumption and still kicked ass... Sometimes I just want a 14th-gen chip with 12 P-cores + 16 E-cores at 4GHz consuming around 200W that beats the hell out of the 12900KS by 40%.
Posted on Reply
#21
_Flare
maybe something like this
Posted on Reply
#22
AnotherReader
Chrispy_I'm optimistic that AMD don't join Nvidia in raising the power consumption higher than 3x8-pin can provide. TSMC have said that you can either get the same clocks as N7FF at 40% reduced power consumption, OR 20% faster. If AMD choose the latter, we're looking at a monstrously fast 800W card that likely requires a new case and a new PSU in addition to however many thousand dollars the GPU itself costs....

Honestly, for a lot of people, cooling a 300W card is bad enough. Not all of us live in cool climates and not every form-factor can handle a 300mm-long 4-slot GPU that requires an additional two slots to breathe!
I hope so. I've had two power hogs in succession: 290X and Vega 64. With more fans, my case handled the open-air 290X, but it still raised the temperature of the other components at full tilt. The Vega 64 blower, on the other hand, is louder, but doesn't raise the temperature of any other components appreciably. Given that blowers are going the way of the dodo, I don't want any cards over 300 W and preferably would like them to stay under 225 W.
Posted on Reply
#23
Aquinus
Resident Wat-man
DeathtoGnomesIf this were the case, N31 would have the 16.5k CUs originally predicted.
I think you're assuming that all the dies will be fully operational. I find it interesting that N23 has twice as many SPs as N24, N21 has twice as many as N22, and N32 has twice as many as N33. N31 is the only odd one out in this respect. FWIW, take the information with a grain of salt. I wouldn't call Twitter the most reliable source for information.
Posted on Reply
#24
AnotherReader
Chrispy_I could be wrong, but when AMD were presenting their slides, they laid the MCMs out in a line with the Infinity Cache acting as a linear fabric, connecting the dies together with the memory controller on the left and 1, 2, 3 dies off to the right, etc.

I'm not sure it has the same restrictions around an I/O die as the CPUs do, since the connecting fabric is shared cache in a GPU rather than a shared interconnect with the PHY as in a CPU. But this isn't definite at all; I'm going on nothing more than some months-old AMD slides that were light on technical details.
I think that, given the bandwidth requirements of GPUs, a separate memory controller and PHY die would be problematic. However, Nvidia seems to think otherwise, and their key idea, the large L3 cache, has been executed very well by AMD. Still, I think it only makes sense for large dies, i.e. those that would exceed the EUV reticle limit of a bit over 400 mm².
Posted on Reply
#25
SLObinger
gasolinaI don't think Nvidia's 900W part is a GeForce product but a workstation or server-class GPU; it features 48GB of VRAM, so definitely not a GeForce one. A 550W GeForce may be coming, while the red camp is around 450W, and I think 100W is not much. By undervolting both the 6800 XT and 3080 (UC + UV), both gave me the same FPS and power consumption, so I guess they're kinda close in performance per watt. Plus, Nvidia offering more features than AMD is also a big plus.
Facts show us that Navi's ray tracing is even worse than RTX 2000. My best bet is the next Navi's ray tracing level lands between RTX 2000 and 3000, still too weak. I wish AMD would just remove ray tracing, reduce the price by 30%, and make a big comeback 10 years later with real ray tracing.
I think next Navi will likely beat RTX 3000 soundly in magic cherry-picked scenarios for marketing slides, with a bunch of little asterisks and paragraphs of notes at the bottom that say something like "in 1080p Medium with FSR Performance+++ enabled!!!" I've got a 6900 XT and it's just so disappointing to lose 100+ frames when switching RT on.
Posted on Reply