Monday, August 26th 2024

AMD Radeon RX 8000 "RDNA 4" GPU Spotted on Geekbench

AMD's upcoming Radeon RX 8000 "RDNA 4" GPU has been spotted on Geekbench, revealing some of its core specifications. These early benchmark appearances indicate that AMD is now testing the new GPUs internally, preparing for a launch expected next year. The leaked GPU, identified as "GFX1201", is believed to be the Navi 48 SKU - the larger of two dies planned for the RDNA 4 family.

The Geekbench listing reports 28 Compute Units, which in this case refers to Work Group Processors (WGPs). This likely translates to 56 Compute Units, positioning it between the current RX 7700 XT (54 CU) and RX 7800 XT (60 CU) models. The clock speed is listed at 2.1 GHz, which seems low compared to current RDNA 3 GPUs that can boost to 2.5-2.6 GHz. However, this is likely due to the early nature of the samples, and we can expect higher frequencies closer to launch. Memory specifications show 16 GB of VRAM, matching current high-end models and suggesting a 256-bit bus interface; some variants may feature 12 GB of VRAM on a 192-bit bus. While not confirmed, previous reports indicate AMD will use GDDR6 memory.
Performance in the OpenCL benchmark is currently unimpressive, but this is typical for early engineering samples and should be disregarded.
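As a quick sanity check, the reported figures fit together like this (a hypothetical sketch: the 2x WGP-to-CU mapping follows RDNA convention, but the per-pin memory speed below is an unconfirmed placeholder):

```python
# How the leaked figures relate. The WGP-to-CU doubling follows RDNA
# convention; the GDDR6 speed is a placeholder guess, not a confirmed spec.
wgps = 28                      # "Compute Units" reported by Geekbench
cus = wgps * 2                 # each WGP contains two Compute Units
bus_width_bits = 256           # implied by the 16 GB configuration
gddr6_gbps = 20                # hypothetical per-pin data rate
bandwidth_gbs = bus_width_bits * gddr6_gbps / 8  # bits -> bytes
print(cus, "CUs,", bandwidth_gbs, "GB/s at the assumed memory speed")
```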

The RDNA 4 GPUs are expected to introduce new ray tracing engines with significant performance improvements. AMD aims to bring high-end performance to the $400-$500 price range with this new generation. More information is likely to be revealed at the upcoming CES event.
Source: Wccftech

43 Comments on AMD Radeon RX 8000 "RDNA 4" GPU Spotted on Geekbench

#1
Chaitanya
Will be waiting to see how much power consumption is improved over predecessor.
#2
the54thvoid
Super Intoxicated Moderator
Give us 4070ti level performance for $400 and it's a win.
#3
ARF
Chaitanya: Will be waiting to see how much power consumption is improved over predecessor.
It's very important to be factory power optimised, with emphasis on undervolting, not severe overvolting which results in 50% higher than normal power consumption, but miserable 5% performance increase.
It must strictly stay in its sweet power-efficiency curve.

From the leaked images, I see that they will keep the stupid fins orientation which overheats the neighbouring PC components - the CPU, M.2 drives, etc.

So, no hopes... :banghead:

#4
Nordic
the54thvoid: Give us 4070ti level performance for $400 and it's a win.
This is rough napkin math: if RDNA4 is 15% faster than RDNA3 per compute unit, then at 56 compute units RDNA4 would be close to the 4070 Super in speed. 4070 Ti performance may be out of reach unless RDNA4 is a much more significant improvement.
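Written out, that napkin math goes like this (every input is the commenter's speculative assumption, not a confirmed spec):

```python
# Rough scaling estimate from the comment above; purely illustrative.
rx7700xt_cus = 54        # RDNA 3 reference point (RX 7700 XT)
rdna4_cus = 56           # CU count implied by the Geekbench leak
per_cu_uplift = 1.15     # hypothetical 15% per-CU RDNA4 improvement

# Assume throughput scales linearly with CU count times per-CU speed
relative = (rdna4_cus / rx7700xt_cus) * per_cu_uplift
print(f"~{relative:.2f}x the RX 7700 XT")  # ~1.19x, roughly 4070 Super territory
```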
#5
Daven
I've been thinking about what might be going on with all the next gen discrete GPU releases. We have been on a GPU release schedule of about two years for a while now. Here are the dates of the last releases:

A770 - Oct 2022
4090 - Oct 2022
7900XTX - Dec 2022

So there is still time for the next gen to come out this year. Rumors swirling around the tech media might be due to filling that 24/7 news cycle. The sites are probably getting bored and jumping at every bit of information. However, if the rumors are true and the next gens are delayed, all three families most likely are delayed for very different reasons:

Blackwell - Nvidia has all the resources in the world to release whatever they want, whenever they want, regardless of AI/data center prioritization rumors. Therefore, the delay is most likely external. I think Nvidia needs TSMC 3 nm in order to produce Blackwell; I don't think 4 nm will yield the number of units and power consumption levels that will allow them to go above the 4090. Client Blackwell might be delayed until 2025H2 since N3P and N3X are not ready until later this year or the middle of next, depending on which node version is needed. With rumors suggesting that neither Intel nor AMD will reach or exceed the 4090, there is just no rush for Nvidia to push out high-end Blackwell.

Battlemage - Intel still has resources but they are dwindling fast. It is still possible that they may discontinue their entire GPU division for both data center and client. Rumors indicate that the GPU is still coming and Intel has a variety of choices when it comes to process node. But their financial situation is dire and there has been very little information regarding their next gen data center GPU which would get higher priority than the client.

RDNA4 - AMD has the least resources of all three companies so they probably do need to prioritize product releases based on the most important markets. As some have pointed out, AMD is focusing on the data center so any delays to RDNA4 would most likely be caused by data center GPU product schedules. The process node is not a problem as AMD might be leaving the top end GPU market for a while and they have yet to utilize 4 nm.
#6
WonderSnail
ARF: It's very important to be factory power optimised, with emphasis on undervolting, not severe overvolting which results in 50% higher than normal power consumption, but miserable 5% performance increase.
It must strictly stay in its sweet power-efficiency curve.

From the leaked images, I see that they will keep the stupid fins orientation which overheats the neighbouring PC components - the CPU, M.2 drives, etc.

So, no hopes... :banghead:

I don't expect the design to change much tbh so your point still holds, but I just wanted to point out that that's a picture of an RX 7600 with a picture of Navi 33 on top of it and "Navi 48" written on it... :laugh:
#7
ymdhis
Chaitanya: Will be waiting to see how much power consumption is improved over predecessor.
The RX 6600 and RX 7600 are currently the most power-efficient GPUs at idle; they use something like 2-4 W. I'm hoping for an 8600 that follows suit.
#8
Tomorrow
Daven: Blackwell - Nvidia has all the resources in the world to release whatever they want, whenever they want, regardless of AI/data center prioritization rumors. Therefore, the delay is most likely external. I think Nvidia needs TSMC 3 nm in order to produce Blackwell; I don't think 4 nm will yield the number of units and power consumption levels that will allow them to go above the 4090. Client Blackwell might be delayed until 2025H2 since N3P and N3X are not ready until later this year or the middle of next, depending on which node version is needed. With rumors suggesting that neither Intel nor AMD will reach or exceed the 4090, there is just no rush for Nvidia to push out high-end Blackwell.
Blackwell will not be 3nm for several reasons. Especially if it's releasing this year. People need to let go of this silly dream.
1. If Nvidia's AI Blackwell cards were N4P then there's a very small chance that gaming cards would have a node advantage.
2. 3nm is no doubt more expensive than N4P. If this expense could not be justified for AI then how could it be justified for gaming?
3. Capacity. I doubt TSMC has enough capacity this year to ship enough 3nm wafers to satisfy Nvidia's demand either in AI or Gaming. Much less both.
4. Need. Like you said - since competitors are not really snapping on the heels there's little incentive for Nvidia to push the envelope in terms of node or other specs. It will likely be a repeat of RTX 20 series with segmentation and performance.
Daven: Battlemage - Intel still has resources but they are dwindling fast. It is still possible that they may discontinue their entire GPU division for both data center and client. Rumors indicate that the GPU is still coming and Intel has a variety of choices when it comes to process node. But their financial situation is dire and there has been very little information regarding their next gen data center GPU which would get higher priority than the client.
There have been little to no leaks about BM. I very much doubt it's coming this year; if it were, we should have seen performance leaks almost every week from OEMs.
And when it finally does launch, it seems it may be limited to a midrange card or cards. Possibly even smaller scope than RDNA4.
Daven: RDNA4 - AMD has the least resources of all three companies so they probably do need to prioritize product releases based on the most important markets. As some have pointed out, AMD is focusing on the data center so any delays to RDNA4 would most likely be caused by data center GPU product schedules. The process node is not a problem as AMD might be leaving the top end GPU market for a while and they have yet to utilize 4 nm.
I would not say AMD has the least resources. They just bought a server company for billions of dollars and are constructing several new R&D centers.
As with Nvidia, they too will likely use N4P, which Zen 5 already uses. Not sure if it will launch within this year; there are performance leaks, but I'm betting it will be announced at CES 2025 and launch in Q1 2025.
#9
Chaitanya
ymdhis: The RX 6600 and RX 7600 are currently the most power-efficient GPUs at idle; they use something like 2-4 W. I'm hoping for an 8600 that follows suit.
The problem with RX 7000 is power consumption in media playback scenarios.
ARF: It's very important to be factory power optimised, with emphasis on undervolting, not severe overvolting which results in 50% higher than normal power consumption, but miserable 5% performance increase.
It must strictly stay in its sweet power-efficiency curve.

From the leaked images, I see that they will keep the stupid fins orientation which overheats the neighbouring PC components - the CPU, M.2 drives, etc.

So, no hopes... :banghead:

Currently MSI (on a few 4070 models), Sapphire, PowerColor and XFX offer a handful of GPUs with fins oriented parallel to the GPU, but otherwise almost every single SKU uses a fin orientation that dumps heat into the motherboard (at idle it does help with some airflow over the M.2 slots, but under load it makes things worse). It also doesn't help that board makers continue to add M.2 slots under the primary x16 slot instead of just providing an additional PCIe slot with bifurcation so users can add their own SSDs later.
#10
ARF
Tomorrow: Blackwell will not be 3nm
It makes no sense for it not to be. What will be offered? A new chip at the reticle limit?
Chaitanya: Currently MSI (on a few 4070 models), Sapphire, PowerColor and XFX offer a handful of GPUs with fins oriented parallel to the GPU, but otherwise almost every single SKU uses a fin orientation that dumps heat into the motherboard (at idle it does help with some airflow over the M.2 slots, but under load it makes things worse). It also doesn't help that board makers continue to add M.2 slots under the primary x16 slot instead of just providing an additional PCIe slot with bifurcation so users can add their own SSDs later.
I think it's not about the brands, but about the product tier. All RX 7600 cards have the fins oriented correctly.
#11
Firedrops
Anyone know the rough Geekbench OpenCL score for a 7800 XT clock-limited to ~2.1 GHz? I see 139,953, presumably at stock ~2.4 GHz clocks. Is the 7800 XT really 4x as fast as this? How sensitive is this benchmark to clock speeds?
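For a naive frame on that question: if the score scaled linearly with clock (it generally doesn't, exactly, but it bounds how much the 2.1 GHz limit could matter), the estimate would be:

```python
# Naive linear clock scaling; illustrative only, since Geekbench OpenCL
# also depends on memory bandwidth and other factors.
stock_score = 139_953    # 7800 XT score cited above, at assumed stock clocks
stock_clock = 2.4        # GHz (assumption)
limited_clock = 2.1      # GHz, the clock in the RDNA 4 leak

estimated = stock_score * limited_clock / stock_clock
print(f"~{estimated:,.0f}")  # still far above the leaked result, so the low
                             # clock alone cannot explain a 4x gap
```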
#12
Daven
Tomorrow: Blackwell will not be 3nm for several reasons. Especially if it's releasing this year. People need to let go of this silly dream.
1. If Nvidia's AI Blackwell cards were N4P then there's a very small chance that gaming cards would have a node advantage.
2. 3nm is no doubt more expensive than N4P. If this expense could not be justified for AI then how could it be justified for gaming?
3. Capacity. I doubt TSMC has enough capacity this year to ship enough 3nm wafers to satisfy Nvidia's demand either in AI or Gaming. Much less both.
4. Need. Like you said - since competitors are not really snapping on the heels there's little incentive for Nvidia to push the envelope in terms of node or other specs. It will likely be a repeat of RTX 20 series with segmentation and performance.

There have been little to no leaks about BM. I very much doubt it's coming this year; if it were, we should have seen performance leaks almost every week from OEMs.
And when it finally does launch, it seems it may be limited to a midrange card or cards. Possibly even smaller scope than RDNA4.

I would not say AMD has the least resources. They just bought a server company for billions of dollars and are constructing several new R&D centers.
As with Nvidia, they too will likely use N4P, which Zen 5 already uses. Not sure if it will launch within this year; there are performance leaks, but I'm betting it will be announced at CES 2025 and launch in Q1 2025.
I think we are more in agreement and maybe my post wasn't worded correctly. I agree with you that any change in requirement to use 3 nm would delay ANY GPU, data center or client, well into 2025. Rumors indicate that all Blackwell chips have been delayed:

NVIDIA Blackwell AI Chips Reportedly Delayed By Several Months, Culprit Being A Design Flaw (wccftech.com)

Rumors are saying design flaw but that could include using 4 nm.

There actually has been some talk from Intel about Battlemage.

Intel Arc "Battlemage" Graphics Card with 12GB of 19 Gbps GDDR6 Memory Surfaces | TechPowerUp
Intel Lunar Lake Technical Deep Dive - So many Revolutions in One Chip - Battlemage Xe2 Graphics | TechPowerUp
Intel Arc Battlemage "Xe2" GPUs Heading For Launch In 2024, Bring Big Performance Improvements (wccftech.com)

I didn't consider what you said about AMD.
FiredropsAnyone know what's the rough Geekbench OpenCL score for a 7800XT clock limited to ~2.1 GHz? I see 139,953, presumably at stock ~2.4 GHz clocks. 7800 XT is 4x as fast as this? How sensitive is this benchmark to clock speeds?
The scores in the leak should not be considered for anything. Most likely an early production version.

AMD Radeon RX 8000 RDNA4 graphics card with 56 CUs and 16GB memory appears on Geekbench - VideoCardz.com

#13
ARF
Daven: I think we are more in agreement and maybe my post wasn't worded correctly. I agree with you that any change in requirement to use 3 nm would delay ANY GPU, data center or client, well into 2025. Rumors indicate that all Blackwell chips have been delayed:

NVIDIA Blackwell AI Chips Reportedly Delayed By Several Months, Culprit Being A Design Flaw (wccftech.com)

Rumors are saying design flaw but that could include using 4 nm.
That means Blackwell will be a rebrand of Ada Lovelace with small to negligible performance improvements, hence it gets cancelled because the market doesn't demand such products.
#15
Daven
ARF: That means Blackwell will be a rebrand of Ada Lovelace with small to negligible performance improvements, hence it gets cancelled because the market doesn't demand such products.
That could be a 'stop-gap' solution until 3 nm is ready. Nvidia has used a progressively smaller process node going all the way back to the 28 nm days for each generation. I just don't see Blackwell proper working on the 4 nm node when Ada Lovelace already seemed to max it out.
#16
ARF
Daven: That could be a 'stop-gap' solution until 3 nm is ready. Nvidia has used a progressively smaller process node going all the way back to the 28 nm days for each generation. I just don't see Blackwell proper working on the 4 nm node when Ada Lovelace already seemed to max it out.
They could have simply put the chips in the right boxes. Remember how they cancelled the RTX 4080-12GB because of the negative press?!
Same - the RTX 4090 should be dumped down to RTX 5070, the RTX 4080 to RTX 5060, the RTX 4060 to RTX 5030, and here we go.
Without a single penny spent on stupid development processes, and no need to back-port from 3nm to 4nm.
#17
Tomorrow
ARF: It makes no sense for it not to be. What will be offered? A new chip at the reticle limit?
It makes perfect sense for it to be N4P. It makes very little sense for it to be 3nm, and thus far I have not seen convincing arguments as to why 3nm makes more sense; there are plenty of arguments for N4P. Also, enterprise hardware always gets priority these days. There's a near-zero chance that Nvidia would waste expensive 3nm wafers on gaming when they could sell those for AI at a 20x markup. It's the same reason those cards - or I should rather say accelerators - get first dibs on HBM, whereas gamers get G6X, or perhaps G7 if we're lucky.
Daven: There actually has been some talk from Intel about Battlemage.
Intel Arc "Battlemage" Graphics Card with 12GB of 19 Gbps GDDR6 Memory Surfaces | TechPowerUp
Intel Lunar Lake Technical Deep Dive - So many Revolutions in One Chip - Battlemage Xe2 Graphics | TechPowerUp
Intel Arc Battlemage "Xe2" GPUs Heading For Launch In 2024, Bring Big Performance Improvements (wccftech.com)
The first one is some sort of boot log; it only means the card is running, with no performance test.
The second one is about the Lunar Lake iGPU using the architecture, not discrete cards.
The third one is pure speculation and again has no performance tests.

RDNA4 seems much further along at this point, and if it launches in Q1 2025, then BM, which does not even have performance leaks yet, is likely even further away. I think the only product releasing this year using this architecture will be Lunar Lake, so Intel can say they released BM in 2024 for their investors.
ARF: That means Blackwell will be a rebrand of Ada Lovelace with small to negligible performance improvements, hence it gets cancelled because the market doesn't demand such products.
Why? It will be a repeat of the RTX 20 series, like I said before; there's precedent for this from 2018 already. Only the 2080 Ti moved performance forward over the 1080 Ti, and even that came at a massive price increase from $700 to $1,200, despite them claiming a fake $999 that never materialized. That was a move from 16nm to 12nm.
As for the reticle limit - yes, Nvidia pretty much always maxes this out for their 102 die, but gamers, even 4090 buyers, always get a roughly 10% cut die.
Daven: That could be a 'stop-gap' solution until 3 nm is ready. Nvidia has used a progressively smaller process node going all the way back to the 28 nm days for each generation. I just don't see Blackwell proper working on the 4 nm node when Ada Lovelace already seemed to max it out.
Ada is 5nm; N4P would still be a smaller node. We can see from Zen 4 (5nm) to Zen 5 (N4P) that there are meaningful efficiency gains even from this small node change.
#18
john_
Adding in the speculation of higher RT performance, this needs to be priced at $399, with a possible cheaper 192-bit model at, for example, 44-48 CUs for $299, to really make some noise.
If AMD comes out with a 56 CU model at $499 or more, it will mean that they are not yet ready to offer anything competitive to Nvidia and only wish to maintain a position in the discrete market to keep the Radeon brand alive.
#19
Daven
Tomorrow: It makes perfect sense for it to be N4P. It makes very little sense for it to be 3nm, and thus far I have not seen convincing arguments as to why 3nm makes more sense.
As for the reticle limit - yes. Nvidia pretty much always maxes this out for their 102 die, but gamers, even 4090 buyers always get roughly 10% cut die.

Ada is 5nm. N4P would still be a smaller node. We can see from Zen 4, 5nm to Zen 5, N4P that there are meaningful efficiency gains even from this small node change.
There is no need for a 3 nm argument, as it is nothing inherently different from printing transistors closer together. If Nvidia can use 3 nm (and eventually everyone will), then they will. The reticle limit and runaway power are very real concerns. Otherwise, why doesn't Nvidia use 28 nm for Blackwell? That's real cheap with tons of capacity available. :)

Ada Lovelace already uses the N4 process, which is an enhanced version of N5. It is not apparent whether the N4P node (another enhanced version of N5) will be enough of a step over the N4 found in Ada to meet Nvidia's performance goals. I'm guessing that it is not.
#20
john_
Tomorrow: Possibly even smaller scope than RDNA4.
I am expecting Intel's second-gen GPUs to be closer to RDNA4 than their first-gen GPUs were to RDNA2/3.
#21
evernessince
Tomorrow: Blackwell will not be 3nm for several reasons. Especially if it's releasing this year. People need to let go of this silly dream.
1. If Nvidia's AI Blackwell cards were N4P then there's a very small chance that gaming cards would have a node advantage.
2. 3nm is no doubt more expensive than N4P. If this expense could not be justified for AI then how could it be justified for gaming?
3. Capacity. I doubt TSMC has enough capacity this year to ship enough 3nm wafers to satisfy Nvidia's demand either in AI or Gaming. Much less both.
4. Need. Like you said - since competitors are not really snapping on the heels there's little incentive for Nvidia to push the envelope in terms of node or other specs. It will likely be a repeat of RTX 20 series with segmentation and performance.
You missed the most important reason: Yields

Both Samsung and TSMC are struggling with 3nm yields, with TSMC reportedly below 50% and Samsung below 30%. This is particularly important when you are talking about large GPU dies, where a single defect can cost Nvidia the entire die.

I don't see 3nm being suitable for their larger dies; they would have to massively increase prices to compensate for all the wafers they'd be throwing in the trash.
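The die-size point can be illustrated with a simple Poisson yield model, yield = exp(-D0 * A); the defect density and die areas below are illustrative guesses, not published TSMC numbers:

```python
import math

def die_yield(defect_density_per_cm2: float, area_cm2: float) -> float:
    """Poisson yield model: probability a die has zero defects."""
    return math.exp(-defect_density_per_cm2 * area_cm2)

d0 = 0.2                          # hypothetical defects/cm^2 on an immature node
small = die_yield(d0, 1.0)        # ~100 mm^2 midrange die
large = die_yield(d0, 6.0)        # ~600 mm^2 flagship-class die
print(f"{small:.0%} vs {large:.0%}")  # the big die loses far more candidates
```

At the same defect density, the large die's yield collapses, which is why immature nodes tend to carry small dies first.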
#22
Patriot
ARF: That means Blackwell will be a rebrand of Ada Lovelace with small to negligible performance improvements, hence it gets cancelled because the market doesn't demand such products.
I don't understand why people insist on talking out of extreme ignorance. No wonder AI gaslights... it was trained on us...

Blackwell has already been announced in significant detail. It is formed from two massive dies glued together with a brand new TSMC silicon bridge interconnect, making a 12 TB/s bridge with native cache coherency between the chips - AKA making monolithic performance out of two dies. And what do you know, an insanely complicated new tech didn't work on the first try, allegedly.
#23
ARF
Tomorrow: Ada is 5nm.
Wrong. It is 4 nm.

Tomorrow: Why? It will be a repeat of the RTX 20 series, like I said before; there's precedent for this from 2018 already. Only the 2080 Ti moved performance forward over the 1080 Ti, and even that came at a massive price increase from $700 to $1,200, despite them claiming a fake $999 that never materialized. That was a move from 16nm to 12nm.
As for the reticle limit - yes. Nvidia pretty much always maxes this out for their 102 die, but gamers, even 4090 buyers always get roughly 10% cut die.
RTX 20 was on a Samsung process, which was cheaper. There is a very significant difference this time.
#24
Tomorrow
Daven: There is no need for a 3 nm argument, as it is nothing inherently different from printing transistors closer together. If Nvidia can use 3 nm (and eventually everyone will), then they will. The reticle limit and runaway power are very real concerns. Otherwise, why doesn't Nvidia use 28 nm for Blackwell? That's real cheap with tons of capacity available. :)

Ada Lovelace already uses the N4 process, which is an enhanced version of N5. It is not apparent whether the N4P node (another enhanced version of N5) will be enough of a step over the N4 found in Ada to meet Nvidia's performance goals. I'm guessing that it is not.
The TPU database is wrong, because that is what's listed there.
en.wikipedia.org/wiki/Ada_Lovelace_(microarchitecture) - Wikipedia correctly lists it as 4N, so a custom 4nm variant.
Still, N4P would offer advantages even over this, because 4N is years old at this point.
evernessince: You missed the most important reason: Yields

Both Samsung and TSMC are struggling with 3nm yields, with TSMC reportedly below 50% and Samsung below 30%. This is particularly important when you are talking about large GPU dies, where a single defect can cost Nvidia the entire die.

I don't see 3nm being suitable for their larger dies; they would have to massively increase prices to compensate for all the wafers they'd be throwing in the trash.
Certainly not this year, and not for gamers. I can totally see Rubin launching for AI next year on 3nm, and AMD using it for Zen 5c, for example.
ARF: Wrong. It is 4 nm.


www.techpowerup.com/gpu-specs/geforce-rtx-4090.c3889
ARF: RTX 20 was on a Samsung process, which was cheaper. There is a very significant difference this time.
Wrong. RTX 30 series used Samsung's 8nm node. RTX 20 was using TSMC's "special" 12nm node.
#25
gffermari
  • TSMC 4N process (5 nm custom designed for Nvidia)[1] – not to be confused with N4