
AMD Radeon RX 7000-series RDNA3 GPUs Approach 4 GHz GPU Clocks

btarunr

Editor & Senior Moderator
AMD's upcoming Radeon RX 7000-series GPUs, based on the RDNA3 graphics architecture, are rumored to be capable of engine clocks (GPU clocks) close to 4 GHz. This is plausible, given that the current-gen RX 6000-series can already hit 3 GHz. AMD's play against the RTX 4090 would hence be a product with a 50%+ performance-per-Watt gain over the previous generation, a significantly increased shader count, an over-70% increase in memory bandwidth (a 384-bit memory bus running at 20 Gbps or more), faster and larger Infinity Cache, and, to top it all off, engine clocks approaching 4 GHz.
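For context on that bandwidth figure, a quick back-of-the-envelope check (a sketch only; the 256-bit/16 Gbps baseline is the RX 6900 XT, and the 384-bit/20 Gbps figure comes from the rumor, not a confirmed spec):

```python
# GDDR6 bandwidth: bus width (bits) / 8 * data rate (Gbps) = GB/s
def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

rdna2 = memory_bandwidth_gbs(256, 16)  # RX 6900 XT: 256-bit @ 16 Gbps
rdna3 = memory_bandwidth_gbs(384, 20)  # rumored Navi 31: 384-bit @ 20 Gbps

print(f"RDNA2: {rdna2:.0f} GB/s, rumored RDNA3: {rdna3:.0f} GB/s "
      f"(+{(rdna3 / rdna2 - 1) * 100:.0f}%)")
# -> RDNA2: 512 GB/s, rumored RDNA3: 960 GB/s (+88%), before counting Infinity Cache
```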



View at TechPowerUp Main Site | Source
 
I'm almost certain RDNA3 will be my next card, probably the 7800 XT. A 7900X3D + 7800 XT combo with 64 GB of CL30-ish DDR5-6000 RAM is sounding better by the day.
I hope that 64 GB isn't for gaming... or it will be underutilized by 70%+ during its lifetime.
 
Interesting if true! Ada vs RDNA3 is going to be a lot of fun it seems.
 
Interesting if true! Ada vs RDNA3 is going to be a lot of fun it seems.
Interesting, as in the conversations will go... AMD is 1% behind Nvidia in Ray-tracing, AMD is crap!
 
Interesting, as in the conversations will go... AMD is 1% behind Nvidia in Ray-tracing, AMD is crap!
More just a heated battle; competition benefits us all, and I won't limit myself to either camp, ever.

Naturally there are haters and cherry pickers on both sides, which is especially amusing considering the rich history of the two swapping leads in areas like VRAM, efficiency, features, holding the performance crown, etc. Spending a lot of time in forums and subreddits, I've seen a lot of takes.

In Ampere vs RDNA2, I've seen the following from the pro-RDNA2-over-Ampere folk, including but not limited to:
  • VRAM is king and certain Ampere cards will age poorly for this
  • Efficiency is everything
  • RDNA2 has more "raw power" than Ampere
  • RT is a gimmick
  • DLSS is a gimmick (admittedly now that FSR is out, this has largely subsided)
  • Nvenc/RTX voice/CUDA isn't a selling point
  • Ngreedia / they're evil / shady tactics / closed ecosystem / holding the industry back / never a dime again, etc.
In Ampere vs RDNA2, I've seen the following from the pro-Ampere-over-RDNA2 folk, including but not limited to:
  • Ampere is more forward-looking as an architecture
  • The VRAM amount is lower than desired, but largely suitable for the power band the respective cards occupy
  • Very efficient when tweaked
  • GDDR6X is a major cause of the power consumption, the core/s themselves aren't entirely inefficient
  • Equal "raw power" to RDNA2 but better RT
  • RT is the future and Ampere already does decently well with respect to each product's targeted res/framerate
  • Image reconstruction (DLSS) is amazing and without Nvidia pushing this new wave, we wouldn't have FSR/XeSS
  • AMD drivers still are meh according to a vocal minority
Let's get the popcorn ready for what Ada vs RDNA3 will bring, eh? Some of these points will surely remain, but a lot of the rest could change or equalize.
 
It seems we turn everything into us vs them.
 
4 GHz is almost CPU territory; a very high frequency. I only hope it won't be toasty because of it, not to mention power consumption. Anyway, considering the strides both companies are making in power consumption, this one will probably be sucking a lot of watts, just like the NV cards will. I hope I'm wrong.
 
4 GHz is almost CPU territory; a very high frequency. I only hope it won't be toasty because of it, not to mention power consumption. Anyway, considering the strides both companies are making in power consumption, this one will probably be sucking a lot of watts, just like the NV cards will. I hope I'm wrong.
Exactly my thoughts. For the first time ever, my next upgrade path will be decided by power consumption and heat, not performance or price.
 
4 GHz is almost CPU territory; a very high frequency. I only hope it won't be toasty because of it, not to mention power consumption. Anyway, considering the strides both companies are making in power consumption, this one will probably be sucking a lot of watts, just like the NV cards will. I hope I'm wrong.

I mean, go back to the 2600K and it's higher than that CPU, so yep, it is about that speed.
But I always find it interesting when stuff like this is mentioned (the title of the article, I mean), considering that by itself it is rather meaningless.

A 5 GHz Pentium D is slower than a 3.6 GHz Core 2 Duo (yes, an old example, I know). I guess we do have RDNA2 to compare it against somewhat, but still...
 
I mean, go back to the 2600K and it's higher than that CPU, so yep, it is about that speed.
But I always find it interesting when stuff like this is mentioned (the title of the article, I mean), considering that by itself it is rather meaningless.

A 5 GHz Pentium D is slower than a 3.6 GHz Core 2 Duo (yes, an old example, I know). I guess we do have RDNA2 to compare it against somewhat, but still...
It has been apparent that frequency is not everything. That was shown back in the Pentium D (if I remember correctly) and Athlon era: the Pentiums were clocked higher and yet were still slower. That is why Intel had to revise the architecture. Looking only at clocks will get you nowhere.
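A quick back-of-the-envelope sketch of that point; the per-clock throughput values below are made-up illustrative numbers, not measured IPC:

```python
# Throughput is roughly IPC * clock, so a higher clock alone guarantees nothing.
def relative_perf(ipc: float, clock_ghz: float) -> float:
    return ipc * clock_ghz

# Hypothetical per-clock figures purely for illustration
netburst_style = relative_perf(ipc=0.6, clock_ghz=5.0)  # long pipeline, very high clock
core2_style    = relative_perf(ipc=1.0, clock_ghz=3.6)  # shorter pipeline, better IPC

print(f"5.0 GHz low-IPC part: {netburst_style:.1f}, 3.6 GHz high-IPC part: {core2_style:.1f}")
# -> 3.0 vs 3.6: the lower-clocked design still wins in this toy model
```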
 
It has been apparent that frequency is not everything. That was shown back in the Pentium D (if I remember correctly) and Athlon era: the Pentiums were clocked higher and yet were still slower. That is why Intel had to revise the architecture. Looking only at clocks will get you nowhere.
Soon enough, looking only at performance will get you nowhere, either. Random Joe buys his GTRTX 797979500 XT Ti Super and realizes that his system with its noname 500 W power supply runs slow / won't start / burns the house down. Or do we live in that era already?
 
If this is true, I'm really happy for the engineering team at AMD.
If the die sizes are around the leaked levels, and taking into account the >50% performance/Watt claim, the design choices are really smart, with a focus on keeping die size & power consumption low; and according to this rumor, the 5 nm designs can also hit extremely high clocks if pushed.
Regarding the feature set, it won't be competitive with Ada. My impression is that it will (finally) be at Turing level in rendering features (level of RT, AI-based techniques like DLSS, etc. included; I also mean the % hit you take in frame rate when enabling forward-looking features like these) and maybe at Ampere level regarding the display & multimedia engine.
But this isn't bad if you consider that the consoles are the baseline, and those were introduced just 2 years before.
The performance of the reference Navi31 flagship relative to the 3090 Ti (100%) should be in the region below, imo, depending on the TBP that AMD targets; three examples:

TBP      Best case   Worst case
450 W    192%        173.5%
400 W    181%        163.5%
350 W    168%        152%
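Purely as an illustration of how a projection like that can be assembled from a perf/W claim, here is a minimal sketch; the baseline performance, baseline board power, and linear power scaling below are my own assumptions, not the poster's stated method or AMD figures:

```python
# Rough projection: new_perf ≈ baseline_perf * perf_per_watt_gain * (new_TBP / baseline_TBP)
BASELINE_PERF = 0.95   # assumed: a 6950 XT-class card at ~95% of a 3090 Ti
BASELINE_TBP  = 335    # W, 6950 XT reference board power
PERF_PER_WATT = 1.50   # the ">50% performance/Watt" claim taken at face value

for tbp in (450, 400, 350):
    projected = BASELINE_PERF * PERF_PER_WATT * (tbp / BASELINE_TBP)
    print(f"{tbp} W -> ~{projected * 100:.0f}% of a 3090 Ti")
# -> 450 W ~191%, 400 W ~170%, 350 W ~149%
```

The results land in the same general region as the table above; where exactly they fall depends entirely on the baseline and power-scaling assumptions chosen, which is roughly why a best/worst-case spread appears.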
 
4 gigahertz isn't enough, I need 4 gigawatts!
 
Soon enough, looking only at performance will get you nowhere, either. Random Joe buys his GTRTX 797979500 XT Ti Super and realizes that his system with its noname 500 W power supply runs slow / won't start / burns the house down. Or do we live in that era already?
There have already been multiple instability issues on Reddit that were traced to bad/weak PSUs, so even more power-hungry cards will probably cause more of them.
 
Congratz AMD, now lets have some real reviews, whaddya say?
 
Soon enough, looking only at performance will get you nowhere, either. Random Joe buys his GTRTX 797979500 XT Ti Super and realizes that his system with its noname 500 W power supply runs slow / won't start / burns the house down. Or do we live in that era already?
To be fair, average Joe is still going to buy it anyway, since I have found that those folks are stubborn and "know better". I find the 220 V in the EU far more useful nowadays than the 110 V in the US; normally I would argue furiously against 220 V, but now I have to bite my tongue. Maybe this situation with the global power problem will open some eyes, although I doubt it.
Joe will still buy it, then buy a new CPU with a new mobo and, for obvious reasons, a new PSU, or burn his house down and argue about pricing or being tricked when the burning thing happens. Then maybe they will open their eyes.
Either way, the companies selling the products win and nothing changes. Prices go up, consumption goes up (even though everyone claims how efficient technological advancement is).
 
What you can't do with instructions per cycle, you do with clock speed; nothing new.
 
I thought something like that was impossible...

When you de-couple chips in MCM-type designs, you have freedom in how fast or how hard you can run each chip, while constraining power to a given condition or target, and without the disadvantage of a large monolithic die.

Look at the Xbox Series X vs the PS5 GPU. The Xbox has a tad more shaders and runs slower than the PS5's GPU, which has fewer shaders but runs faster.

They perform about equal. The PS5 has more power budget available per shader because of that.

I think AMD is pulling something similar here as well.
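For reference, here is that wide-and-slow vs narrow-and-fast trade-off in numbers, using the consoles' published figures (a rough paper comparison only; peak TFLOPS is obviously not the whole story):

```python
# Peak FP32 ≈ shaders * 2 (FMA) * clock; RDNA2 consoles use 64 shaders per CU
def peak_tflops(compute_units: int, clock_ghz: float, shaders_per_cu: int = 64) -> float:
    return compute_units * shaders_per_cu * 2 * clock_ghz / 1000

series_x = peak_tflops(52, 1.825)  # more shaders, lower clock
ps5      = peak_tflops(36, 2.23)   # fewer shaders, higher (variable) clock

print(f"Series X: {series_x:.1f} TFLOPS, PS5: {ps5:.1f} TFLOPS")
# -> ~12.1 vs ~10.3 TFLOPS on paper, yet real-game results land close together
```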
 
It's starting to get really exciting and dramatic, and TEAM ARC is definitely not in the competition it seems :D
Ada Vs RDNA
 
A lot of people look at the AMD cards from the gamer's point of view, and they are missing something. The major rendering programs support only CUDA, or are better optimized for CUDA.
Blender ditched OpenCL support with the 3.0 build, and AMD was forced to introduce a new API, called HIP, that works only with their modern GPU series. And this API is slower than CUDA, just like OpenCL before it.
Some years ago AMD released a rendering engine called Radeon ProRender, but it never reached the popularity of stuff like V-Ray, Redshift, RenderMan, etc.
Basically, AMD is cut out of an entire segment of the market, and if someone needs to make complex renderings for their job, AMD can't be taken into consideration.

Not just this. NVIDIA cards can be joined together with NVLink, and the rendering program sees one single card, meaning that two 24 GB cards appear as one with 48 GB of memory. What used to be the limit of GPU rendering, the small amount of memory, is not a problem anymore.
And NVIDIA cards have Tensor cores, which can be used by games too.
In other words, AMD is years behind, and unless they pay billions to get their GPUs fully supported by the major rendering programs, they will never keep up with NVIDIA in the workstation GPU market.
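As a concrete illustration of that backend split, a headless Blender 3.x render script has to pick the Cycles device type explicitly; this is a rough sketch using Blender's bpy Python API (run inside Blender, e.g. blender --background --python script.py; device availability depends on the build and installed drivers):

```python
import bpy  # only available inside Blender's bundled Python

# Select the Cycles GPU backend: CUDA/OPTIX on NVIDIA, HIP on modern Radeons
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "HIP"        # or "CUDA" / "OPTIX" for NVIDIA hardware
prefs.get_devices()                      # refresh the device list for that backend

for device in prefs.devices:             # enable every device the backend exposes
    device.use = True
    print(device.name, device.type, device.use)

bpy.context.scene.cycles.device = "GPU"  # render the active scene on the GPU
```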
 
When you de-couple chips in MCM-type designs, you have freedom in how fast or how hard you can run each chip, while constraining power to a given condition or target, and without the disadvantage of a large monolithic die.

Look at the Xbox Series X vs the PS5 GPU. The Xbox has a tad more shaders and runs slower than the PS5's GPU, which has fewer shaders but runs faster.

They perform about equal. The PS5 has more power budget available per shader because of that.

I think AMD is pulling something similar here as well.
Yeah... But according to the leaks, the GPU chip itself is monolithic; only the 3D cache is separated into smaller chiplets.
 