
AMD Radeon "Navy Flounder" Features 40CU, 192-bit GDDR6 Memory

Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
It's not necessarily a clock speed increase; it could be better IPC at the same clock speeds, which would also drop power consumption. It's worth noting that AMD's transistor density is still quite a bit lower than Nvidia's, so I wouldn't at all say it's impossible or unprecedented. Also look at what Intel has done with 14nm+++++++++++++++ to counteract, hold its ground, and retain the single-threaded high-frequency performance advantages it still carries. Sure, that's happened over a longer period of time, but there's no question AMD has had more R&D emphasis devoted to Ryzen in the last 5 years or so, while gradually shifting more back towards Radeon at the same time. I feel RDNA was the first major pushback from AMD on the graphics side, and RDNA2 could be a continuation of it. Nvidia's Ampere coinciding with a node shrink makes that more difficult, but let's face it: we know AMD didn't eke out all the performance that can be tapped from 7nm.

Nvidia has a higher transistor count on a larger node for starters, and we've seen what Intel's done with 14nm+++++++++++++ as well. The idea that 7nm isn't maturing and hasn't matured is just asinine; it definitely has improved from a year ago, and it's entirely feasible that AMD can squeeze more transistors into the design, at least as many as Nvidia's previous designs, or more. We can only wait and see what happens. Let's also not forget AMD used to be a fabrication company and spun off GlobalFoundries; the same can't be said of Nvidia. AMD could certainly be working closely with TSMC on improvements to the node itself for their designs, and we saw some signs that they did exactly that with Ryzen, working alongside TSMC to incorporate node tweaks and get more out of the chip designs on the manufacturing side.

It's just one of those things where everyone is going to have to wait and see what AMD came up with for RDNA2. Will it underwhelm, overwhelm, or be about what you'd expect from AMD, all things considered? Nvidia is transitioning to a smaller node, so the ball is more in their court in that sense; however, AMD's transistor count is lower, so it's definitely not that simple. If AMD incorporated something clever and cost-effective, they could certainly make big leaps in performance and efficiency, and we know AMD's memory compression already trails Nvidia's, so they have room to improve there as well. Worth noting is that AMD is transitioning toward RTRT hardware, but we really don't know to what extent, or how much they plan to invest in it on this initial push. I think if they match a non-SUPER RTX 2080 on the RTRT side, they're honestly doing fine; RTRT isn't going to take off overnight, and the RDNA3 design can be more aggressive. Things will have changed a lot by then; hopefully it'll be on 5nm by that point, and perhaps HBM costs will have improved.
... that's not how this works. Please actually read the post when you are responding, as you are getting both your facts and the data from these leaks completely mixed up. This needs addressing point by point:
  • The recent leaks specifically mention clock speeds. Whether IPC has changed is thus irrelevant. 2.5GHz is 2.5GHz unless AMD has redefined what clock speed means (which they haven't). That's a 30-50% increase in clock speed from the fastest RDNA 1 SKU. If the rumors are accurate about this and about the power requirements - 150W! - and assuming IPC or perf/Tflop is the same, that's a more than 100% increase in perf/W before any IPC increases.
  • An increase in both absolute performance and performance per watt without moving to a new node is unprecedented. We have not seen a change like that on the same node for at least the past decade of silicon manufacturing, and likely not even the decade before that. Both silicon production and chip design are enormously complex and highly mature processes, making revolutionary jumps like this extremely unlikely. Can it happen? Sure! Is it likely to? Not at all.
  • The relationship between clock speed and transistor density is far too complex to be used in an argument the way you are doing. Besides, I never made any comparison to Intel or Nvidia, only to AMD's own previous generation, which is made on the same node (though it is now improved) and is based on an earlier version of the same architecture. We don't know the tweaks made to the node, nor how changed RDNA 2 is from RDNA 1, but assuming the combination is capable of doubling perf/W is wildly optimistic.
  • Your example from Intel actually speaks against you: they spent literally four years improving their 14nm node, and what did they get from it? No real improvement in perf/W (outside of low power applications at least), but higher boost clocks and higher maximum power draws. They went from 4c/8t 4.2GHz boost/4GHz base at 91W (with max OC somewhere around 4.5-4.7GHz) to 10c/20t 3.7GHz base/various boost speeds up to 5.3GHz at 125W to sustain the base clock or ~250W for boost clocks (max OC around 5.3-5.4GHz). For a more apples to apples comparison, their current fastest 4c/8t chip is the 65W i3-10320 at 3.7GHz base/4.6GHz 1c/4.4GHz all-core. That's a lower TDP, but it still needs 90W for its boost clocks, and the base clock is lower. IPC has not budged. So, Intel, one of the historically best silicon manufacturing companies in the world, spent four years improving their node and got a massive boost in maximum power draw and thus maximum clocks, but essentially zero perf/W improvement. But you're expecting AMD to magically advise TSMC into massively improving their node in a single year?
  • There's no doubt AMD is putting much more R&D effort into Radeon now than 3-5 years ago - they have much more cash on hand and a much stronger CPU portfolio, so that stands to reason. That means things ought to be improving, absolutely, but it does not warrant this level of optimism.
  • I never said 7nm wasn't maturing. Stop putting words in my mouth.
You're arguing as if I'm being extremely pessimistic here or even saying I don't expect RDNA 2 to improve whatsoever, which is a very fundamental misreading of what I've been saying. I would be very, very happy if any of this turned out to be true, but this looks far too good to be true. It's wildly unrealistic. And yes, it would be an unprecedented jump in efficiency - bigger even than Kepler to Maxwell (which also happened on the same node). If AMD could pull that off? That would be amazing. But I'm not pinning my hopes on that.
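For what it's worth, the "more than 100%" figure is simple arithmetic on the rumored numbers. A minimal sketch, taking the rumored ~2.5 GHz / 150 W against the 5700 XT's ~1.9 GHz boost / 225 W board power, and assuming performance scales linearly with clock (same CU count, same IPC):

```python
# Back-of-envelope perf/W comparison between the rumored RDNA 2 figures
# and the RX 5700 XT. All new-card inputs are rumors, not official specs.

def perf_per_watt_gain(old_clock_ghz, old_power_w, new_clock_ghz, new_power_w):
    """Fractional perf/W improvement, assuming performance scales
    linearly with clock speed (same CU count, same IPC)."""
    old_ppw = old_clock_ghz / old_power_w
    new_ppw = new_clock_ghz / new_power_w
    return new_ppw / old_ppw - 1.0

# RX 5700 XT: ~1.9 GHz boost at 225 W board power
# Rumor: ~2.5 GHz at 150 W
gain = perf_per_watt_gain(1.9, 225, 2.5, 150)
print(f"Implied perf/W improvement: {gain:.0%}")  # ~97%
```

Any IPC gain on top of the clock increase pushes the implied improvement past 100%, which is the point being made above.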
 
Joined
Mar 21, 2016
Messages
2,197 (0.74/day)
Personally I'm not pinning my hopes on anything, nor am I expecting anything. We won't know anything definitively until the dust settles. What I was alluding to is that the node shrink isn't the only metric to consider when looking at the improvements AMD can make or realize with RDNA2. With that in mind, 30% is more achievable than 50%, without a doubt. We don't know anything about the clock speeds or how they might be handled and achieved; perhaps it's a short burst clock, only sustained briefly, similar to Intel's turbo, and perhaps not across all stream cores. We know just about nothing about it officially; AMD is being tight-lipped and playing their cards close.
 
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Personally I'm not pinning my hopes on anything, nor am I expecting anything. We won't know anything definitively until the dust settles. What I was alluding to is that the node shrink isn't the only metric to consider when looking at the improvements AMD can make or realize with RDNA2. With that in mind, 30% is more achievable than 50%, without a doubt. We don't know anything about the clock speeds or how they might be handled and achieved; perhaps it's a short burst clock, only sustained briefly, similar to Intel's turbo, and perhaps not across all stream cores. We know just about nothing about it officially; AMD is being tight-lipped and playing their cards close.
It's true that we don't know anything about how these future products will work, but we do have some basic guidelines from the history of silicon manufacturing. For example, your comparison to Intel's boost strategy is misleading: in Intel's case, boost is a short-term clock speed increase that bypasses baseline power draw limits but must operate within the thermal and voltage stability limits of the silicon (otherwise it would crash, obviously). Thus, the only thing stopping the chip from operating at that clock speed all the time is power and cooling limitations, which is why desktop chips on certain motherboards and with good coolers can often run at these speeds 24/7.

GPUs already do this - that's why they have base and boost speed specs - but no GPU has ever come close to 2.5 GHz with conventional cooling. RDNA 1 is barely able to exceed 2 GHz when overclocked on air. It wouldn't matter whatsoever whether a boost spec going higher than this was for a short or long period: it would crash. It would not be stable, no matter what. You can't bypass stability limits by shortening the time spent past those limits, as you can't predict when the crash will happen.

So, reaching 2.5GHz, no matter the duration, would mean exceeding the maximum stable clock of RDNA 1 by nearly 25%. Without a node change, just a tweaked node. Would that alone be possible? Sure. Not likely, but possible. But it would cost a lot of power, as we have seen from the changes Intel has made to their 14nm node to reach their high clocks: higher clocks require higher voltages, which increase power draw.

The issue comes with the leaks also saying that this will happen at 150W (170W in other leaks), down from 225W for a stock 5700 XT and more like 280W for one operating at ~2GHz. Given that power draw on the same node increases more than linearly as clock speeds increase, that would mean a massive architectural and node efficiency improvement on top of significant tweaks to the node to reach those clock speeds at all. This is where the "this isn't going to happen" perspective comes in, as the likelihood for both of these things coming true at the same time is so small as to render it impossible.
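The superlinear scaling follows from the standard dynamic-power approximation P ∝ C·V²·f, with voltage having to rise as frequency approaches the top of the voltage/frequency curve. A sketch, where the voltage-vs-frequency exponent is an illustrative assumption, not a measured value:

```python
# Illustrative dynamic-power model: P ~ C * V^2 * f, with voltage rising
# roughly in proportion to frequency near the top of the V/f curve.
# The v_sensitivity exponent is an assumption for illustration only.

def relative_power(f_new, f_old, v_sensitivity=1.0):
    """Power ratio when clock goes from f_old to f_new, assuming
    V scales as (f_new/f_old)**v_sensitivity. With v_sensitivity=1,
    power scales with the cube of frequency."""
    f_ratio = f_new / f_old
    v_ratio = f_ratio ** v_sensitivity
    return f_ratio * v_ratio**2

# Going from ~2.0 GHz to 2.5 GHz under a cube-law assumption:
ratio = relative_power(2.5, 2.0)
print(f"~{ratio:.2f}x the power")  # ~1.95x
```

Even in the unrealistically optimistic case where voltage doesn't rise at all (v_sensitivity=0), power still grows linearly with clock, which is why hitting 2.5 GHz at a lower board power than the 5700 XT would require efficiency gains far beyond the clock bump itself.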

And remember, these things stack, so we're not talking about the 30-50% numbers you're mentioning here (that's clock speed alone); we're talking about an outright >100% increase in perf/W if the rumored numbers are all true. That, as I have said repeatedly, is completely unprecedented in modern silicon manufacturing. I have no problem thinking that AMD's promised "up to 50%" perf/W increase might be true (especially given that they didn't specify the comparison, so it might be between the least efficient RDNA 1 GPU, the 5700 XT, and an ultra-efficient RDNA 2 SKU similar to the 5600 XT). But even a sustained 50% improvement would be extremely impressive and would far surpass what can typically be expected without a node improvement. Remember, even Maxwell only beat Kepler by ~50% perf/W, so if AMD is able to match that, it would be one hell of an achievement. Doubling it is out of the question. I would be very, very happy if AMD managed a 50% overall improvement, but even 30-40% would be very, very good.
 
Joined
Jul 10, 2015
Messages
749 (0.23/day)
Location
Sokovia
System Name Alienation from family
Processor i7 7700k
Motherboard Hero VIII
Cooling Macho revB
Memory 16gb Hyperx
Video Card(s) Asus 1080ti Strix OC
Storage 960evo 500gb
Display(s) AOC 4k
Case Define R2 XL
Power Supply Be f*ing Quiet 600W M Gold
Mouse NoName
Keyboard NoNameless HP
Software You have nothing on me
Benchmark Scores Personal record 100m sprint: 60m
A man can dream...
 
Joined
May 15, 2020
Messages
697 (0.48/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
Just give it one week guys, we'll know more. I'm pretty sure some of the latest leaks are fakes.
 
Joined
May 15, 2020
Messages
697 (0.48/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
Next week is the Zen 3 announcement. End of October for the Navi 2x thing.
Indeed, but I'm hoping for some non-fake leaks to come out in the following days ;) , the official launch is still a bit far off.

Anyways, seeing how this launch feels so rushed by Nvidia, I don't think their marketing just got dumb all of a sudden. I think they have better info than us and they feel some pressure. I do not think that pressure comes from consoles, because 400 USD consoles do not compete with 800 USD graphics cards, so I think the pressure must come from RDNA2. But the numbers we've seen in the past 5 days really look too good to be true; it's not even a hype train anymore, it's a hype jet.
 
Joined
Apr 24, 2020
Messages
2,561 (1.75/day)
Indeed, but I'm hoping for some non-fake leaks to come out in the following days ;) , the official launch is still a bit far off.

Anyways, seeing how this launch feels so rushed by Nvidia, I don't think their marketing just got dumb all of a sudden. I think they have better info than us and they feel some pressure. I do not think that pressure comes from consoles, because 400 USD consoles do not compete with 800 USD graphics cards, so I think the pressure must come from RDNA2. But the numbers we've seen in the past 5 days really look too good to be true; it's not even a hype train anymore, it's a hype jet.

The NVidia thing is just "anti-leak stupidity" IMO. They didn't give working drivers to any board partner pre-launch, because they were too worried about leaks. I mean, I understand the anti-leak mindset. But NVidia went too far, and it affected their launch partners and diminished the quality of their drivers (temporarily, so far, but it's not a good look regardless).
 
Joined
May 15, 2020
Messages
697 (0.48/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
The NVidia thing is just "anti-leak stupidity" IMO. They didn't give working drivers to any board partner pre-launch, because they were too worried about leaks. I mean, I understand the anti-leak mindset. But NVidia went too far, and it affected their launch partners and diminished the quality of their drivers (temporarily, so far, but it's not a good look regardless).
Well, I agree that the driver/crash/POSCAP issue is mostly that, that and the fact that the Samsung node is not so awesome and they have pushed it very close to its maximum capabilities, unlike in past generations.

But there's very little availability of the cards anywhere, and I can think of only 2 reasons for that: either they launched in a big hurry without building up any stock (there must've been like 20k total cards worldwide), or their yields are much lower than expected. But I feel they should've known what the yields were like since August, at least.
 
Joined
Apr 24, 2020
Messages
2,561 (1.75/day)
But there's very little availability of the cards anywhere, and I can think of only 2 reasons for that: either they launched in a big hurry without building up any stock (there must've been like 20k total cards worldwide), or their yields are much lower than expected. But I feel they should've known what the yields were like since August, at least.

Think economics: Luxury good vs normal good vs inferior good.

A big part of the draw of these things is having an item no one else has. This plays into NVidia's marketing strategy, and is overall beneficial to NVidia IMO. It's how you market luxury goods. If anything, AMD should learn from NVidia and work towards that kind of marketing. If everyone had a thing, it isn't a luxury anymore. It's just normal.

AMD, for better or worse, seems to be using an inferior-good strategy. IMO, that diminishes the brand a bit, but it does make AMD's stuff a bit more personable. I don't believe that the average Joe buys a $799 GPU, and seeing AMD consistently release stuff in the $150 to $400 market is a laudable goal (especially because NVidia seems to ignore that market). The argument AMD makes is almost always price/performance, but that just solidifies the idea of "inferior goods" in people's minds. It's subconscious, but that's the effect.
 
Joined
May 15, 2020
Messages
697 (0.48/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
Think economics: Luxury good vs normal good vs inferior good.

A big part of the draw of these things is having an item no one else has. This plays into NVidia's marketing strategy, and is overall beneficial to NVidia IMO. It's how you market luxury goods. If anything, AMD should learn from NVidia and work towards that kind of marketing. If everyone had a thing, it isn't a luxury anymore. It's just normal.

AMD, for better or worse, seems to be using an inferior-good strategy. IMO, that diminishes the brand a bit, but it does make AMD's stuff a bit more personable. I don't believe that the average Joe buys a $799 GPU, and seeing AMD consistently release stuff in the $150 to $400 market is a laudable goal (especially because NVidia seems to ignore that market). The argument AMD makes is almost always price/performance, but that just solidifies the idea of "inferior goods" in people's minds. It's subconscious, but that's the effect.
Meh, Nvidia sells millions of these cards each generation, and I'm pretty sure that if there were 100k 3080s they would sell fast at prices significantly above MSRP, but I think nobody has them, not even Nvidia. But I guess we'll know more soon.
 