Saturday, August 22nd 2020

NVIDIA GeForce RTX 3090 Founders Edition Potentially Pictured: 3-slot Behemoth!

The rumor mill takes no weekend break, and it has churned out photos of what appears to be an NVIDIA Founders Edition version of the upcoming GeForce RTX 3090 next to the equivalent RTX 2080 FE, with the latter looking like a toy beside the massive triple-slotter. The cooler uses the same design we discussed in detail in June, with the unique obverse dual-fan + aluminium heatsink arrangement seen in the images below. We also covered alleged PCB photos, in case you missed them, and everything lines up with the most recent leaks. The only difference here is that pricing for the RTX 3090 FE is claimed to be $1,400, a far cry from the $2,000 mark we saw for certain aftermarket offerings in the making, yet still significantly higher than the previous generation. That is a worrying trend we eagerly await to see justified by performance, before we even get into case-compatibility concerns with the increased length. Either way, if the images below are accurate, we are equally curious about the cooling capability and how it affects partner solutions and pricing.
Source: Twitter user @GarnetSunset

183 Comments on NVIDIA GeForce RTX 3090 Founders Edition Potentially Pictured: 3-slot Behemoth!

#151
siki
Wilson
How about getting a proper case for $60 if you're throwing $1,000+ at a VGA?
If the PC case doesn't fall apart while sitting there doing nothing, it's good enough.
Posted on Reply
#152
Valantar
EarthDog
I've been singing the same tune for several weeks... if this power rumor is true (a 3-slot monster seems to confirm it), AMD doesn't stand a chance of competing with the non-Titan flagship. Unless this silicon is completely borked, how does a new architecture and a die shrink at 300W+ compare against a new arch on a tweaked process? Remember, the 5700 XT was 45% slower than a 2080 Ti. If Ampere is 50% faster, then AMD needs to be ~100% faster to compete. We haven't seen a card come close, from any camp, ever. That said, maybe RTRT performance is where the big increase is... who knows.

So long as AMD's card lands between them and is notably cheaper, it will be a win for everyone. But I just don't think the RDNA 2 flagship will be within 15%.
Hey, for once we disagree on something! Not necessarily in your conclusion (I also find it unlikely that AMD would be able to compete with a 350-400W Nvidia card, assuming +~10% IPC and a reasonable efficiency boost from the new process node) but mostly in your reasoning. Firstly, I find it unlikely that AMD will compete at this level mainly because I find it unlikely that they'll make a GPU this (ridiculously) power hungry. (As you said, assuming the power draw rumors are true, obviously.) Beyond that though, you're comparing a 215-225W GPU against a 275-300W GPU and extrapolating from that as if both were equal, which is obviously not true. The 5700 XT was ~45% slower but also used 28% less power. On this point it's also worth noting that TPU measures power at 1080p, where the performance delta is just 35%, and that the 2080 Ti is one of the most efficient renditions of Turing while the 5700 XT is the least efficient rendition of RDNA by quite a bit. AMD has also promoted "up to 50%" improved perf/W for RDNA 2, which one should obviously take with a heaping pile of salt (does that, for example, mean up to 50% from the 5700 XT, or from any RDNA 1 GPU?), but it must also be correct in some sense lest they be subjected to yet another shareholder lawsuit. So IMO it's reasonable to expect a notable perf/W jump from RDNA overall even if it's just an improved arch on a tweaked node. Will it be on par with Ampere? I don't quite think they'll be there, but I think it will be closer than we're used to seeing. Which could also explain Nvidia deciding to make a power move of a GPU like this to cement themselves as having the most powerful GPU despite much tougher competition, as AMD would be very unlikely to gamble on a >300W GPU given their history in GPUs for the past half decade or so.
Posted on Reply
#153
medi01
Valantar
The 5700 XT was ~45% slower
If the 2080 Ti is 45% faster, then the 5700 XT is ~31% slower (1/1.45 ≈ 0.69).
Just saying :)
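The "X% faster" versus "Y% slower" conversion trips people up constantly because the two percentages use different baselines. A quick illustrative sketch (plain Python, using the numbers from this thread):

```python
# "A is X% faster than B" and "B is Y% slower than A" are not the same
# number, because each percentage is measured against a different baseline.

def slower_pct(faster_pct: float) -> float:
    """If A is faster_pct% faster than B, return how much slower B is than A (%)."""
    a_over_b = 1 + faster_pct / 100   # A's performance with B normalized to 1
    return (1 - 1 / a_over_b) * 100   # B's shortfall measured against A

# 2080 Ti ~45% faster than the 5700 XT:
print(f"{slower_pct(45):.0f}% slower")  # -> 31% slower
```

Note the asymmetry: "100% faster" corresponds to only "50% slower" the other way around.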
Posted on Reply
#154
EarthDog
The 5700 XT is a 225W card in reference form. A 2080 Ti is 260W in FE form (not reference, which is 250W)... a 17% difference. Sweetspot/efficiency versus overextending is totally irrelevant... it is what it is for each. In fact I would think that supports my thoughts more, no? If AMD is already reaching and overextending themselves to be 45% below the 2080 Ti (I get it, they never intended to compete there; RDNA 2 will be a true high end) and NV isn't... so what if they try the same thing with Big Navi, running it out of the sweetspot again to be closer? I don't think many will have an issue with a 250W Big Navi... but it had better be within 20% of flagship Ampere. I think few doubt it will be more efficient, but it isn't catching up to within 10-15%, if I had to guess.

AMD has a hell of a leap to catch up and be competitive. I think they'll do it... but they'll be on the outside looking in by at least 10-15%. It will be slower, cheaper, and use less power... AMD's motto on the GPU side.
Posted on Reply
#155
Jayp
It's a big card, but mostly when compared to a reference-size card. If you put it next to AIB 2080 Ti cards it probably won't look so huge. Also, just because the cooler is that big doesn't necessarily mean it is the bare-minimum cooling often found on Nvidia reference cards. It is possible that Nvidia wanted an AIB-competitive factory cooler this time around instead of the barely sufficient reference cooler on the 2080 Ti.
Posted on Reply
#156
Legacy-ZA
I am really curious to see what Big Navi has to offer, I am sure we will start to see leaks from the AMD camp after the 1st of September.
Posted on Reply
#157
lexluthermiester
JalleR
Well, as a temp setup while building hard tubing in my primary PC, I put my Asus ROG Strix OC 1080 Ti in a T3500. I had to tweak the closing mechanism for the cards a little, but it worked like a charm :)
That can be a problem too. How did you fix it?
Posted on Reply
#158
Ubersonic
lexluthermiester
That beast will not be going into my Dell T3500. It literally won't fit physically.
Actually from the image I'm pretty sure it would. It looks to be ~40-50mm longer than the pictured 2080 and 10-15mm higher, so should fit in a T3500 physically (just) but you will have to remove the left HDD (if fitted) and take out the blanking plate from the HDD bay (it's removable for fitting expansion cards).

Having said that, you probably wouldn't want to do it, as a 5700 XT already bottlenecks like mad in a T3500 (even with a 3.8GHz 6c/12t CPU), so this GPU would be choked to death.
The GTX 480 might have a successor with respect to power drawn and heat output.
Isn't that exactly what the GTX 580 was? It beat the GTX 480 in both; hell, some AIB 580s were sucking over 100W more than the 480 lol
Posted on Reply
#159
lexluthermiester
Ubersonic
Actually from the image I'm pretty sure it would.
It will not, unless I modify the case.
Ubersonic
Having said that, You probably wouldn't want to do it as a 5700XT will bottleneck like mad in a T3500 (even with a 3.8GHz 6c12t CPU)
I currently have an RTX2080 that is only CPU bottlenecked in some games. It's not severe. However...
Ubersonic
so this GPU would be choked to death.
...this is correct, which is why I will be building a new system. I only started using the T3500 as a daily driver on a challenge and then was impressed enough by its performance that I just kept it. It is starting to show its age these days and I'm jonesing for a Threadripper... with 32GB of DDR4-3800. I'm likely going to put an RTX 30xx in that system.
Posted on Reply
#160
Valantar
EarthDog
5700XT is a 225W card in reference form. A 2080Ti is 260W in FE (not reference, = 250W) form....a 17% difference. It is totally irrelevant sweetspot/efficiencyversus over extending... it is what it is for each. In fact I would think that supports my thoughts more, no? If AMD is already reaching and over extending themselves to be 45% below 2080Ti (I get it, never intended to compete there, RDNA2 will be a true high-end) and NV isn't....so what if they try the same thing with big navi running it out of the sweetspot again to be closer? I don't think many will have an issue with a 250W BNavi... but it had better be within 20% of flagship Ampere. I think few doubt it will be more efficient but it isn't catching up within 10-15% if I had to guess.

AMD has a hell of a leap to catch up and be competitive. I think they'll do it.. but they'll be on the outside looking in by at least 10-15%. It will be slower, cheaper and use less power...AMD's motto on the GPU side.
Average gaming power vs. average gaming power in TPU's benchmarks, they are 219W vs. 273W, which makes the 5700 XT consume 80% of the 2080 Ti's power, or the 2080Ti consume 125% the power of the 5700 XT. I guess I should have looked up the numbers more thoroughly (saying 215 vs. 275 did skew my percentages a bit), but overall, your 17% number is inaccurate. Comparing TDPs between manufacturers isn't a trustworthy metric due to the numbers being defined differently.
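Spelling out the ratio arithmetic (a trivial sketch using the TPU average gaming power figures quoted above):

```python
# Power ratios between the 5700 XT and 2080 Ti, using the quoted
# TPU average gaming power numbers.
xt_w, ti_w = 219, 273  # watts

print(f"5700 XT uses {xt_w / ti_w:.0%} of the 2080 Ti's power")  # -> 80%
print(f"2080 Ti uses {ti_w / xt_w:.0%} of the 5700 XT's power")  # -> 125%
```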

As for the 5700 being stretched in efficiency somehow proving they're further behind: obviously not, which you yourself mention. The 5700 XT is a comparatively small die, which AMD chose to push the clocks of to make it compete at a higher level than it was likely designed for originally. The 2080 Ti on the other hand is a classic wide-and-(relatively-)slow big die GPU, which gives it plenty of OC headroom if the cooling is there, but also makes it operate in a more efficient DVFS range. AMD could in other words compete better simply by building a wider chip and clocking it lower. Given just how much more efficient the 5700 non-XT is (166W average gaming power! With the 5700 XT just winning by ~14%!) we know even RDNA 1 can get a lot more efficient than the 5700 XT (not to mention the 5600 XT, of course, which beats any Nvidia GPU out there for perf/W). And the 2080 Ti still can't get 2x the performance of the 5700 non-XT (+54-76% depending on resolution). Which tells us that AMD could in theory build a slightly downclocked double 5700 non-XT and clean the 2080 Ti's clock at the same power, as long as the memory subsystem keeps up. Of course they never built such a GPU, and it's entirely possible there are architectural bottlenecks that would have prevented this scaling from working out, but the efficiency of the architecture and node is there. And RDNA 2 GPUs promise to improve that both architecturally and from the node. We also know that they can clock to >2.1GHz even in a console (which means limited power delivery and cooling), so there's definitely improvements to be found in RDNA 2.

The point being: if AMD is finally going to compete in the high end again, they aren't likely to go "hey, let's clock the snot out of this relatively small GPU" once again, but rather design as wide a GPU as is reasonable within their cost/yield/balancing/marketability constraints. Then they might go higher on clocks if it looks like Nvidia are pulling out all the stops, but I would be downright shocked if the biggest big Navi die had less than 80 CUs (all might not be active for the highest consumer SKU of course). They might still end up releasing a >350W clocked-to-the-rafters DIY lava pool kit, but if so that would be a reactive move rather than one due to design constraints (read: a much smaller die/core count than the competition) as in previous generations (RX 590, Vega 64, VII, 5700 XT).

I don't think anyone will mind a 250W Big Navi being more than 20% behind Ampere if said Ampere is 350W or more. On the other hand, if it was more than 20% Ampere at the same power? That would be a mess indeed - but it's looking highly unlikely at this point. If Nvidia decided to go bonkers with power for their high end card, that's on them.
Posted on Reply
#161
EarthDog
Valantar
Average gaming power vs. average gaming power in TPU's benchmarks, they are 219W vs. 273W, which makes the 5700 XT consume 80% of the 2080 Ti's power, or the 2080Ti consume 125% the power of the 5700 XT. I guess I should have looked up the numbers more thoroughly (saying 215 vs. 275 did skew my percentages a bit), but overall, your 17% number is inaccurate. Comparing TDPs between manufacturers isn't a trustworthy metric due to the numbers being defined differently.

As for the 5700 being stretched in efficiency somehow proving they're further behind: obviously not, which you yourself mention. The 5700 XT is a comparatively small die, which AMD chose to push the clocks of to make it compete at a higher level than it was likely designed for originally. The 2080 Ti on the other hand is a classic wide-and-(relatively-)slow big die GPU, which gives it plenty of OC headroom if the cooling is there, but also makes it operate in a more efficient DVFS range. AMD could in other words compete better simply by building a wider chip and clocking it lower. Given just how much more efficient the 5700 non-XT is (166W average gaming power! With the 5700 XT just winning by ~14%!) we know even RDNA 1 can get a lot more efficient than the 5700 XT (not to mention the 5600 XT, of course, which beats any Nvidia GPU out there for perf/W). And the 2080 Ti still can't get 2x the performance of the 5700 non-XT (+54-76% depending on resolution). Which tells us that AMD could in theory build a slightly downclocked double 5700 non-XT and clean the 2080 Ti's clock at the same power, as long as the memory subsystem keeps up. Of course they never built such a GPU, and it's entirely possible there are architectural bottlenecks that would have prevented this scaling from working out, but the efficiency of the architecture and node is there. And RDNA 2 GPUs promise to improve that both architecturally and from the node. We also know that they can clock to >2.1GHz even in a console (which means limited power delivery and cooling), so there's definitely improvements to be found in RDNA 2.

The point being: if AMD is finally going to compete in the high end again, they aren't likely to go "hey, let's clock the snot out of this relatively small GPU" once again, but rather design as wide a GPU as is reasonable within their cost/yield/balancing/marketability constraints. Then they might go higher on clocks if it looks like Nvidia are pulling out all the stops, but I would be downright shocked if the biggest big Navi die had less than 80 CUs (all might not be active for the highest consumer SKU of course). They might still end up releasing a >350W clocked-to-the-rafters DIY lava pool kit, but if so that would be a reactive move rather than one due to design constraints (read: a much smaller die/core count than the competition) as in previous generations (RX 590, Vega 64, VII, 5700 XT).

I don't think anyone will mind a 250W Big Navi being more than 20% behind Ampere if said Ampere is 350W or more. On the other hand, if it was more than 20% Ampere at the same power? That would be a mess indeed - but it's looking highly unlikely at this point. If Nvidia decided to go bonkers with power for their high end card, that's on them.
As far as wattages, I simply used the nameplate values for ease of scope and context.

You're going a lot further down the wormhole than I ever want to go. Time will tell... but I dont see big navi within 15%. That said, we'll all take that as a win im sure (depending on price).
Posted on Reply
#162
neatfeatguy
RTX 2080 FE is:
10.5" long
4.6" high
1.4" wide

My 980Ti AMP! Omega (and the Extreme version) is:
12.9" long
5.25" high
??? wide - can't find specific width dimensions listed, but it takes up just shy of 3 slots (by "just shy" I mean about 1/4 inch, if that)

My guess is the pictured (supposedly) 3090 is similar in size as my 980Ti AMP Omega card.
Manoa
You don't have to quit PC systems, guys, you just have to sit out the video cards (for a while) :) Just don't buy a 10900K for $600 and you are OK :)
I've been running a 780 Ti for 6+ years now with no problems in any games :)
The only problem you run into if you keep a card for a very long period of time is that they will eventually drop support. I had a pair of GTX 280s in SLI for about 3.5 years. About a year after I stopped using them I gifted one to my younger brother and he used it for about 3 years. That put the card at just over 7 years past its release date (originally released June 2008). Nvidia stopped driver support for that series of cards in 2014, if I remember correctly.

He used that card until the release of Dying Light (which was 2015), but the driver support was gone for his 280 and Dying Light literally wouldn't work because the driver was too old. The game would tell him his card/driver was not supported.

My point is, sure, you can use a card for a good amount of time, but unfortunately it will stop getting support and new games won't run.
Posted on Reply
#163
Easo
I thought technical progress meant that cards should not grow in size, but at least stay the same. :kookoo:
This looks like it would need a support bracket inside the case, because I can already imagine cards breaking the PCIe slots/their own connectors...
Posted on Reply
#164
watzupken
Xaled
Are you focking serious? The 1060 is THREE years older than the 16xx series. In the old days you could get 100% more performance for the same price over such a period.
In my opinion, it would have been possible for Nvidia to create a successor to the GTX 1060 that is close to 100% faster. If you look at the RTX cards, which are supposed to be premium Nvidia cards, Nvidia invested sideways and heavily in RT and DLSS. I believe significant die space, where they could have crammed in more powerful hardware to spruce up performance, was instead allocated to the RT and Tensor cores. Just comparing the transistor count between the GTX 1660 and the RTX 2060, there is a whopping 4.2 billion difference. The latter has more CUDA cores, but I still feel the extra CUDA cores do not account for the bulk of the difference.

With the premium series being capped in performance, Nvidia will need to artificially gimp their GTX series to avoid cannibalizing sales of the RTX series. The same should be expected with the upcoming 3xxx series, as I am sure Nvidia will double down on the likes of RT and DLSS. In addition, I am not sure what sort of bespoke tech Nvidia will introduce at the hardware level, since they tend to do this with every new generation.
Easo
I thought technical progress meant that cards should not grow in size, but at least stay the same. :kookoo:
This does look like it would need a support bracket inside the case, because I can already imagine cards breaking the PCIe slots/their own connectors...
I don't agree. While I am not a fan of giant graphics cards/coolers, the reality is that with every few passing generations we observe a jump in size. When I started on my first PC, the graphics card I used relied on passive cooling with a small heatsink. Then active cooling started creeping in after a few years. The active coolers grew in size over the years but remained single-slot. Then two-slot coolers with dual fans appeared. Fast forward to the last 3 to 4 years, and it is not uncommon to see coolers with three fans, taking up three slots, and also taller than the graphics card itself. As technology improves, graphics card makers get more aggressive about adding hardware and features, pushing the boundaries and also power consumption.
Posted on Reply
#165
Vayra86
Arjai


Looks like this thing runs hot enough to bubble the fan hub cover.
I think that's some leftover from an old Asus AREZ sticker under there...

Once more, all I can say is... credibility... LOW
Candor

Is that Jerry? Lol. I can sort of hear his soothing voice :twitch:
EarthDog
As far as wattages, I simply used the nameplate values for ease of scope and context.

You're going a lot further down the wormhole than I ever want to go. Time will tell... but I dont see big navi within 15%. That said, we'll all take that as a win im sure (depending on price).
Big Navi might gain 30, best case 40%, over RDNA 1. Best case. Or AMD has gone similarly mental, this whole 3090 BS is true, and they both sport 400+W cards that I won't ever buy :p In that case I'm staying far away from any res higher than 1440p for the foreseeable future and keeping on rolling with sensible pieces of kit... but then I might do that anyway.
EarthDog
The 5700 XT is a 225W card in reference form. A 2080 Ti is 260W in FE form (not reference, which is 250W)... a 17% difference. Sweetspot/efficiency versus overextending is totally irrelevant... it is what it is for each. In fact I would think that supports my thoughts more, no? If AMD is already reaching and overextending themselves to be 45% below the 2080 Ti (I get it, they never intended to compete there; RDNA 2 will be a true high end) and NV isn't... so what if they try the same thing with Big Navi, running it out of the sweetspot again to be closer? I don't think many will have an issue with a 250W Big Navi... but it had better be within 20% of flagship Ampere. I think few doubt it will be more efficient, but it isn't catching up to within 10-15%, if I had to guess.

AMD has a hell of a leap to catch up and be competitive. I think they'll do it... but they'll be on the outside looking in by at least 10-15%. It will be slower, cheaper, and use less power... AMD's motto on the GPU side.
Perhaps the far more interesting question is what AMD is going to offer across the stack below their top-end RDNA part. Because it was AMD itself that once told us that when they did RT, it would be from midrange on up. Where is it? It's starting to smell a lot like late to the party again.
Posted on Reply
#166
medi01
Legacy-ZA
I am really curious to see what Big Navi has to offer, I am sure we will start to see leaks from the AMD camp after the 1st of September.
Well, looking at what we can reasonably expect from AMD:
1) 505mm² (a rumor, but from a source with a good track record) and 80 CUs (versus the 5700 XT's 40 CUs) - all sounds reasonable
2) the PS5 being able to push its GPU to 2.1GHz (with some power consumption reservations)
3) RDNA 2 should be a bit faster, not slower, than RDNA 1

Optimistically: next gen, an improved fab node, twice the 5700 XT with faster RAM could be about 100% faster.
Taking the 2080 Ti as being 45% faster than the 5700 XT, we get:

an RDNA 2, 505mm², 80 CU part at 2/1.45 ≈ 1.38, i.e. roughly 38% faster than the 2080 Ti, or somewhat lower (it would, of course, differ drastically between games, but note how optimizing for RDNA 2 becomes unavoidable given AMD's dominance in the console market)
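The back-of-envelope above can be condensed into a few lines (illustrative Python; the 2x figure is the optimistic linear-CU-scaling assumption from the post, not a measurement):

```python
# Optimistic linear scaling: 80 CUs modeled as "2x a 5700 XT" (40 CUs),
# compared against a 2080 Ti that is ~45% faster than a 5700 XT.
xt_perf = 1.0             # 5700 XT baseline
big_navi = 2.0 * xt_perf  # doubled CUs, assumed to scale perfectly
ti_perf = 1.45 * xt_perf  # 2080 Ti relative to the 5700 XT

advantage = big_navi / ti_perf - 1
print(f"hypothetical 80 CU part vs 2080 Ti: +{advantage:.0%}")  # -> +38%
```

Real GPUs don't scale linearly with CU count, so this is an upper bound on the estimate, not a prediction.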
Posted on Reply
#167
kiriakost
Chloe Price
Why Asus? It's not the best when it comes to coolers. MSI's been hella fine for several years; Asus has an insane brand premium for its decent cards.
It's not about ASUS itself; it's about ASUS choosing to put a quality cooler on the 1660 Super so as to impress Intel and become their business partner for mini PCs.
They delivered a good card packaged with their best-ever cooling system, and that does not happen every day.
www.ittsb.eu/forum/index.php?topic=1598.0
Posted on Reply
#168
EarthDog
Vayra86
Big Navi might gain 30, best case 40%, over RDNA 1.
Oh, I fully believe Big Navi will beat the 2080 Ti... and that is AT LEAST 45%... I think it will land between the 2080 Ti and the 3090... I just hope it is closer to the latter, not the former.

I also believe that if Big Navi does that, it will be at least a 225W GPU... more likely 250W.
Posted on Reply
#169
Valantar
Vayra86
Big Navi might gain 30, best case 40%, over RDNA 1. Best case. Or AMD has gone similarly mental, this whole 3090 BS is true, and they both sport 400+W cards that I won't ever buy :p In that case I'm staying far away from any res higher than 1440p for the foreseeable future and keeping on rolling with sensible pieces of kit... but then I might do that anyway.
30-40% absolute performance, perf/W, or something else? 30-40% increased absolute performance could theoretically be done just by scaling up RDNA 1 to flagship power draw levels with a wider die, so that seems like a too low bar IMO. 30-40% increased perf/W could make for a potent Ampere competitor - if going from the (least efficient rendition of RDNA1, the) 5700 XT, that would mean +30% performance at ~225W (slightly lower at stock according to TPU's numbers, but let's go by what it says on the tin for now). For the sake of simplicity, let's assume perf/W is flat across the RDNA 2 range - it won't be, but it's not a crazy assumption either - which then puts a 275W RDNA 2 GPU at ~159% the performance of the 5700 XT, matching or beating the 2080 Ti even at 4k where it wins by the highest margin (35/46% for 1080p/1440p), which is admittedly not a high bar in 2020, or a 300W RDNA 2 GPU at 173% of the 5700 XT, soundly beating the 2080Ti overall. That is going by 30% increased overall/average perf/W though, which for me is the minimum reasonable expectation when AMD has said "up to 50%". I'm by no means expecting +50% perf/W overall based on that statement, obviously, but 30% overall seems reasonable based on that.
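The scaling argument above can be sketched in a few lines (illustrative Python; the +30% perf/W figure and the flat-across-the-range assumption are the ones stated in the post, not measured data):

```python
# Project a hypothetical RDNA 2 card's performance (relative to a 5700 XT = 1.0)
# from its power budget, assuming +30% perf/W over the 225 W 5700 XT and
# flat perf/W across the product range.
XT_POWER = 225.0        # 5700 XT nameplate board power (W)
PERF_PER_W_GAIN = 1.30  # conservative reading of AMD's "up to 50%" claim

def projected_perf(watts: float) -> float:
    """Relative performance of a hypothetical RDNA 2 card at a given board power."""
    return (watts / XT_POWER) * PERF_PER_W_GAIN

for w in (225, 275, 300):
    print(f"{w} W -> ~{projected_perf(w):.0%} of a 5700 XT")  # 130% / 159% / 173%
```

The 275 W and 300 W rows reproduce the ~159% and ~173% figures in the post; change `PERF_PER_W_GAIN` to see how sensitive the conclusion is to the perf/W assumption.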
EarthDog
Oh, I fully believe Big Navi will beat the 2080 Ti... and that is AT LEAST 45%... I think it will land between the 2080 Ti and the 3090... I just hope it is closer to the latter, not the former.

I also believe that if Big Navi does that, it will be at least a 225W GPU... more likely 250W.
If AMD is going back to competing for the GPU crown, wouldn't the safe assumption be that Big Navi is in the 275-300W range? That's where the flagships tend to live, after all. It would be exceptionally weird for them to aim for flagship performance yet limit themselves to upper midrange power draw levels.
Posted on Reply
#170
Vayra86
Valantar
30-40% absolute performance, perf/W, or something else? 30-40% increased absolute performance could theoretically be done just by scaling up RDNA 1 to flagship power draw levels with a wider die, so that seems like a too low bar IMO. 30-40% increased perf/W could make for a potent Ampere competitor - if going from the (least efficient rendition of RDNA1, the) 5700 XT, that would mean +30% performance at ~225W (slightly lower at stock according to TPU's numbers, but let's go by what it says on the tin for now). For the sake of simplicity, let's assume perf/W is flat across the RDNA 2 range - it won't be, but it's not a crazy assumption either - which then puts a 275W RDNA 2 GPU at ~159% the performance of the 5700 XT, matching or beating the 2080 Ti even at 4k where it wins by the highest margin (35/46% for 1080p/1440p), which is admittedly not a high bar in 2020, or a 300W RDNA 2 GPU at 173% of the 5700 XT, soundly beating the 2080Ti overall. That is going by 30% increased overall/average perf/W though, which for me is the minimum reasonable expectation when AMD has said "up to 50%". I'm by no means expecting +50% perf/W overall based on that statement, obviously, but 30% overall seems reasonable based on that.


If AMD is going back to competing for the GPU crown, wouldn't the safe assumption be that Big Navi is in the 275-300W range? That's where the flagships tend to live, after all. It would be exceptionally weird for them to aim for flagship performance yet limit themselves to upper midrange power draw levels.
+30-40% perf compared to the 5700 XT, to clarify. Maybe on a good day I'd be optimistic enough to say +50%.

Any more would have me very surprised.
Posted on Reply
#171
EarthDog
Valantar
If AMD is going back to competing for the GPU crown, wouldn't the safe assumption be that Big Navi is in the 275-300W range? That's where the flagships tend to live, after all. It would be exceptionally weird for them to aim for flagship performance yet limit themselves to upper midrange power draw levels.
It depends on who you ask and what you expect out of them and a minor node tweak. I fully expect it to be overextended to perform closer to NV's cards and run into a similar situation as the 5700 XT did. So yeah, nameplate values (again, I'm not playing the "this review said XXX W" game right now), I expect it to be 225-250W in reference form. I'm trying to give them some credit on the arch change and minor node tweak. If Big Navi is any closer than 15%, I'll expect 250W+ out of it for sure.
Posted on Reply
#172
Valantar
EarthDog
It depends on who you ask and what you expect out of them and a minor node tweak. I fully expect it to be overextended to perform closer to NV's cards and run into a similar situation as the 5700 XT did. So yeah, nameplate values (again, I'm not playing the "this review said XXX W" game right now), I expect it to be 225-250W in reference form. I'm trying to give them some credit on the arch change and minor node tweak. If Big Navi is any closer than 15%, I'll expect 250W+ out of it for sure.
Again, I think that is a really weird expectation. The 225W rating of the 5700 XT is on the high side but nothing abnormal for an upper midrange card. For a flagship GPU in 2020 that kind of power draw (if it is at all competitive) would be revolutionary. The 7970 GHz edition was 300W. The 290X was 290W. The 390X was (admittedly a minor tweak of the 290X, and) 275W. The Fury X was 275W. The Vega 64 was 295W. The VII was 295W. You would need to go back to 2010 and the 6970 to find a single-GPU AMD flagship at 250W, and the 5870 in 2009 at 188W. And Nvidia's flagships have consistently been at or above 250W for more than a decade as well. The 5700 XT never made any claim to being or performing on the level of a flagship GPU. AMD's current fastest GPU is an upper midrange offering, is explicitly positioned as such, so expecting their well publicized upcoming flagship offering to be in the same power range seems to entirely disregard the realities of GPU power draw. Higher end = higher performance = more power draw.
Vayra86
+30-40% perf compared to 5700XT, to clarify. Maybe if I have a good day I'd be ootimistic enough to say +50%.

Any more would have me very surprised.
That sounds overly pessimistic to me. The 5700 XT was never designed to be anything but upper midrange, and pushed a small die higher than was efficient. As I said above, on paper even RDNA (1) could hit that performance level if scaled up to flagship-level power draws with a matching wide die. AMD is promising significant perf/W gains for RDNA 2, so expecting increases beyond that seems sensible simply from the fact that this time around they'll be designing a die for the high end and not the midrange.
Posted on Reply
#173
Vayra86
Valantar
Again, I think that is a really weird expectation. The 225W rating of the 5700 XT is on the high side but nothing abnormal for an upper midrange card. For a flagship GPU in 2020 that kind of power draw (if it is at all competitive) would be revolutionary. The 7970 GHz edition was 300W. The 290X was 290W. The 390X was (admittedly a minor tweak of the 290X, and) 275W. The Fury X was 275W. The Vega 64 was 295W. The VII was 295W. You would need to go back to 2010 and the 6970 to find a single-GPU AMD flagship at 250W, and the 5870 in 2009 at 188W. And Nvidia's flagships have consistently been at or above 250W for more than a decade as well. The 5700 XT never made any claim to being or performing on the level of a flagship GPU. AMD's current fastest GPU is an upper midrange offering, is explicitly positioned as such, so expecting their well publicized upcoming flagship offering to be in the same power range seems to entirely disregard the realities of GPU power draw. Higher end = higher performance = more power draw.


That sounds overly pessimistic to me. The 5700 XT was never designed to be anything but upper midrange, and pushed a small die higher than was efficient. As I said above, on paper even RDNA (1) could hit that performance level if scaled up to flagship-level power draws with a matching wide die. AMD is promising significant perf/W gains for RDNA 2, so expecting increases beyond that seems sensible simply from the fact that this time around they'll be designing a die for the high end and not the midrange.
My pessimism has been on the right track more often than not though, when it comes to these predictions.

So far AMD has not shown us a major perf/W jump on anything GCN-based ever, but now they call it RDNA# and they suddenly can? Please. Tonga was a failure and that is all they wrote. Then came Polaris - more of the same. Now we have RDNA2 and already they've been clocking the 5700XT out of its comfort zone to get the needed performance. And to top it off they felt the need to release vague 14Gbps BIOS updates that nobody really understood, post/during launch. You don't do that if you've got a nicely rounded, future-proof product here.

I'm not seeing the upside here, and I don't think we can credit AMD with trustworthy communication surrounding their GPU department. It is 90% left to the masses and the remaining 10% is utterly vague until it hits shelves. 'Up to 50%'... that sounds like Intel's 'Up to' Gigahurtz boost and to me it reads 'you're full of shit'.

Do you see Nvidia market 'up to'? Nope. Not a single time. They give you a base clock and say a boost is not guaranteed... and then we get a slew of GPUs every gen that ALL hit beyond their rated boost speeds. That instills faith. It's just that simple. So far, AMD has not released a single GPU that was free of trickery - either with timed scarcity (and shitty excuses to cover it up, I didn't forget their Vega marketing for a second, it was straight-up dishonest in an attempt to feed hype), cherry-picked benches (and a horde of fans echoing benchmarks for games nobody plays), supposed OC potential (Fury X) that never materialized, supposed huge benefits from HBM (Fury X again, it fell off faster than the GDDR5-driven 980 Ti, which is still relevant with 6GB), the list is virtually endless.

Even in the shitrange they managed to make an oopsie with the 560D. 'Oops'. Wasn't that their core target market? Way to treat your customer base. Of course we both know they don't care at all. Their revenue is in the consoles now. We get whatever falls off the dev train going on there.

Nah, sorry. AMD's GPU division has lost the last sliver of faith a few generations back, over here. I don't see how or why they would suddenly provide us with a paradigm shift. So far, they're still late with RDNA as they always have been - be it version 1, 2 or 3. They still haven't shown us a speck of RT capability, only tech slides. The GPUs they have out lack feature set beyond RT. Etc etc ad infinitum. They've relegated themselves to followers and not leaders. There is absolutely no reason to expect them to leap ahead. Even DX12 Ultimate apparently caught them by surprise... hello? Weren't you best friends with MS for doing their Xboxes? Dafuq happened?

On top of that, they still haven't managed to create a decent stock cooler to save their lives, and they still haven't got the AIBs in line like they should. What could possibly go wrong eh

//end of AMD roast ;) Sorry for the ninja edits.
Posted on Reply
#174
medi01
Unbelievable how one could be regularly posting on a tech savvy forum, yet be so ignorant.
Posted on Reply
#175
Valantar
Vayra86
My pessimism has been on the right track more often than not though, when it comes to these predictions.

So far AMD has not shown us a major perf/W jump on anything GCN-based ever, but now they call it RDNA# and they suddenly can? Please. Tonga was a failure and that is all they wrote. Then came Polaris - more of the same. Now we have RDNA2 and already they've been clocking the 5700XT out of its comfort zone to get the needed performance. And to top it off they felt the need to release vague 14Gbps BIOS updates that nobody really understood, post/during launch. You don't do that if you've got a nicely rounded, future-proof product here.

I'm not seeing the upside here, and I don't think we can credit AMD with trustworthy communication surrounding their GPU department. It is 90% left to the masses and the remaining 10% is utterly vague until it hits shelves. 'Up to 50%'... that sounds like Intel's 'Up to' Gigahurtz boost and to me it reads 'you're full of shit'.

Do you see Nvidia market 'up to'? Nope. Not a single time. They give you a base clock and say a boost is not guaranteed... and then we get a slew of GPUs every gen that ALL hit beyond their rated boost speeds. That instills faith. It's just that simple. So far, AMD has not released a single GPU that was free of trickery - either with timed scarcity (and shitty excuses to cover it up, I didn't forget their Vega marketing for a second, it was straight-up dishonest in an attempt to feed hype), cherry-picked benches (and a horde of fans echoing benchmarks for games nobody plays), supposed OC potential (Fury X) that never materialized, supposed huge benefits from HBM (Fury X again, it fell off faster than the GDDR5-driven 980 Ti, which is still relevant with 6GB), the list is virtually endless.

Even in the shitrange they managed to make an oopsie with the 560D. 'Oops'. Wasn't that their core target market? Way to treat your customer base. Of course we both know they don't care at all. Their revenue is in the consoles now. We get whatever falls off the dev train going on there.

Nah, sorry. AMD's GPU division has lost the last sliver of faith a few generations back, over here. I don't see how or why they would suddenly provide us with a paradigm shift. So far, they're still late with RDNA as they always have been - be it version 1, 2 or 3. They still haven't shown us a speck of RT capability, only tech slides. The GPUs they have out lack feature set beyond RT. Etc etc ad infinitum. They've relegated themselves to followers and not leaders. There is absolutely no reason to expect them to leap ahead. Even DX12 Ultimate apparently caught them by surprise... hello? Weren't you best friends with MS for doing their Xboxes? Dafuq happened?

On top of that, they still haven't managed to create a decent stock cooler to save their lives, and they still haven't got the AIBs in line like they should. What could possibly go wrong eh

//end of AMD roast ;) Sorry for the ninja edits.
I don't disagree with the majority of what you're saying here, though I think you're ignoring the changing realities behind the past situations you are describing vs. AMD in 2020. AMD PR has absolutely done a lot of shady stuff, has overpromised time and time again, and is generally not to be trusted until we have a significant body of proof to build trust on. Their product positioning and naming in China (like the 560D, various permutations of "580" and so on, etc.) is also deeply problematic. But so far I don't think I've seen Dr. Su overpromise or skew results in a significant way - but I might obviously have missed or forgotten something - and the "up to 50% improved perf/W" comes from her. (Sure, you could debate the value of Cinebench as a measure of overall performance - I think it shows AMD in too good a light if seen as broadly representative - but at least it's accurate and representative of some workloads.) And despite the fundamental shadiness of promising maximums ("up to") rather than averages or baselines, there's at least the security that it must in some sense be true for AMD to not be sued by their shareholders. And given how early that was said relative to the launch of the GPUs, I would say any baseline or average promise would be impossible to make.

Beyond that, most of what you describe is during the tenure of Koduri, and while it is obviously wrong to place the blame for this (solely) at his feet, he famously fought tooth and nail for near total autonomy for RTG, with him taking the helm for the products produced and deciding the direction taken with them. He obviously wasn't at fault for the 64 CU architectural limit of GCN, which crippled AMD's GPU progress from the Fury X and onwards, but he was responsible for how the products made both then and since were specified and marketed. And he's no longer around, after all. All signs point towards there having been some serious talking-tos handed out around AMD HQ in the past few years.

But beyond the near-constant game of musical chairs that is tech executive positions, the main change is AMD's fortunes. In 2015 they were near bankrupt, and definitely couldn't afford to splurge on GPU R&D. In 2020, they are riding higher than ever, with Ryzen carrying them to record revenues and profits. In the meantime they've shown with RDNA that even on a relatively slim budget (RDNA development must have started around 2016 or so, picking up speed around 2018 at the latest) they could improve things significantly, and now they're suddenly flush with cash, including infusions from both major high performance console manufacturers. The last time they had that last part was pre-2013, when they were already struggling financially, and both console makers went (very) conservative in cost and power draw for their designs. That is by no means the case this time around. They can suddenly afford to build as powerful a GPU as they want to within the constraints of their architecture, node and fab capacity.

And as I mentioned, RDNA has shown that AMD has found a way out of the GCN quagmire - while 7nm has obviously been enormously beneficial in allowing them to get close to (5700 XT), match (5700, 5500 XT) or even beat (5600 XT) Nvidia's perf/W, it is by no means the main reason for this, as is easily seen by comparing the efficiency of the Radeon VII vs. even the 5700 XT. And with RDNA being a new architecture with a lot of new designs, it stands to reason that there are more major improvements to be made to it in its second generation than there were to GCN 1.whatever.

As for the Fury X falling behind the 980 Ti: not by much. Sure, the 980 Ti is faster, and by a noticeable percentage (and a higher percentage than at launch), but they're still firmly in the same performance class. The 980 Ti has "aged" better, but by a few percent at best.

So while I'm all for advocating pessimism - you'll find plenty of my posts here doing that, including on this very subject - in this case I think you're being too pessimistic. I'm not saying I think AMD will beat or even necessarily match Ampere either in absolute performance or perf/W, but there are reasons to believe AMD has something good coming, just based on firm facts: We know the XSX can deliver a 52CU, 12TF GPU and an 8c16t 3.6GHz Zen2 CPU in a console package, and while we don't know its power draw, I'll be shocked if that SoC consumes more than 300W - console power delivery and cooling, even in the nifty form factor of the XSX, won't be up for that. We also know the XSX runs at a relatively low clock speed with its 52 CUs thanks to the PS5 demonstrating that RDNA 2 can sustain 2.1 GHz even in a more traditional (if enormous) console form factor. We also know that even RDNA 1 can clearly beat Nvidia in perf/W if clocked reasonably (hello, 5600 XT!). What can we ascertain from this? That RDNA 2 at ~1.8GHz is quite efficient; that RDNA 2 is capable of notably higher clock speeds than RDNA 1, and that AMD is entirely capable of building a wider die than the RX 5700 - even for a cost-sensitive console.
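For what it's worth, the "52CU, 12TF" figure above checks out with simple shader math. A rough sketch (assuming the standard RDNA layout of 64 stream processors per CU, each doing 2 FP32 ops per clock via FMA; the ~1.825 GHz XSX game clock is Microsoft's published spec, not something stated in this thread):

```python
# Back-of-the-envelope FP32 throughput for RDNA-style GPUs.
# Assumes 64 stream processors per CU, 2 FP32 ops per clock (FMA).
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

xsx = tflops(52, 1.825)      # Xbox Series X: 52 CUs at ~1.825 GHz -> ~12.1 TF
rx5700xt = tflops(40, 1.9)   # RX 5700 XT: 40 CUs at ~1.9 GHz -> ~9.7 TF
print(f"XSX: {xsx:.1f} TF, 5700 XT: {rx5700xt:.1f} TF")
```

So a hypothetical desktop RDNA 2 part with more CUs than the XSX, clocked nearer the PS5's 2.1 GHz, would land well clear of 12 TF on this napkin math alone, before any architectural IPC gains.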
Posted on Reply
Add your own comment