Tuesday, January 14th 2020

RX 5950 XT, RX 5950, and RX 5800 XT: New AMD Radeon SKUs Reach Regulators

Confirmation of four new Radeon RX 5000-series SKUs came to light as AIB partner AFOX pushed them to regulators at the Eurasian Economic Commission. EEC filings have been a reliable early sign of upcoming PC hardware. All four new SKUs are positioned above the Radeon RX 5700 XT launched last year. These include the Radeon RX 5800 XT, the RX 5900 XT, the RX 5950, and the RX 5950 XT. Going by AMD's convention of two SKUs per resolution serving up differentiated experiences, the RX 5800 XT could be a step up from the RX 5700 XT in offering 1440p high frame-rate AAA performance. This could possibly put it in direct competition with the GeForce RTX 2070 Super. AMD took a similar two-pronged approach to 1080p, with the RX 5500 XT serving up 1080p at up to 60 fps, and the RX 5600 XT topping it up with a 40-50 percent performance uplift.

The Radeon RX 5950-series is completely new. This could very well be a new, larger "Navi" silicon, since dual-GPU is dead. Just as AMD carved out the RX 5700 XT, the RX 5700, and the RX 5600 XT from one chip, it could carve out the three new top SKUs from this silicon. AMD CEO Dr Lisa Su has already confirmed that her company is working to scale up the RX 5000-series "Navi" family. The RX 5900-series could be competition for the likes of the RTX 2080 or even the RTX 2080 Super. The RX 5950-series could target premium 4K gaming (RTX 2080 Ti territory). It remains to be seen whether the new SKUs are based on the existing RDNA architecture or on the new RDNA2 architecture designed for 7 nm EUV, featuring variable-rate shading.
Sources: Eurasian Economic Commission, Komachi Ensaka

105 Comments on RX 5950 XT, RX 5950, and RX 5800 XT: New AMD Radeon SKUs Reach Regulators

#76
Vya Domus
Vayra86And then you top it off by saying 7nm is not going to be an efficiency jump when AMD showed that it was all by itself with the Radeon VII.
You either don't want to understand the context or you just don't understand. I'm going to say it for the last time since there is no point in repeating myself: AMD went to 7nm from a node that was notably worse than 16nm, therefore the jump Nvidia will make to 7nm won't be anywhere near as dramatic. That is, unless they go straight to EUV, and even then it's not going to be earth-shattering.

The main advantage that comes with 7nm is the bigger transistor count and that's what Nvidia will try to exploit first and foremost.
Vayra86I can separate my personal buying decisions from the analysis of how companies perform and how its products work.
Yet, here you are trying to convince me why a company should have the privilege to charge more for the same thing. Quite shocking.
Posted on Reply
#77
kapone32
For me, the fact that AMD has been behind does not mean they have no chance.
Posted on Reply
#78
TheinsanegamerN
Vya DomusYou either don't want to understand the context or you just don't understand. I'm going to say it for the last time since there is no point in repeating myself: AMD went to 7nm from a node that was notably worse than 16nm, therefore the jump Nvidia will make to 7nm won't be anywhere near as dramatic. That is, unless they go straight to EUV, and even then it's not going to be earth-shattering.

The main advantage that comes with 7nm is the bigger transistor count and that's what Nvidia will try to exploit first and foremost.



Yet, here you are trying to convince me why a company should have the privilege to charge more for the same thing. Quite shocking.
So 14nm, which is "worse than 16nm", to 7nm is a big move, but going from 12nm to 7nm isn't? You're drunk, mate. You're not even making sense here.

Like it or not, Nvidia's arch on 12nm is STILL more energy efficient than RDNA on 7nm. They added compute with Turing, while AMD removed compute to make RDNA, and yet Turing is still more efficient:

www.techspot.com/article/1874-amd-navi-vs-nvidia-turing-architecture/

To suggest Nvidia is only going to see minor improvements from 7nm, while AMD somehow saw MASSIVE improvements with 7nm, is denying basic math. Polaris was not that far behind Turing in transistor tech; it's not like they were still on 28nm. And if 7nm EUV is as big a jump as predicted, then Nvidia going to it directly from 12nm will absolutely be as big a jump as AMD going from 16 to 7nm was. RDNA2 needs to hit it out of the park for any chance of reaching Nvidia's performance per watt, and ultimately the ability to produce a GPU that can compete with Nvidia's big monster chips.
Posted on Reply
#79
ratirt
HenrySomeoneSay what now? :roll: And here we were, thinking AMD was the first to do 40nm, 28nm, only like a month behind on 16nm and now 6 months ahead and counting on 7nm! Actually, you're right, they haven't been on equal ground in terms of nodes at all - team red has had the advantage for at least the last decade and they still got their asses kicked hard for the last 5 years! :p I can't wait for the dumbstruck expressions of all the amd fanboys when Ampere finally lands...:D
That will be something. Better eBay your kidney now, because when the new NV release arrives everyone will start selling, and if you're first in line you might get enough for a 3070.
You need to google what fanboy means :) because you call others fanboys and you act like one yourself without even realizing it. :)
Vya DomusThe main advantage that comes with 7nm is the bigger transistor count and that's what Nvidia will try to exploit first and foremost.
Higher transistor count in the same area, when comparing different nodes. Better to clarify this because I'm sure it's only a matter of time before someone takes it the wrong way.
Posted on Reply
#80
Vayra86
Vya DomusYou either don't want to understand the context or you just don't understand. I'm going to say it for the last time since there is no point in repeating myself: AMD went to 7nm from a node that was notably worse than 16nm, therefore the jump Nvidia will make to 7nm won't be anywhere near as dramatic. That is, unless they go straight to EUV, and even then it's not going to be earth-shattering.

The main advantage that comes with 7nm is the bigger transistor count and that's what Nvidia will try to exploit first and foremost.
Alright, point taken and understood, but it doesn't really weigh much in the overall competitiveness of AMD versus Nvidia, and you're trying to make it seem like it does. I understand your point perfectly, and I am expanding upon it by looking at what both companies have today and where they can still go. Even today Nvidia has better efficiency. Even if they were to do a figurative "Radeon nVIIdia", so to speak, on 7nm DUV, they would destroy RDNA on efficiency alone across the whole stack.

But the reports so far do point at Nvidia moving straight to EUV. And that makes sense too, because 7nm DUV is more complex to manufacture and was just a stand-in until EUV was ready, which is already overdue. ASML planned this years ago...

And this one...
Yet, here you are trying to convince me why a company should have the privilege to charge more for the same thing. Quite shocking.
Why always the schoolyard-style Calimero approach?! 'Boohoo, he has the privilege'... that is precisely NOT my argument. I say the privilege was carved out by Nvidia itself, because it conducts its business as it does. And that is why, once again, Nvidia is not AMD! These aren't lame excuses just because you dismiss them as such; they are real things to customers, and the proof is everywhere around you when you just click on system specs on this forum. You only need to open your eyes.
Posted on Reply
#81
HenrySomeone
ratirtYou need to google what fanboy means :) because you call others fanboys and you act like one yourself without even realizing it. :)
I didn't call out anyone specifically, did I? Or are you trying to say AMD fanboys in general don't exist? :D If, however, you recognize yourself in the description... you know what they say - if the shoe fits... :p
Posted on Reply
#82
Vya Domus
TheinsanegamerNSo 14nm, which is "worse than 16nm", to 7nm is a big move, but going from 12nm to 7nm isn't?
What?

12nm TSMC and 16nm TSMC are the same node; one just has a bigger reticle limit. Are you drunk, or do you just not have a clue about any of these things?
Vayra86Alright, point taken and understood, but it doesn't really weigh much in the overall competitiveness of AMD versus Nvidia, and you're trying to make it seem like it does. I understand your point perfectly, and I am expanding upon it by looking at what both companies have today and where they can still go. Even today Nvidia has better efficiency. Even if they were to do a figurative "Radeon nVIIdia", so to speak, on 7nm DUV, they would destroy RDNA on efficiency alone across the whole stack.
I am not doing any of that, I am simply pointing out how Nvidia cannot infinitely improve performance, power efficiency, cost, etc. in one big sweep. And also how, now, everything that Nvidia can do with a new node AMD can do as well, and vice versa. Nvidia gets on 7nm DUV, so can AMD; AMD gets on 7nm EUV, so can Nvidia. There will be no large gap anymore, no more "destroying" of any kind.

That is of course if they both want to play the game.
Posted on Reply
#83
ratirt
HenrySomeoneI didn't call out anyone specifically, did I? Or are you trying to say AMD fanboys in general don't exist? :D If, however, you recognize yourself in the description... you know what they say - if the shoe fits... :p
Sure you didn't. You're such a good boy. :)
I only hope your Ampere landing prediction will deliver. :)

Do we know the release date of these Radeons, or is it still a mystery? This news has been out for a while now, but nothing specific.
Posted on Reply
#84
HenrySomeone
Vya DomusI am not doing any of that, I am simply pointing out how Nvidia cannot infinitely improve performance, power efficiency, cost, etc. in one big sweep. And also how, now, everything that Nvidia can do with a new node AMD can do as well, and vice versa. Nvidia gets on 7nm DUV, so can AMD; AMD gets on 7nm EUV, so can Nvidia. There will be no large gap anymore, no more "destroying" of any kind.

That is of course if they both want to play the game.
They don't have to improve infinitely as they are already ahead, and any improvement will take them even further, but if all the Ampere leaks (and some not-too-difficult extrapolation) hold at least some truth... well, RTG is in for a world of hurt, so to speak. And you know it, but you've sneakily left yourself an argumentative crutch by saying "if they both want to play the game" to lean on when Radeons once again won't be able to get past a 3070 at best, if not less...
Posted on Reply
#85
Vya Domus
HenrySomeoneThey don't have to improve infinitely as they are already ahead, and any improvement will take them even further, but if all the Ampere leaks (and some not-too-difficult extrapolation) hold at least some truth... well, RTG is in for a world of hurt, so to speak. And you know it, but you've sneakily left yourself an argumentative crutch by saying "if they both want to play the game" to lean on when Radeons once again won't be able to get past a 3070 at best, if not less...
Cool.
Posted on Reply
#86
mahoney
700W PSUs will become a trend again, it seems
Posted on Reply
#87
Totally
R-T-BI'm not an AMD fanboy. You forgot people like me who just aren't sold on raytracing and don't want to waste die space on it.
Nope, if you utter a word of dissent toward RT/Nvidia you are an AMD fanboy in these parts.
Posted on Reply
#88
TheinsanegamerN
Vya DomusWhat?

12nm TSMC and 16nm TSMC are the same node; one just has a bigger reticle limit. Are you drunk, or do you just not have a clue about any of these things?
So, thanks for confirming that this previous statement by you:
AMD went to 7nm from a node that was notably worse than 16nm, therefore the jump Nvidia will make to 7nm won't be anywhere near as dramatic
Is complete BS. If they are the same node, then both Nvidia and AMD would see roughly the same improvements from 7nm, confirming once and for all that you have no idea what you are talking about. Also, Polaris was 14nm FinFET, not 16nm. Are you also claiming that 14nm FinFET was somehow worse than 16nm? Because that is your claim, bolded for your reading pleasure.

Just stop, you are only making yourself out to be a complete m0r0n.
mahoney700W PSUs will become a trend again, it seems
My 1200W platinum is still chugging along just fine.
Posted on Reply
#89
Vayra86
Vya DomusI am not doing any of that, I am simply pointing out how Nvidia cannot improve infinitely performance, power efficiency, cost, etc in one big sweep. And also how now everything that Nvidia can do with a new node so can AMD and vice-versa. Nvidia gets on 7nm DUV so can AMD, AMD get's on 7nm EUV so can Nvidia. There will no large gap anymore, no more "destroying" of any kind.

That is of course if they both want to play the game.
'Can' in theory, but not in practice. That is the crucial difference in how we see this. Of course nobody can improve 'infinitely', but even if Nvidia didn't improve at all and just relied on the shrink, there would be quite a bit of 'destroying' left for them. The fact remains that even the Super lineup can still compete just fine - and then some.

AMD has already eaten quite a bit into its remaining headroom with Navi. It is already on 7nm. It is already doing a similar architecture. And yet, it still falls short on all the important metrics EXCEPT die size. So that is what they have left. Now they pre-empt Ampere with larger dies so they can capture the crowd that was waiting for the 2080 Ti to get its price slashed... a very short-lived USP, that. What's left? They are already very close to the point of diminishing returns; they're at peak power budget (~300 W), and that will force them into even larger chips at even lower clocks. Not exactly the best profitability, and... a repeat of the past half dozen generations.

Oh... and here is the kicker. AMD still needs to reserve die space for RT as well ;) Nvidia has already paid that tax.
Posted on Reply
#90
kapone32
I wonder how big the die-size difference is between the 5700 XT and the 2080 Ti?

31 mm x 25 mm for the 2080 Ti, but I can't seem to find dimensions for the 5700 XT other than 251 mm².

I was just reading an article that Big Navi could have as many as 16 billion transistors, up from the 10.3 billion on the 5700 XT.
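Quick back-of-the-envelope in Python, taking the figures above at face value (31 mm x 25 mm for the 2080 Ti and 251 mm² for the 5700 XT - ballpark numbers, not official measurements):

# Rough die-area comparison using the figures quoted above (not official specs)
die_2080ti_mm2 = 31 * 25   # ~775 mm^2 from the 31 mm x 25 mm figure; TU102 is usually listed around 754 mm^2
die_5700xt_mm2 = 251       # Navi 10 (RX 5700 XT)

ratio = die_2080ti_mm2 / die_5700xt_mm2
print(f"2080 Ti: {die_2080ti_mm2} mm^2, 5700 XT: {die_5700xt_mm2} mm^2, ratio ~{ratio:.1f}x")
# -> roughly 3x the silicon area, though the two chips sit on different nodes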
Posted on Reply
#91
Vya Domus
TheinsanegamerNSo, thanks for confirming that this previous statement by you:

Is complete BS. If they are the same node, then both Nvidia and AMD would see roughly the same improvements from 7nm, confirming once and for all that you have no idea what you are talking about. Also, Polaris was 14nm FinFET, not 16nm. Are you also claiming that 14nm FinFET was somehow worse than 16nm? Because that is your claim, bolded for your reading pleasure.

Just stop, you are only making yourself out to be a complete m0r0n.
So, to make life easier for both of us and so that I don't have to read this incoherent utter garbage, off to the ignore list you go.

Now I get why you're "TheinsanegamerN"; you're out of your mind, buddy. I suggest you do some reading and improve your comprehension.
Posted on Reply
#92
TheinsanegamerN
Vya DomusSo, to make life easier for both of us and so that I don't have to read this incoherent utter garbage, off to the ignore list you go.

Now I get why you're "TheinsanegamerN"; you're out of your mind, buddy. I suggest you do some reading and improve your comprehension.
So you have no response to your own argument, and resort to insults based on usernames? :roll: Somebody is mad they got proven wrong on the internet.
Posted on Reply
#93
HenrySomeone
He really painted himself into a corner there, first stating that AMD came to 7nm from a vastly inferior node to Nvidia's current 12nm, then claiming the latter is the same as 16nm, lol :D
Posted on Reply
#94
Vya Domus
HenrySomeonethen claiming the latter is the same as 16nm
I do sometimes wonder how it must be to be this out of touch.

www.tsmc.com/english/dedicatedFoundry/technology/16nm.htm

I wonder why they don't talk about 12nm and 16nm separately and why it's always referred to as "16/12nm". Hmm, no particular reason probably.
Posted on Reply
#95
Vayra86
kapone32I wonder how big the die-size difference is between the 5700 XT and the 2080 Ti?

31 mm x 25 mm for the 2080 Ti, but I can't seem to find dimensions for the 5700 XT other than 251 mm².

I was just reading an article that Big Navi could have as many as 16 billion transistors, up from the 10.3 billion on the 5700 XT.
The more interesting question, I believe, is what an Nvidia Turing (with RT) performance equivalent of the 5700 XT would be sized at on 7nm. I believe I saw that calculation somewhere at some point... bear with me....
EDIT: can't find it anymore. But if we consider a density increase of around 50%... which is generous, because we've seen 60% quoted as well for 7nm EUV; ballpark half the size. That puts the 751 mm² at a comfortable ~375 mm² to bring 2080 Ti performance. I reckon they can make do with about 200-240 mm² for 5700 XT equivalents. It's not far off... but the 5700 XT WITHOUT RT already weighs in at 251 mm². So it's a bit of a stretch, but what RDNA2 will need to do is gain a bit of performance AND add RT at a similar die size. That's a pretty big assignment for architecture alone. The outlook is that once again Nvidia will be getting more chips out of a wafer here, and that is even in a worst-case scenario where Ampere is no improvement over Turing.

The transistor count isn't really the right metric, because the cards don't share feature sets or a node. It's similar to TFLOPS: you can't compare outside the same gen. But die size versus overall performance is universal.
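For what it's worth, a minimal sketch of that ballpark math in Python; the density factors are just the rough assumptions above, not measured data:

# Ballpark area after a shrink, assuming the transistor count stays the same
def scaled_area(area_mm2: float, density_factor: float) -> float:
    # density_factor = 2.0 means twice the transistors per mm^2, i.e. half the area
    return area_mm2 / density_factor

tu102_mm2 = 754    # 2080 Ti (TU102) on 12nm, ~751-754 mm^2 depending on the source
navi10_mm2 = 251   # 5700 XT (Navi 10), already on 7nm, without RT hardware

print(f"TU102 at 2.0x density: ~{scaled_area(tu102_mm2, 2.0):.0f} mm^2")  # ~377 mm^2, the 'half size' ballpark
print(f"TU102 at 1.5x density: ~{scaled_area(tu102_mm2, 1.5):.0f} mm^2")  # ~503 mm^2 with a literal 50% gain
print(f"Navi 10 for reference: {navi10_mm2} mm^2")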
HenrySomeoneHe really painted himself into a corner there, first stating that AMD came to 7nm from a vastly inferior node to Nvidia's current 12nm, then claiming the latter is the same as 16nm, lol :D
Vya DomusI do sometimes wonder how it must be to be this out of touch.

www.tsmc.com/english/dedicatedFoundry/technology/16nm.htm

I wonder why they don't talk about 12nm and 16nm separately and why it's always referred to as "16/12nm". Hmm, no particular reason probably.
Context... I mean, I hope you can see that I, at least, am not here to ridicule you or your statements, but rather to provide insight and argumentation... but the above is putting your head in the sand, is it not? It's okay to admit we were wrong sometimes. Happens to me all the time... still alive and kicking :) I also underlined how you missed this with the shot of the VII versus Vega in perf/watt gaps. Please don't respond saying 'but VII is larger, so more efficient' :D

But, more substance, because I like that; here is another pointer to consider on why Nvidia will do just a little bit more than shrink:
www.pcgamesn.com/nvidia/12nm-vs-amd-7nm-gpu-efficiency-incomparable
I'm not going to say everything Huang says is a golden rule, but... there's that.
Posted on Reply
#96
prtskg
Vya DomusI do sometimes wonder how it must be to be this out of touch.

www.tsmc.com/english/dedicatedFoundry/technology/16nm.htm

I wonder why they don't talk about 12nm and 16nm separately and why it's always referred to as "16/12nm". Hmm, no particular reason probably.
Isn't the 12nm node which Nvidia is using a custom node?
Posted on Reply
#97
InVasMani
Just so long as AMD can chip away at Nvidia's lead and dominance in GPUs, that's really all I care about. Hopefully these are RDNA2-based; variable-rate shading would definitely be welcome.
Posted on Reply
#98
medi01
btarunrthe RX 5800 XT could be a step up from the RX 5700 XT in offering 1440p high frame-rate AAA performance. This could possibly put it in direct competition with the GeForce RTX 2070 Super.
Are you freaking kidding me?
yakkIf AMD added the same power saving tech found in consoles to this RDNA GPU and run it at even higher clocks, I'll be quite interested.
I heard rumors the power saving tech found in consoles is called "run it at a lower clock".
No idea what that could possibly mean.
Vayra862000 mhz clocked
Smaller number of shaders. Yep.
Basically the 1080 had a shader count comparable to the 480's. Had AMD managed to run them at 2 GHz, it would have run circles around NV.
Just a different architecture.

Anyhow, I am not sure that 2080 Ti sales pay off its development.
It's a halo product; having it boosts the entire lineup.
Posted on Reply