Monday, May 27th 2019

AMD Announces Radeon RX 5700 Based on Navi: RDNA, 7nm, PCIe Gen4, GDDR6

AMD at its 2019 Computex keynote today unveiled the Radeon RX 5000 family of graphics cards, which leverage its new Navi graphics architecture and 7 nm silicon fabrication process. Navi isn't just an incremental upgrade over Vega with a handful of new technologies, but the biggest overhaul to AMD's GPU SIMD design since Graphics Core Next, circa 2011. Called RDNA, or Radeon DNA, AMD's new compute unit is a clean-slate SIMD design with a 1.25x IPC uplift over Vega, an overhauled on-chip cache hierarchy, and a more streamlined graphics pipeline.

In addition, the architecture is designed to increase performance-per-watt by 50 percent over Vega. The first part to leverage Navi is the Radeon RX 5700. AMD ran a side-by-side demo of the RX 5700 against the GeForce RTX 2070 in Strange Brigade, where NVIDIA's $500 card was beaten. "Strange Brigade" is one game where AMD generally fares well, as it is heavily optimized for asynchronous compute. Navi also ticks two big technology check-boxes: PCI-Express gen 4.0 and GDDR6 memory. AMD has planned July availability for the RX 5700, and did not disclose pricing.

202 Comments on AMD Announces Radeon RX 5700 Based on Navi: RDNA, 7nm, PCIe Gen4, GDDR6

#26
HTC
cucker tarlson said:

I think those "RDNA" perf/watt gains are largely from 14nm to 7nm. Note how they did not compare against the Radeon VII, but Vega.


more like DARN :laugh:
It's impossible to compare because there's no RDNA 14nm card. If there were, then your statement would most likely be correct.

One game is too small a sample size. How do we know this RDNA arch favors the same games as GCN? For all we know, it could be the opposite and this game could be one of its worst performers on the new arch. Unlikely, but not impossible.

There's simply WAY too little information to go by at this point in time.
Posted on Reply
#27
R0H1T
I think you're giving too much credit to the new name. Sure, RDNA sounds cool, but it can't be a 180° turn from the previous-gen GCN uarch. I'd be slightly surprised if it was a major departure from GCN. Also, wasn't Raja the architect of Navi?
Posted on Reply
#28
cucker tarlson
HTC said:

It's impossible to compare because there's no RDNA 14nm card. If there were, then your statement would most likely be correct.

One game is too small a sample size. How do we know this RDNA arch favors the same games as GCN? For all we know, it could be the opposite and this game could be one of its worst performers on the new arch. Unlikely, but not impossible.

There's simply WAY too little information to go by at this point in time.
:confused:
that's what a generational increase is, old vs new
Posted on Reply
#29
M2B
A 25% IPC increase over Vega is more than what Nvidia achieved with Turing over Pascal. But keep in mind Nvidia did claim Turing shaders are 50% faster, which turned out to be bullshit, at least for today's software; the same thing can happen with AMD's claims.
Nvidia is so architecturally ahead that even a significantly improved, new architecture on a much better node doesn't seem impressive to people.
Posted on Reply
#31
Vayra86
HTC said:

It's impossible to compare because there's no RDNA 14nm card. If there were, then your statement would most likely be correct.

One game is too small a sample size. How do we know this RDNA arch favors the same games as GCN? For all we know, it could be the opposite and this game could be one of its worst performers on the new arch. Unlikely, but not impossible.

There's simply WAY too little information to go by at this point in time.
Just because you name something a fancy new bunch of letters doesn't magically make it a different piece of kit, and the use of Strange Brigade only confirms we're looking at another GCN / Polaris.

When is there enough information? When the Youtubers come out of the woodwork with wild performance claims and exotic tweaked results?

Come on buddy, 1+1=2.

M2B said:

A 25% IPC increase over Vega is more than what Nvidia achieved with Turing over Pascal. But keep in mind Nvidia did claim Turing shaders are 50% faster, which turned out to be bullshit, at least for today's software
Nvidia is so architecturally ahead that even a significantly improved, new architecture on a much better node doesn't seem impressive to people.
More like AMD dropped the ball for so many years that they can never catch up again, even with Nvidia slowing down. People said this in 2015-16 already, and none of it was true: AMD had a revolution coming.
Posted on Reply
#32
Ibotibo01
Vayra86 said:

AMD had a revolution coming.
I think that was a CPU revolution, but Intel will join the mid-tier GPU market in 2020, so AMD will be forced to respond. Nvidia's 7nm Ampere will come in 2020-2021. It will be exciting.
Posted on Reply
#33
ratirt
Ibotibo01 said:

I don't think so. The RX 5700 is 10% faster than the RTX 2070 in Strange Brigade, but the Radeon VII is 20% faster than the RTX 2080. So it won't be on par with the RTX 2070. I think it will be on par with the RTX 2060, and the RX 5800 will be on par with the RTX 2070.

and I do think so


Like I said it depends on the game suite :)

Ibotibo01 said:

AMD's benchmarks

Real Benchmarks

AMD showed benchmarks from 3 games. If you want to compare, maybe you should consider only those 3 games from TPU to be more accurate, not relative performance across the entire game suite. It always depends on the games picked. Of course AMD picked games at which their products are better; NV does the same thing, and any company would. That's just obvious.

HTC said:

It's impossible to compare because there's no RDNA 14nm card. If there were, then your statement would most likely be correct.

One game is too small a sample size. How do we know this RDNA arch favors the same games as GCN? For all we know, it could be the opposite and this game could be one of its worst performers on the new arch. Unlikely, but not impossible.

There's simply WAY too little information to go by at this point in time.
Well, that really is a good point. RDNA is nothing like GCN, so we don't know how it will act in games.
Posted on Reply
#34
Xuper
AMD claims up to 50% better perf per watt. Vega 64 consumes 292 W, the RTX 2060 165 W, the RTX 2070 195 W. On paper, a Vega 64 on 7 nm with the redesigned arch would be around 145 W, but in reality probably between 160 W and 190 W; also take the 25% perf per clock into account. So in terms of perf per watt, the RX 5700 will be around the Turing arch, and AMD's 7 nm card will probably match Nvidia's 12 nm cards.

(Yes, yes, I know what happens if Nvidia takes 7 nm!)
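A quick sketch of the arithmetic above. The 292 W, +50% perf/watt, and +25% perf/clock figures come from the thread (the latter two are AMD's marketing claims); treating the two uplifts as independent multipliers is an assumption for illustration, not anything AMD stated.

```python
# Power needed to match Vega 64's performance, per AMD's claimed uplifts.
# These are marketing figures; real cards rarely scale this cleanly.
vega64_power_w = 292.0

# +50% perf/watt alone: same performance at 1/1.5 of the power.
p_perf_watt_only = vega64_power_w / 1.50
print(round(p_perf_watt_only))   # 195 W, right at RTX 2070 board power

# If the +25% perf/clock also stacked multiplicatively (an assumption):
p_stacked = vega64_power_w / (1.50 * 1.25)
print(round(p_stacked))          # 156 W, inside the 145-190 W guess above
```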
Posted on Reply
#35
steen
R0H1T said:

That can't be right, or at least for AMD's sake it ought to be Vega VII, i.e. if they want to get anywhere in the mid-range segment. The IPC is the same for all Vegas, I guess; it mostly boils down to efficiency, and if Navi is barely efficient to the level of the VII, then AMD might as well shut down the RTG division for the time being!
Vega=GFX9, Navi=GFX10. They've ditched some Vega IP & re-used previous blocks. Until we see tech details, I'm not entirely convinced how different it really is. Work distribution is likely changed but I just can't see a ground up RTL rewrite for this gen. They might bifurcate their product line to graphics as a compute service (Vega) & a more fixed function (Navi), but it doesn't make sense given the gains made by recent console titles that are finally coding to the compute paradigm. I presume they're comparing IPC to Vega20, else node change alone mostly explains the gains. As mentioned above, transistor count/die size will be interesting.
Posted on Reply
#36
kings
Comparing the card in its best-case scenario, Strange Brigade... so in general it probably means it will fall between the RTX 2060 and RTX 2070...
Posted on Reply
#37
Manoa
What a waste of a card; better to not have made the card at all.
Posted on Reply
#38
Valantar
Hm. This doesn't align all that well with previous rumors. AMD is saying this will be the basis for gaming for the coming decade. In other words, Arcturus (if that's even a thing) can clearly not be a major architectural overhaul. Then again, if they deliver a 25% IPC increase, it won't be needed anyhow.

I'm more excited for this than I thought I would be.
Posted on Reply
#39
Xuper
steen said:

Vega=GFX9, Navi=GFX10. They've ditched some Vega IP & re-used previous blocks. Until we see tech details, I'm not entirely convinced how different it really is. Work distribution is likely changed but I just can't see a ground up RTL rewrite for this gen. They might bifurcate their product line to graphics as a compute service (Vega) & a more fixed function (Navi), but it doesn't make sense given the gains made by recent console titles that are finally coding to the compute paradigm. I presume they're comparing IPC to Vega20, else node change alone mostly explains the gains. As mentioned above, transistor count/die size will be interesting.
Navi probably uses the GCN ISA but with a different arch. Lisa said "RDNA is a ground-up redesign from GCN".

kings said:

Comparing the card on the best case scenario Strange Brigade... so, in general it probably means it will fall between RTX 2060 and RTX 2070...
Pretty much, yes. But I hear a rumor about Nvidia releasing a new card, like an RTX 2070 Ti; I think I read that somewhere.
Posted on Reply
#40
Valantar
Xuper said:

Navi probably uses the GCN ISA but with a different arch. Lisa said "RDNA is a ground-up redesign from GCN".


Pretty much, yes. But I hear a rumor about Nvidia releasing a new card, like an RTX 2070 Ti; I think I read that somewhere.
If so they'd need to use a cut-down 2080 die. That won't be cheap, and Nvidia's margins will still hurt.
Posted on Reply
#41
bug
Sadly, +50% perf/W doesn't close the gap to Nvidia :(
Posted on Reply
#43
jabbadap
Xuper said:

Navi probably uses the GCN ISA but with a different arch. Lisa said "RDNA is a ground-up redesign from GCN".

Pretty much, yes. But I hear a rumor about Nvidia releasing a new card, like an RTX 2070 Ti; I think I read that somewhere.
Well yeah, Vega was called NCU and was still GCN. But other than that, a 1.25x clock-for-clock performance increase sounds very promising. The Pascal-to-Turing clock-for-clock performance increase was mostly inherited from concurrent int32/fp32 math. All in all, that figure makes Navi a tad more interesting.

Edit: Had to look back; Vega's NCU was promised to be 2x performance per clock and 4x performance per watt (the devil is in the detail). So take it as you wish; I'm waiting for more concrete evidence.
Posted on Reply
#44
Vayra86
jabbadap said:

Well yeah, Vega was called NCU and was still GCN. But other than that, a 1.25x clock-for-clock performance increase sounds very promising. The Pascal-to-Turing clock-for-clock performance increase was mostly inherited from concurrent int32/fp32 math. All in all, that figure makes Navi a tad more interesting.
25% IPC and 50% perf/watt is probably in the best-case Strange Brigade scenario versus the worst-case Vega scenario.

Also, the other twist here is the shader itself. Sure, it may get a lot faster, but if you get fewer of them, all you really have is some reshuffling that leads to no net performance gain. Turing is a good example of that: perf per shader is up, but you get fewer shaders, and the end result is that, for example, a TU106 with 2304 shaders ends up alongside a GP104 that rocks 2560 shaders. It gets better: if you then defend your perf/watt figure by saying 'perf/watt per shader', it's not all too hard after all.
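A back-of-the-envelope sketch of that trade-off. The shader counts are the ones named above; the break-even figure is plain arithmetic under the simplifying assumption that throughput scales with shader count times per-shader performance.

```python
# Total throughput ~ shader count x per-shader performance
# (ignoring clocks and memory bandwidth, which is a simplification).
gp104_shaders = 2560   # e.g. GTX 1080
tu106_shaders = 2304   # e.g. RTX 2070

# Per-shader uplift TU106 needs just to break even with GP104 at equal clocks:
break_even = gp104_shaders / tu106_shaders
print(f"{(break_even - 1) * 100:.0f}%")   # ~11% faster per shader just to tie
```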

If it was across the board / averaged over many games we would have seen those many games. Wishful thinking vs realism... take your pick ;)

These slides are meaningless. Read between the lines.
Posted on Reply
#46
steen
Xuper said:

Navi Probably uses the GCN ISA but with different arch.Lisa said "RDNA is a ground up redesign from GCN"
Yeah, except the uarch. Commits for gaming Navi are just as valid for improving the perf of a compute Navi. I don't see AMD changing the structure of their CUs from 64 ALUs; it would break wavefront scheduling. They mention improved CUs and a new cache hierarchy, but apart from an L0 tied to the CUs and more L2, I don't know what's different from Vega.

bug said:

Sadly, +50% perf/Wdoesn't close the gap to Nvidia :(
By definition it does. Of course, we don't know what this means in practice: is it the chip, whole-card TDP, clock-for-clock, etc.? The 7 nm node isn't all beer and skittles given the increased density/smaller die. That's why Nvidia pulled the trigger on the optimized 12 nm and large dies. 7N+ will help, but density, electromigration, etc. are still there.

jabbadap said:

Well yeah, Vega was called NCU and was still GCN. But other than that 1.25 clock to clock performance increase sounds very promising. Pascal to Turing clock to clock performance increase were mostly inherit from concurrent int32 fp32 math. All in all that figure made Navi as a mach tad more interesting.
Didn't you know? That's now called "async compute". ;)

TU concurrent int & fp is more flexible than just 32bit data types. Half floats & lower precision int ops can also be packed. Conceptually works well with VRS.
Posted on Reply
#47
medi01
Divide Overflow said:

Wait and see how well this actually performs in a full, hands on review though.
Ibotibo01 said:

I don't think so. The RX 5700 is 10% faster than the RTX 2070
@btarunr
May I ask something about the choice of games by TPU?
So I checked the "average gaming" diff between the VII and the 2080 on TPU and computerbase.
TPU states a nearly 20% diff; computerbase states half of that.
Oh well, I thought, different games, different results.

But then somebody did a 35-game comparison, and the results match computerbase's results, but not TPU's.

35 is quite a list. Is it time, perhaps, to re-think the choice of games to test?


Ibotibo01 said:

Real Benchmarks
Of a different set of games.
Nice try.
Posted on Reply
#48
EarthDog
medi01 said:

Is it time, perhaps, to re-think the choice of games to test?
That time has gone... there was even a thread on it a week or so back from wizard. ;)
Posted on Reply
#49
Mats
Radeon DeoxyriboNucleic Acid?

What a nice name. :)
Posted on Reply