
AMD Announces Radeon RX 5700 Based on Navi: RDNA, 7nm, PCIe Gen4, GDDR6

bug

Joined
May 22, 2015
Messages
6,805 (4.08/day)
Processor Intel i5-6600k (AMD Ryzen5 3600 in a box, waiting for a mobo)
Motherboard ASRock Z170 Extreme7+
Cooling Arctic Cooling Freezer i11
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V (@3200)
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 3TB Seagate
Display(s) HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Depends on your benchmark. If the Vega 56 is the starting point, it would bring AMD to 99% of the efficiency of the RTX 2070: 66 × 1.5 = 99. On the other hand, if the V64 is the benchmark, that's just 84%.

AMD is almost half as efficient as Nvidia today. +50% will not close that gap.
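A back-of-envelope sketch of the percentages being traded here (the 66%/56% baselines are the relative perf/W figures for Vega 56/64 with the RTX 2070 pegged at 100%, as quoted in the posts; `projected_efficiency` is just illustrative arithmetic, not anything from AMD):

```python
# Hedged sanity check of the perf/W argument above.
# Baselines: relative efficiency vs. an RTX 2070 at 100%.
# The 1.5x factor is AMD's claimed RDNA perf/W uplift over GCN.
def projected_efficiency(baseline_pct: float, uplift: float = 1.5) -> float:
    """Relative efficiency after applying a perf/W uplift."""
    return baseline_pct * uplift

print(projected_efficiency(66))  # Vega 56 baseline: 99.0, near parity
print(projected_efficiency(56))  # Vega 64 baseline: 84.0
```

The whole disagreement in this thread reduces to which baseline you multiply by 1.5.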
 
Joined
May 2, 2017
Messages
1,639 (1.72/day)
Processor AMD Ryzen 5 1600X
Motherboard Biostar X370GTN
Cooling Custom CPU+GPU water loop
Memory 16GB G.Skill TridentZ DDR4-3200 C16
Video Card(s) AMD R9 Fury X
Storage 500GB 960 Evo (OS ++), 500GB 850 Evo (Games)
Display(s) Dell U2711
Case NZXT H200i
Power Supply EVGA Supernova G2 750W
Mouse Logitech G602
Keyboard Lenovo Compact Keyboard with Trackpoint
Software Windows 10 Pro

AMD is almost half as efficient as Nvidia today. +50% will not close that gap.
My numbers were from the 2070 review. The 1660 is an odd comparison for a card that's meant to compete with much more powerful cards.
 

bug

Joined
May 2, 2017
25% IPC and 50% perf/watt is probably in the best-case Strange Brigade scenario versus the worst-case Vega scenario.
That sentence makes no sense unless you're implying that they're comparing numbers from different benchmarks, which ... well, would be bonkers. Vega (up until now) is no worst-case efficiency scenario for AMD - it's entirely on par with Polaris, if not a tad better.

Also, the other twist here is the shader itself. Sure, it may get a lot faster, but if you get a lower count of them, all you really have is some reshuffling that leads to no performance gain. Turing is a good example of that: perf per shader is up, but you get fewer shaders, and the end result is that, for example, a TU106 with 2304 shaders ends up alongside a GP104 that rocks 2560 shaders. It gets better: if you then defend your perf/watt figure by saying 'perf/watt per shader', it's not all that hard after all.
But you're ignoring market segmentation and product pricing here. Fewer shaders with more performance/watt per shader means cheaper dies and cheaper cards at lower power and equivalent performance, or higher performance at equivalent power. Overall, Turing gives you a significant increase in shaders per product segment - they just cranked the pricing up to 11 to match, sadly.
 
Joined
Mar 10, 2014
Messages
1,691 (0.80/day)
Didn't you know? That's now called "async compute". ;)

TU concurrent int & fp is more flexible than just 32bit data types. Half floats & lower precision int ops can also be packed. Conceptually works well with VRS.
Well, kind of true: async compute is the capability of using the graphics queue and a compute queue at the same time. It really does not matter what precision we are talking about.
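To make the distinction concrete, a toy Python sketch, with threads standing in for hardware queues (purely an analogy, nothing here reflects a real GPU API): the point is that "async compute" is about a graphics queue and a compute queue being drained concurrently, and the jobs carry no notion of precision at all.

```python
import queue
import threading

# Two independent work queues, drained concurrently -- a loose analogy
# for a GPU's graphics queue and compute queue. Note the jobs are just
# opaque labels: concurrency has nothing to do with data precision.
graphics_q = queue.Queue()
compute_q = queue.Queue()
done = []

def worker(name, q):
    while True:
        job = q.get()
        if job is None:          # sentinel: this queue is drained
            break
        done.append((name, job))

for i in range(3):
    graphics_q.put(f"draw-{i}")
    compute_q.put(f"dispatch-{i}")
graphics_q.put(None)
compute_q.put(None)

t1 = threading.Thread(target=worker, args=("gfx", graphics_q))
t2 = threading.Thread(target=worker, args=("comp", compute_q))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(done))  # 6 jobs completed, interleaved from both queues
```

Concurrent int/fp execution inside one shader pipeline (the Turing feature being discussed) is a separate mechanism from this queue-level concurrency.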
 
Joined
Feb 3, 2017
Messages
1,886 (1.81/day)
Processor i5-8400
Motherboard ASUS ROG STRIX Z370-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-3200 CL16
Video Card(s) Gainward GeForce RTX 2080 Phoenix
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Logitech G700
Keyboard Corsair K60
25% IPC and 50% perf/watt is probably in the best-case Strange Brigade scenario versus the worst-case Vega scenario.
Perf/clock is 30 games at 4K Ultra settings with 4xAA (geomean?).
Perf/watt is Division 2 at 1440p Ultra settings.
AMD unveiled RDNA, the next foundational gaming architecture that was designed to drive the future of PC gaming, console, and cloud for years to come. With a new compute unit [10] design, RDNA is expected to deliver incredible performance, power and memory efficiency in a smaller package compared to the previous generation Graphics Core Next (GCN) architecture. It is projected to provide up to 1.25X higher performance-per-clock [11] and up to 1.5X higher performance-per-watt over GCN[12], enabling better gaming performance at lower power and reduced latency.
...
10. AMD APUs and GPUs based on the Graphics Core Next and RDNA architectures contain GPU Cores comprised of compute units, which are defined as 64 shaders (or stream processors) working together. GD-142
11. Testing done by AMD performance labs 5/23/19, showing a geomean of 1.25x per/clock across 30 different games @ 4K Ultra, 4xAA settings. Performance may vary based on use of latest drivers. RX-327
12. Testing done by AMD performance labs 5/23/19, using the Division 2 @ 25x14 Ultra settings. Performance may vary based on use of latest drivers. RX-325
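Footnote 11 above aggregates 30 per-game ratios with a geometric mean ("geomean"). A sketch of what that aggregation looks like, with made-up per-game ratios, since AMD has not published the individual numbers:

```python
import math

def geomean(ratios):
    """Geometric mean of per-game speedup ratios, the aggregation
    described in AMD's footnote 11."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical per-game perf/clock ratios -- illustrative only.
sample = [1.10, 1.25, 1.40, 1.20, 1.31]
print(round(geomean(sample), 2))
```

A geomean is the standard choice for averaging ratios: one outlier title (a Strange Brigade-style best case) pulls it up far less than it would an arithmetic mean.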
 
Joined
May 2, 2017
I wasn't looking at a specific card, just at numbers put out by Nvidia vs AMD.
If so, then my numbers are just as valid as yours. That's the danger of dealing with relative percentages - you can get big changes when the underlying numbers change just a little. I have no doubt AMD wants to present themselves in as positive a light as possible, but you seem to be going the diametrically opposite route.
 
Joined
Jul 9, 2015
Messages
1,976 (1.22/day)
System Name My all round PC
Processor i5 750
Motherboard ASUS P7P55D-E
Memory 8GB
Video Card(s) Sapphire 380 OC... sold, waiting for Navi
Storage 256GB Samsung SSD + 2Tb + 1.5Tb
Display(s) Samsung 40" A650 TV
Case Thermaltake Chaser mk-I Tower
Power Supply 425w Enermax MODU 82+
Software Windows 10
+50% will not close that gap.
Simply undervolting the VII, without losing any perf, beats a bunch of NVDA cards, including the 2080:



there is a gap, but it's smaller than one thinks (especially when checking for it on sites favoring green games, like TPU does).

That time has gone... there was even a thread on it a week or so back from the wizard.
 
Joined
Mar 18, 2008
Messages
4,719 (1.10/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA 2080Ti
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) Acer K272HUL, HTC Vive
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
Software Windows 10 Professional/Linux Mint
Truly..who cares??
A lot of people, well, except AMD fanboiz.

When you are late to the party, better bring more stuff. If the RX 5700 matches the RTX line in performance, it better be priced well, otherwise the lacking feature set will hurt them in the eyes of the general public.
 
Joined
Feb 3, 2017
4K Ultra with 4xAA on a 14nm Vega-class(?) GPU - wonder what kind of FPS numbers they are getting...
"Previous generation GCN" might not even be Vega considering this will be successor to Polaris :)
 

M2B

Joined
Jun 2, 2017
Messages
201 (0.22/day)
Location
Iran
Processor Intel Core i5-8600K @4.9GHz
Motherboard MSI Z370 Gaming Pro Carbon
Cooling Cooler Master MasterLiquid ML240L RGB
Memory XPG 8GBx2 - 3200MHz CL16
Video Card(s) Asus Strix GTX 1080 OC Edition 8G 11Gbps
Storage 2x Samsung 850 EVO 1TB
Display(s) BenQ PD3200U
Case Thermaltake View 71 Tempered Glass RGB Edition
Power Supply EVGA 650 P2
The Radeon VII is using HBM2, which is much more efficient than the GDDR6 memory on Nvidia cards (it uses around 30-35W less power, if I'm not mistaken).
You're comparing graphics cards to graphics cards, not one GPU with another.
 
Joined
Feb 3, 2017
Simply undervolting the VII, without losing any perf, beats a bunch of NVDA cards, including the 2080:
*YMMV
Computerbase got one of the good ones, it would seem. There have been far worse examples in both review sites and retail.
 

bug

If so, then my numbers are just as valid as yours. That's the danger of dealing with relative percentages - you can get big changes when the underlying numbers change just a little. I have no doubt AMD wants to present themselves in as positive a light as possible, but you seem to be going the diametrically opposite route.
I'm not sure how you read that graph, but this is how I do it:
1. Half of Nvidia's cards are in the 90-100% relative efficiency range.
2. AMD cards are generally at 50% or less relative efficiency. Vega 56 does better, at 60%. Radeon VII does even better at 68%, but that's already on 7nm.

If I take the best-case scenario, Vega 56, and add 50% to that, it still puts AMD at 90% of the most efficient Nvidia card. And Nvidia is still on 12nm.
 
Joined
Mar 16, 2017
Messages
709 (0.71/day)
Location
Tanagra
Processor AMD 2700X
Motherboard Gigabyte B450M DS3H
Cooling Wraith Prism
Memory 16GB DDR4 3000
Video Card(s) Sapphire Pulse RX 570 4GB
Storage Inland 512GB NVMe
Display(s) LG 27UL500-W
Case NZXT H510
Audio Device(s) My ears
Power Supply EVGA 500W
Software Windows 10
I wonder how much PCIe 4.0 is at play here, and is the RX 5700 the best they have, or is it the most efficient? It seems like there could be a 5800, but then why wouldn't they lead off with that?
 
Joined
Mar 21, 2016
Messages
268 (0.20/day)
Strange Brigade for comparison is meaningless. The game is known to lean anywhere from 10-20% towards AMD GPUs. We will have to wait for July to see how they really stack up.
Both 1.25x "IPC" as well as 1.5x power efficiency sound really good, that should bring Navi up to par with Turing, hopefully a little ahead considering it is on 7nm.
Well, if it's comparable to the RTX 2070 at a slightly lower price point, that's not bad. The real question is how Navi/RDNA sets up with Zen 2/X570 and CrossFire. If a more cut-down, cheaper version of the RX 5700 in CrossFire is a lot more cost-effective than an RTX 2080, for example, that would shake things up. I'd like to hope that most of the negative aspects of CrossFire are mostly eliminated with PCIe 4.0 for a two-card or even three-card setup, but who knows. I'd certainly hope so. Time will tell how these things pan out.

I wonder how much PCIe 4.0 is at play here, and is the RX 5700 the best they have, or is it the most efficient? It seems like there could be a 5800, but then why wouldn't they lead off with that?
Perhaps it needs more binning; 7nm is still relatively new, give it some time. As for why they wouldn't lead with it: perhaps the TDP is a bit steep once you push the frequency higher than they've already set it.
 
Joined
Feb 3, 2017
We do not know the price point. Leaks/rumors put it at $499.

Why would CrossFire suddenly be better than it has been so far? Bandwidth is not the main problem, and even then the increase from PCIe 3.0 to 4.0 would not alleviate the need for communication that much. On the other side, bidirectional 100 GB/s did not really make that noticeable a difference either.
 
Joined
May 15, 2014
Messages
107 (0.05/day)
@btarunr
May I ask something about the choice of games by TPU?
So I check "average gaming" diff between VII and 2080 on TPU and computerbase.
TPU states nearly 20% diff, computerbase states it's half of that.
Oh well, I think, different games, different results.

But then somebody does 35 games comparison:
It's a simple hierarchy. Top dozen or so tend to favor AMD, bottom dozen favor Nvidia. Pick the games to get the result you want. Test setup/procedure/settings/areas tested can make a difference. Of course, TU104 tends to be more effective than Vega20 in the chart below.




Well, kind of true: async compute is the capability of using the graphics queue and a compute queue at the same time.
You're being generous. :) Your definition is fine, of course (or multiple queues). Not really directed at you anyway - I kept seeing it in other threads where concurrent int/fp = async compute.

It really does not matter what precision we are talking about.
Exactly correct, nor is it defined by the ability to pack int/fp in the graphics pipeline.

There's another interesting "fine wine" effect for Vega. With Win10 (1803, IIRC) MS started promoting DX 11.0 games on GCN to DX12 feature level 11.1, which enabled the HW schedulers, so they should perform better than at release under Win7/8.
 
Joined
Jul 9, 2015
It's a simple hierarchy. Top dozen or so tend to favor AMD, bottom dozen favor Nvidia. Pick the games to get the result you want.
Thanks for linking a chart showing a perf difference TWO TIMES SMALLER than TPU's.
Somehow computerbase managed to pick a more balanced, smaller set of games that matches the 35-ish-game test results.
 
Joined
Sep 17, 2014
Messages
10,453 (5.46/day)
Location
Mars
Processor i7 8700k 4.7Ghz @ 1.26v
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) MSI GTX 1080 Gaming X @ 2100/5500
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Eizo Foris FG2421
Case Fractal Design Define C TG
Power Supply EVGA G2 750w
Mouse Logitech G502 Protheus Spectrum
Keyboard Sharkoon MK80 (Brown)
Software W10 x64
But you're ignoring market segmentation and product pricing here. Fewer shaders with more performance/watt per shader means cheaper dies and cheaper cards at lower power and equivalent performance, or higher performance at equivalent power. Overall, Turing gives you a significant increase in shaders per product segment - they just cranked the pricing up to 11 to match, sadly.
Yes... and AMD is going to follow suit, so the net gain is zero for a consumer.

Perf/clock is 30 games at 4K Ultra settings with 4xAA (geomean?).
Perf/watt is Division 2 at 1440p Ultra settings.
That's nice but this is still AMD's little black box we're looking at, and based on history I'm using truckloads of salt with that. Especially when it comes to their GPUs. Still... there is hope, then, I guess :)

Thanks for linking a chart showing a perf difference TWO TIMES SMALLER than TPU's.
Somehow computerbase managed to pick a more balanced, smaller set of games that matches the 35-ish-game test results.
The relative number of games optimized for Nvidia cards is way higher, so any 'representative' benchmark suite (as in, representative of the engines and games on the market) is always going to favor Nvidia. But that still provides the most informative review/result, because gamers don't buy games based on the brand of their GPU.

What it really means, and what you're actually saying, is: AMD should be optimizing a far wider range of games instead of focusing on the handful that they get to run well. That is why AMD lost the DX11 race as well - too much looking at the horizon and how new APIs would save their ass, while Nvidia fine-tuned around DX11.
 
Last edited:
Joined
Mar 21, 2016
Latency decreases since you can push twice as much bandwidth in each direction. AMD themselves said it: reduced latency, higher bandwidth, lower power. Literally all of those things would benefit CrossFire, and a cut-down version might even improve things if they can improve overall efficiency in the process while salvaging imperfect dies by disabling parts of them. I don't know why CrossFire wouldn't be improved a bit, but how much of an improvement is tough to say definitively. I would think micro-stutter would be lessened quite a bit for a two-card setup, and even a three-card setup, though less dramatically in the latter case, while a quad-card setup would "in theory" be identical to a two-card one, for PCIe 4.0 at least.
 
Joined
Jul 9, 2015
...lacking feature set will hurt them...
Such as G-Sync
Oh, hold on...

Nobody cares about yet another NVDA "only me" solution; it needs major support across the board to amount to anything beyond gimmicks implemented in a handful of games because NVDA paid for it.

At this point it is obvious whose chips are going to rock the next gen of major consoles (historically, "it's not about graphics" Nintendo opting for NVDA's dead mobile platform chip is almost an insult in this context, with even multiplatform games mostly avoiding porting to it).
 
Joined
Feb 3, 2017
@InVasMani latency and bandwidth are not necessarily tied together.
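A toy transfer-time model makes that point concrete. Here the 16/32 GB/s figures are the nominal PCIe 3.0/4.0 x16 throughputs, and the fixed 1 µs latency is an illustrative placeholder, not a measured value:

```python
# Transfer time = fixed latency + payload/bandwidth. Doubling link
# bandwidth (PCIe 3.0 -> 4.0) shrinks only the second term, so small,
# latency-bound exchanges barely improve.
def transfer_time_us(size_bytes: int, bw_gb_s: float, latency_us: float = 1.0) -> float:
    """Microseconds to move size_bytes over a link of bw_gb_s GB/s."""
    return latency_us + size_bytes / (bw_gb_s * 1e9) * 1e6

msg = 4 * 1024  # a small 4 KiB synchronization message
print(transfer_time_us(msg, 16.0))  # PCIe 3.0 x16: ~1.26 us
print(transfer_time_us(msg, 32.0))  # PCIe 4.0 x16: ~1.13 us
```

For this small message the payload term is only a quarter of the total, so doubling bandwidth buys roughly 10%; large frame-buffer copies, by contrast, would scale nearly 2x.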

Nobody cares about yet another NVDA "only me" solution; it needs major support across the board to amount to anything beyond gimmicks implemented in a handful of games because NVDA paid for it.
You mean something standard like, say... DXR?
 
Joined
Jul 9, 2015
You mean something standard like, say... DXR?
I remember that; my point still stands. (Remind me why it is a proprietary vendor extension in Vulkan.)
NVDA was cooking something for years, found a time when competition was absent at the highest end, and spilled the beans.
Intel/AMD would need to agree that the DXR approach is viable at all, or the best one, from their POV.

Crytek has shown one doesn't even need dedicated HW (20-24% of the Turing die) to do the RT gimmick:

 