
AMD Announces Radeon RX 5700 Based on Navi: RDNA, 7nm, PCIe Gen4, GDDR6

bug

Joined
May 22, 2015
Messages
13,210 (4.06/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Depends on your benchmark. If the Vega 56 is the starting point, a 50% uplift would bring AMD to 99% of the RTX 2070's efficiency: 66 × 1.5 = 99. On the other hand, if the Vega 64 is the baseline, that's just 84%.

AMD is almost half as efficient as Nvidia today. +50% will not close that gap.
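To make the arithmetic explicit - a minimal sketch, assuming the baselines are relative perf/W readings off TPU's RTX 2070 review chart normalized so the most efficient card sits at 100% (the 56% Vega 64 figure is inferred from the 84% result above):

```python
# Project relative efficiency after a claimed generational uplift.
# Baselines are approximate chart readings (most efficient card = 100%).
def projected(baseline_pct: float, uplift: float = 1.5) -> float:
    return baseline_pct * uplift

for card, base in {"Vega 56": 66, "Vega 64": 56}.items():
    print(f"{card}: {base}% x 1.5 = {projected(base):.0f}%")
# Vega 56: 66% x 1.5 = 99%  (roughly RTX 2070 parity)
# Vega 64: 56% x 1.5 = 84%  (a gap remains)
```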
 
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro

AMD is almost half as efficient as Nvidia today. +50% will not close that gap.
My numbers were from the 2070 review. The 1660 is an odd comparison for a card that's meant to compete with much more powerful cards.
 

Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
25% IPC and 50% perf/watt is probably in the best-case Strange Brigade scenario versus the worst-case Vega scenario.
That sentence makes no sense unless you're implying that they're comparing numbers from different benchmarks, which... well, would be bonkers. Vega (up until now) is no worst-case scenario for AMD efficiency - it's entirely on par with Polaris, if not a tad better.

Also, the other twist here is the shader itself. Sure, each one may get a lot faster, but if you get a lower count of them, all you really have is some reshuffling that leads to no performance gain. Turing is a good example of that: perf per shader is up, but you get fewer shaders, and the end result is that, for example, a TU106 with 2304 shaders ends up alongside a GP104 that rocks 2560 shaders. It gets better: if you then defend your perf/watt figure by saying 'perf/watt per shader', it's not all that hard after all.
But you're ignoring market segmentation and product pricing here. Fewer shaders with more performance per watt per shader means cheaper dies and cheaper cards at lower power and equivalent performance, or higher performance at equivalent power. Overall, Turing gives you a significant increase in shaders per product segment - they just cranked the pricing up to 11 to match, sadly.
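To put rough numbers on that exchange - a sketch, not measured data: the 2304/2560 shader counts are from the post above, while the ~11% per-shader uplift and equal clocks are illustrative assumptions chosen to show how a per-shader gain can be offset by a lower count:

```python
# Product-level throughput scales roughly with
# shader_count * perf_per_shader (clocks assumed equal here).
def relative_perf(shaders: int, per_shader_perf: float) -> float:
    return shaders * per_shader_perf

gp104 = relative_perf(2560, 1.00)  # GTX 1080-class baseline
tu106 = relative_perf(2304, 1.11)  # fewer shaders, each ~11% faster (assumed)
print(f"TU106 vs GP104: {tu106 / gp104:.2f}x")  # ~1.00x, i.e. a wash
```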
 
Joined
Mar 10, 2014
Messages
1,793 (0.49/day)
Didn't you know? That's now called "async compute". ;)

TU concurrent int & fp is more flexible than just 32bit data types. Half floats & lower precision int ops can also be packed. Conceptually works well with VRS.

Well, kind of true: async compute is the capability of using the graphics queue and a compute queue at the same time. It really does not matter what precision we are speaking of.
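As a loose CPU-side analogy of that definition - not GPU code, just a sketch showing two independent work queues being drained concurrently, where the precision of each job is deliberately irrelevant:

```python
import queue
import threading

# Async compute, per the definition above: a graphics queue and a
# compute queue execute at the same time. The data type of each job
# (fp32, fp16, int...) plays no role in the definition.
graphics_q = queue.Queue()
compute_q = queue.Queue()

def drain(name, q):
    while True:
        job = q.get()
        if job is None:  # sentinel: no more work
            break
        print(f"{name}: {job}")

for job in ("shadow pass (fp32)", "g-buffer (fp32)", None):
    graphics_q.put(job)
for job in ("particle sim (fp16)", "light culling (int32)", None):
    compute_q.put(job)

workers = [threading.Thread(target=drain, args=("graphics", graphics_q)),
           threading.Thread(target=drain, args=("compute", compute_q))]
for w in workers:
    w.start()
for w in workers:
    w.join()
```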
 
Joined
Feb 3, 2017
Messages
3,481 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
25% IPC and 50% perf/watt is probably in the best-case Strange Brigade scenario versus the worst-case Vega scenario.
Perf/clock is 30 games at 4K Ultra settings with 4xAA (geomean?).
Perf/watt is Division 2 at 1440p Ultra settings.
AMD unveiled RDNA, the next foundational gaming architecture that was designed to drive the future of PC gaming, console, and cloud for years to come. With a new compute unit [10] design, RDNA is expected to deliver incredible performance, power and memory efficiency in a smaller package compared to the previous generation Graphics Core Next (GCN) architecture. It is projected to provide up to 1.25X higher performance-per-clock [11] and up to 1.5X higher performance-per-watt over GCN[12], enabling better gaming performance at lower power and reduced latency.
...
10. AMD APUs and GPUs based on the Graphics Core Next and RDNA architectures contain GPU Cores comprised of compute units, which are defined as 64 shaders (or stream processors) working together. GD-142
11. Testing done by AMD performance labs 5/23/19, showing a geomean of 1.25x per/clock across 30 different games @ 4K Ultra, 4xAA settings. Performance may vary based on use of latest drivers. RX-327
12. Testing done by AMD performance labs 5/23/19, using the Division 2 @ 25x14 Ultra settings. Performance may vary based on use of latest drivers. RX-325
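For what "geomean" in footnote 11 means in practice - a short sketch with invented per-game ratios; the geometric mean is the usual way to average performance ratios because it treats the normalization direction symmetrically:

```python
from statistics import geometric_mean  # Python 3.8+

# Hypothetical per-game perf-per-clock ratios (RDNA vs. GCN); AMD's
# footnote describes the 1.25x figure as a geomean over 30 such games.
ratios = [1.31, 1.18, 1.27, 1.22, 1.35, 1.19]
print(f"geomean uplift: {geometric_mean(ratios):.2f}x")  # ~1.25x
```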
 
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I wasn't looking at a specific card, just at numbers put out by Nvidia vs AMD.
If so, then my numbers are just as valid as yours. That's the danger of dealing with relative percentages - you can get big changes when the underlying numbers change just a little. I have no doubt AMD wants to present themselves in as positive a light as possible, but you seem to be going the diametrically opposite route.
 
Joined
Jul 9, 2015
Messages
3,413 (1.06/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
+50% will not close that gap.
Simply undervolting VII without losing any perf beats a bunch of NVDA cards, including 2080:

[attached chart: relative perf/W comparison, undervolted Radeon VII vs. Nvidia cards]


There is a gap, but it's smaller than one thinks (especially when checking for it on sites favoring green games, like TPU does).

That time has gone... there was even a thread on it a week or so back from wizard.
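The arithmetic behind the undervolting claim - a sketch with illustrative wattages (exact figures vary per sample): perf/W is performance over board power, and an undervolt cuts the denominator at roughly constant performance:

```python
# Illustrative numbers only - sample-dependent, not measured here.
perf = 100.0             # normalized performance, unchanged by the undervolt
stock_w, uv_w = 295.0, 220.0
print(f"stock:       {perf / stock_w:.3f} perf/W")
print(f"undervolted: {perf / uv_w:.3f} perf/W "
      f"(+{(stock_w / uv_w - 1) * 100:.0f}%)")
```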

 
Joined
Mar 18, 2008
Messages
5,717 (0.97/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P
Truly..who cares??

A lot, well except AMD fanboiz.

When you are late to the party, you'd better bring more stuff. If the RX 5700 matches the RTX line in performance, it had better be priced well; otherwise the lacking feature set will hurt them in the eyes of the general public.
 
Joined
Feb 3, 2017
Messages
3,481 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
4K Ultra with 4xAA on a 14nm Vega-class GPU? I wonder what kind of FPS numbers they are getting...
"Previous generation GCN" might not even be Vega, considering this will be the successor to Polaris :)
 

M2B

Joined
Jun 2, 2017
Messages
284 (0.11/day)
Location
Iran
Processor Intel Core i5-8600K @4.9GHz
Motherboard MSI Z370 Gaming Pro Carbon
Cooling Cooler Master MasterLiquid ML240L RGB
Memory XPG 8GBx2 - 3200MHz CL16
Video Card(s) Asus Strix GTX 1080 OC Edition 8G 11Gbps
Storage 2x Samsung 850 EVO 1TB
Display(s) BenQ PD3200U
Case Thermaltake View 71 Tempered Glass RGB Edition
Power Supply EVGA 650 P2
The Radeon VII is using HBM2, which is much more efficient than the GDDR6 memory on Nvidia cards (it uses around 30-35W less power, if I'm not mistaken).
You're comparing graphics cards to graphics cards, not one GPU with another.
 
Joined
Feb 3, 2017
Messages
3,481 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
Simply undervolting VII without losing any perf beats a bunch of NVDA cards, including 2080:
*YMMV
Computerbase got one of the good ones, it would seem. There have been far worse examples in both review sites and retail.
 

bug

Joined
May 22, 2015
Messages
13,210 (4.06/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
If so, then my numbers are just as valid as yours. That's the danger of dealing with relative percentages - you can get big changes when the underlying numbers change just a little. I have no doubt AMD wants to present themselves in as positive a light as possible, but you seem to be going the diametrically opposite route.
I'm not sure how you read that graph, but this is how I do it:
1. Half of Nvidia's cards are in the 90-100% relative efficiency range.
2. AMD cards are generally at 50% or less relative efficiency. Vega 56 does better, at 60%. Radeon VII does even better at 68%, but that's already on 7nm.

If I take the best-case scenario, Vega 56, and add 50% to that, it still puts AMD at 90% of the most efficient Nvidia card. And Nvidia is still on 12nm.
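Running that reading backwards makes the same point - a quick check, using the chart readings above, of what baseline a +50% uplift would need in order to fully close the gap:

```python
# To reach 100% relative efficiency after a 1.5x uplift, the baseline
# must be at least 100 / 1.5 ~= 66.7%.
required = 100 / 1.5
for card, base in {"Vega 56": 60, "Radeon VII": 68}.items():
    verdict = "closes the gap" if base >= required else "falls short"
    print(f"{card}: {base}% baseline, needs {required:.1f}% -> {verdict}")
# Only Radeon VII clears the bar - and it is already on 7nm.
```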
 
Joined
Mar 16, 2017
Messages
1,662 (0.64/day)
Location
Tanagra
System Name Budget Box
Processor Xeon E5-2667v2
Motherboard ASUS P9X79 Pro
Cooling Some cheap tower cooler, I dunno
Memory 32GB 1866-DDR3 ECC
Video Card(s) XFX RX 5600XT
Storage WD NVME 1GB
Display(s) ASUS Pro Art 27"
Case Antec P7 Neo
I wonder how much PCIe 4.0 is at play here, and whether the RX 5700 is the best they have, or just the most efficient. It seems like there could be a 5800, but then why wouldn't they lead off with that?
 
Joined
Mar 21, 2016
Messages
2,197 (0.74/day)
Strange Brigade for comparison is meaningless. The game is known to lean anywhere from 10-20% towards AMD GPUs. We will have to wait for July to see how they really stack up.
Both 1.25x "IPC" as well as 1.5x power efficiency sound really good, that should bring Navi up to par with Turing, hopefully a little ahead considering it is on 7nm.
Well, if it's comparable to the RTX 2070 at a somewhat lower price point, that's not bad. The real question is how Navi/RDNA handles Crossfire on Zen 2/X570 setups. If a cut-down, cheaper version of the RX 5700 in Crossfire is a lot more cost-effective than an RTX 2080, for example, that would shake things up. I'd like to hope that most of the negative aspects of Crossfire are eliminated with PCIe 4.0 for a two- or even three-card setup, but who knows. I'd certainly hope so. Time will tell how these things pan out.

I wonder how much PCIe 4.0 is at play here, and whether the RX 5700 is the best they have, or just the most efficient. It seems like there could be a 5800, but then why wouldn't they lead off with that?
Perhaps it needs more binning; 7nm is still relatively new, give it some time. As for why they wouldn't lead with it: perhaps the TDP gets a bit steep once you push the frequency higher than where they've already set it.
 
Joined
Feb 3, 2017
Messages
3,481 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
We do not know the price point. Leaks/rumors put it at $499.

Why would Crossfire suddenly be better than it has been so far? Bandwidth is not the main problem, and even then, the increase from PCIe 3.0 to 4.0 would not alleviate the need for communication that much. On the other hand, bidirectional 100GB/s did not really make that noticeable a difference either.
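A back-of-envelope model of why doubled link bandwidth helps less than it sounds for inter-GPU chatter: transfer time = fixed latency + size / bandwidth. The x16 bandwidths below are the nominal per-direction rates; the 1 us fixed latency is an illustrative placeholder, not a measured value:

```python
LATENCY_S = 1e-6  # assumed fixed per-transfer link/protocol latency

def transfer_us(size_bytes: float, bw_bytes_per_s: float) -> float:
    return (LATENCY_S + size_bytes / bw_bytes_per_s) * 1e6

PCIE3_X16 = 15.75e9  # ~15.75 GB/s per direction
PCIE4_X16 = 31.5e9   # ~31.5 GB/s per direction

for size in (4 * 1024, 8 * 1024 * 1024):  # small sync vs. large copy
    t3 = transfer_us(size, PCIE3_X16)
    t4 = transfer_us(size, PCIE4_X16)
    print(f"{size // 1024:>5} KiB: PCIe3 {t3:7.1f} us, "
          f"PCIe4 {t4:7.1f} us, speedup {t3 / t4:.2f}x")
# Small, latency-bound transfers see well under 2x; only bulk copies
# approach the full bandwidth doubling.
```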
 
Joined
May 15, 2014
Messages
235 (0.06/day)
@btarunr
May I ask something about the choice of games by TPU?
So I check "average gaming" diff between VII and 2080 on TPU and computerbase.
TPU states nearly 20% diff, computerbase states it's half of that.
Oh well, I think, different games, different results.

But then somebody does a 35-game comparison:

It's a simple hierarchy. Top dozen or so tend to favor AMD, bottom dozen favor Nvidia. Pick the games to get the result you want. Test setup/procedure/settings/areas tested can make a difference. Of course, TU104 tends to be more effective than Vega20 in the chart below.

[chart: 35-game Radeon VII vs. RTX 2080 performance comparison, sorted from AMD-leaning to Nvidia-leaning titles]

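To make "pick the games to get the result you want" concrete - a toy sketch with invented per-game ratios (>1.0 favoring AMD), ordered the way the hierarchy above describes:

```python
from statistics import geometric_mean

# Invented Radeon VII vs. RTX 2080 per-game performance ratios,
# sorted from AMD-leaning titles down to Nvidia-leaning ones.
ratios = [1.12, 1.08, 1.05, 1.01, 0.97, 0.94, 0.90, 0.85, 0.80, 0.76]

print(f"AMD-leaning half:    {geometric_mean(ratios[:5]):.2f}x")
print(f"Nvidia-leaning half: {geometric_mean(ratios[5:]):.2f}x")
print(f"full set:            {geometric_mean(ratios):.2f}x")
# Same cards, same data - the verdict depends on which rows you keep.
```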
Well, kind of true: async compute is the capability of using the graphics queue and a compute queue at the same time.

You're being generous. :) Your definition is fine, of course (or multiple queues). It wasn't really directed at you anyway; I kept seeing it in other threads where concurrent int/fp = async compute.

It really does not matter what precision we are speaking of.

Exactly correct, nor is it defined by the ability to pack int/fp in the graphics pipeline.

There's another interesting "fine wine" effect for Vega: with Win10 (1803, IIRC) MS started promoting DX11.0 games on GCN to DX12 feature level 11_1, which enabled the HW schedulers, so they should perform better than they did at release under Win7/8.
 
Joined
Jul 9, 2015
Messages
3,413 (1.06/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
It's a simple hierarchy. Top dozen or so tend to favor AMD, bottom dozen favor Nvidia. Pick the games to get the result you want.
Thanks for linking a chart showing a perf difference TWO TIMES SMALLER than TPU's.
Somehow computerbase managed to pick a smaller, more balanced set of games that matches the 35-game test results.
 
Joined
Sep 17, 2014
Messages
20,902 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
But you're ignoring market segmentation and product pricing here. Fewer shaders with more performance per watt per shader means cheaper dies and cheaper cards at lower power and equivalent performance, or higher performance at equivalent power. Overall, Turing gives you a significant increase in shaders per product segment - they just cranked the pricing up to 11 to match, sadly.

Yes... and AMD is going to follow suit, so the net gain is zero for a consumer.

Perf/clock is 30 games at 4K Ultra settings with 4xAA (geomean?).
Perf/watt is Division 2 at 1440p Ultra settings.

That's nice, but this is still AMD's little black box we're looking at, and based on history I'm taking it with truckloads of salt, especially when it comes to their GPUs. Still... there is hope, then, I guess :)

Thanks for linking a chart showing a perf difference TWO TIMES SMALLER than TPU's.
Somehow computerbase managed to pick a smaller, more balanced set of games that matches the 35-game test results.

The relative number of games optimized for Nvidia cards is way higher, so any 'representative' benchmark suite - as in, representative of the engines and games on the market - is always going to favor Nvidia. But that still provides the most informative review result, because gamers don't buy games based on the brand of their GPU.

What it really means, and what you're actually saying, is: AMD should be optimizing a far wider range of games instead of focusing on the handful that they get to run well. That is why AMD lost the DX11 race as well - too much looking at the horizon and at how new APIs would save their ass, while Nvidia fine-tuned around DX11.
 
Joined
Mar 21, 2016
Messages
2,197 (0.74/day)
Latency decreases since you can push twice as much bandwidth in each direction. AMD themselves said it: reduced latency, higher bandwidth, lower power. Literally all of those things would benefit Crossfire, and a cut-down version might even improve on that if they can improve overall efficiency while salvaging imperfect dies by disabling parts of them. I don't know why Crossfire wouldn't be improved a bit, but how much of an improvement is tough to say definitively. I would think the micro-stutter would be lessened quite a bit for a two-card setup, and even a three-card setup, though less dramatically in the latter case, while a quad-card setup would "in theory" be identical to a two-card one, for PCIe 4.0 at least.
 
Joined
Jul 9, 2015
Messages
3,413 (1.06/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
...lacking feature set will hurt them...
Such as G-Sync
Oh, hold on...

Nobody cares about yet another NVDA "only me" solution; it needs major support across the board to amount to anything but gimmicks developed in a handful of games just because NVDA paid them for it.

At this point it is obvious whose chips are going to rock the next gen of major consoles (historically, "it's not about graphics" Nintendo opting for NVDA's dead mobile platform chip is almost an insult in this context, with even multiplat games mostly avoiding porting to it).
 
Joined
Feb 3, 2017
Messages
3,481 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
@InVasMani latency and bandwidth are not necessarily tied together.

Nobody cares about yet another NVDA "only me" solution; it needs major support across the board to amount to anything but gimmicks developed in a handful of games just because NVDA paid them for it.
You mean something standard like, say... DXR?
 
Joined
Jul 9, 2015
Messages
3,413 (1.06/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
You mean something standard like, say... DXR?
I remember that; my point still stands. (Remind me why it is a proprietary vendor extension in Vulkan.)
NVDA was cooking something for years, found a window when competition was absent at the highest end, and spilled the beans.
Intel/AMD would need to agree that the DXR approach is viable at all, or the best one, from their POV.

Crytek has shown one doesn't even need dedicated HW (20-24% of the Turing die) to do the RT gimmick:

 