
NVIDIA Underestimated AMD's Efficiency Gains from Tapping into TSMC 7nm: Report

Joined
Jun 3, 2010
Messages
2,540 (0.50/day)
AMD doubled the schedulers, which fixes RDNA performance: RPM used to be required for load-balanced performance, whereas now it is a side bonus.
Jumping between warps so efficiently was the underlying Pascal effect. I don't know what is new about Turing.
 
Joined
Aug 17, 2015
Messages
45 (0.01/day)
Location
Greece
System Name Ryzen
Processor AMD RYZEN 5 3600 Six-Core Processor
Motherboard GIGABYTE B450 Aorus Pro
Cooling CoolerMaster MasterAir MA410P
Memory 2 x 8,00GB G Skill F4-3000C16S
Video Card(s) Sapphire RX 5700 pulse
Storage 512 GB Adata XPG Gammix S11 Pro (NVMe), 240 GB Intenso Sata III (SSD), 931 GB Western Digital (HDD)
Display(s) AOC G2590PX, Samsung Syncmaster 2233 RZ
Case Deepcool Matrexx 70
Power Supply Turbox 735W Power Series Modular
Mouse GENESIS NMG-0500 GX68 PROFESSIONAL LASER 3400DPI GAMING MOUSE
Keyboard Trust GXT 280
Software Windows 10 Pro 64-bit
I don't know about power efficiency, but I am quite sure that both the RX 5700 and 5700 XT are way too good for their price. AMD lost the battle two years ago because both the Vega 56 and Vega 64 were strong but overpriced and way too power hungry. The GTX 1070 Ti ROG Strix started at 570 €, the Vega 56 ROG Strix at 697 € and the Vega 64 at 780 €. There was literally no reason to go for Vega unless you really wanted to support AMD.
 

ARF

Joined
Jan 28, 2020
Messages
3,892 (2.56/day)
Location
Ex-usa
Set the AMD settings to something other than Ultra High, for example High or Very High, and compare with Nvidia's result. There is your missing performance.
AMD's architectures are not worse; they simply don't use compression techniques that are as aggressive.

Also: underclock and undervolt Navi 10. In that case higher performance is possible.

 
Joined
Sep 26, 2014
Messages
68 (0.02/day)
Location
sydney australia
While it is true that AMD is on a newer 7 nm node, which will surely give them an edge, I feel Nvidia was only expecting marginal improvements in power efficiency jumping from Vega to RDNA. AMD's first 7 nm GPU, the Radeon VII, performs better than the older Vega GPUs, but is clearly still not as power efficient or as fast as Pascal. Even before AMD's move to 7 nm, I am sure Nvidia's decision to go from 16 nm to 12 nm for Turing was also driven by the fact that they did not expect AMD to catch up. In my opinion, it was a missed opportunity for them to widen the gap. Now that RDNA 2 and Ampere GPUs are slated for release this year, it will be a very interesting year to see if AMD can cause a stir again like they did in the CPU space.

That does ring true for Nvidia's thinking: more progress than needed to compete is money wasted.

They may have chosen a bad time to play hardball with a TSMC facing booming demand.

They may have taken Samsung outside its CISC-fabbing comfort zone.
 
Joined
Sep 15, 2011
Messages
6,457 (1.41/day)
Processor Intel® Core™ i7-13700K
Motherboard Gigabyte Z790 Aorus Elite AX
Cooling Noctua NH-D15
Memory 32GB(2x16) DDR5@6600MHz G-Skill Trident Z5
Video Card(s) ZOTAC GAMING GeForce RTX 3080 AMP Holo
Storage 2TB SK Platinum P41 SSD + 4TB SanDisk Ultra SSD + 500GB Samsung 840 EVO SSD
Display(s) Acer Predator X34 3440x1440@100Hz G-Sync
Case NZXT PHANTOM410-BK
Audio Device(s) Creative X-Fi Titanium PCIe
Power Supply Corsair 850W
Mouse Logitech Hero G502 SE
Software Windows 11 Pro - 64bit
Benchmark Scores 30FPS in NFS:Rivals
Underestimated AMD... yeah. Because AMD has at least 3 or 4 GPUs that can beat the Titan RTX in gaming... /sarcasm
 
Joined
Mar 24, 2012
Messages
528 (0.12/day)
While it is true that AMD is on a newer 7 nm node, which will surely give them an edge, I feel Nvidia was only expecting marginal improvements in power efficiency jumping from Vega to RDNA. AMD's first 7 nm GPU, the Radeon VII, performs better than the older Vega GPUs, but is clearly still not as power efficient or as fast as Pascal. Even before AMD's move to 7 nm, I am sure Nvidia's decision to go from 16 nm to 12 nm for Turing was also driven by the fact that they did not expect AMD to catch up. In my opinion, it was a missed opportunity for them to widen the gap. Now that RDNA 2 and Ampere GPUs are slated for release this year, it will be a very interesting year to see if AMD can cause a stir again like they did in the CPU space.

To be honest, I don't think that's necessarily the case with Nvidia. The decision not to go 7 nm with Turing was most likely because of 7 nm die-size limitations. Just look at AMD: they released the RX 5700 in the middle of last year, so a quarter after that, or at least by the end of last year, they should have been more than capable of releasing an even faster RDNA 1 GPU with a die size in the 300 mm² to 400 mm² range. Had they released such a GPU, they would probably have had something that could compete evenly with Nvidia's RTX 2080 Ti, and could then have released RDNA 2 by the end of this year. So why did AMD not release anything faster before RDNA 2? Nvidia, for their part, would most likely have jumped to 7 nm much earlier if it could really give them the benefits they want.
 

ARF

Joined
Jan 28, 2020
Messages
3,892 (2.56/day)
Location
Ex-usa
To be honest, I don't think that's necessarily the case with Nvidia. The decision not to go 7 nm with Turing was most likely because of 7 nm die-size limitations. Just look at AMD: they released the RX 5700 in the middle of last year, so a quarter after that, or at least by the end of last year, they should have been more than capable of releasing an even faster RDNA 1 GPU with a die size in the 300 mm² to 400 mm² range. Had they released such a GPU, they would probably have had something that could compete evenly with Nvidia's RTX 2080 Ti, and could then have released RDNA 2 by the end of this year. So why did AMD not release anything faster before RDNA 2? Nvidia, for their part, would most likely have jumped to 7 nm much earlier if it could really give them the benefits they want.

What die-size limit?


 
Joined
Feb 23, 2012
Messages
37 (0.01/day)
Navi 1 is way past its efficiency point; just look at the voltages. Turing runs at about 1000 mV compared to AMD's 1200 mV on the RX 5700 XT, and remember that voltages should go down as you progress to newer nodes.

Nvidia is likely not comparing the RX 5700 XT as configured to make those assumptions; they know how to design a chip, so they know what efficiency Navi is capable of.

Just look at the consoles: the Xbox Series X will likely not consume more than 250-300 W total, CPU, motherboard and all included, while the RX 5700 XT is 225 W all alone and is way weaker than the Series X.
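The voltage gap matters because dynamic switching power scales roughly with frequency times voltage squared. A back-of-the-envelope sketch (a first-order CMOS model, not a measurement of either card; the 1.2 V and 1.0 V figures are the approximate values quoted above, and the function name is invented for this sketch):

```python
def dynamic_power_ratio(v_new, v_ref, f_new=1.0, f_ref=1.0):
    """Relative dynamic switching power: (f_new/f_ref) * (v_new/v_ref)^2.

    First-order model (P ~ C * f * V^2); ignores leakage and any
    capacitance differences between the two designs.
    """
    return (f_new / f_ref) * (v_new / v_ref) ** 2

# Same clock, 1200 mV (Navi, as quoted above) vs 1000 mV (Turing):
print(f"{dynamic_power_ratio(1.2, 1.0):.2f}x")  # roughly 1.44x the switching power
```

By this crude model, the 200 mV difference alone is worth over 40% in switching power, which is part of why undervolting Navi 10 pays off so well.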
 
Joined
Jun 3, 2010
Messages
2,540 (0.50/day)
Navi 1 is way past its efficiency point; just look at the voltages. Turing runs at about 1000 mV compared to AMD's 1200 mV on the RX 5700 XT, and remember that voltages should go down as you progress to newer nodes.
That is how it has always been since Kepler, I presume. Personally, I think it is a validation issue.
However, I have to admit I thought you meant inefficiency in a scheduling sense.
RDNA now has enough resources to issue two instructions per cycle, which is kind of important if you need to extend RPM support to 16-bit dual-issued instructions in scalar.
 
Joined
Nov 13, 2007
Messages
10,209 (1.71/day)
Location
Austin Texas
Processor 13700KF Undervolted @ 5.6/ 5.5, 4.8Ghz Ring 200W PL1
Motherboard MSI 690-I PRO
Cooling Thermalright Peerless Assassin 120 w/ Arctic P12 Fans
Memory 48 GB DDR5 7600 MHZ CL36
Video Card(s) RTX 4090 FE
Storage 2x 2TB WDC SN850, 1TB Samsung 960 prr
Display(s) Alienware 32" 4k 240hz OLED
Case SLIGER S620
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse Xlite V2
Keyboard RoyalAxe
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
NVIDIA underestimated two things:
1) RDNA, which drastically improved performance per watt
2) AMD's need for TSMC wafers for both CPU and GPU products

If Samsung's 7nm flops, NVIDIA is at risk of falling into second place over the next year or two.

I'm confused. Isn't Nvidia using TSMC 7 nm+ (and putting in orders for 5 nm)? What impact would Samsung's 7 nm flopping have on Nvidia at this point?
 
Joined
Oct 10, 2018
Messages
943 (0.47/day)
On the GPU front I do not expect AMD to do what they did to Intel on the CPU front. In fact, I would expect Nvidia to open up more of a gap when they release their next GPU. To be honest, ATI should not have been sold to them.
 
Joined
Dec 22, 2011
Messages
285 (0.06/day)
Processor Ryzen 7 5800X3D
Motherboard Asus Prime X570 Pro
Cooling Deepcool LS-720
Memory 32 GB (4x 8GB) DDR4-3600 CL16
Video Card(s) Gigabyte Radeon RX 6800 XT Gaming OC
Storage Samsung PM9A1 (980 Pro OEM) + 960 Evo NVMe SSD + 830 SATA SSD + Toshiba & WD HDD's
Display(s) Samsung C32HG70
Case Lian Li O11D Evo
Audio Device(s) Sound Blaster Zx
Power Supply Seasonic 750W Focus+ Platinum
Mouse Logitech G703 Lightspeed
Keyboard SteelSeries Apex Pro
Software Windows 11 Pro
What efficiency? It has basically the same performance as Navi but uses about 30% more power. Not sure what they were going to get out of that.
Vega's "inefficiency" wasn't in the architecture but in the too-high clocks (as shown by how sacrificing 1-2% of performance dropped power consumption by tens of percent).
Also, GCN was always built for flexible compute first; it does quite well in compute (much better relative to Nvidia than in gaming) and in many situations it beats RDNA silly (which is why, when AMD branched RDNA out of GCN, they also steered GCN further into compute to become CDNA).
 
Joined
Mar 10, 2015
Messages
3,984 (1.20/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
Vega's "inefficiency" wasn't in the architecture but in the too-high clocks (as shown by how sacrificing 1-2% of performance dropped power consumption by tens of percent).
Also, GCN was always built for flexible compute first; it does quite well in compute (much better relative to Nvidia than in gaming) and in many situations it beats RDNA silly (which is why, when AMD branched RDNA out of GCN, they also steered GCN further into compute to become CDNA).

I get all that; the cards as shipped were not great. I know because I have one. Not Vega II, but OG Vega.

I don't deny we are comparing a 7 nm GPU with a 12 nm one here. However, if you look at the 7 nm Radeon VII, it was clearly nowhere near as power efficient as Turing. RDNA and 7 nm basically allowed AMD to get within striking range of Turing. If the rumored power efficiency of RDNA 2 holds true, then perhaps they may be competitive with the 7 nm Nvidia GPUs. We should get more clarity later this month on next-gen graphics from Nvidia.


I mentioned efficiency, I did not say efficient. I think you misread or misunderstood what I meant.

I definitely misunderstood, because I still don't know what cue Nvidia should have taken from Vega II that they didn't already get from any of AMD's last several launches.
 
Joined
Oct 4, 2017
Messages
693 (0.29/day)
Location
France
Processor RYZEN 7 5800X3D
Motherboard Aorus B-550I Pro AX
Cooling HEATKILLER IV PRO , EKWB Vector FTW3 3080/3090 , Barrow res + Xylem DDC 4.2, SE 240 + Dabel 20b 240
Memory Viper Steel 4000 PVS416G400C6K
Video Card(s) EVGA 3080Ti FTW3
Storage XPG SX8200 Pro 512 GB NVMe + Samsung 980 1TB
Display(s) Dell S2721DGF
Case NR 200
Power Supply CORSAIR SF750
Mouse Logitech G PRO
Keyboard Meletrix Zoom 75 GT Silver
Software Windows 11 22H2
As many here have pointed out, Navi's efficiency, although decent compared to older AMD products, is nothing impressive per se, especially considering Nvidia's own products manage to equal if not outright top it on an inferior node. So saying Nvidia underestimated AMD's efficiency gains on 7 nm is a bit laughable and holds no water; if anything, for the last eight years or so it's been AMD underestimating Nvidia's efficiency gains!

That said, the article holds water on one point: Nvidia might indeed have underestimated AMD's will to base their entire product portfolio (CPUs + GPUs) on advanced nodes such as 7 nm, which ultimately threatens Nvidia's allocation capacity on those nodes. Hence, this time they mitigated their strategic error and went forward by securing 7 nm EUV and 5 nm allocation.

Ultimately that means that in the near future AMD won't have the node advantage anymore for its upcoming products (Navi 2X etc.), which, let's face it, was the only thing keeping their products somewhat relevant. So as far as competition goes, I'm afraid AMD will have an even harder time than before matching Nvidia (which sucks for our wallets).
 
Joined
Sep 19, 2015
Messages
21 (0.01/day)
What efficiency? It has basically the same performance as Navi but uses about 30% more power. Not sure what they were going to get out of that.

You guys don't know how to do math. Not only is Navi using less power to get higher performance than V56 or V64, it is also doing it with two thirds of the hardware. The fact that the 5700 XT pushes above V64, at lower power, with less than 66% of the CUs should be enough to tell you how much of an efficiency jump it really is.
 
Joined
Apr 30, 2011
Messages
2,648 (0.56/day)
Location
Greece
Processor AMD Ryzen 5 5600@80W
Motherboard MSI B550 Tomahawk
Cooling ZALMAN CNPS9X OPTIMA
Memory 2*8GB PATRIOT PVS416G400C9K@3733MT_C16
Video Card(s) Sapphire Radeon RX 6750 XT Pulse 12GB
Storage Sandisk SSD 128GB, Kingston A2000 NVMe 1TB, Samsung F1 1TB, WD Black 10TB
Display(s) AOC 27G2U/BK IPS 144Hz
Case SHARKOON M25-W 7.1 BLACK
Audio Device(s) Realtek 7.1 onboard
Power Supply Seasonic Core GC 500W
Mouse Sharkoon SHARK Force Black
Keyboard Trust GXT280
Software Win 7 Ultimate 64bit/Win 10 pro 64bit/Manjaro Linux
What efficiency? It has basically the same performance as Navi but uses about 30% more power. Not sure what they were going to get out of that.
Some basic math lessons are needed there: 60% higher efficiency for the RX 5700 vs Vega 56 (98/61*100-100) and for the RX 5700 XT vs Vega 64 (114/71*100-100), while the gain for the cross-tier pair RX 5700 vs Vega 64 is 40% (98/71*100-100). And we have to compare models from the same tier, so the pairs I showed first are the proper ones. Numbers were taken from the 1440p chart in the latest 5500 XT review.
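The arithmetic above, spelled out. The performance-per-watt index values are the ones quoted from the 1440p chart (higher is better); the helper name is just for this sketch:

```python
# Performance-per-watt index values quoted above (higher = better).
perf_per_watt = {"RX 5700": 98, "Vega 56": 61, "RX 5700 XT": 114, "Vega 64": 71}

def efficiency_gain(new, old):
    """Percent efficiency gain of `new` over `old`."""
    return perf_per_watt[new] / perf_per_watt[old] * 100 - 100

print(round(efficiency_gain("RX 5700", "Vega 56")))     # -> 61, i.e. ~60%
print(round(efficiency_gain("RX 5700 XT", "Vega 64")))  # -> 61, i.e. ~60%
print(round(efficiency_gain("RX 5700", "Vega 64")))     # -> 38, i.e. ~40%
```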
 

ARF

Joined
Jan 28, 2020
Messages
3,892 (2.56/day)
Location
Ex-usa
What efficiency? It has basically the same performance as Navi but uses about 30% more power. Not sure what they were going to get out of that.


You can underclock and undervolt Navi 10, thus saving 35% of the power consumption while losing only 5% of the performance:

Did a few test runs today with my 5700 XT reference design in Shadow of the Tomb Raider with different voltage settings etc. Tested with TAA, DX12 and highest settings at 1080p. Custom fan curve. Temps are max temps.

Stock 2050/1200mv:
110fps, 185W max, 160W avg, rpm 2950, gputemp 74C, junction 93C
1900MHz/1000mv:
110fps, 149W max, 130W avg, rpm 2700, gputemp 70C, junction 81C.
1800/950mv:
108fps, 133W max, 115W avg, rpm 2300, gputemp 66C, junction 74C.
1750/910mv:
106fps, 134W max, 110W avg, rpm 2300, gputemp 66C, junction 74C.
1700/890mv:
104fps, 126W max, 105W avg, rpm 2200, gputemp 65C, junction 73C.

Conclusion:
Underclocking to 1900 MHz and undervolting to 1000 mV gives no performance loss, but temps are better and there is slightly less noise.

Underclocking to 1800 MHz only gives a 3% performance loss, but reduces power usage by 10-15% and temps quite a bit.
Underclocking/undervolting further yields 5% lower power per 2% of performance and is not worth it in my opinion.
Seems like 1800 MHz/950 mV is the sweet spot on my card :)
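Turning the quoted runs into performance per watt makes the sweet spot visible (fps and average wattage copied from the post above):

```python
# (clock/voltage, fps, average power in W) from the test runs quoted above
runs = [
    ("2050 MHz / 1200 mV (stock)", 110, 160),
    ("1900 MHz / 1000 mV",         110, 130),
    ("1800 MHz /  950 mV",         108, 115),
    ("1750 MHz /  910 mV",         106, 110),
    ("1700 MHz /  890 mV",         104, 105),
]

for label, fps, avg_w in runs:
    print(f"{label}: {fps / avg_w:.2f} fps/W")
```

Every step down improves fps/W, but the jump from stock (0.69 fps/W) to 1800 MHz/950 mV (0.94 fps/W) is where most of the gain comes from; past that the curve flattens, matching the poster's sweet-spot call.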
 
Joined
Oct 4, 2017
Messages
693 (0.29/day)
Location
France
Processor RYZEN 7 5800X3D
Motherboard Aorus B-550I Pro AX
Cooling HEATKILLER IV PRO , EKWB Vector FTW3 3080/3090 , Barrow res + Xylem DDC 4.2, SE 240 + Dabel 20b 240
Memory Viper Steel 4000 PVS416G400C6K
Video Card(s) EVGA 3080Ti FTW3
Storage XPG SX8200 Pro 512 GB NVMe + Samsung 980 1TB
Display(s) Dell S2721DGF
Case NR 200
Power Supply CORSAIR SF750
Mouse Logitech G PRO
Keyboard Meletrix Zoom 75 GT Silver
Software Windows 11 22H2
You guys don't know how to do math. Not only is Navi using less power to get higher performance than V56 or V64, it is also doing it with two thirds of the hardware. The fact that the 5700 XT pushes above V64, at lower power, with less than 66% of the CUs should be enough to tell you how much of an efficiency jump it really is.

This is ironic considering you are doing the wrong math yourself!

The RX 5700 XT is around 25% faster than the V64 in games. The 5700 XT does have only 62.5% of the V64's CUs, but it also has an around 76.9% higher clock speed (Strix V64 vs Strix 5700 XT), which can be purely attributed to the node. So overall the 5700 XT's architectural gains are not as big as you make them look.

Furthermore, Navi's efficiency becomes embarrassing when compared to the true competition, which is Nvidia's products. When products such as the RTX 2080 blow the 5700 XT out of the water in performance-per-watt metrics, and even the 1080 Ti on 16 nm outpaces it, it's not hard to see why people are not impressed by Navi's efficiency!
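One way to sanity-check the CU-versus-clock trade in these two posts is theoretical shader throughput (CUs × 64 shaders per CU × 2 FLOPs per clock for FMA). The boost clocks below are approximate reference-card values, not the Strix cards discussed above, so treat the output as a rough sketch:

```python
def fp32_tflops(cus, clock_ghz, shaders_per_cu=64, flops_per_clock=2):
    """Theoretical peak FP32 throughput in TFLOPS (FMA rate)."""
    return cus * shaders_per_cu * flops_per_clock * clock_ghz / 1000

vega64 = fp32_tflops(64, 1.546)   # Vega 64, ~1546 MHz reference boost
navi10 = fp32_tflops(40, 1.905)   # RX 5700 XT, ~1905 MHz reference boost
print(f"Vega 64: {vega64:.1f} TFLOPS, RX 5700 XT: {navi10:.1f} TFLOPS")
```

By this naive metric the 5700 XT has roughly 23% less raw compute than Vega 64 yet matches or beats it in games, which is the per-CU architectural gain the two posts are arguing about.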
 
Joined
Mar 10, 2015
Messages
3,984 (1.20/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
You guys don't know how to do math. Not only is Navi using less power to get higher performance than V56 or V64, it is also doing it with two thirds of the hardware. The fact that the 5700 XT pushes above V64, at lower power, with less than 66% of the CUs should be enough to tell you how much of an efficiency jump it really is.
Some basic math lessons are needed there: 60% higher efficiency for the RX 5700 vs Vega 56 (98/61*100-100) and for the RX 5700 XT vs Vega 64 (114/71*100-100), while the gain for the cross-tier pair RX 5700 vs Vega 64 is 40% (98/71*100-100). And we have to compare models from the same tier, so the pairs I showed first are the proper ones. Numbers were taken from the 1440p chart in the latest 5500 XT review.

This isn't about Navi's efficiency; this is about Vega II. The 25% was a quick number; read it as: Navi uses about 25% less power for the same performance as Vega II. I didn't look up the actual numbers, but I thought Navi was about 220 W and Vega II about 300 W. That is close enough to ask wtf the person I replied to was talking about.

You can underclock and undervolt Navi 10, thus saving 35% of the power consumption while losing only 5% of the performance:

I don't care what you can do with Navi once you buy it. We are talking about how it is sold and delivered.
 
Joined
Oct 4, 2017
Messages
693 (0.29/day)
Location
France
Processor RYZEN 7 5800X3D
Motherboard Aorus B-550I Pro AX
Cooling HEATKILLER IV PRO , EKWB Vector FTW3 3080/3090 , Barrow res + Xylem DDC 4.2, SE 240 + Dabel 20b 240
Memory Viper Steel 4000 PVS416G400C6K
Video Card(s) EVGA 3080Ti FTW3
Storage XPG SX8200 Pro 512 GB NVMe + Samsung 980 1TB
Display(s) Dell S2721DGF
Case NR 200
Power Supply CORSAIR SF750
Mouse Logitech G PRO
Keyboard Meletrix Zoom 75 GT Silver
Software Windows 11 22H2

ARF

Joined
Jan 28, 2020
Messages
3,892 (2.56/day)
Location
Ex-usa
I don't care what you can do with Navi once you buy it. We are talking about how it is sold and delivered.

It is delivered with the wrong factory settings. It's a mistake similar to the blower cooler, which they promised to fix, and Navi 2X will look like this:

[attached image]


This is ironic considering you are doing the wrong math yourself !

RX5700XT is around 25% faster than V64 in games . 5700XT has indeed 62,5 % less CUs than V64 but 5700XT has also around 76,9% ( Strix V64 vs Strix 5700XT ) higher clock speed than V64 which can be purely attributed to the node . So overall 5700XT architectural gains are not as high as you make them look like .........

Furthermore Navi efficiency becomes embarrassing when compared to the true competition which are Nvidia products . When products such as RTX2080 blow 5700XT out of the watter in performance/watt metrics , heck even 1080Ti on 16nm outpaces it in such metrics ..... it's no hard to see why peoples are not impressed by Navi efficiency !


Navi 10 XL is on par with the RTX 2080 in the performance-per-watt table,
without even considering the undervolting and underclocking they should all get:

[attached performance-per-watt chart]

 
Joined
Mar 10, 2015
Messages
3,984 (1.20/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
It is delivered with the wrong factory settings. It's a mistake similar to the blower cooler, which they promised to fix, and Navi 2X will look like this

Unfortunately, it is delivered with the factory settings.
 
Joined
Oct 4, 2017
Messages
693 (0.29/day)
Location
France
Processor RYZEN 7 5800X3D
Motherboard Aorus B-550I Pro AX
Cooling HEATKILLER IV PRO , EKWB Vector FTW3 3080/3090 , Barrow res + Xylem DDC 4.2, SE 240 + Dabel 20b 240
Memory Viper Steel 4000 PVS416G400C6K
Video Card(s) EVGA 3080Ti FTW3
Storage XPG SX8200 Pro 512 GB NVMe + Samsung 980 1TB
Display(s) Dell S2721DGF
Case NR 200
Power Supply CORSAIR SF750
Mouse Logitech G PRO
Keyboard Meletrix Zoom 75 GT Silver
Software Windows 11 22H2
Navi 10 XL is on par with the RTX 2080 in the performance-per-watt table,
without even considering the undervolting and underclocking they should all get:

[attached performance-per-watt chart]

Why don't you show the entire graph?

[attached chart: full performance-per-watt graph]


Turing is still 10% ahead of the most efficient AMD architecture while at a massive node disadvantage, so where are you going with this?

https://www.techpowerup.com/review/asus-radeon-rx-5500-xt-strix-8-gb/29.html
 
Joined
Nov 6, 2016
Messages
1,561 (0.58/day)
Location
NH, USA
System Name Lightbringer
Processor Ryzen 7 2700X
Motherboard Asus ROG Strix X470-F Gaming
Cooling Enermax Liqmax Iii 360mm AIO
Memory G.Skill Trident Z RGB 32GB (8GBx4) 3200Mhz CL 14
Video Card(s) Sapphire RX 5700XT Nitro+
Storage Hp EX950 2TB NVMe M.2, HP EX950 1TB NVMe M.2, Samsung 860 EVO 2TB
Display(s) LG 34BK95U-W 34" 5120 x 2160
Case Lian Li PC-O11 Dynamic (White)
Power Supply BeQuiet Straight Power 11 850w Gold Rated PSU
Mouse Glorious Model O (Matte White)
Keyboard Royal Kludge RK71
Software Windows 10
I truly believe that RDNA 2 will be a rude awakening for Nvidia... and I believe AMD is going to put the pressure on even harder in the next few years.

I'm always scanning new patents, and last night I came across a new one from AMD. The patent concerns a technology that uses something called a "GPU mask", which makes the computer (OS, API, drivers) see multiple GPUs as a single logical device. The patent specifically concerns MCM GPUs. If this is indeed what I think it is, it means AMD could be taking the chiplet approach to GPUs in the very near future AND that they've figured out a way so that game developers don't have to accommodate multiple GPUs and program for them explicitly. In my opinion, that is the hard part of an MCM GPU: not building the hardware, but developing the software so that it behaves like a single monolithic die. Anyway, if this pans out, AMD could be delivering Nvidia a "Zen moment" quite soon.
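For illustration only, "several physical dies behind one logical device" might look something like this toy sketch. Every name here (LogicalGPU, PhysicalGPU, the mask argument) is invented for the example; none of it is taken from AMD's patent or any real driver API:

```python
class PhysicalGPU:
    """One physical die; just records the work submitted to it."""
    def __init__(self, name):
        self.name = name
        self.work = []

    def submit(self, job):
        self.work.append(job)

class LogicalGPU:
    """Presents N physical GPUs as one device; a 'mask' selects which
    physical dies service each submission, invisibly to the caller."""
    def __init__(self, gpus):
        self.gpus = gpus

    def submit(self, job, mask=None):
        targets = mask if mask is not None else range(len(self.gpus))
        for i in targets:
            self.gpus[i].submit(job)

# Software sees one device; the driver decides how work is split.
device = LogicalGPU([PhysicalGPU("die0"), PhysicalGPU("die1")])
device.submit("draw_call_1")            # broadcast to both dies
device.submit("draw_call_2", mask=[0])  # routed to die0 only
print([len(g.work) for g in device.gpus])  # -> [2, 1]
```

The point of the sketch: the application only ever calls `device.submit`, so the multi-die distribution problem moves from game developers into the driver, which is exactly why such a scheme would matter for MCM GPUs.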
 
Joined
Mar 17, 2017
Messages
4 (0.00/day)
I don't know about power efficiency, but I am quite sure that both the RX 5700 and 5700 XT are way too good for their price. AMD lost the battle two years ago because both the Vega 56 and Vega 64 were strong but overpriced and way too power hungry. The GTX 1070 Ti ROG Strix started at 570 €, the Vega 56 ROG Strix at 697 € and the Vega 64 at 780 €. There was literally no reason to go for Vega unless you really wanted to support AMD.

There were reasons to go for Vega, but they weren't relevant to gaming. Vega was a beast for GPU computing, so it was a good choice for that application. That's just not what most people were shopping for.

The Radeon VII was even more so. Which isn't a surprise, because it was really a $5,000 card for AI work with its double-precision floating-point performance downgraded (so it wouldn't compete directly with the professional cards) and a video output added. And that 16 GB of HBM2 memory was great for professional graphic design and 3D modeling, though it was overkill for gaming.
 