
RX480 vs GTX 1060 at same clocks test?

Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
TFLOPs are only an indicator for compute scenarios: heavy fluid or physics calculations, or simply raw math. Graphics tends to fall down a lot more on everything else and depends on individual things within the render pipeline.
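For reference, that headline TFLOPs number is just shader count × clock × 2 (one fused multiply-add, i.e. two floating-point ops, per shader per cycle). A quick sketch using the stock boost specs of the two cards in the thread title:

```python
def theoretical_tflops(shaders: int, clock_mhz: float) -> float:
    """Peak single-precision TFLOPs: each shader retires one FMA
    (two floating-point ops) per clock cycle."""
    return shaders * clock_mhz * 1e6 * 2 / 1e12

# RX 480: 2304 stream processors at 1266 MHz boost
print(round(theoretical_tflops(2304, 1266), 2))  # 5.83
# GTX 1060: 1280 CUDA cores at 1708 MHz boost
print(round(theoretical_tflops(1280, 1708), 2))  # 4.37
```

That is only the peak figure; as the post says, how much of it a game actually sees depends on the rest of the pipeline.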
 

Kanan

Tech Enthusiast & Gamer
Joined
Aug 22, 2015
Messages
3,517 (1.11/day)
Location
Europe
System Name eazen corp | Xentronon 7.2
Processor AMD Ryzen 7 3700X // PBO max.
Motherboard Asus TUF Gaming X570-Plus
Cooling Noctua NH-D14 SE2011 w/ AM4 kit // 3x Corsair AF140L case fans (2 in, 1 out)
Memory G.Skill Trident Z RGB 2x16 GB DDR4 3600 @ 3800, CL16-19-19-39-58-1T, 1.4 V
Video Card(s) Asus ROG Strix GeForce RTX 2080 Ti modded to MATRIX // 2000-2100 MHz Core / 1938 MHz G6
Storage Silicon Power P34A80 1TB NVME/Samsung SSD 830 128GB&850 Evo 500GB&F3 1TB 7200RPM/Seagate 2TB 5900RPM
Display(s) Samsung 27" Curved FS2 HDR QLED 1440p/144Hz&27" iiyama TN LED 1080p/120Hz / Samsung 40" IPS 1080p TV
Case Corsair Carbide 600C
Audio Device(s) HyperX Cloud Orbit S / Creative SB X AE-5 @ Logitech Z906 / Sony HD AVR @PC & TV @ Teufel Theater 80
Power Supply EVGA 650 GQ
Mouse Logitech G700 @ Steelseries DeX // Xbox 360 Wireless Controller
Keyboard Corsair K70 LUX RGB /w Cherry MX Brown switches
VR HMD Still nope
Software Win 10 Pro
Benchmark Scores 15 095 Time Spy | P29 079 Firestrike | P35 628 3DM11 | X67 508 3DM Vantage Extreme
You could even go as far as to say Nvidia's TFLOP numbers hold true while AMD's don't. The 1070 has high TFLOPs and high performance, while the RX 480 misses out on the latter, only coming out on par with the 1060. So I wouldn't hold my breath for Vega being fast just because of high theoretical TFLOP numbers. AMD has had underutilisation problems ever since GCN 1; if Vega can fix this, great, but if not, it will depend too much on DX12-optimized games and Vulkan. And the fact is, DX11 is still way more important, so you have to hope it holds its ground in DX11 as well. I hope you don't mind me talking about Vega. ;)
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
I don't think you can solve that unless AMD somehow figures out a way to use shaders in a fashion similar to HT on CPUs, just the other way around: if a game really only utilizes 1,000 shader units out of 2,000, how do you use the remaining 1,000 to aid the first 1,000? I have no clue if that is even possible, but AMD should try to figure it out, since this is really their biggest problem (I hope they realized that during the R9 Fury X era). And seeing RX Vega again using the same 4096-shader design, they'll be facing this again if they don't change the stuff behind the shaders. They have addressed quite a few things in the rendering pipeline (it now has tile-based rendering and GPU cache awareness, so it knows exactly what's being rendered and where it's located in memory), but so far they've never talked about any of this specifically. I watched one interview about RX Vega where the guy said they still haven't released a lot of details about it. Maybe they figured it out...
 

Kanan

Yes, I agree with that. That technical marketing guy said Vega is doing nicely compared to the 1080 Ti, so I guess it's a bit slower. They could release it for the vast 500-700 dollar market, no problem. I think Vega will not be like Fiji at all; I actually think it will be great because it's not the limited, used-up design Fiji was. It's not simply Tonga x2. While I like Fiji, I don't like that it was a simple and ineffective design in parts. I'm sure Vega will be much better, with the NCU and all these things. I'm kinda hopeful about it, despite everything I know.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.63/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
TFLOPs are only an indicator for compute scenarios: heavy fluid or physics calculations, or simply raw math. Graphics tends to fall down a lot more on everything else and depends on individual things within the render pipeline.
Everything you see is numbers. Fundamentally, a GPU is thousands of calculators in parallel, each performing millions of calculations per second. NVIDIA cards have very little idle hardware in the space of rendering a frame. AMD cards are about 10-40% idle by the same metric.

Why does Vulkan perform so much better on AMD? Because developers can perform the optimizations for GCN that AMD doesn't do (or does much later). The improved performance comes from better utilization of the card's resources to render the frame.
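Put as arithmetic, that utilization gap simply scales the peak number down. The 10-40% idle range is this post's estimate, not a measured figure:

```python
def effective_tflops(peak_tflops: float, idle_fraction: float) -> float:
    """Throughput actually delivered when a fraction of the shaders sits idle."""
    return peak_tflops * (1.0 - idle_fraction)

rx480_peak = 5.8  # RX 480's approximate peak TFLOPs
for idle in (0.10, 0.40):  # the 10-40% idle range estimated above
    print(f"{idle:.0%} idle -> {effective_tflops(rx480_peak, idle):.2f} TFLOPs")
```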
 

Kanan

Why does Vulkan perform so much better on AMD? Because developers can perform the optimizations for GCN that AMD doesn't do (or does much later). The improved performance comes from better utilization of the card's resources to render the frame.
Hence the reason AMD GPUs, and even Fiji, look so great in Doom. I think AMD right now is probably optimizing drivers and yields so Vega can be released performing as well as possible.
 

FordGT90Concept

That technical marketing guy said Vega is doing fine compared to the 1080 Ti, so I guess it's a bit slower. They could release it for the vast 500-700 dollar market, no problem. I think Vega will not be like Fiji at all; I actually think it will be great because it's not the limited, used-up design Fiji was. It's not simply Tonga x2. While I like Fiji, I don't like that it was a simple and ineffective design in parts. I'm sure Vega will be much better, with the NCU and all these things. I'm kinda hopeful about it, despite everything I know.
1080 Ti is up to 11.3 TFLOPs. Assuming RX 480's ~30% penalty carries over to Vega's 12.5 TFLOPs, we can only expect 8.75 TFLOPs of actual rendering performance, which is very close to the GTX 1080's 8.8 TFLOPs under boost.

AMD cards age better because games that release a year or two down the road are more likely to reduce that 30%.

The problem is that PC game developers overwhelmingly use NVIDIA cards, so their games are optimized for NVIDIA. In games that are developed on AMD, the framerate relative to TFLOPs is much closer (the RX 480 gets closer to the GTX 1070).
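The estimate above, spelled out; the 30% penalty is this post's own assumption carried over from the RX 480, not a measured Vega figure:

```python
vega_peak = 12.5          # RX Vega's advertised peak TFLOPs
assumed_penalty = 0.30    # utilization penalty assumed to carry over from RX 480
effective = vega_peak * (1 - assumed_penalty)
print(round(effective, 2))  # 8.75 -> close to GTX 1080's ~8.8 TFLOPs under boost
```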
 

Kanan

Well, that 30% you're speaking of is surely a worst-case scenario. I hope for more, since it's not GCN again.
 
Joined
May 31, 2016
Messages
4,325 (1.50/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 16GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
1080 Ti is up to 11.3 TFLOPs. Assuming RX 480's ~30% penalty carries over to Vega's 12.5 TFLOPs, we can only expect 8.75 TFLOPs of actual rendering performance, which is very close to the GTX 1080's 8.8 TFLOPs under boost.

AMD cards age better because games that release a year or two down the road are more likely to reduce that 30%.

The problem is that PC game developers overwhelmingly use NVIDIA cards, so their games are optimized for NVIDIA. In games that are developed on AMD, the framerate relative to TFLOPs is much closer (the RX 480 gets closer to the GTX 1070).
The RX 480 was never meant to reach the 1070. Keep that in mind. If it comes close to or matches it, that's great news. Please remember that when you compare the 480 to the 1070.
 
Joined
Feb 8, 2012
Messages
3,013 (0.68/day)
Location
Zagreb, Croatia
System Name Windows 10 64-bit Core i7 6700
Processor Intel Core i7 6700
Motherboard Asus Z170M-PLUS
Cooling Corsair AIO
Memory 2 x 8 GB Kingston DDR4 2666
Video Card(s) Gigabyte NVIDIA GeForce GTX 1060 6GB
Storage Western Digital Caviar Blue 1 TB, Seagate Baracuda 1 TB
Display(s) Dell P2414H
Case Corsair Carbide Air 540
Audio Device(s) Realtek HD Audio
Power Supply Corsair TX v2 650W
Mouse Steelseries Sensei
Keyboard CM Storm Quickfire Pro, Cherry MX Reds
Software MS Windows 10 Pro 64-bit
Aren't we forgetting the tiled rasterizing algorithm Nvidia uses? It helps efficiency even more ... the whole GPU works in parallel on a single tile, but tiles are rasterized serially, and most importantly (what actually makes it more efficient), each tile fits in the L2 cache.
Vega's novel cache scheme may be used for the same effect without AMD actually changing the rasterizer.
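The "each tile fits the L2" condition is just a sizing constraint: a square tile of pixels, each carrying some bytes of color and depth, has to fit inside the cache. The byte counts and cache size below are illustrative assumptions, not vendor figures:

```python
def max_tile_side(l2_bytes: int, bytes_per_pixel: int) -> int:
    """Largest square tile (side length in pixels) whose working set fits in L2."""
    return int((l2_bytes // bytes_per_pixel) ** 0.5)

# Assume 4 bytes of color + 4 bytes of depth per pixel and a 2 MiB L2
print(max_tile_side(2 * 1024 * 1024, 8))  # 512 -> a 512x512 tile fits
```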
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
1080 Ti is up to 11.3 TFLOPs. Assuming RX 480's ~30% penalty carries over to Vega's 12.5 TFLOPs, we can only expect 8.75 TFLOPs of actual rendering performance, which is very close to the GTX 1080's 8.8 TFLOPs under boost.

AMD cards age better because games that release a year or two down the road are more likely to reduce that 30%.

The problem is that PC game developers overwhelmingly use NVIDIA cards, so their games are optimized for NVIDIA. In games that are developed on AMD, the framerate relative to TFLOPs is much closer (the RX 480 gets closer to the GTX 1070).

The internal tests of RX Vega show performance levels between the GTX 1080 Ti and Titan Xp, so there's that... And like I've said, TFLOPs do not directly convert into rendering capability. Raw compute is one thing, rasterizing another. Yeah, it's still compute in essence, but it's not a comparable thing.

EDIT:
@BiggieShady
RX Vega has a confirmed tile-based rasterizer. Now that I think of it, what if they could pair their massive parallel shader count with that, processing many tiles at once so that all shaders stay busy at all times, avoiding the underutilization issue? Imagine watching a Cinebench run on a CPU with 16 threads, then replace 16 threads with 4096 shaders. I wonder if NVIDIA does it that way or some other... hm
 

Kanan

Aren't we forgetting the tiled rasterizing algorithm Nvidia uses? It helps efficiency even more ... the whole GPU works in parallel on a single tile, but tiles are rasterized serially, and most importantly (what actually makes it more efficient), each tile fits in the L2 cache.
Vega's novel cache scheme may be used for the same effect without AMD actually changing the rasterizer.
That makes a lot of sense. The "magic cache" also helps Vega present a very large amount of VRAM without physically having it.
 

FordGT90Concept

The RX 480 was never meant to reach the 1070. Keep that in mind. If it comes close to or matches it, that's great news. Please remember that when you compare the 480 to the 1070.
RX 480 has about 5.8 TFLOPs versus the GTX 1070's 6.4 TFLOPs (a 10.3% difference). In games like Doom (Vulkan), the RX 480 gets within 5-10% of the GTX 1070's framerate. The RX 580 has 6.2 TFLOPs, so in the same scenario an almost identical framerate is plausible. Vega 10 can be expected to perform like a Titan Xp in Doom (Vulkan) as well; however, this is a fringe case where id Software has taken the time to optimize for AMD, which is sadly the exception and not the norm.

The internal tests of RX Vega show performance levels between the GTX 1080 Ti and Titan Xp, so there's that...
In games optimized for GCN and only games optimized for GCN (e.g. Doom w/ Vulkan).
 

Kanan

RX 480 has about 5.8 TFLOPs versus the GTX 1070's 6.4 TFLOPs. In games like Doom (Vulkan), the RX 480 gets within 5-10% of the GTX 1070's framerate. The raw TFLOP difference is 10.3%. The RX 580 has 6.2 TFLOPs, so in the same scenario it can be expected to match. Vega 10 can be expected to perform like a Titan Xp in Doom (Vulkan) as well; however, this is a fringe case where id Software has taken the time to optimize for AMD, which is sadly the exception and not the norm.
However, there are more AMD games coming, and don't forget their partnership with Bethesda.
 

FordGT90Concept

Bethesda -> id Software -> Doom

The bulk of engines out there (Unreal Engine 4, Unity 5.6, Game Maker, AnvilNext, and so on) are optimized for NVIDIA. Ashes of the Singularity and Doom (Vulkan) are the exceptions to the rule. That's not reasonably expected to change for many years yet.
 

Kanan

Bethesda -> id Software -> Doom

The bulk of engines out there (Unreal Engine 4, Unity, Game Maker, AnvilNext, and so on) are optimized for NVIDIA. Ashes of the Singularity and Doom (Vulkan) are the exceptions to the rule. That's not reasonably expected to change for many years yet.
I still think you're exaggerating the whole picture (in general) and painting it too dark, but we will see soon enough.
 

FordGT90Concept

Aren't we forgetting a tiled rasterizing algorithm nvidia uses, it helps efficiency even more ... the whole gpu works in parallel on a single tile but tiles are rasterized in serial and most importantly (for making it actually more efficient) each tile fits the L2 cache.
Vega's novel cache scheme may be used for the same effect without actually changing the rasterizer for amd.
Yes, it will help but the big question is "how much?"
https://www.overclock3d.net/news/gpu_displays/amd_vega_gpu_architectural_analysis/3

I still think you're exaggerating the whole picture (in general) and painting it too dark, but we will see soon enough.
I'm looking at it realistically. The problem isn't GCN, it's the lack of support for GCN. AMD makes robust, capable cards, but efficiency can't come without software optimization. D3D12 and Vulkan move optimization from AMD/NVIDIA to developers directly. Yes, that can translate to better performance, but only if developers spend the resources to make it happen. It is, at the same time, a boon and a bust for AMD.
 

Kanan

I'm looking at it realistically. The problem isn't GCN, it's lack of support for GCN. AMD makes robust, capable cards but efficiency can't come without software optimization.
The problem is that their GPUs up to this point were built on utilisation through hardware and not software, which is a flawed and limited design. Nvidia switched to software with Fermi, I think, and since then has never had any problems with utilisation.
Edit: I agree with the rest.
 
Joined
Sep 2, 2011
Messages
1,019 (0.22/day)
Location
Porto
System Name No name / Purple Haze
Processor Phenom II 1100T @ 3.8Ghz / Pentium 4 3.4 EE Gallatin @ 3.825Ghz
Motherboard MSI 970 Gaming/ Abit IC7-MAX3
Cooling CM Hyper 212X / Scythe Andy Samurai Master (CPU) - Modded Ati Silencer 5 rev. 2 (GPU)
Memory 8GB GEIL GB38GB2133C10ADC + 8GB G.Skill F3-14900CL9-4GBXL / 2x1GB Crucial Ballistix Tracer PC4000
Video Card(s) Asus R9 Fury X Strix (4096 SP's/1050 Mhz)/ PowerColor X850XT PE @ (600/1230) AGP + (HD3850 AGP)
Storage Samsung 250 GB / WD Caviar 160GB
Display(s) Benq XL2411T
Audio Device(s) motherboard / Creative Sound Blaster X-Fi XtremeGamer Fatal1ty Pro + Front panel
Power Supply Tagan BZ 900W / Corsair HX620w
Mouse Zowie AM
Keyboard Qpad MK-50
Software Windows 7 Pro 64Bit / Windows XP
Benchmark Scores 64CU Fury: http://www.3dmark.com/fs/11269229 / X850XT PE http://www.3dmark.com/3dm05/5532432
TFLOPs aren't even the actual indicator. Pixel fillrate, triangle throughput, things like these tell you a bit more than TFLOPs...

If you sort the cards by raw compute power and look at the sorting order for average fps in Doom Vulkan, you'd be surprised :p The raw compute power and the average fps align pretty damn well.
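That "sort both ways and compare" check is easy to do mechanically. A sketch: the TFLOPs are approximate stock figures, and the fps values are placeholders standing in for whatever Doom (Vulkan) benchmark you pull numbers from:

```python
tflops = {"GTX 1060": 4.4, "RX 480": 5.8, "GTX 1070": 6.5, "GTX 1080": 8.9}
avg_fps = {"GTX 1060": 90, "RX 480": 115, "GTX 1070": 125, "GTX 1080": 155}  # placeholder fps

by_compute = sorted(tflops, key=tflops.get)  # cards ordered by raw compute
by_fps = sorted(avg_fps, key=avg_fps.get)    # cards ordered by measured fps
print(by_compute == by_fps)  # True -> the two orderings align
```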
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
People think GCN is this magic wand where, if you "support" GCN (whatever that means), it'll perform better. GCN is just a cluster of things that form a rasterizer unit. A bunch of shaders, if you will. NVIDIA has CUDA cores, AMD has GCN. It's really JUST that.

Besthesda -> id Software -> Doom

The bulk of engines out there (Unreal Engine 4, Unity 5.6, Game Maker, AnvilNext, and so on) are optimized for NVIDIA. Ashes of the Singularity and Doom (Vulkan) are the exceptions to the rule. That's not reasonably expected to change for many years yet.

And you think AMD doesn't do that? They have a team of engineers deployed at major engine studios providing in-the-field optimizations and support for best performance. This way, every game made by a third-party studio using that engine also benefits from those optimizations. Custom shaders and such excluded, of course; that work needs to be done at the specific studio. You can rest assured that AMD keeps an eye on the development of all major game engines like Unreal or Unigine...
 
Joined
May 31, 2016
Messages
4,325 (1.50/day)
RX 480 has about 5.8 TFLOPs versus the GTX 1070's 6.4 TFLOPs (a 10.3% difference). In games like Doom (Vulkan), the RX 480 gets within 5-10% of the GTX 1070's framerate. The RX 580 has 6.2 TFLOPs, so in the same scenario an almost identical framerate is plausible. Vega 10 can be expected to perform like a Titan Xp in Doom (Vulkan) as well; however, this is a fringe case where id Software has taken the time to optimize for AMD, which is sadly the exception and not the norm.


In games optimized for GCN and only games optimized for GCN (e.g. Doom w/ Vulkan).

Like I said, if the RX 480 comes close to the 1070, or their performance is equal, that's great. The RX 480 was never meant to compete with the 1070. Huge applause for the RX 480 if it matches or comes close to it.
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
Exactly. The RX 480 was meant to compete with the GTX 980 and GTX 1060; they are its direct competitors. If the RX 480 can now outmatch these two and come close to the GTX 1070 (even if just in some games), even though on day 1 it was more on the level of a vanilla R9 290, that's absolutely amazing indeed.
 

FordGT90Concept

People think GCN is this magic wand where if you "support" GCN (whatever that means) it'll perform better. GCN is just a cluster of things that form a rasterizer unit. A bunch of shaders if you will. NVIDIA has CUDA cores, AMD has GCN. It's really JUST that.
"CUDA cores" ~= stream processors. NVIDIA just gave them a fancy name.

I can't find any good articles that discuss how optimizing for NVIDIA and AMD works for developers. My understanding is that it is a lot like CPU memory buffers: if the memory buffer in code mirrors the hardware cache, you'll get maximal performance for that particular workload. Another example is the number of threads (physical and virtual) a processor has. Software optimized for a quad-core is going to see a huge performance drop when running on a hyperthreaded dual-core. Software has to take these things into consideration when dealing with cross-thread processing.

GPUs not only have varying numbers of streaming processors, but the capability of each is also variable. Intel GPUs, for example, have few streaming processors but they're very wide. On the other hand, AMD and NVIDIA now have lots of streaming processors that are rather narrow (NVIDIA pre-Maxwell had few streaming processors that were wide).

Like memory and threads, software that uses the GPU has to take those variations into account. If they do not, streaming processors will be idle.
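The thread-count analogy in concrete form: code that hard-codes a worker count mismatched to the hardware leaves capacity idle, while querying the machine avoids it. A minimal sketch, pure illustration rather than anything from a real engine:

```python
import os

def split_work(items: int, workers: int) -> list[int]:
    """Divide `items` units of work as evenly as possible across `workers`."""
    base, extra = divmod(items, workers)
    return [base + 1 if i < extra else base for i in range(workers)]

# Hard-coding 4 workers is only right on a 4-thread machine;
# on anything else, chunks either queue up or cores sit idle.
print(split_work(10, 4))        # [3, 3, 2, 2]
workers = os.cpu_count() or 1   # adapt to the hardware instead
print(len(split_work(10, workers)) == workers)  # True
```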

And you think AMD doesn't do that? They have a team of engineers deployed at major engine studios providing on the field optimizations and support for best performance. This way every game also using that engine and made by 3rd party game studio also benefits from those optimizations. Custom shaders and stuff excluded of course. That needs to be done in that specific studio. You can rest assured that AMD keeps an eye on development of all major game engines like Unreal or Unigine...
NVIDIA hands out Titan cards left and right to developers. They develop on NVIDIA, so their games run best on NVIDIA. I'm not aware of AMD doing the same; if they do, it's a much smaller and less accessible program.

I did some research, and the only link between Epic Games and AMD I found was some work on VR. Epic Games has a much closer working relationship with NVIDIA than with AMD. Example.

Like I said. If rx 480 come close to 1070 or performance are equal that's great. RX 480 has never meant to compete with 1070. Huge applause for rx 480 if it matches or come close to 1070.
Then forget NVIDIA. The R9 390 has 5.1 TFLOPs of capability versus the RX 480's 5.8, yet the R9 390 performs as well as or beats the RX 480 in a lot of early benchmarks. Over time, the RX 480 will knock on the R9 390X's 5.9 TFLOP door.
 
Joined
May 31, 2016
Messages
4,325 (1.50/day)
Exactly. The RX 480 was meant to compete with the GTX 980 and GTX 1060; they are its direct competitors. If the RX 480 can now outmatch these two and come close to the GTX 1070 (even if just in some games), even though on day 1 it was more on the level of a vanilla R9 290, that's absolutely amazing indeed.
I'd even say the RX 480 was meant to go up against the 970, since at the time of AMD's RX release that was the mid-range card. Fact is, the 970 is behind the RX 480 now. The 1060 was released to compete with the RX 480 after the 970 was out of production.
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
AMD has actual engineers in the field. It doesn't matter if a game dev has a Titan X in a workstation; if an AMD engineer tells him that doing something a certain way works better on AMD, it gets done. Besides, ever heard of QA departments? Do you think they only run Titan cards there? Trust me, they do not.

@ratirt
The RX 480 has always been faster than the GTX 970. It was also faster than the vanilla R9 290/390, but generally not faster than the R9 390X and GTX 980. Today, it's usually beating both.
 