
Editorial NVIDIA's Weakness is AMD's Strength: It's Time to Regain Mid-Range Users

Joined
Oct 22, 2014
Messages
7,165 (3.81/day)
Location
Sunshine Coast
System Name Black Box
Processor Intel Xeon E5-2680 10c/20t 2.8GHz @ 3.0GHz
Motherboard Asrock X79 Extreme 11
Cooling Coolermaster 240 RGB A.I.O.
Memory G. Skill 16Gb (4x4Gb) 2133Mhz
Video Card(s) Nvidia GTX 710
Storage Sandisk X 400 256Gb
Display(s) AOC 22" Freesync 1m.s. 75Hz
Case Corsair 450D High Air Flow.
Audio Device(s) No need.
Power Supply FSP Aurum 650W
Mouse Yes
Keyboard Of course
Software W10 Home Premium 64 bit
I think you fail to understand that the majority of users don't buy high-end cards, but mid-range or low-end cards, which is why Intel is the leader in GPU sales.
Intel? :confused:
 
Joined
Jun 10, 2014
Messages
1,822 (0.91/day)
and probably Navi will just be a number of tweaks.
+ an Nvidia Tensor competitor, likely.
Yes, certainly, there is a good chance of that.
But Navi is not going to be a redesign of the fundamental architecture of GCN.

Vega 56 would be better worth your money, especially with flashing to the 64 BIOS, overclocking and undervolting. These seem to have very good results, as AMD was pretty much rushing those GPUs out without properly tuning power consumption. The Vega architecture on this process is already maxed out. Anything above 1650 MHz with a full load applied runs towards 350 to 400 W territory: almost twice that of a 1080, and probably not even a third more performance.
Wait a minute, you're arguing Vega is a good buy since you can flash the BIOS and overclock it?
You are talking about something which is very risky, and even when successful, the Pascal counterparts are still better. So what's the point? You are encouraging something which should be restricted to enthusiasts who do that as a hobby. This should never be a buying recommendation.

No matter how you flip it, Vega was an inferior choice vs. Pascal at last year's prices, and currently, with Pascal on sale and Turing hitting the shelves, there is no reason to buy Vega for gaming.

The refresh on a smaller node is good: it allows AMD to lower power consumption, push for higher clocks and hopefully produce cheaper chips. The smaller you make the dies, the more fit on a silicon wafer. RTX is so damn expensive because those are frankly big dies, and big dies take up a lot of space on a wafer.
Turing is currently about twice as efficient per watt as Vega; even with a node shrink, Vega will not be able to compete there. And don't forget that Nvidia has access to the same node as AMD.
Still, the first generation of the 7 nm node will not produce high volumes. Production with triple/quad patterning on DUV will be very slow and have yield issues. A few weeks ago GloFo gave up on 7 nm, not because it didn't work, but because it wasn't cost effective. It will take a while before we see wide adoption of 7 nm; volumes and costs will probably eventually beat the current nodes, but it will take a long time.
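The die-size economics behind "big dies are expensive" can be sketched with the classic dies-per-wafer estimate. The die areas below are the commonly cited figures for TU102 (RTX 2080 Ti, ~754 mm²) and GP106 (GTX 1060, ~200 mm²); the formula is a first-order approximation that ignores defect density and scribe lines, so treat the numbers as illustrative.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Classic first-order estimate: usable dies on a round wafer,
    discounting the partial dies lost along the wafer edge."""
    r = wafer_diameter_mm / 2
    wafer_area = math.pi * r * r
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# TU102 (RTX 2080 Ti) is ~754 mm^2; GP106 (GTX 1060) is ~200 mm^2.
print(dies_per_wafer(754))  # 69 candidate dies per 300 mm wafer
print(dies_per_wafer(200))  # 306 candidate dies per 300 mm wafer
```

Roughly 4x more small dies per wafer, before yield differences (which favor small dies even further) are accounted for.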

Polaris was a good mid-range card, and still is. PUBG does excellently with a 75 Hz/FPS lock at WQHD. In my opinion people don't need 144 FPS on a 60 Hz screen. Cap that and you can easily halve your power bill. :) Something you don't hear people saying either.
When the first GCN cards were released, they competed well with Kepler, but Maxwell started to pull ahead of the 2nd/3rd-gen GCN of the 300-series. Polaris (4th-gen GCN) is barely different from its predecessors; most of the improvement is a pure node shrink, and you can't call it good when it's on par with the previous Maxwell on an older node. The RX 480/580 was never a better choice than the GTX 1060, and the lower models are just low-end anyway.

Is there a new product coming... something, or does AMD just go idle for 12-18 months, maintaining the Polaris products as they are?
We don't know if there will be another refresh of Polaris, but Navi is still many months away.

But I would be worried about buying AMD cards so late in the product cycle; they have been known to drop driver support for cards that are still being sold. Until their policy changes, I wouldn't buy anything but their latest generation.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
19,884 (3.49/day)
Processor Core i7-4790K
Memory 16 GB
Video Card(s) GTX 1080
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 7
Editorial news posts will now automatically have "Editorial" Thread prefix on the forums, like this thread
 
Joined
Dec 30, 2010
Messages
852 (0.26/day)
Yes, certainly, there is a good chance of that.


When the first GCN cards were released, they competed well with Kepler, but Maxwell started to pull ahead of the 2nd/3rd-gen GCN of the 300-series. Polaris (4th-gen GCN) is barely different from its predecessors; most of the improvement is a pure node shrink, and you can't call it good when it's on par with the previous Maxwell on an older node. The RX 480/580 was never a better choice than the GTX 1060, and the lower models are just low-end anyway.
On Reddit, several people who went from an AMD-based card back to Nvidia noted an instant color difference in games, where textures in the green camp were a bit more blurry compared to AMD. Saying that the 1060 is a better choice is not really true if you care about image quality. It wouldn't surprise me if Nvidia is tinkering with LOD here and there to make the benchmark percentages look better.

The image quality of AMD cards in general in games is still superior compared to Nvidia. One reason for me to stick with AMD.
 
Joined
Jun 10, 2014
Messages
1,822 (0.91/day)
On Reddit, several people who went from an AMD-based card back to Nvidia noted an instant color difference in games, where textures in the green camp were a bit more blurry compared to AMD. Saying that the 1060 is a better choice is not really true if you care about image quality. It wouldn't surprise me if Nvidia is tinkering with LOD here and there to make the benchmark percentages look better.

The image quality of AMD cards in general in games is still superior compared to Nvidia. One reason for me to stick with AMD.
Hmmm, "random" people on reddit…

Over 10 years ago, there used to be large variations in render quality, both in texture filtering and AA, between different generations. But since Fermi and the HD 5000 series, I haven't seen any substantial differences.

There are benchmarks like 3DMark where both are known to cheat by overriding the shader programs. But as for GCN vs. Pascal, there is no general difference in image quality. You might be able to find an edge case in a special AA mode or a game where the driver "optimizes" (screws up) a shader program. But Direct3D, OpenGL and Vulkan are all designed with requirements such that variations in rendering output should be minimal.
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
20,702 (4.34/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: Athlon II x4 630 3.5GHz
Motherboard ASUS P8P67 Pro :: GIgabyte GA-770T-USB3
Cooling Corsair H70 :: Thermaltake Big Typhoon
Memory 2x4GB DDR3 1866 :: 2x1GB DDR3 1333
Video Card(s) 2x PNY GTX1070 :: none
Storage Plextor M5s 128GB, WDC Black 500GB :: Mushkin Enhanced 60GB SSD, WD RE3 1TB
Display(s) Acer P216HL HDMI :: None
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - iLive IT153B Soundbar (optical) :: None
Power Supply FSP Hydro GE 550w :: something
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
Editorial news posts will now automatically have "Editorial" Thread prefix on the forums, like this thread
This post needs a million likes.
 

quadibloc

New Member
Joined
Sep 26, 2018
Messages
25 (0.06/day)
Ray tracing looks pretty, but I'm not sure it's all that exciting to gamers. But since ray tracing requires more extensive and flexible arithmetic capabilities than conventional graphics operations, I have wondered whether NVIDIA's new cards with ray tracing capabilities would also be better at GPU computing. If so, I hope that AMD does find a way to eventually incorporate this capability into their products as well.
 
Joined
Feb 3, 2017
Messages
1,888 (1.81/day)
Processor i5-8400
Motherboard ASUS ROG STRIX Z370-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-3200 CL16
Video Card(s) Gainward GeForce RTX 2080 Phoenix
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Logitech G700
Keyboard Corsair K60
Ray tracing looks pretty, but I'm not sure it's all that exciting to gamers. But since ray tracing requires more extensive and flexible arithmetic capabilities than conventional graphics operations, I have wondered whether NVIDIA's new cards with ray tracing capabilities would also be better at GPU computing. If so, I hope that AMD does find a way to eventually incorporate this capability into their products as well.
Ray tracing capabilities will not make GPUs better at general computing.
Turing is better at compute than Pascal, but not because of the ray tracing or Tensor cores; it has a slightly evolved microarchitecture. Turing is close to Volta in how it handles compute.

There are enough signs that ray tracing will take off in some form or another (DXR, Vulkan RT extensions, OptiX/ProRender), and the technology is likely to bleed over from the professional sector, where it has already taken off. The questions are when, how, and whether Turing's implementation will be relevant.
 
Joined
Mar 10, 2014
Messages
1,691 (0.80/day)
Ray tracing capabilities will not make GPUs better at general computing.
Turing is better at compute than Pascal, but not because of the ray tracing or Tensor cores; it has a slightly evolved microarchitecture. Turing is close to Volta in how it handles compute.

There are enough signs that ray tracing will take off in some form or another (DXR, Vulkan RT extensions, OptiX/ProRender), and the technology is likely to bleed over from the professional sector, where it has already taken off. The questions are when, how, and whether Turing's implementation will be relevant.
Would be super cool to get some redux games with RT where it could shine. I'm thinking of games like Mirror's Edge. Just imagine that with RT reflections on buildings.
 
Joined
Jun 28, 2016
Messages
2,883 (2.28/day)
Ray tracing capabilities will not make GPUs better at general computing.
Terminology mix-up.
Ray tracing is just another computing task, and it can definitely be put under the "general processing" umbrella.
Hence, adding an RT-specific ASIC to the chip will (by definition) improve computing potential in a very small class of problems.
However, as we learn how to use RT cores for other problems, they will become more and more useful (and the GPGPU "gain" will increase).

Ray tracing is primarily a specific case of the collision detection problem, so it's not that difficult to imagine a problem where this ASIC could be used.
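To make the collision-detection analogy concrete, here is a minimal ray-sphere test, the textbook example of the kind of intersection query ray tracing performs constantly. This is a toy illustration of the math, not how RT cores actually work internally.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Solve |o + t*d - c|^2 = r^2 for t (direction assumed unit length);
    the ray hits iff the quadratic has a non-negative root."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c_term = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c_term
    if disc < 0:
        return False  # the ray's line misses the sphere entirely
    root = math.sqrt(disc)
    # A hit counts only if it lies in front of the ray origin (t >= 0).
    return (-b - root) / 2.0 >= 0 or (-b + root) / 2.0 >= 0

print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1))  # True: sphere ahead
print(ray_hits_sphere((0, 0, 0), (0, 1, 0), (0, 0, 5), 1))  # False: ray points away
```

The same query structure (ray vs. primitive) shows up in physics, audio occlusion and line-of-sight checks, which is why dedicated hardware for it could plausibly serve more than rendering.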

Turing is better at compute than Pascal, but not because of the ray tracing or Tensor cores; it has a slightly evolved microarchitecture. Turing is close to Volta in how it handles compute.
Most of the things written above apply to Tensor cores. They're good at matrix operations, and they will speed up other tasks. They already do.
There are enough signs that ray tracing will take off in some form or another (DXR, Vulkan RT extensions, OptiX/ProRender), and the technology is likely to bleed over from the professional sector, where it has already taken off. The questions are when, how, and whether Turing's implementation will be relevant.
The 2080 Ti's RT cores allegedly speed up ray tracing around 6x compared to what the chip could do on CUDA alone. That means they're a few dozen times more effective than the CUDA cores they could be replaced with (measured by area). That's fairly significant.
If ray tracing catches on as a feature, the hardware accelerator's performance will be so far ahead that it will become a standard.
And even if RT remains niche (or only available in high-end GPUs), someone will soon learn how to use this hardware to boost physics or something else. That's why we shouldn't rule out RT cores in cheaper GPUs, even if they would be too slow for the "flagship" use case, i.e. RTRT in gaming.
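The per-area arithmetic behind that "few dozen times" claim can be sketched as below. The 10% die-area share for RT cores is purely a guessed figure (NVIDIA hasn't published a breakdown), so the result is illustrative only.

```python
def per_area_effectiveness(speedup, area_fraction):
    """If a fixed-function block occupying `area_fraction` of the die
    matches `speedup` times the whole programmable array's throughput
    on one task, its per-area advantage on that task is speedup / fraction."""
    return speedup / area_fraction

# 6x alleged RT speedup, hypothetical 10% of die area spent on RT cores:
print(per_area_effectiveness(6.0, 0.10))  # ~60x on these assumptions
```

Even if the true area share were double that guess, the per-area advantage would still land in the "few dozen times" range the post describes.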

Also, I believe you're thinking about RTRT when saying RT, right? ;-)
 
Joined
Feb 3, 2017
Messages
1,888 (1.81/day)
Processor i5-8400
Motherboard ASUS ROG STRIX Z370-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-3200 CL16
Video Card(s) Gainward GeForce RTX 2080 Phoenix
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Logitech G700
Keyboard Corsair K60
I suppose it's not that important to be very specific with terminology in a thread like this, is it? If I remember correctly, RT cores are technically additional units in every SM with hardware BVH traversal capability. And like you said, Tensor cores do certain types of matrix operations in hardware. For an average gamer or hardware enthusiast (like me) they are still RT and Tensor cores, as opposed to the CUDA/GCN cores that are GPGPU :D
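The hardware BVH traversal mentioned above boils down to repeating one cheap test, ray vs. axis-aligned bounding box, at every node of the tree. A software sketch of that per-node "slab test" (toy code for illustration, not the actual RT core logic):

```python
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: the ray hits the box iff the parameter intervals where
    it is inside each axis-aligned slab overlap. This is the test a BVH
    traversal performs at every internal node."""
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
    return t_near <= t_far

# Precompute reciprocal direction components once per ray.
inv = lambda d: tuple(1.0 / x if x != 0 else float("inf") for x in d)
print(ray_hits_aabb((0, 0, 0), inv((0, 0, 1)), (-1, -1, 4), (1, 1, 6)))  # True
print(ray_hits_aabb((0, 0, 0), inv((1, 0, 0)), (-1, -1, 4), (1, 1, 6)))  # False
```

Doing this in fixed-function hardware, instead of burning shader instructions on it, is essentially what the RT cores bring to the table.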

RT is really new, so there are no real applications for it in the consumer space yet. Same for Tensor cores (although those have very clear uses in the HPC space). I would not count on these being very useful beyond their intended use case.

You are right, I did mean RTRT. Although strictly speaking, RTX does not necessarily have to mean RTRT; a lot of applications for Quadro are not real-time, yet are still accelerated on the same units.
 