
AMD GPUs See Lesser Performance Drop on "Deus Ex: Mankind Divided" DirectX 12

Joined
Jul 31, 2014
Messages
380 (0.29/day)
Likes
110
System Name Diablo | Baal | Mephisto | Andariel
Processor i5-3570K@4.4GHz | 2x Xeon X5675 | i7-4710MQ | i7-2640M
Motherboard Asus Sabertooth Z77 | HP DL380 G6 | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Cooling Swiftech H220-X | Chassis cooled (6 fans + HS) | dual-fanned heatpipes | small-fanned heatpipe
Memory 32GiB DDR3-1600 CL9 | 96GiB DDR3-1333 ECC RDIMM | 32GiB DDR3L-1866 CL11 | 8GiB DDR3L-1600 CL11
Video Card(s) Dual GTX 670 in SLI | Embedded ATi ES1000 | Quadro K2100M | Intel HD 3000
Storage many, many SSDs and HDDs....
Display(s) 1 Dell U3011 + 2x Dell U2410 | HP iLO2 KVMoIP | 3200x1800 Sharp IGZO | 1366x768 IPS with Wacom pen
Case Corsair Obsidian 550D | HP DL380 G6 Chassis | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Audio Device(s) Auzentech X-Fi HomeTheater HD | None | On-board | On-board
Power Supply Corsair AX850 | Dual 750W Redundant PSU (Delta) | Dell 330W+240W (Flextronics) | Lenovo 65W (Delta)
Mouse Logitech G502, Logitech G700s, Logitech G500, Dell optical mouse (emergency backup)
Keyboard 1985 IBM Model F 122-key, Ducky YOTT MX Black, Dell AT101W, 1994 IBM Model M, various integrated
Software FAAAR too much to list
#76
I wouldn't go so far as to say dominance, it's just that AMD is finally getting their money's worth out of their 'metal'.

- AMD uses a wider bus
- AMD uses more shaders
- AMD runs at lower clocks
- Polaris provides about similar (or slightly higher) perf/clock to Pascal
- Polaris still has a lower perf/watt than Pascal
- GCN has not radically changed since HD7xxx.

AMD just runs a wider GPU across the board, as they have done for a long time. GCN is geared to be an extremely balanced arch with some overcapacity on the VRAM end. It is built to let the core do all the work it can, whereas Nvidia's arch is always focused on 'efficiency gains through tight GPU balance' - Nvidia obtains that balance by cutting heavily into bus width and removing everything from the GPU core that isn't required for gaming. They've tried several things: DP was the first thing they dropped, with Kepler, and then delta compression enabled them to reduce bus width further. This is also why Nvidia's cards don't stretch their legs at higher resolutions, but rather lose performance. Only the GDDR5X-equipped 1080 avoids that fate.
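To put rough numbers on the bus-width point, peak memory bandwidth is simply bus width times effective data rate. A minimal sketch (reference-card bus widths and memory speeds, treated here as approximate):

```cpp
#include <cstdio>

// Peak DRAM bandwidth (GB/s) = (bus width in bits / 8 bytes) * effective data rate (Gbps).
static double bandwidth_gbs(int bus_bits, double gbps) {
    return bus_bits / 8.0 * gbps;
}

int main() {
    // Approximate reference-card configurations, for illustration only.
    printf("R9 390   (512-bit, 6 Gbps GDDR5)  : %6.0f GB/s\n", bandwidth_gbs(512, 6.0));
    printf("RX 480   (256-bit, 8 Gbps GDDR5)  : %6.0f GB/s\n", bandwidth_gbs(256, 8.0));
    printf("GTX 980  (256-bit, 7 Gbps GDDR5)  : %6.0f GB/s\n", bandwidth_gbs(256, 7.0));
    printf("GTX 1080 (256-bit, 10 Gbps GDDR5X): %6.0f GB/s\n", bandwidth_gbs(256, 10.0));
    // Delta color compression effectively multiplies these figures for compressible
    // render targets, which is how a narrower bus can keep up at lower resolutions.
    return 0;
}
```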

On DX11, AMD GPUs were just fine, and they excelled only at higher resolutions. Why? Not just because of VRAM, but because higher res = lower relative CPU load. In DX12, GCN gets to stretch its legs earlier and also at lower resolutions, in part because of the better CPU usage of that API. Vulkan is similar. That CPU usage was the last hurdle for GCN to really come to fruition. Say what you want, but AMD has really made a smart move here, even though we can doubt how conscious that move really was. They have effectively gained an architectural advantage by letting the market do most of the work.

The irony is that the gaming market has moved towards GCN, and GCN has seen very minimal architectural changes, while the market is moving away from Nvidia's cost/efficiency-focused GPU architecture. At the same time, Nvidia can almost mask that change through much higher perf/watt, but that only hides so much of the underlying issue: Nvidia GPUs have to clock really high to deliver solid performance, because right now they lack not only a wide bus but also raw shader count.

I think it is inevitable, and safe to predict, that Nvidia has now reached a cap with regard to clock speeds on the core. The only way forward is for them to once again start building bigger and wider GPUs. AMD, on the flip side, has more wiggle room and a lot of things left to improve - clocks, efficiency, and judging by the RX 480, they also have space left on the die.
nV has also been incrementally widening their SMs each gen, very likely in order to better match increasing output resolutions.

As for DP, AMD has also been cutting it back on GCN. On the original GCN1 chips it's driver-limited to 1/2 rate; on GCN1.1 and beyond it's been cut down in hardware to 1/4 and then 1/8, and I think the current ratio is 1/16, very close to nV's 1/32 figure for gaming cards.
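For a feel of what those ratios mean, peak FP32 throughput is roughly 2 x shader count x clock (one FMA per ALU per cycle), and the FP64 figure is just that scaled by the DP ratio. A rough sketch; the shader counts and clocks below are made-up but plausible examples, not any specific SKU:

```cpp
#include <cstdio>

// Peak FP32 TFLOPS ~= 2 ops (FMA) * shader count * clock (GHz) / 1000.
// FP64 is the FP32 number scaled by the architecture's DP rate ratio.
static double fp32_tflops(int shaders, double clock_ghz) {
    return 2.0 * shaders * clock_ghz / 1000.0;
}

int main() {
    struct { const char* name; int shaders; double ghz; double dp_ratio; } cards[] = {
        // Illustrative configurations only.
        { "Compute-oriented chip, 1/2 DP", 2048, 1.00, 1.0 / 2.0  },
        { "Gaming chip, 1/16 DP",          2304, 1.27, 1.0 / 16.0 },
        { "Gaming chip, 1/32 DP",          2560, 1.73, 1.0 / 32.0 },
    };
    for (auto& c : cards) {
        double fp32 = fp32_tflops(c.shaders, c.ghz);
        printf("%-32s FP32 %5.2f TFLOPS, FP64 %5.2f TFLOPS\n",
               c.name, fp32, fp32 * c.dp_ratio);
    }
    return 0;
}
```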

Particles are actually just a bunch of tiny polygons with a texture attached to them. Why wouldn't you run them on the GPU? Especially since we have specialized features like geometry instancing to handle exactly that: hundreds of identical elements.
That's when they get rendered (and they get rendered on the GPU just fine and as intended and expected).

You still need to calculate position and movement like any other entity in the scene before the scene gets rendered... ffs man, this is basic renderer workflow...
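To make that split concrete (simulation on the CPU, rendering via instancing on the GPU), here is a minimal sketch. The struct layout and constants are just assumptions for the example; the commented-out draw call is the D3D12-style instanced draw such a system would end with:

```cpp
#include <vector>
#include <cstdio>

// CPU side: integrate particle motion once per frame, like any other scene entity.
struct Particle {
    float pos[3];
    float vel[3];
    float life;
};

static void update_particles(std::vector<Particle>& particles, float dt) {
    const float gravity = -9.81f;
    for (auto& p : particles) {
        p.vel[1] += gravity * dt;             // simple ballistic motion
        for (int i = 0; i < 3; ++i)
            p.pos[i] += p.vel[i] * dt;
        p.life -= dt;
    }
}

int main() {
    std::vector<Particle> particles(10000, Particle{{0, 0, 0}, {0, 5, 0}, 2.0f});
    const float dt = 1.0f / 60.0f;

    update_particles(particles, dt);          // CPU (or a compute pass) does this part

    // GPU side: upload the per-particle data as an instance buffer and draw every
    // particle with a single instanced call, e.g. in D3D12:
    //   cmdList->DrawIndexedInstanced(indicesPerQuad, (UINT)particles.size(), 0, 0, 0);
    printf("first particle y after one step: %f\n", particles[0].pos[1]);
    return 0;
}
```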
 
Joined
Oct 2, 2004
Messages
12,736 (2.60/day)
Likes
6,084
Location
Europe\Slovenia
System Name Dark Silence 2
Processor Intel Core i7 5820K @ 4.5 GHz (1.15V)
Motherboard MSI X99A Gaming 7
Cooling Cooler Master Nepton 120XL
Memory 32 GB DDR4 Kingston HyperX Fury 2400 MHz @ 2666 MHz
Video Card(s) AORUS GeForce GTX 1080Ti 11GB (2000/11100)
Storage Samsung 850 Pro 2TB SSD (3D V-NAND)
Display(s) ASUS VG248QE 144Hz 1ms (DisplayPort)
Case Corsair Carbide 330R Titanium
Audio Device(s) Creative Sound BlasterX AE-5 + Altec Lansing MX5021 (HiFi capacitors and OPAMP upgrade)
Power Supply BeQuiet! Dark Power Pro 11 750W
Mouse Logitech G502 Proteus Spectrum
Keyboard Cherry Stream XT Black
Software Windows 10 Pro 64-bit (Fall Creators Update)
#77
You do know a standard named DirectCompute has existed for a very long time, right? It can eliminate CPU involvement entirely if you decide to do so.
 
Joined
Jul 31, 2014
Messages
380 (0.29/day)
Likes
110
System Name Diablo | Baal | Mephisto | Andariel
Processor i5-3570K@4.4GHz | 2x Xeon X5675 | i7-4710MQ | i7-2640M
Motherboard Asus Sabertooth Z77 | HP DL380 G6 | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Cooling Swiftech H220-X | Chassis cooled (6 fans + HS) | dual-fanned heatpipes | small-fanned heatpipe
Memory 32GiB DDR3-1600 CL9 | 96GiB DDR3-1333 ECC RDIMM | 32GiB DDR3L-1866 CL11 | 8GiB DDR3L-1600 CL11
Video Card(s) Dual GTX 670 in SLI | Embedded ATi ES1000 | Quadro K2100M | Intel HD 3000
Storage many, many SSDs and HDDs....
Display(s) 1 Dell U3011 + 2x Dell U2410 | HP iLO2 KVMoIP | 3200x1800 Sharp IGZO | 1366x768 IPS with Wacom pen
Case Corsair Obsidian 550D | HP DL380 G6 Chassis | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Audio Device(s) Auzentech X-Fi HomeTheater HD | None | On-board | On-board
Power Supply Corsair AX850 | Dual 750W Redundant PSU (Delta) | Dell 330W+240W (Flextronics) | Lenovo 65W (Delta)
Mouse Logitech G502, Logitech G700s, Logitech G500, Dell optical mouse (emergency backup)
Keyboard 1985 IBM Model F 122-key, Ducky YOTT MX Black, Dell AT101W, 1994 IBM Model M, various integrated
Software FAAAR too much to list
#78
You do know a standard named DirectCompute exists for a very long time, right? It can eliminate CPU involvement entirely if you decide to do so.
Clearly it can't, given the number of games that use DirectCompute and still have meaningful CPU load. The fact is, as of right now, you can't run everything on the GPU (as much as AMD and nV would like that to be the case), and in many cases it's more efficient to use the CPU for things that could be done on the GPU.

And FYI, CUDA and Stream have existed for longer, and OpenCL for about the same amount of time.
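The "sometimes the CPU is just cheaper" point usually comes down to transfer and dispatch overhead: if the data has to cross PCIe both ways, a small workload never pays for the trip. A back-of-envelope sketch, with all of the bandwidth, overhead and speed-up figures being assumptions rather than measurements:

```cpp
#include <cstdio>
#include <initializer_list>

// Very rough model: offloading pays off only if
//   cpu_time > transfer_time + gpu_time + dispatch/readback overhead.
int main() {
    const double pcie_gbs        = 12.0;    // assumed usable PCIe 3.0 x16 bandwidth, GB/s
    const double launch_overhead = 200e-6;  // assumed dispatch + readback sync cost, seconds
    const double gpu_speedup     = 10.0;    // assumed kernel speed-up vs one CPU thread

    for (double megabytes : {0.1, 1.0, 10.0, 100.0}) {
        double cpu_time  = megabytes * 1e-3;                    // pretend 1 ms per MB on CPU
        double gpu_time  = cpu_time / gpu_speedup;
        double transfer  = 2.0 * megabytes / 1024.0 / pcie_gbs; // there and back again
        double offloaded = transfer + gpu_time + launch_overhead;
        printf("%7.1f MB: CPU %.3f ms, GPU(total) %.3f ms -> %s\n",
               megabytes, cpu_time * 1e3, offloaded * 1e3,
               offloaded < cpu_time ? "offload" : "keep on CPU");
    }
    return 0;
}
```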
 
Joined
Oct 2, 2004
Messages
12,736 (2.60/day)
Likes
6,084
Location
Europe\Slovenia
System Name Dark Silence 2
Processor Intel Core i7 5820K @ 4.5 GHz (1.15V)
Motherboard MSI X99A Gaming 7
Cooling Cooler Master Nepton 120XL
Memory 32 GB DDR4 Kingston HyperX Fury 2400 MHz @ 2666 MHz
Video Card(s) AORUS GeForce GTX 1080Ti 11GB (2000/11100)
Storage Samsung 850 Pro 2TB SSD (3D V-NAND)
Display(s) ASUS VG248QE 144Hz 1ms (DisplayPort)
Case Corsair Carbide 330R Titanium
Audio Device(s) Creative Sound BlasterX AE-5 + Altec Lansing MX5021 (HiFi capacitors and OPAMP upgrade)
Power Supply BeQuiet! Dark Power Pro 11 750W
Mouse Logitech G502 Proteus Spectrum
Keyboard Cherry Stream XT Black
Software Windows 10 Pro 64-bit (Fall Creators Update)
#79
Doing AI and basic physics on the CPU is perfectly reasonable. For anything beyond pathfinding and basic rigid-body physics, you'll need a GPU. That's a fact.
 
Joined
Dec 31, 2014
Messages
32 (0.03/day)
Likes
11
Processor Intel Core i5 4670K
Motherboard GIGABYTE G1.Sniper Z87
Cooling Be Quiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance (4 x 8GB)
Video Card(s) GALAX GeForce GTX 970 OC Silent Infinity Black Edition 4GB
Storage WD 1TB, 120GB & 250GB Samsung 840 EVO
Display(s) BenQ XL2411
Case Fractal Design Define R5
Power Supply XFX PRO 650W Core Edition
Mouse Zowie FK2
Keyboard Vortex POK3R
#80
so this trend of ngreedia gpus lagging in dx12 titles continues. how long before a lawsuit against maxwell and pascal gpus for false advertisement of dx12 capabilties?
You're bloody hilarious. Do you honestly think Nvidia can be sued for such a thing? They advertised DX12 compatibility, and last time I looked, DX12-capable Nvidia cards run DX12.
 

rtwjunkie

PC Gaming Enthusiast
Joined
Jul 25, 2008
Messages
9,638 (2.75/day)
Likes
13,434
Location
Louisiana -Laissez les bons temps rouler!
Processor Core i7-3770k 3.5Ghz, O/C to 4.2Ghz fulltime @ 1.19v
Motherboard ASRock Fatal1ty Z68 Pro Gen3
Cooling All air: 2x140mm Fractal exhaust; 3x 140mm Cougar Intake; Enermax T40F CPU cooler
Memory 2x 8GB Mushkin Redline DDR-3 1866
Video Card(s) MSI GTX 980 Ti Gaming 6G LE
Storage 1x 250GB MX200 SSD; 2x 2TB WD Black; 1x4TB WD Black;1x 2TB WD Green (eSATA)
Display(s) HP 25VX 25" IPS @ 1920 x 1080
Case Fractal Design Define R4 Black w/Titanium front -windowed
Audio Device(s) Soundblaster Z
Power Supply Seasonic X-850
Mouse Logitech G500
Keyboard Logitech G610 Orion mechanical (Cherry Brown switches)
Software Windows 10 Pro 64-bit (Start10 & Fences 3.0 installed)
#81
I won't be updating my copy to DX12. With my 980 Ti I already get super-smooth performance. If I'm only going to see a drop in fps with 12, then I'll just stick with the CPU utilization on 11, which is going very well.
 
Joined
Oct 4, 2013
Messages
84 (0.05/day)
Likes
15
#82
It is bizarre, but then DX12 isn't needed for Nvidia cards, while for AMD it gives better fps.
This reply also goes to the guy saying there is no visual difference.

Just imagine this:
We have card A; card A gives us X fps with Y visual effects in a game.
When card A and every next generation of that card is able to draw more fps from the same game, or scene, what do we (us programmers) do? WE MAKE BETTER VISUAL EFFECTS.
That's what has been going on since the 80s, and not only in graphics: processors were able to handle more, and the software got better and better for the end user in every aspect.
If you make ZERO, that's 0, progress in your APIs, μArch, OS, etc., then you get 0 better visuals back.

What is going to happen in the future? Well, that's easy to predict: game companies will see that they cannot squeeze more complicated graphics into their games, and they will keep them the same for every generation of the games.
...and that's what nVidia has been doing for the last decade. Nvidia shits on every gamer's face with their libraries, because that's what their hardware can handle, and they don't let studios' innovations surface in the market.
Do you remember what happened with the Crysis series? It was marvelous that we had a studio push the hardware that much and push the companies to make beastier GPUs.
Do you remember what happened with every GameWorks title? Same visuals; it only forced you to jump to a newer generation of nVidia cards because they crippled their older GPUs.

...but consumers are stupid and they deserve to get robbed by A-holes like nVidia.

That, my "friends", is called evolution and a step forward in technology, but some people think it's not much because they have to justify their $500 or €500 purchase, not only by BSing themselves, but by spreading the S*** all over the world.
 

rtwjunkie

PC Gaming Enthusiast
Joined
Jul 25, 2008
Messages
9,638 (2.75/day)
Likes
13,434
Location
Louisiana -Laissez les bons temps rouler!
Processor Core i7-3770k 3.5Ghz, O/C to 4.2Ghz fulltime @ 1.19v
Motherboard ASRock Fatal1ty Z68 Pro Gen3
Cooling All air: 2x140mm Fractal exhaust; 3x 140mm Cougar Intake; Enermax T40F CPU cooler
Memory 2x 8GB Mushkin Redline DDR-3 1866
Video Card(s) MSI GTX 980 Ti Gaming 6G LE
Storage 1x 250GB MX200 SSD; 2x 2TB WD Black; 1x4TB WD Black;1x 2TB WD Green (eSATA)
Display(s) HP 25VX 25" IPS @ 1920 x 1080
Case Fractal Design Define R4 Black w/Titanium front -windowed
Audio Device(s) Soundblaster Z
Power Supply Seasonic X-850
Mouse Logitech G500
Keyboard Logitech G610 Orion mechanical (Cherry Brown switches)
Software Windows 10 Pro 64-bit (Start10 & Fences 3.0 installed)
#83
This reply goes also to the guy saying the there is no visual difference.
You are aware that Square Enix themselves said there would be no difference in visuals with this DX12 patch?

...but consumers are stupid and they deserve get stolen by such A-holes like nVidia.

That my "friends" is called evolution and a step forward in the technology, but some people think it's not much because they have to justify their 500$ or € purchase, not only by BSing themselves, but spreading the S*** all over the world
And you are innocent of spreading shit and hate, perpetuating this stupid red and green war? :shadedshu:
 
Joined
Oct 2, 2004
Messages
12,736 (2.60/day)
Likes
6,084
Location
Europe\Slovenia
System Name Dark Silence 2
Processor Intel Core i7 5820K @ 4.5 GHz (1.15V)
Motherboard MSI X99A Gaming 7
Cooling Cooler Master Nepton 120XL
Memory 32 GB DDR4 Kingston HyperX Fury 2400 MHz @ 2666 MHz
Video Card(s) AORUS GeForce GTX 1080Ti 11GB (2000/11100)
Storage Samsung 850 Pro 2TB SSD (3D V-NAND)
Display(s) ASUS VG248QE 144Hz 1ms (DisplayPort)
Case Corsair Carbide 330R Titanium
Audio Device(s) Creative Sound BlasterX AE-5 + Altec Lansing MX5021 (HiFi capacitors and OPAMP upgrade)
Power Supply BeQuiet! Dark Power Pro 11 750W
Mouse Logitech G502 Proteus Spectrum
Keyboard Cherry Stream XT Black
Software Windows 10 Pro 64-bit (Fall Creators Update)
#84
No, you/they don't improve graphics when there are performance gains. They've proven that consistently over the years. The worst waste of potential is tessellation, which I've bitched about many times. Engines waste huge amounts of polygons on things developers decided are "important" and the rest is the same blocky mess. So we have elements with 50 times more polygons than they actually need because of tessellation, and there are objects that aren't even affected by it. The result: games that often run even worse and don't even look any better than games without any tessellation.
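To put numbers on the "huge amounts of polygons" complaint: a triangle patch tessellated with a uniform factor f expands into roughly f^2 triangles, so the geometry cost grows quadratically while the visual payoff flattens out fast. A quick sketch (the 10k-patch mesh is hypothetical; 64 is D3D11's maximum tessellation factor):

```cpp
#include <cstdio>
#include <initializer_list>

// A triangle patch tessellated with a uniform factor f is split into roughly f*f
// sub-triangles, so geometry load grows quadratically with the factor.
int main() {
    const long long base_patches = 10000;            // hypothetical mesh: 10k patches
    for (int factor : {1, 2, 4, 8, 16, 32, 64}) {    // 64 is the D3D11 hull-shader maximum
        long long tris = base_patches * (long long)factor * factor;
        printf("tess factor %2d -> ~%10lld triangles\n", factor, tris);
    }
    return 0;
}
```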
 
Joined
Dec 14, 2009
Messages
6,705 (2.24/day)
Likes
5,957
Location
Glasgow - home of formal profanity
System Name New Ho'Ryzen
Processor Ryzen 1700X @ 3.82Ghz
Motherboard Asus Crosshair VI Hero
Cooling TR Le Grand Macho & custom GPU loop
Memory 16Gb G.Skill 3200 RGB
Video Card(s) GTX1080ti (Heatkiller WB) @ 2Ghz core/1.5(12)Ghz mem
Storage Samsumg 960 Pro m2. 512Gb
Display(s) Dell Ultrasharp 27" (2560x1440)
Case Lian Li PC-V33WX
Audio Device(s) On Board
Power Supply Seasonic Prime TItanium 850
Software W10
Benchmark Scores Look, it's a Ryzen on air........ What's the point?
#85
This reply goes also to the guy saying the there is no visual difference.

Just imagine this:
We have the card A, card A gives us X fps with the Y visual effects on a game.
When the card A and every next generation of this card is able to draw more fps from the same game, or scene, what do we(us programmers) do? WE MAKE BETTER VISUAL EFFECTS.
That's what has been going on since the 80s, not only in graphics, Processors were able to handle more, the software got better and better for the end user in each aspect.
If you make ZERO, that's 0, progress in your APIs, μArch, OS, e.t.c. then you get 0 better visuals back.

What is going to happen in the future? well that's easy to predict, game companies will see that they cannot squeeze more complicated graphics in the games and they will keep them the same for every generation of the games.
...and that's what nVidia is being doing the last decade. Nvidia shits on every gamers' face with their libraries because that's what they can handle and they do not let studios' innovations surface the market.
Do you remember what happened with Crysis series? It was marvelous that we had a studio push the hardware that much and push the companies to make beastier gpus.
Do you remember what happen with every Gameworks title? same visuals, only forced you to jump to a newer generation of nVidia's cards because they crippled their older gpus.

...but consumers are stupid and they deserve get stolen by such A-holes like nVidia.

That my "friends" is called evolution and a step forward in the technology, but some people think it's not much because they have to justify their 500$ or € purchase, not only by BSing themselves, but spreading the S*** all over the world.
Not sure what the angle of your reply to me is. Deus Ex is punishing on cards (relative to other games). The dev said DX12 may give better performance but not better visuals.
The huge memory use may be a factor, in that the monstrous textures take an awful lot of processing grunt to shift. API or not, I think running near 6GB at 1440p shows how much is being rendered. That takes power from a hefty chip. New FX may have to wait as devs keep creating more 'on screen' visuals which consume the chip's capacity.
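Rough numbers on why "monstrous textures" eat VRAM so fast: uncompressed RGBA8 is 4 bytes per texel, common block-compressed formats like BC7 are 1 byte per texel, and a full mip chain adds about a third. A minimal sketch with assumed example resolutions:

```cpp
#include <cstdio>

// Approximate texture footprint: width * height * bytes_per_texel, plus ~1/3 for mips.
static double footprint_mb(int w, int h, double bytes_per_texel) {
    return w * (double)h * bytes_per_texel * (4.0 / 3.0) / (1024.0 * 1024.0);
}

int main() {
    printf("2048x2048 RGBA8 : %6.1f MB\n", footprint_mb(2048, 2048, 4.0));
    printf("2048x2048 BC7   : %6.1f MB\n", footprint_mb(2048, 2048, 1.0));
    printf("4096x4096 RGBA8 : %6.1f MB\n", footprint_mb(4096, 4096, 4.0));
    printf("4096x4096 BC7   : %6.1f MB\n", footprint_mb(4096, 4096, 1.0));
    // A few hundred unique 4K materials (albedo + normal + roughness each)
    // gets into multi-gigabyte territory very quickly.
    return 0;
}
```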

I don't know though; it seems that if we want photo-realistic rendering, processing power with far higher TFLOPS is required. Maybe the Pascal Titan X points in that direction with its 'rubbish' async but enormous raw power.
 
Joined
Oct 28, 2012
Messages
228 (0.12/day)
Likes
74
Processor AMD Ryzen 1700x
Motherboard asus ROG Strix B-350I Gaming
Cooling bequiet pure rock slim
Memory Gskill Aegis 2x 8GB DDR4
Video Card(s) GTX 1060 3GB (yhea, so what ?)
Storage Samsung 960 evo 256 Gb Seagate 2To
Display(s) Benq VW2230
Case Ncase coming soon
Audio Device(s) FIIO Andes
Power Supply Corsair SF 600w
Mouse Logitech m500
Keyboard Corsair k55 RGB
Software win 10 pro
#86
Now it's hard to see just how much of a game changer DX12 will actually be.
I still remember the moment when Square Enix showed the DX12 demo:
Granted, it was running on four GTX Titan X cards, but it was glorious; it showed what could be done with a low-level API on PC.
The irony is that Nvidia doesn't seem to get any benefit from DX12: they get either the same or worse performance. Even the gain on AMD GPUs isn't really astonishing:
http://www.techspot.com/review/1081-dx11-vs-dx12-ashes/page3.html
Getting low-level optimisation on a compute monster like the Fury X is only worth 3 fps?

I remember a time when GPU makers had developers make publicly released technical demos to show what their GPUs could do. They keep blabbering about DX12, showing numbers on slides that look impressive, but right now we don't have a single example of what you can expect from DX12 on a single high-end GPU.

Vulkan is the API with the most impressive results at the moment, but if the story of OpenGL repeats itself, I fear this isn't going to matter that much, as studios will keep giving DX12 priority.

So yeah, DX12 at least gives you a great bump if you have a weak CPU, but is that really it? Was that really worth making so much noise about? Right now game developers are not giving me any reason to get hyped about DX12's benefits. Will there be any project ambitious and crazy enough to make people ask "Can it run XXX though?" while using the benefits of a low-level API?
Deus Ex: MD looks nice, but it's not "out of this world" nice. I'm not seeing a huge gap between it and Tomb Raider 2015, DOOM, or even TW3.
 
Joined
Jul 14, 2008
Messages
597 (0.17/day)
Likes
167
Location
Copenhagen, Denmark
System Name RIG1/Laptop
Processor i7 2600/i7 6700HQ
Motherboard Gigabyte PA65-UD3-B3/Acer
Cooling EKL Alpenföhn Nordwand Rev.B/OEM
Memory DDR31600 16GB/8GB DDR4 2133
Video Card(s) Gigabyte Windforce GTX 970/Intel HD 530 - GTX 950m
Storage Kingston V300 120GB 2x1TB WD enterprise drives/1TB Toshiba
Display(s) Samsung S27C350 + Dell 2407WFP-HC/OEM
Case Enermax Gracemesh/Acer Barebone
Audio Device(s) Creative SB X-Fi/onboard
Power Supply Enermax 650Watt/165 Watt power brick
Mouse G203 Prodigy/M705 Marathon
Keyboard G510
Software Win10 pro 64bit/Win10 pro 64bit
#87
I wouldn't go so far as to say dominance, it's just that AMD is finally getting their money's worth out of their 'metal'.

- AMD uses a wider bus
- AMD uses more shaders
- AMD runs at lower clocks
- Polaris provides about similar (or slightly higher) perf/clock to Pascal
- Polaris still has a lower perf/watt than Pascal
- GCN has not radically changed since HD7xxx.

AMD just runs a wider GPU across the board, as they have done for a long time. GCN is geared to be an extremely balanced arch that has some overcapacity on the VRAM end. It is built to let the core do all the work it can do, whereas Nvidia's arch is always focused at 'efficiency gains through tight GPU balance' - Nvidia obtains that balance by cutting heavily into bus width and removing everything from the GPU core that isn't required for gaming. They've tried several things, of which DP was the first thing they dropped with Kepler, then delta compression enabled them to further reduce bus width. This is also why Nvidia's cards don't stretch their legs at higher resolutions, but rather lose performance. Only the GDDR5X-supported 1080 avoids that fate.

On DX11, AMD GPU's were just fine and they excelled only at higher resolutions. Why? Not just because of VRAM, but because of the fact that higher res = lower CPU load. In DX12, GCN gets to stretch its legs even earlier and also at lower resolutions, in part also because of the better CPU usage of that API. Vulkan is similar. That CPU usage was the last hurdle for GCN to really come to fruition. Say what you want, but AMD has really made a smart move here, even though we can doubt how conscious that move has really been. They have effectively gained architectural advantage by letting the market do most of the work.

The irony is that the market for gaming has moved towards GCN, and GCN has seen very minimal architectural changes, while the market is moving away from Nvidia's cost/efficiency improvement-focused GPU architecture. At the same time, Nvidia can almost eclipse that change through a much higher perf/watt, but that only hides so much of the underlying issue, an issue of Nvidia GPU's having to clock really high to gain solid performance, because they lack not only a wide bus right now, but also raw shader counts.

I think it is inevitable, and safe to predict, that Nvidia has now reached a new cap with regards to clock speeds on the core. The only way forward is for them to once again start building bigger and wider GPUs. AMD, on the flip side, has more wiggle room and a lot of things left to improve - clocks, efficiency, and judging the RX480, they also have space left on the die.
this is an actual informative opinion. thank you mate!
i totally agree with what you said, i had the same feeling when i first heard about vulkan/dx12. if Nvidia is not careful they could lose a big chunk of the market in the next one to two gens of cards, which imo would be something positive for the consumer since i think that the gpu market has been lacking serious competition for some years now.
 
Joined
Mar 4, 2011
Messages
229 (0.09/day)
Likes
44
Location
Toronto, ON
System Name The Sovereign's Prince
Processor i5 6600K
Motherboard Asus Maximus VIII Hero
Cooling CoolerMaster Hyper 212 Evo
Memory 16GB Corsair Vengeance LPX DDR4 3200MHz
Video Card(s) Asus R9 290X DirectCU II OC
Storage SanDisk Extreme Pro 480GB, Intel 730 480GB, 2 x 3TB WD Red in RAID 1, 2 x 4TB WD Green in RAID 1
Display(s) BenQ V2400W 24" 16:10 1920*1200 LCD (Review/Pics @ http://hardforum.com/showthread.php?t=1315565)
Case Fractal Define R5 Windowless
Audio Device(s) SteelSeries Siberia 800, Asus Xonar Essense STX
Power Supply Corsair AX850
Mouse Logitech G900
Keyboard Microsoft Sidewinder X6
Software Win10 Pro x64
#88
if Nvidia is not careful they could lose a big chunk of the market in the next one to two gens of cards, which imo would be something positive for the consumer since i think that the gpu market has been lacking serious competition for some years now.
While of course a possibility, I wouldn't put any money/stock/holding of breath in that happening. Call me cynical if you must, but Nvidia won't lose significant market share in the next two gens even if they deserve to, for the same reason Apple doesn't lose market share even though they deserve to: they have a refined and sadly effective hype and marketing machine that constantly builds and stokes the consumer norm that they are the superior choice.

Though I hope you're right, for the price drops that need to happen for all consumers, in the age of Likes, Views and Trending, mindshare is the real metric that maintains the market-share status quo, and Nvidia is throwing too much TWIMTBP money around for that to change any time soon.
 
Joined
Sep 11, 2016
Messages
19 (0.04/day)
Likes
1
#89
The graphs and numbers posted by Guru3D reflect what has been seen so far between the new cards and the old ones. All AMD cards see an increase, Nvidia's newer 10xx cards see a smaller, nearly negligible increase, and the older 9xx cards see a penalty due to incomplete DX12 support.

The 1080 issue was mentioned as acknowledged and under investigation. After they are done, it will probably behave like the 1060 numbers.

Whether the game is poorly optimized (or not), or uses more complex scenery than other games, doesn't matter as much as the actual card performance curve, especially if you educate yourself enough and look not only at APIs but at the real source, which is explained in detail here
 
Joined
Sep 15, 2011
Messages
4,439 (1.89/day)
Likes
1,107
Processor Intel Core i7 3770k @ 4.3GHz
Motherboard Asus P8Z77-V LK
Memory 16GB(2x8) DDR3@2133MHz 1.5v Patriot
Video Card(s) MSI GeForce GTX 1080 GAMING X 8G
Storage 59.63GB Samsung SSD 830 + 465.76 GB Samsung SSD 840 EVO + 2TB Hitachi + 300GB Velociraptor HDD
Display(s) Acer Predator X34 3440x1440@100Hz G-Sync
Case NZXT PHANTOM410-BK
Audio Device(s) Creative X-Fi Titanium PCIe
Power Supply Corsair 850W
Mouse Anker
Software Win 10 Pro - 64bit
Benchmark Scores 30FPS in NFS:Rivals
#90
AMD beats NVIDIA in every single DX12 game (except games where both suck hard compared to DX11 for no logical reason). Must be "AMD biased games". Right. It's not because the rendering engine in Radeons is clearly superior for such tasks since HD7000 series when they introduced GCN, it has to be "bias". C'mon people, can you be less of a fanboys?...
No, AMD definitely DOES NOT beat nVidia in ANY/EVERY single D3D12 game. The right words are: AMD has a better performance gain in D3D12 than nVidia. That's all.
And please stop putting words like "fanboy" in your comments. It makes you look more and more like one. ;)
 

Malabooga

New Member
Joined
Jul 21, 2016
Messages
15 (0.03/day)
Likes
7
#91
No, AMD definitely DOES NOT beat nVidia in ANY/EVERY single D3D12 game. The right words are: AMD has a better performance gain in D3D12 than nVidia. That's all.
And please stop putting words like "fanboy" on your comments. It makes you look more and more like one. ;)
Yes it does. 2016 is definitely not a good year for NVidia; unfortunately, DX12/Vulkan came much faster than the "3-4 years away" that those who don't know better were claiming. Guess what, Microsoft > Intel, NVidia and AMD combined, and what Microsoft wants, Microsoft gets lol

And currently Microsoft wants W10 adoption, and DX12 is a part of that (and has been since release a year ago). And the majority of gamers will be on W10 within a year.
 

Malabooga

New Member
Joined
Jul 21, 2016
Messages
15 (0.03/day)
Likes
7
#92
I wouldn't go so far as to say dominance, it's just that AMD is finally getting their money's worth out of their 'metal'.

- AMD uses a wider bus
- AMD uses more shaders
- AMD runs at lower clocks
- Polaris provides about similar (or slightly higher) perf/clock to Pascal
- Polaris still has a lower perf/watt than Pascal
- GCN has not radically changed since HD7xxx.

AMD just runs a wider GPU across the board, as they have done for a long time. GCN is geared to be an extremely balanced arch that has some overcapacity on the VRAM end. It is built to let the core do all the work it can do, whereas Nvidia's arch is always focused at 'efficiency gains through tight GPU balance' - Nvidia obtains that balance by cutting heavily into bus width and removing everything from the GPU core that isn't required for gaming. They've tried several things, of which DP was the first thing they dropped with Kepler, then delta compression enabled them to further reduce bus width. This is also why Nvidia's cards don't stretch their legs at higher resolutions, but rather lose performance. Only the GDDR5X-supported 1080 avoids that fate.

On DX11, AMD GPU's were just fine and they excelled only at higher resolutions. Why? Not just because of VRAM, but because of the fact that higher res = lower CPU load. In DX12, GCN gets to stretch its legs even earlier and also at lower resolutions, in part also because of the better CPU usage of that API. Vulkan is similar. That CPU usage was the last hurdle for GCN to really come to fruition. Say what you want, but AMD has really made a smart move here, even though we can doubt how conscious that move has really been. They have effectively gained architectural advantage by letting the market do most of the work.

The irony is that the market for gaming has moved towards GCN, and GCN has seen very minimal architectural changes, while the market is moving away from Nvidia's cost/efficiency improvement-focused GPU architecture. At the same time, Nvidia can almost eclipse that change through a much higher perf/watt, but that only hides so much of the underlying issue, an issue of Nvidia GPU's having to clock really high to gain solid performance, because they lack not only a wide bus right now, but also raw shader counts.

I think it is inevitable, and safe to predict, that Nvidia has now reached a new cap with regards to clock speeds on the core. The only way forward is for them to once again start building bigger and wider GPUs. AMD, on the flip side, has more wiggle room and a lot of things left to improve - clocks, efficiency, and judging the RX480, they also have space left on the die.
Nice to see someone who gets it, kudos to you. NVidia has squeezed every MHz out of TSMC's 16nm node, and the only option left is to make a more GCN-like architecture, because they are at the end of the road with their MHz chase. And the last time they tried that, we got Fermi.
 
Joined
Sep 11, 2016
Messages
19 (0.04/day)
Likes
1
#93
Yes it does. 2016. is definitely not good year for NVidia, unfortunately DX12/Vulkan came much faster than "3-4 years away" like those who dont know better were saying. Guess what, Microsoft > than Intel, NVidia and AMD combined and what Microsoft wants Microsoft gets lol

And currently Microsoft wants W10 adoption and DX12 is a part of that (and has been since release a year ago). And majority of gamers are on W10 in a year.
Actually, currently it does. While AMD's current architecture is better placed and better optimized for DX12, in terms of raw clocks and max TFLOPS Nvidia dominates.

And no matter how well you optimize, there is a limit to how much the hardware can do. Like with cars, those numbers can probably be taken with a grain of salt, in the sense that manufacturers may inflate them, but if you look at the published numbers, Nvidia's are higher... Period.
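For what it's worth, the headline TFLOPS figures both camps quote come straight from shader count x clock. A small sketch using the commonly published reference specs (treat them as approximate peak numbers, not real-world performance):

```cpp
#include <cstdio>

// Peak FP32 throughput: 2 ops per ALU per clock (FMA) * shader count * clock.
// Reference-card figures; marketing-level peak numbers only.
int main() {
    struct { const char* name; int shaders; double boost_ghz; } cards[] = {
        { "GTX 1060", 1280, 1.708 },
        { "RX 480",   2304, 1.266 },
        { "GTX 1070", 1920, 1.683 },
        { "Fury X",   4096, 1.050 },
        { "GTX 1080", 2560, 1.733 },
    };
    for (auto& c : cards)
        printf("%-9s %4d shaders @ %.3f GHz -> %5.2f TFLOPS FP32\n",
               c.name, c.shaders, c.boost_ghz,
               2.0 * c.shaders * c.boost_ghz / 1000.0);
    return 0;
}
```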
 
Joined
Sep 15, 2011
Messages
4,439 (1.89/day)
Likes
1,107
Processor Intel Core i7 3770k @ 4.3GHz
Motherboard Asus P8Z77-V LK
Memory 16GB(2x8) DDR3@2133MHz 1.5v Patriot
Video Card(s) MSI GeForce GTX 1080 GAMING X 8G
Storage 59.63GB Samsung SSD 830 + 465.76 GB Samsung SSD 840 EVO + 2TB Hitachi + 300GB Velociraptor HDD
Display(s) Acer Predator X34 3440x1440@100Hz G-Sync
Case NZXT PHANTOM410-BK
Audio Device(s) Creative X-Fi Titanium PCIe
Power Supply Corsair 850W
Mouse Anker
Software Win 10 Pro - 64bit
Benchmark Scores 30FPS in NFS:Rivals
#94
Yes it does. 2016. is definitely not good year for NVidia, unfortunately DX12/Vulkan came much faster than "3-4 years away" like those who dont know better were saying. Guess what, Microsoft > than Intel, NVidia and AMD combined and what Microsoft wants Microsoft gets lol

And currently Microsoft wants W10 adoption and DX12 is a part of that (and has been since release a year ago). And majority of gamers are on W10 in a year.
Sorry, you wrote a bunch of nonsense there that is not even worth replying to. This post was just informational, btw...
 
Joined
Jan 31, 2011
Messages
1,698 (0.66/day)
Likes
543
System Name TeraUltima 7
Processor Intel Core i5 3570K @4Ghz
Motherboard ASRock Z68 Pro3-M
Cooling DeepCool Ice Berg Pro "Black Edition"
Memory Kingston Hyper X 16GB DDR3 1866
Video Card(s) Palit GeForce GTX 1070 Super JetStream
Storage Crucial MX100 256GB SSD + 1TB Seagate HDD
Display(s) LG 23MP68VQ-P IPS 75HZ 23" MONITOR
Case Aerocool Dead Silence Black Edition Cube
Audio Device(s) Onboard Realtek ALC892 7.1 HD Audio
Power Supply OCZ Stealth X Stream II 600W
Mouse Logitech G400S | Wacom Intuos CTH-480
Keyboard A4Tech G800V Gaming Keyboard
Software Windows 7 Ultimate 64bit SP1
Benchmark Scores http://www.3dmark.com/fs/9470441
#95
Techreport's DX12 Deus Ex Mankind Divided benchmarks, including frametime analysis

http://techreport.com/review/30639/...x-12-performance-in-deus-ex-mankind-divided/3




So that's a thing. Switching over to DXMD's DirectX 12 renderer doesn't improve performance on any of our cards, and it actually makes life much worse for the Radeons. The R9 Fury X turns in an average FPS result that might make you think its performance is on par with the GTX 1070 once again, but don't be fooled—that card's 99th-percentile frame time number is no better than even the GTX 1060's. Playing DXMD on the Fury X and RX 480 was a hitchy, stuttery experience, and our frame-time plots confirm that impression.

In the green corner, the GTX 1070 leads the 99th-percentile frame-time pack by a wide margin, and that translates into noticeably smoother gameplay than any other card here can provide while running under DirectX 12.

I hate to toot TR's horn here, but tests like these demonstrate why one simply can't take average FPS numbers at face value when measuring graphics-card performance. We've been saying so for years. From our results and our subjective experience, it's clear that the developers behind Deus Ex: Mankind Divided have a lot of optimizing to do for Radeons before the game's DirectX 12 mode goes gold in a week and change. AMD's driver team may also have a few long nights ahead, though in theory, DX12 puts much more responsibility on the shoulders of the developer.

It's also clear that it's too early to call a winner between the green and red teams for DirectX 12 performance in this beta build of Deus Ex, even if AMD seems to feel confident in doing so. The Radeon cards we tested perform poorly in our latency-sensitive frame-time metrics in DX12 mode, meaning that the Fury X's hitchy gameplay stands in stark contrast to its respectable average-FPS result. Even if Nvidia isn't shouting from the rooftops about Pascal's performance in DXMD's DX12 mode right now, the green team has some kind of smoothness advantage despite the game's beta tag. To be fair, we used different settings than AMD did while gathering its performance numbers, but we don't feel like the choices we made would be much different than those the average enthusiast would have with this hardware.
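The average-FPS-vs-99th-percentile point TR makes is easy to see in code: a handful of long frames barely moves the average but dominates the high percentiles, and the percentile is what you feel as stutter. A minimal sketch with a synthetic frame-time trace:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Synthetic trace: mostly 12 ms frames with an occasional 45 ms hitch.
    std::vector<double> frame_ms;
    for (int i = 0; i < 1000; ++i)
        frame_ms.push_back(i % 50 == 0 ? 45.0 : 12.0);

    double total = 0.0;
    for (double t : frame_ms) total += t;
    double avg_fps = 1000.0 / (total / frame_ms.size());

    std::sort(frame_ms.begin(), frame_ms.end());
    double p99 = frame_ms[(size_t)(0.99 * frame_ms.size())];  // 99th-percentile frame time

    printf("average FPS         : %.1f\n", avg_fps);   // looks healthy
    printf("99th-pct frame time : %.1f ms\n", p99);     // exposes the hitching
    return 0;
}
```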
 
Joined
Sep 15, 2011
Messages
4,439 (1.89/day)
Likes
1,107
Processor Intel Core i7 3770k @ 4.3GHz
Motherboard Asus P8Z77-V LK
Memory 16GB(2x8) DDR3@2133MHz 1.5v Patriot
Video Card(s) MSI GeForce GTX 1080 GAMING X 8G
Storage 59.63GB Samsung SSD 830 + 465.76 GB Samsung SSD 840 EVO + 2TB Hitachi + 300GB Velociraptor HDD
Display(s) Acer Predator X34 3440x1440@100Hz G-Sync
Case NZXT PHANTOM410-BK
Audio Device(s) Creative X-Fi Titanium PCIe
Power Supply Corsair 850W
Mouse Anker
Software Win 10 Pro - 64bit
Benchmark Scores 30FPS in NFS:Rivals
#96
A lot of discussion seems to focus only on comparing D3D12 renderer performance AMD vs. nVidia. But seeing how both companies aren't performing as expected, isn't there a chance that this is actually Microsoft's fault for providing a crappy renderer instead? Or even lazy and buggy programming by the developer??
 
Joined
Apr 30, 2012
Messages
2,465 (1.16/day)
Likes
1,372
#97
A lot of discussion seems to focus only on comparing the D3D12 renderer performance AMD vs. nVidia. But seen how the both companies perform not as expected, isn't there a chance that actually this is Microsoft's fault for providing a crappy renderer instead? Or even lazy and buggy programing of the developer??
or most likely

Nixxes said:
Nixxes Software is proud to announce that we're working on Deus Ex: Mankind Divided™ for PC.
It's a DX12 beta patch on a port. Like every other port. Expect a handful of patches after it's out of beta for it to be ironed out.
 
Joined
May 3, 2014
Messages
615 (0.44/day)
Likes
120
System Name Sham Pc
Processor i5-2500k @ 4.33
Motherboard INTEL DZ77SL 50K
Cooling 2 bay res. "2L of fluid in loop" 1x480 2x360
Memory 16gb 4x4 kingstone 1600 hyper x fury black
Video Card(s) hfa2 gtx 780 @ 1306/1768 (xspc bloc)
Storage 1tb wd red 120gb kingston on the way os, 1.5Tb wd black, 3tb random WD rebrand
Display(s) cibox something or other 23" 1080p " 23 inch downstairs. 52 inch plasma downstairs 15" tft kitchen
Case 900D
Audio Device(s) on board
Power Supply xion gaming seriese 1000W (non modular) 80+ bronze
Software windows 10 pro x64
#98
The graphs and numbers posted by guru3d reflect what has been seen so far between the new cards and the old ones. All AMD see an increase, NVidia newer 10xx see a smaller nearly negligible increase, older 9xx see a penalty due to incomplete DX12.

The 1080 issue was mentioned as acknowledge and under investigation. After they are done, probably will behave like the 1060 numbers.

If the game is poorly optimized (or not), or does use more complex scenery than other games it doesn't matter as much as the actual card performance curve, specially if you educate enough and look not only at APIs, but at the real source, which is explained in detail here

That guy talks a load of bull though.

I agree Nvidia's DX11 drivers are very efficient. AMD still outperforms them in some cases; e.g. 2x 480 can outperform a 1080 in DX12 in more than one game. There is more to it than drivers alone: the AMD cards are faster, although I do agree that they gain more in DX12 percentage-wise mostly due to drivers. The hardware architecture of AMD cards has been aimed towards a low-level API since the HD 7750; Nvidia still hasn't bothered with it yet. And given that we shouldn't expect DX12 to become mainstream for games for at least another year, I think Nvidia made the right call, because they can have a new generation of GPUs out just as DX12 goes mainstream. Obviously this makes the 10-series GPUs utterly pointless, but that's not an issue for Nvidia: they've sold the things now, and next gen they can convince people they really need to upgrade to fully benefit from low-level APIs.
 
Joined
Nov 26, 2004
Messages
4,434 (0.92/day)
Likes
1,547
Location
Canuck in Norway
System Name Hellbox 3.0(same case new guts)
Processor i7 4790K
Motherboard Asus Z97 Sabertooth Mark 1
Cooling TT Kandalf L.C.S.(Water/Air)AC Cuplex Kryos CPU Block/Noctua
Memory 2x8GB Corsair Vengance Pro 2400
Video Card(s) Sapphire Tri-X Fury (Unlocked to Fury X)
Storage WD Caviar Black SATA 3 1TB x2 RAID 0 2xSamsung 850 Evo 500GB RAID 0 1TB WD Blue
Display(s) ASUS MG279Q 1440 IPS 144Hz FreeSync
Case TT Kandalf L.C.S.
Audio Device(s) Soundblaster ZX/Logitech Z906 5.1
Power Supply Seasonic X-1050W 80+ Gold
Mouse G502 Proteus Spectrum
Keyboard G19s
Software Win 10 Pro x64
#99
I wouldn't go so far as to say dominance, it's just that AMD is finally getting their money's worth out of their 'metal'.

- AMD uses a wider bus
- AMD uses more shaders
- AMD runs at lower clocks
- Polaris provides about similar (or slightly higher) perf/clock to Pascal
- Polaris still has a lower perf/watt than Pascal
- GCN has not radically changed since HD7xxx.

AMD just runs a wider GPU across the board, as they have done for a long time. GCN is geared to be an extremely balanced arch that has some overcapacity on the VRAM end. It is built to let the core do all the work it can do, whereas Nvidia's arch is always focused at 'efficiency gains through tight GPU balance' - Nvidia obtains that balance by cutting heavily into bus width and removing everything from the GPU core that isn't required for gaming. They've tried several things, of which DP was the first thing they dropped with Kepler, then delta compression enabled them to further reduce bus width. This is also why Nvidia's cards don't stretch their legs at higher resolutions, but rather lose performance. Only the GDDR5X-supported 1080 avoids that fate.

On DX11, AMD GPU's were just fine and they excelled only at higher resolutions. Why? Not just because of VRAM, but because of the fact that higher res = lower CPU load. In DX12, GCN gets to stretch its legs even earlier and also at lower resolutions, in part also because of the better CPU usage of that API. Vulkan is similar. That CPU usage was the last hurdle for GCN to really come to fruition. Say what you want, but AMD has really made a smart move here, even though we can doubt how conscious that move has really been. They have effectively gained architectural advantage by letting the market do most of the work.

The irony is that the market for gaming has moved towards GCN, and GCN has seen very minimal architectural changes, while the market is moving away from Nvidia's cost/efficiency improvement-focused GPU architecture. At the same time, Nvidia can almost eclipse that change through a much higher perf/watt, but that only hides so much of the underlying issue, an issue of Nvidia GPU's having to clock really high to gain solid performance, because they lack not only a wide bus right now, but also raw shader counts.

I think it is inevitable, and safe to predict, that Nvidia has now reached a new cap with regards to clock speeds on the core. The only way forward is for them to once again start building bigger and wider GPUs. AMD, on the flip side, has more wiggle room and a lot of things left to improve - clocks, efficiency, and judging the RX480, they also have space left on the die.
That has been my thinking with AMD, and it's why I've stuck with them.