
AMD GPUs See Lesser Performance Drop on "Deus Ex: Mankind Divided" DirectX 12

Joined
Jul 31, 2014
Messages
479 (0.14/day)
System Name Diablo | Baal | Mephisto | Andariel
Processor i5-3570K@4.4GHz | 2x Xeon X5675 | i7-4710MQ | i7-2640M
Motherboard Asus Sabertooth Z77 | HP DL380 G6 | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Cooling Swiftech H220-X | Chassis cooled (6 fans + HS) | dual-fanned heatpipes | small-fanned heatpipe
Memory 32GiB DDR3-1600 CL9 | 96GiB DDR3-1333 ECC RDIMM | 32GiB DDR3L-1866 CL11 | 8GiB DDR3L-1600 CL11
Video Card(s) Dual GTX 670 in SLI | Embedded ATi ES1000 | Quadro K2100M | Intel HD 3000
Storage many, many SSDs and HDDs....
Display(s) 1 Dell U3011 + 2x Dell U2410 | HP iLO2 KVMoIP | 3200x1800 Sharp IGZO | 1366x768 IPS with Wacom pen
Case Corsair Obsidian 550D | HP DL380 G6 Chassis | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Audio Device(s) Auzentech X-Fi HomeTheater HD | None | On-board | On-board
Power Supply Corsair AX850 | Dual 750W Redundant PSU (Delta) | Dell 330W+240W (Flextronics) | Lenovo 65W (Delta)
Mouse Logitech G502, Logitech G700s, Logitech G500, Dell optical mouse (emergency backup)
Keyboard 1985 IBM Model F 122-key, Ducky YOTT MX Black, Dell AT101W, 1994 IBM Model M, various integrated
Software FAAAR too much to list
I wouldn't go so far as to say dominance, it's just that AMD is finally getting their money's worth out of their 'metal'.

- AMD uses a wider bus
- AMD uses more shaders
- AMD runs at lower clocks
- Polaris provides about similar (or slightly higher) perf/clock to Pascal
- Polaris still has a lower perf/watt than Pascal
- GCN has not radically changed since HD7xxx.

AMD just runs a wider GPU across the board, as they have done for a long time. GCN is geared to be an extremely balanced arch with some overcapacity on the VRAM end. It is built to let the core do all the work it can, whereas Nvidia's arch is always focused on 'efficiency gains through tight GPU balance'. Nvidia obtains that balance by cutting heavily into bus width and removing everything from the GPU core that isn't required for gaming. They've cut several things: DP was the first to go with Kepler, and then delta compression let them reduce bus width further. This is also why Nvidia's cards don't stretch their legs at higher resolutions, but rather lose performance. Only the GDDR5X-equipped 1080 avoids that fate.
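To put rough numbers on the bus-width point: peak memory bandwidth is just bus width times effective data rate per pin. A quick back-of-the-envelope sketch (reference-card figures as I remember them, so treat them as approximate):

Code:
#include <cstdio>

// Peak memory bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps per pin).
double peak_bandwidth_gbs(int bus_width_bits, double data_rate_gbps) {
    return bus_width_bits / 8.0 * data_rate_gbps;
}

int main() {
    std::printf("RX 480   (256-bit  @ 8 Gbps GDDR5):   %.0f GB/s\n", peak_bandwidth_gbs(256, 8.0));
    std::printf("GTX 1060 (192-bit  @ 8 Gbps GDDR5):   %.0f GB/s\n", peak_bandwidth_gbs(192, 8.0));
    std::printf("GTX 1080 (256-bit  @ 10 Gbps GDDR5X): %.0f GB/s\n", peak_bandwidth_gbs(256, 10.0));
    std::printf("Fury X   (4096-bit @ 1 Gbps HBM):     %.0f GB/s\n", peak_bandwidth_gbs(4096, 1.0));
    return 0;
}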

On DX11, AMD GPUs were just fine, but they excelled only at higher resolutions. Why? Not just because of VRAM, but because higher res = relatively lower CPU load. In DX12, GCN gets to stretch its legs earlier and at lower resolutions too, in part because of the better CPU usage of that API. Vulkan is similar. That CPU overhead was the last hurdle for GCN to really come to fruition. Say what you want, but AMD has made a smart move here, even though we can doubt how deliberate that move really was. They have effectively gained an architectural advantage by letting the market do most of the work.

The irony is that the gaming market has moved towards GCN, which has seen very minimal architectural changes, and away from Nvidia's cost/efficiency-focused GPU architecture. At the same time, Nvidia can almost mask that shift through a much higher perf/watt, but that only hides so much of the underlying issue: Nvidia GPUs have to clock really high to get solid performance, because right now they lack not only a wide bus but also raw shader counts.

I think it is inevitable, and safe to predict, that Nvidia has now hit a new cap on core clock speeds. The only way forward is for them to once again start building bigger and wider GPUs. AMD, on the flip side, has more wiggle room and a lot of things left to improve: clocks, efficiency, and judging by the RX 480, they also have space left on the die.

nV has also been incrementally widening their SMs each gen, very likely in order to better match increasing output resolutions.

As for DP, AMD has also been cutting DP back on GCN. On the original GCN1 chips it's driver-limited to 1/2 rate; on GCN 1.1 and beyond it's been cut down in hardware to 1/4, then 1/8, and I think the current rate is 1/16, very close to nV's 1/32 figure for gaming cards.

Particles are actually just a bunch of tiny polygons with a texture attached to them. Why wouldn't you run them on the GPU? Especially since we have specialized features like geometry instancing to handle exactly that: hundreds of identical elements.
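For what it's worth, here's roughly what that looks like in D3D11 terms. This is a minimal sketch that assumes the device context, shaders, input layout and both vertex buffers already exist; all of the names and the layout are placeholders, not anything specific to this game:

Code:
#include <d3d11.h>

// Draw `instanceCount` particle quads in a single call using instancing.
// Assumes: quadVB holds the 4 vertices of a unit quad, instanceVB holds one
// position/size/colour record per particle, and the vertex shader reads the
// per-instance stream through the bound input layout.
void DrawParticlesInstanced(ID3D11DeviceContext* ctx,
                            ID3D11Buffer* quadVB,
                            ID3D11Buffer* instanceVB,
                            UINT instanceStride,
                            UINT instanceCount)
{
    ID3D11Buffer* buffers[2] = { quadVB, instanceVB };
    UINT strides[2] = { sizeof(float) * 5 /* pos.xyz + uv */, instanceStride };
    UINT offsets[2] = { 0, 0 };

    ctx->IASetVertexBuffers(0, 2, buffers, strides, offsets);
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);

    // 4 vertices per quad, one quad per particle: the GPU replicates the
    // geometry instead of the CPU issuing hundreds of separate draw calls.
    ctx->DrawInstanced(4, instanceCount, 0, 0);
}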

That's when they get rendered (and they get rendered on the GPU just fine and as intended and expected).

You still need to calculate position and movement like any other entity in the scene before the scene gets rendered... ffs man, this is basic renderer workflow...
 
Joined
Oct 2, 2004
Messages
13,791 (1.94/day)
You do know a standard named DirectCompute has existed for a very long time, right? It can eliminate CPU involvement entirely if you decide to do so.
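As a rough illustration of what that means: a particle update can be kept entirely on the GPU with a compute shader. The sketch below assumes a D3D11 device/context, a RWStructuredBuffer UAV and a per-frame constant buffer already exist; the names are placeholders and error handling is omitted:

Code:
#include <cstring>
#include <d3d11.h>
#include <d3dcompiler.h>

// HLSL compute kernel: integrates particle positions on the GPU, so the CPU
// never touches per-particle data after the initial upload.
static const char* kParticleCS = R"(
struct Particle { float3 pos; float3 vel; };
RWStructuredBuffer<Particle> particles : register(u0);
cbuffer Frame : register(b0) { float dt; float3 gravity; };

[numthreads(256, 1, 1)]
void main(uint3 id : SV_DispatchThreadID)
{
    Particle p = particles[id.x];
    p.vel += gravity * dt;
    p.pos += p.vel * dt;
    particles[id.x] = p;
}
)";

void UpdateParticlesOnGpu(ID3D11Device* device, ID3D11DeviceContext* ctx,
                          ID3D11UnorderedAccessView* particleUAV,
                          ID3D11Buffer* frameCB, UINT particleCount)
{
    static ID3D11ComputeShader* cs = nullptr;
    if (!cs) {
        // Compile and cache the kernel on first use.
        ID3DBlob* bytecode = nullptr;
        D3DCompile(kParticleCS, std::strlen(kParticleCS), nullptr, nullptr, nullptr,
                   "main", "cs_5_0", 0, 0, &bytecode, nullptr);
        device->CreateComputeShader(bytecode->GetBufferPointer(),
                                    bytecode->GetBufferSize(), nullptr, &cs);
        bytecode->Release();
    }

    ctx->CSSetShader(cs, nullptr, 0);
    ctx->CSSetConstantBuffers(0, 1, &frameCB);
    UINT initialCount = 0;
    ctx->CSSetUnorderedAccessViews(0, 1, &particleUAV, &initialCount);

    // One thread per particle, 256 threads per group.
    ctx->Dispatch((particleCount + 255) / 256, 1, 1);
}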
 
Joined
Jul 31, 2014
Messages
479 (0.14/day)
System Name Diablo | Baal | Mephisto | Andariel
Processor i5-3570K@4.4GHz | 2x Xeon X5675 | i7-4710MQ | i7-2640M
Motherboard Asus Sabertooth Z77 | HP DL380 G6 | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Cooling Swiftech H220-X | Chassis cooled (6 fans + HS) | dual-fanned heatpipes | small-fanned heatpipe
Memory 32GiB DDR3-1600 CL9 | 96GiB DDR3-1333 ECC RDIMM | 32GiB DDR3L-1866 CL11 | 8GiB DDR3L-1600 CL11
Video Card(s) Dual GTX 670 in SLI | Embedded ATi ES1000 | Quadro K2100M | Intel HD 3000
Storage many, many SSDs and HDDs....
Display(s) 1 Dell U3011 + 2x Dell U2410 | HP iLO2 KVMoIP | 3200x1800 Sharp IGZO | 1366x768 IPS with Wacom pen
Case Corsair Obsidian 550D | HP DL380 G6 Chassis | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Audio Device(s) Auzentech X-Fi HomeTheater HD | None | On-board | On-board
Power Supply Corsair AX850 | Dual 750W Redundant PSU (Delta) | Dell 330W+240W (Flextronics) | Lenovo 65W (Delta)
Mouse Logitech G502, Logitech G700s, Logitech G500, Dell optical mouse (emergency backup)
Keyboard 1985 IBM Model F 122-key, Ducky YOTT MX Black, Dell AT101W, 1994 IBM Model M, various integrated
Software FAAAR too much to list
You do know a standard named DirectCompute has existed for a very long time, right? It can eliminate CPU involvement entirely if you decide to do so.

Clearly not, given the number of games that use DirectCompute and still have meaningful CPU load. Fact is, as of right now, you can't run everything on GPUs (as much as AMD and nV would like that to be the case), and in many cases it's more efficient to use the CPU for things that can also be done on GPUs.

And FYI, CUDA and Stream have existed for longer, and OpenCL for about the same amount of time.
 
Joined
Oct 2, 2004
Messages
13,791 (1.94/day)
Doing AI and basic physics on the CPU is perfectly reasonable. For anything more than pathfinding and basic rigid-body physics, you'll need a GPU. That's a fact.
 
Joined
Dec 31, 2014
Messages
32 (0.01/day)
Processor Intel Core i5 4670K
Motherboard GIGABYTE G1.Sniper Z87
Cooling Be Quiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance (4 x 8GB)
Video Card(s) GALAX GeForce GTX 970 OC Silent Infinity Black Edition 4GB
Storage WD 1TB, 120GB & 250GB Samsung 840 EVO
Display(s) BenQ XL2411
Case Fractal Design Define R5
Power Supply XFX PRO 650W Core Edition
Mouse Zowie FK2
Keyboard Vortex POK3R
so this trend of ngreedia gpus lagging in dx12 titles continues. how long before a lawsuit against maxwell and pascal gpus for false advertisement of dx12 capabilities?

You're bloody hilarious. Do you honestly think Nvidia can be sued for such a thing? They advertised DX12 compatibility, and last time I looked, Nvidia's DX12 cards run DX12.
 

rtwjunkie

PC Gaming Enthusiast
Supporter
Joined
Jul 25, 2008
Messages
13,909 (2.43/day)
Location
Louisiana -Laissez les bons temps rouler!
System Name Bayou Phantom
Processor Core i7-8700k 4.4Ghz @ 1.18v
Motherboard ASRock Z390 Phantom Gaming 6
Cooling All air: 2x140mm Fractal exhaust; 3x 140mm Cougar Intake; Enermax T40F Black CPU cooler
Memory 2x 16GB Mushkin Redline DDR-4 3200
Video Card(s) EVGA RTX 2080 Ti Xc
Storage 1x 500 MX500 SSD; 2x 6TB WD Black; 1x 4TB WD Black; 1x400GB VelRptr; 1x 4TB WD Blue storage (eSATA)
Display(s) HP 27q 27" IPS @ 2560 x 1440
Case Fractal Design Define R4 Black w/Titanium front -windowed
Audio Device(s) Soundblaster Z
Power Supply Seasonic X-850
Mouse Coolermaster Sentinel III (large palm grip!)
Keyboard Logitech G610 Orion mechanical (Cherry Brown switches)
Software Windows 10 Pro 64-bit (Start10 & Fences 3.0 installed)
I won't be updating my copy to DX12. With my 980 Ti I already have super-smooth performance. If I am only going to see a drop in fps with DX12, then I will just stick with DX11, where CPU utilization is going very well.
 
Joined
Oct 4, 2013
Messages
86 (0.02/day)
It is bizarre, but then DX12 isn't needed for Nvidia cards, while for AMD it gives better fps.
This reply also goes to the guy saying there is no visual difference.

Just imagine this:
We have card A; card A gives us X fps with Y visual effects in a game.
When card A and every next generation of that card can draw more fps from the same game or scene, what do we (the programmers) do? WE MAKE BETTER VISUAL EFFECTS.
That's what has been going on since the 80s, and not only in graphics: processors were able to handle more, and the software got better and better for the end user in every aspect.
If you make ZERO, that's 0, progress in your APIs, μArch, OS, etc., then you get 0 better visuals back.

What is going to happen in the future? Well, that's easy to predict: game companies will see that they cannot squeeze more complicated graphics into their games, and they will keep the visuals the same for every generation of a game.
...and that's what nVidia has been doing for the last decade. Nvidia shits in every gamer's face with their libraries, because that's what their hardware can handle, and they don't let studios' innovations reach the market.
Do you remember what happened with the Crysis series? It was marvelous that we had a studio push the hardware that much and push the companies to make beastlier GPUs.
Do you remember what happened with every GameWorks title? Same visuals, only it forced you to jump to a newer generation of nVidia cards because they crippled their older GPUs.

...but consumers are stupid and they deserve to get robbed by A-holes like nVidia.

That my "friends" is called evolution and a step forward in the technology, but some people think it's not much because they have to justify their 500$ or € purchase, not only by BSing themselves, but spreading the S*** all over the world.
 

rtwjunkie

PC Gaming Enthusiast
Supporter
Joined
Jul 25, 2008
Messages
13,909 (2.43/day)
Location
Louisiana -Laissez les bons temps rouler!
System Name Bayou Phantom
Processor Core i7-8700k 4.4Ghz @ 1.18v
Motherboard ASRock Z390 Phantom Gaming 6
Cooling All air: 2x140mm Fractal exhaust; 3x 140mm Cougar Intake; Enermax T40F Black CPU cooler
Memory 2x 16GB Mushkin Redline DDR-4 3200
Video Card(s) EVGA RTX 2080 Ti Xc
Storage 1x 500 MX500 SSD; 2x 6TB WD Black; 1x 4TB WD Black; 1x400GB VelRptr; 1x 4TB WD Blue storage (eSATA)
Display(s) HP 27q 27" IPS @ 2560 x 1440
Case Fractal Design Define R4 Black w/Titanium front -windowed
Audio Device(s) Soundblaster Z
Power Supply Seasonic X-850
Mouse Coolermaster Sentinel III (large palm grip!)
Keyboard Logitech G610 Orion mechanical (Cherry Brown switches)
Software Windows 10 Pro 64-bit (Start10 & Fences 3.0 installed)
This reply also goes to the guy saying there is no visual difference.

You are aware that Square Enix themselves said there would be no difference in visuals with this DX12 patch?

...but consumers are stupid and they deserve to get robbed by A-holes like nVidia.

That my "friends" is called evolution and a step forward in the technology, but some people think it's not much because they have to justify their 500$ or € purchase, not only by BSing themselves, but spreading the S*** all over the world

And you are innocent of spreading shit and hate, perpetuating this stupid red and green war? :shadedshu:
 
Joined
Oct 2, 2004
Messages
13,791 (1.94/day)
No, you/they don't improve graphics when there are performance gains. They've proven that consistently over the years. The worst waste of potential is tessellation, which I've bitched about many times. Engines waste huge amounts of polygons on things developers decided are "important" and the rest is the same blocky mess. So we have elements with 50 times more polygons than they actually need because of tessellation, and there are objects that aren't affected by it at all. The result: games that often run even worse and don't even look any better than games without any tessellation.
 

the54thvoid

Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
12,378 (2.37/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsumg 960 Pro M.2 512Gb
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure POwer M12 850w Gold (ATX3.0)
Software W10
This reply also goes to the guy saying there is no visual difference.

Just imagine this:
We have card A; card A gives us X fps with Y visual effects in a game.
When card A and every next generation of that card can draw more fps from the same game or scene, what do we (the programmers) do? WE MAKE BETTER VISUAL EFFECTS.
That's what has been going on since the 80s, and not only in graphics: processors were able to handle more, and the software got better and better for the end user in every aspect.
If you make ZERO, that's 0, progress in your APIs, μArch, OS, etc., then you get 0 better visuals back.

What is going to happen in the future? Well, that's easy to predict: game companies will see that they cannot squeeze more complicated graphics into their games, and they will keep the visuals the same for every generation of a game.
...and that's what nVidia has been doing for the last decade. Nvidia shits in every gamer's face with their libraries, because that's what their hardware can handle, and they don't let studios' innovations reach the market.
Do you remember what happened with the Crysis series? It was marvelous that we had a studio push the hardware that much and push the companies to make beastlier GPUs.
Do you remember what happened with every GameWorks title? Same visuals, only it forced you to jump to a newer generation of nVidia cards because they crippled their older GPUs.

...but consumers are stupid and they deserve to get robbed by A-holes like nVidia.

That my "friends" is called evolution and a step forward in the technology, but some people think it's not much because they have to justify their 500$ or € purchase, not only by BSing themselves, but spreading the S*** all over the world.

Not sure what angle your reply to me is coming from. Deus Ex is punishing on cards (relative to other games). The dev said DX12 may give better performance but not better visuals.
The huge memory use may be a factor, in that the monstrous textures take an awful lot of processing grunt to shift. API or not, I think running near 6GB at 1440p shows how much is being rendered. That takes power from a hefty chip. New FX may have to wait as devs keep creating more 'on screen' visuals which consume the chip's capacity.
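To give a sense of scale (purely back-of-the-envelope, with a made-up G-buffer layout): the render targets themselves are tiny at 1440p, so a ~6 GB footprint is almost entirely textures and geometry.

Code:
#include <cstdio>

// Rough VRAM arithmetic at 2560x1440. Render targets are small next to the
// total footprint; the bulk of a ~6 GB budget goes to textures and meshes.
int main() {
    const double pixels = 2560.0 * 1440.0;
    const double mib = 1024.0 * 1024.0;

    double rt32 = pixels * 4 / mib;      // one 32-bit colour target
    double rt64 = pixels * 8 / mib;      // one 16-bit-per-channel HDR target
    double gbuffer = 5 * rt32 + rt64;    // hypothetical 5-target G-buffer + HDR buffer

    std::printf("Single 32-bit render target: %.1f MiB\n", rt32);
    std::printf("Hypothetical G-buffer + HDR: %.1f MiB\n", gbuffer);
    std::printf("Left over out of 6 GiB:      %.1f MiB (textures, meshes, etc.)\n",
                6.0 * 1024.0 - gbuffer);
    return 0;
}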

I don't know though; it seems that if we want photorealism, processing power with far higher TFLOPS is required. Maybe Pascal Titan X points in that direction with its 'rubbish' async but enormous power.
 
Joined
Oct 28, 2012
Messages
1,159 (0.28/day)
Processor AMD Ryzen 3700x
Motherboard asus ROG Strix B-350I Gaming
Cooling Deepcool LS520 SE
Memory crucial ballistix 32Gb DDR4
Video Card(s) RTX 3070 FE
Storage WD sn550 1To/WD ssd sata 1To /WD black sn750 1To/Seagate 2To/WD book 4 To back-up
Display(s) LG GL850
Case Dan A4 H2O
Audio Device(s) sennheiser HD58X
Power Supply Corsair SF600
Mouse MX master 3
Keyboard Master Key Mx
Software win 11 pro
Now it's hard to see just how much of a game changer DX12 will actually be.
I still remember the moment when Square Enix showed the DX12 demo:
Granted, it was running on four GTX Titan X cards, but it was glorious; it showed what could be done with a low-level API on PC.
The irony is that Nvidia doesn't seem to get any benefit from DX12: they see either the same or negative performance. Even the gain on AMD GPUs isn't really astonishing:
http://www.techspot.com/review/1081-dx11-vs-dx12-ashes/page3.html
Getting low-level optimisation on a compute monster like the Fury X is only worth 3 fps?

I remember a time when GPU makers had developers make publicly released technical demos to show what their GPUs could do. They keep blabbering about DX12, showing numbers on slides that look impressive, but right now we don't have a single example of what you can expect from DX12 on a single high-end GPU.

Vulkan is at the moment the API with the most impressive results, but if the story of OpenGL repeats itself, I fear this isn't going to matter that much, as studios will keep giving DX12 priority.

So yeah, DX12 is at least giving you a great bump if you've got a weak CPU, but is that really it? Was that really worth making so much noise about? Right now game developers are not giving me any reason to get hyped about DX12's benefits. Will there be any project ambitious and crazy enough to make people ask "Can it run XXX, though?" while using the benefits of a low-level API?
Deus Ex MD looks nice, but it's not "out of this world" nice. I'm not seeing a huge gap between it and Tomb Raider 2015, DOOM, or even TW3.
 
Joined
Jul 14, 2008
Messages
872 (0.15/day)
Location
Copenhagen, Denmark
System Name Ryzen/Laptop/htpc
Processor R9 3900X/i7 6700HQ/i7 2600
Motherboard AsRock X470 Taichi/Acer/ Gigabyte H77M
Cooling Corsair H115i pro with 2 Noctua NF-A14 chromax/OEM/Noctua NH-L12i
Memory G.Skill Trident Z 32GB @3200/16GB DDR4 2666 HyperX impact/24GB
Video Card(s) TUL Red Dragon Vega 56/Intel HD 530 - GTX 950m/ 970 GTX
Storage 970pro NVMe 512GB,Samsung 860evo 1TB, 3x4TB WD gold/Transcend 830s, 1TB Toshiba/Adata 256GB + 1TB WD
Display(s) Philips FTV 32 inch + Dell 2407WFP-HC/OEM/Sony KDL-42W828B
Case Phanteks Enthoo Luxe/Acer Barebone/Enermax
Audio Device(s) SoundBlasterX AE-5 (Dell A525)(HyperX Cloud Alpha)/mojo/soundblaster xfi gamer
Power Supply Seasonic focus+ 850 platinum (SSR-850PX)/165 Watt power brick/Enermax 650W
Mouse G502 Hero/M705 Marathon/G305 Hero Lightspeed
Keyboard G19/oem/Steelseries Apex 300
Software Win10 pro 64bit
I wouldn't go so far as to say dominance, it's just that AMD is finally getting their money's worth out of their 'metal'.

- AMD uses a wider bus
- AMD uses more shaders
- AMD runs at lower clocks
- Polaris provides about similar (or slightly higher) perf/clock to Pascal
- Polaris still has a lower perf/watt than Pascal
- GCN has not radically changed since HD7xxx.

AMD just runs a wider GPU across the board, as they have done for a long time. GCN is geared to be an extremely balanced arch with some overcapacity on the VRAM end. It is built to let the core do all the work it can, whereas Nvidia's arch is always focused on 'efficiency gains through tight GPU balance'. Nvidia obtains that balance by cutting heavily into bus width and removing everything from the GPU core that isn't required for gaming. They've cut several things: DP was the first to go with Kepler, and then delta compression let them reduce bus width further. This is also why Nvidia's cards don't stretch their legs at higher resolutions, but rather lose performance. Only the GDDR5X-equipped 1080 avoids that fate.

On DX11, AMD GPUs were just fine, but they excelled only at higher resolutions. Why? Not just because of VRAM, but because higher res = relatively lower CPU load. In DX12, GCN gets to stretch its legs earlier and at lower resolutions too, in part because of the better CPU usage of that API. Vulkan is similar. That CPU overhead was the last hurdle for GCN to really come to fruition. Say what you want, but AMD has made a smart move here, even though we can doubt how deliberate that move really was. They have effectively gained an architectural advantage by letting the market do most of the work.

The irony is that the gaming market has moved towards GCN, which has seen very minimal architectural changes, and away from Nvidia's cost/efficiency-focused GPU architecture. At the same time, Nvidia can almost mask that shift through a much higher perf/watt, but that only hides so much of the underlying issue: Nvidia GPUs have to clock really high to get solid performance, because right now they lack not only a wide bus but also raw shader counts.

I think it is inevitable, and safe to predict, that Nvidia has now hit a new cap on core clock speeds. The only way forward is for them to once again start building bigger and wider GPUs. AMD, on the flip side, has more wiggle room and a lot of things left to improve: clocks, efficiency, and judging by the RX 480, they also have space left on the die.
this is an actual informative opinion. thank you mate!
i totally agree with what you said, i had the same feeling when i first heard about vulkan/dx12. if Nvidia is not careful they could lose a big chunk of the market in the next one to two gens of cards, which imo would be something positive for the consumer since i think that the gpu market has been lacking serious competition for some years now.
 
Joined
Mar 4, 2011
Messages
299 (0.06/day)
Location
Canada
System Name Something Esoteric
Processor Intel i7 7700K @ 5GHz (2021 delid by Silicon Lottery)
Motherboard ASUS Maximus VIII Hero
Cooling CoolerMaster Hyper 212 Evo
Memory 32GB Crucial Ballistix 3600MHz DDR4
Video Card(s) GIGABYTE RTX 3080 GAMING OC
Storage 480GB SanDisk Extreme Pro / 1TB WD SN750 NVMe / 2 x 8TB Seagate Ironwolf NAS
Display(s) Dell S2721DGF IPS + BenQ V2400W 16:10 LCD (https://hardforum.com/showthread.php?t=1315565)
Case Fractal Define R5 Windowless
Audio Device(s) Asus Xonar Essense STX, Samsung Buds2 Pro, SteelSeries Siberia 800
Power Supply Corsair AX850
Mouse Logitech G903
Keyboard Microsoft Sidewinder X6
Software Win10 Pro x64
if Nvidia is not careful they could lose a big chunk of the market in the next one to two gens of cards, which imo would be something positive for the consumer since i think that the gpu market has been lacking serious competition for some years now.

While of course a possibility, I wouldn't put any money/stock/holding of breath in that happening. Call me cynical if you must, but Nvidia won't lose significant market share in the next 2 gens even if they deserve to, for the same reason Apple doesn't lose market share even though they deserve to: they have a refined and sadly effective hype and marketing machine that constantly builds and stokes the consumer norm that they are the superior choice.

Though I hope you're right about the price drops that need to happen for all consumers, in the age of Likes, Views and Trending, mindshare is the real metric that maintains the market-share status quo, and Nvidia is throwing too much TWIMTBP money around for that to change any time soon.
 

NGreediaOrAMSlow

New Member
Joined
Sep 11, 2016
Messages
19 (0.01/day)
The graphs and numbers posted by Guru3D reflect what has been seen so far between the new cards and the old ones. All AMD cards see an increase, Nvidia's newer 10xx cards see a smaller, nearly negligible increase, and the older 9xx cards see a penalty due to incomplete DX12 support.

The 1080 issue was mentioned as acknowledged and under investigation. After they are done, it will probably behave like the 1060 numbers.

Whether the game is poorly optimized (or not), or uses more complex scenery than other games, doesn't matter as much as the actual card performance curve, especially if you educate yourself and look not only at the APIs but at the real source, which is explained in detail here
 
Joined
Sep 15, 2011
Messages
6,457 (1.41/day)
Processor Intel® Core™ i7-13700K
Motherboard Gigabyte Z790 Aorus Elite AX
Cooling Noctua NH-D15
Memory 32GB(2x16) DDR5@6600MHz G-Skill Trident Z5
Video Card(s) ZOTAC GAMING GeForce RTX 3080 AMP Holo
Storage 2TB SK Platinum P41 SSD + 4TB SanDisk Ultra SSD + 500GB Samsung 840 EVO SSD
Display(s) Acer Predator X34 3440x1440@100Hz G-Sync
Case NZXT PHANTOM410-BK
Audio Device(s) Creative X-Fi Titanium PCIe
Power Supply Corsair 850W
Mouse Logitech Hero G502 SE
Software Windows 11 Pro - 64bit
Benchmark Scores 30FPS in NFS:Rivals
AMD beats NVIDIA in every single DX12 game (except games where both suck hard compared to DX11 for no logical reason). Must be "AMD biased games". Right. It can't be because the rendering engine in Radeons has clearly been superior for such tasks since the HD 7000 series, when they introduced GCN; it has to be "bias". C'mon people, can you be less of fanboys?...
No, AMD definitely DOES NOT beat nVidia in ANY/EVERY single D3D12 game. The right words are: AMD has a better performance gain in D3D12 than nVidia. That's all.
And please stop putting words like "fanboy" on your comments. It makes you look more and more like one. ;)
 

Malabooga

New Member
Joined
Jul 21, 2016
Messages
15 (0.01/day)
No, AMD definitely DOES NOT beat nVidia in ANY/EVERY single D3D12 game. The right words are: AMD has a better performance gain in D3D12 than nVidia. That's all.
And please stop putting words like "fanboy" on your comments. It makes you look more and more like one. ;)

Yes it does. 2016 is definitely not a good year for NVidia; unfortunately DX12/Vulkan came much faster than the "3-4 years away" that those who don't know better were claiming. Guess what, Microsoft > Intel, NVidia and AMD combined, and what Microsoft wants, Microsoft gets lol

And currently Microsoft wants W10 adoption, and DX12 is a part of that (and has been since release a year ago). And the majority of gamers will be on W10 within a year.
 

Malabooga

New Member
Joined
Jul 21, 2016
Messages
15 (0.01/day)
I wouldn't go so far as to say dominance, it's just that AMD is finally getting their money's worth out of their 'metal'.

- AMD uses a wider bus
- AMD uses more shaders
- AMD runs at lower clocks
- Polaris provides about similar (or slightly higher) perf/clock to Pascal
- Polaris still has a lower perf/watt than Pascal
- GCN has not radically changed since HD7xxx.

AMD just runs a wider GPU across the board, as they have done for a long time. GCN is geared to be an extremely balanced arch with some overcapacity on the VRAM end. It is built to let the core do all the work it can, whereas Nvidia's arch is always focused on 'efficiency gains through tight GPU balance'. Nvidia obtains that balance by cutting heavily into bus width and removing everything from the GPU core that isn't required for gaming. They've cut several things: DP was the first to go with Kepler, and then delta compression let them reduce bus width further. This is also why Nvidia's cards don't stretch their legs at higher resolutions, but rather lose performance. Only the GDDR5X-equipped 1080 avoids that fate.

On DX11, AMD GPUs were just fine, but they excelled only at higher resolutions. Why? Not just because of VRAM, but because higher res = relatively lower CPU load. In DX12, GCN gets to stretch its legs earlier and at lower resolutions too, in part because of the better CPU usage of that API. Vulkan is similar. That CPU overhead was the last hurdle for GCN to really come to fruition. Say what you want, but AMD has made a smart move here, even though we can doubt how deliberate that move really was. They have effectively gained an architectural advantage by letting the market do most of the work.

The irony is that the gaming market has moved towards GCN, which has seen very minimal architectural changes, and away from Nvidia's cost/efficiency-focused GPU architecture. At the same time, Nvidia can almost mask that shift through a much higher perf/watt, but that only hides so much of the underlying issue: Nvidia GPUs have to clock really high to get solid performance, because right now they lack not only a wide bus but also raw shader counts.

I think it is inevitable, and safe to predict, that Nvidia has now hit a new cap on core clock speeds. The only way forward is for them to once again start building bigger and wider GPUs. AMD, on the flip side, has more wiggle room and a lot of things left to improve: clocks, efficiency, and judging by the RX 480, they also have space left on the die.

Nice to see someone who gets it, kudos to you. NVidia has squeezed every MHz out of TSMC's 16nm node, and their only option is to make a more GCN-like architecture because they are at the end of the road with their MHz chase. And the last time they tried that, we got Fermi.
 

NGreediaOrAMSlow

New Member
Joined
Sep 11, 2016
Messages
19 (0.01/day)
Yes it does. 2016 is definitely not a good year for NVidia; unfortunately DX12/Vulkan came much faster than the "3-4 years away" that those who don't know better were claiming. Guess what, Microsoft > Intel, NVidia and AMD combined, and what Microsoft wants, Microsoft gets lol

And currently Microsoft wants W10 adoption, and DX12 is a part of that (and has been since release a year ago). And the majority of gamers will be on W10 within a year.

Actually, currently it does. While AMD's current architecture is better placed and optimized for DX12, in terms of raw clocks and max TFLOPS Nvidia dominates.

And no matter how well you optimize, there is a limit to how much the hardware can do. Like with cars, those numbers can be taken with a grain of salt in that manufacturers may inflate them, but if you look at the given numbers, Nvidia's are higher... Period.
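For reference, the number people quote is just the theoretical single-precision rate; here's a quick sketch of the arithmetic, using the usual reference shader counts and boost clocks (approximate figures, not measurements):

Code:
#include <cstdio>

// Theoretical FP32 throughput: TFLOPS = 2 (FMA = 2 FLOPs) * shaders * clock (GHz) / 1000.
double peak_tflops(int shaders, double clock_ghz) {
    return 2.0 * shaders * clock_ghz / 1000.0;
}

int main() {
    std::printf("Titan X (Pascal): %.1f TFLOPS\n", peak_tflops(3584, 1.531));
    std::printf("GTX 1080:         %.1f TFLOPS\n", peak_tflops(2560, 1.733));
    std::printf("Fury X:           %.1f TFLOPS\n", peak_tflops(4096, 1.050));
    std::printf("RX 480:           %.1f TFLOPS\n", peak_tflops(2304, 1.266));
    return 0;
}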
 
Joined
Sep 15, 2011
Messages
6,457 (1.41/day)
Processor Intel® Core™ i7-13700K
Motherboard Gigabyte Z790 Aorus Elite AX
Cooling Noctua NH-D15
Memory 32GB(2x16) DDR5@6600MHz G-Skill Trident Z5
Video Card(s) ZOTAC GAMING GeForce RTX 3080 AMP Holo
Storage 2TB SK Platinum P41 SSD + 4TB SanDisk Ultra SSD + 500GB Samsung 840 EVO SSD
Display(s) Acer Predator X34 3440x1440@100Hz G-Sync
Case NZXT PHANTOM410-BK
Audio Device(s) Creative X-Fi Titanium PCIe
Power Supply Corsair 850W
Mouse Logitech Hero G502 SE
Software Windows 11 Pro - 64bit
Benchmark Scores 30FPS in NFS:Rivals
Yes it does. 2016 is definitely not a good year for NVidia; unfortunately DX12/Vulkan came much faster than the "3-4 years away" that those who don't know better were claiming. Guess what, Microsoft > Intel, NVidia and AMD combined, and what Microsoft wants, Microsoft gets lol

And currently Microsoft wants W10 adoption, and DX12 is a part of that (and has been since release a year ago). And the majority of gamers will be on W10 within a year.
Sorry, you wrote a bunch of nonsense there that isn't even worth replying to. This post was just informational btw...
 
Joined
Jan 31, 2011
Messages
2,202 (0.46/day)
System Name Ultima
Processor AMD Ryzen 7 5800X
Motherboard MSI Mag B550M Mortar
Cooling Arctic Liquid Freezer II 240 rev4 w/ Ryzen offset mount
Memory G.SKill Ripjaws V 2x16GB DDR4 3600
Video Card(s) Palit GeForce RTX 4070 12GB Dual
Storage WD Black SN850X 2TB Gen4, Samsung 970 Evo Plus 500GB , 1TB Crucial MX500 SSD sata,
Display(s) ASUS TUF VG249Q3A 24" 1080p 165-180Hz VRR
Case DarkFlash DLM21 Mesh
Audio Device(s) Onboard Realtek ALC1200 Audio/Nvidia HD Audio
Power Supply Corsair RM650
Mouse Steelseries Rival 3 Wireless | Wacom Intuos CTH-480
Keyboard A4Tech B314 Keyboard
Software Windows 10 Pro
Techreport's DX12 Deus Ex Mankind Divided benchmarks, including frametime analysis

http://techreport.com/review/30639/...x-12-performance-in-deus-ex-mankind-divided/3




So that's a thing. Switching over to DXMD's DirectX 12 renderer doesn't improve performance on any of our cards, and it actually makes life much worse for the Radeons. The R9 Fury X turns in an average FPS result that might make you think its performance is on par with the GTX 1070 once again, but don't be fooled—that card's 99th-percentile frame time number is no better than even the GTX 1060's. Playing DXMD on the Fury X and RX 480 was a hitchy, stuttery experience, and our frame-time plots confirm that impression.

In the green corner, the GTX 1070 leads the 99th-percentile frame-time pack by a wide margin, and that translates into noticeably smoother gameplay than any other card here can provide while running under DirectX 12.

I hate to toot TR's horn here, but tests like these demonstrate why one simply can't take average FPS numbers at face value when measuring graphics-card performance. We've been saying so for years. From our results and our subjective experience, it's clear that the developers behind Deus Ex: Mankind Divided have a lot of optimizing to do for Radeons before the game's DirectX 12 mode goes gold in a week and change. AMD's driver team may also have a few long nights ahead, though in theory, DX12 puts much more responsibility on the shoulders of the developer.

It's also clear that it's too early to call a winner between the green and red teams for DirectX 12 performance in this beta build of Deus Ex, even if AMD seems to feel confident in doing so. The Radeon cards we tested perform poorly in our latency-sensitive frame-time metrics in DX12 mode, meaning that the Fury X's hitchy gameplay stands in stark contrast to its respectable average-FPS result. Even if Nvidia isn't shouting from the rooftops about Pascal's performance in DXMD's DX12 mode right now, the green team has some kind of smoothness advantage despite the game's beta tag. To be fair, we used different settings than AMD did while gathering its performance numbers, but we don't feel like the choices we made would be much different than those the average enthusiast would have with this hardware.
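For anyone unfamiliar with TR's metric: the 99th-percentile frame time is just the frame time that 99% of frames come in under, which is why it catches hitching that an FPS average hides. A toy example with a made-up frame-time trace:

Code:
#include <algorithm>
#include <cstdio>
#include <vector>

// 99th-percentile frame time: sort per-frame times and take the value that
// 99% of frames come in under. A good average can still hide a bad tail.
double percentile_ms(std::vector<double> times_ms, double pct) {
    std::sort(times_ms.begin(), times_ms.end());
    size_t idx = static_cast<size_t>(pct / 100.0 * (times_ms.size() - 1));
    return times_ms[idx];
}

int main() {
    // Made-up trace: mostly ~14 ms frames with a 40 ms hitch every 50th frame.
    std::vector<double> trace;
    for (int i = 0; i < 1000; ++i)
        trace.push_back(i % 50 == 0 ? 40.0 : 14.0);

    double avg = 0.0;
    for (double t : trace) avg += t;
    avg /= trace.size();

    double p99 = percentile_ms(trace, 99.0);
    std::printf("Average FPS:               %.1f\n", 1000.0 / avg);
    std::printf("99th-percentile frametime: %.1f ms (%.1f FPS equivalent)\n",
                p99, 1000.0 / p99);
    return 0;
}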
 
Joined
Sep 15, 2011
Messages
6,457 (1.41/day)
Processor Intel® Core™ i7-13700K
Motherboard Gigabyte Z790 Aorus Elite AX
Cooling Noctua NH-D15
Memory 32GB(2x16) DDR5@6600MHz G-Skill Trident Z5
Video Card(s) ZOTAC GAMING GeForce RTX 3080 AMP Holo
Storage 2TB SK Platinum P41 SSD + 4TB SanDisk Ultra SSD + 500GB Samsung 840 EVO SSD
Display(s) Acer Predator X34 3440x1440@100Hz G-Sync
Case NZXT PHANTOM410-BK
Audio Device(s) Creative X-Fi Titanium PCIe
Power Supply Corsair 850W
Mouse Logitech Hero G502 SE
Software Windows 11 Pro - 64bit
Benchmark Scores 30FPS in NFS:Rivals
A lot of the discussion seems to focus only on comparing D3D12 renderer performance AMD vs. nVidia. But seeing how both companies perform worse than expected, isn't there a chance that this is actually Microsoft's fault for providing a crappy API instead? Or just lazy and buggy programming by the developer??
 
Joined
Apr 30, 2012
Messages
3,881 (0.89/day)
A lot of the discussion seems to focus only on comparing D3D12 renderer performance AMD vs. nVidia. But seeing how both companies perform worse than expected, isn't there a chance that this is actually Microsoft's fault for providing a crappy API instead? Or just lazy and buggy programming by the developer??

or most likely

Nixxes said:
Nixxes Software is proud to announce that we're working on Deus Ex: Mankind Divided™ for PC.

It's a DX12 beta patch on a port. Like every other port. Expect a handful of patches after it's out of beta for it to be ironed out.
 
Joined
May 3, 2014
Messages
965 (0.27/day)
System Name Sham Pc
Processor i5-2500k @ 4.33
Motherboard INTEL DZ77SL 50K
Cooling 2 bay res. "2L of fluid in loop" 1x480 2x360
Memory 16gb 4x4 kingstone 1600 hyper x fury black
Video Card(s) hfa2 gtx 780 @ 1306/1768 (xspc bloc)
Storage 1tb wd red 120gb kingston on the way os, 1.5Tb wd black, 3tb random WD rebrand
Display(s) cibox something or other 23" 1080p " 23 inch downstairs. 52 inch plasma downstairs 15" tft kitchen
Case 900D
Audio Device(s) on board
Power Supply xion gaming seriese 1000W (non modular) 80+ bronze
Software windows 10 pro x64
The graphs and numbers posted by Guru3D reflect what has been seen so far between the new cards and the old ones. All AMD cards see an increase, Nvidia's newer 10xx cards see a smaller, nearly negligible increase, and the older 9xx cards see a penalty due to incomplete DX12 support.

The 1080 issue was mentioned as acknowledged and under investigation. After they are done, it will probably behave like the 1060 numbers.

Whether the game is poorly optimized (or not), or uses more complex scenery than other games, doesn't matter as much as the actual card performance curve, especially if you educate yourself and look not only at the APIs but at the real source, which is explained in detail here


that guy talks a load of bull though..

i agree nvidia's dx11 drivers are very efficient, but amd still outperforms them in some cases.. like 2x 480s can outperform a 1080 in dx12 in more than one game. there is more to it than drivers alone; the amd cards are faster. although i do agree that they gain more in dx12 %-wise, mostly due to drivers. the hardware architecture of amd cards has been aimed towards a low-level api since the hd 7750, and nvidia still hasn't bothered with it yet.. and given we won't expect dx12 to become mainstream for games for at least another year, i think nvidia made the right call, because they can have a new gen of gpus out just as dx12 becomes mainstream.. obviously this makes the 10-series gpus utterly pointless. but that's not an issue for nvidia, they sold the things now, and next gen they can convince people they really need to upgrade to fully benefit from a low-level api.
 

INSTG8R

Vanguard Beta Tester
Joined
Nov 26, 2004
Messages
7,955 (1.13/day)
Location
Canuck in Norway
System Name Hellbox 5.1(same case new guts)
Processor Ryzen 7 5800X3D
Motherboard MSI X570S MAG Torpedo Max
Cooling TT Kandalf L.C.S.(Water/Air)EK Velocity CPU Block/Noctua EK Quantum DDC Pump/Res
Memory 2x16GB Gskill Trident Neo Z 3600 CL16
Video Card(s) Powercolor Hellhound 7900XTX
Storage 970 Evo Plus 500GB 2xSamsung 850 Evo 500GB RAID 0 1TB WD Blue Corsair MP600 Core 2TB
Display(s) Alienware QD-OLED 34” 3440x1440 144hz 10Bit VESA HDR 400
Case TT Kandalf L.C.S.
Audio Device(s) Soundblaster ZX/Logitech Z906 5.1
Power Supply Seasonic TX~’850 Platinum
Mouse G502 Hero
Keyboard G19s
VR HMD Oculus Quest 2
Software Win 10 Pro x64
I wouldn't go so far as to say dominance, it's just that AMD is finally getting their money's worth out of their 'metal'.

- AMD uses a wider bus
- AMD uses more shaders
- AMD runs at lower clocks
- Polaris provides about similar (or slightly higher) perf/clock to Pascal
- Polaris still has a lower perf/watt than Pascal
- GCN has not radically changed since HD7xxx.

AMD just runs a wider GPU across the board, as they have done for a long time. GCN is geared to be an extremely balanced arch with some overcapacity on the VRAM end. It is built to let the core do all the work it can, whereas Nvidia's arch is always focused on 'efficiency gains through tight GPU balance'. Nvidia obtains that balance by cutting heavily into bus width and removing everything from the GPU core that isn't required for gaming. They've cut several things: DP was the first to go with Kepler, and then delta compression let them reduce bus width further. This is also why Nvidia's cards don't stretch their legs at higher resolutions, but rather lose performance. Only the GDDR5X-equipped 1080 avoids that fate.

On DX11, AMD GPUs were just fine, but they excelled only at higher resolutions. Why? Not just because of VRAM, but because higher res = relatively lower CPU load. In DX12, GCN gets to stretch its legs earlier and at lower resolutions too, in part because of the better CPU usage of that API. Vulkan is similar. That CPU overhead was the last hurdle for GCN to really come to fruition. Say what you want, but AMD has made a smart move here, even though we can doubt how deliberate that move really was. They have effectively gained an architectural advantage by letting the market do most of the work.

The irony is that the gaming market has moved towards GCN, which has seen very minimal architectural changes, and away from Nvidia's cost/efficiency-focused GPU architecture. At the same time, Nvidia can almost mask that shift through a much higher perf/watt, but that only hides so much of the underlying issue: Nvidia GPUs have to clock really high to get solid performance, because right now they lack not only a wide bus but also raw shader counts.

I think it is inevitable, and safe to predict, that Nvidia has now hit a new cap on core clock speeds. The only way forward is for them to once again start building bigger and wider GPUs. AMD, on the flip side, has more wiggle room and a lot of things left to improve: clocks, efficiency, and judging by the RX 480, they also have space left on the die.

That has been my thinking with AMD, and it's why I've stuck with them.
 