Monday, March 23rd 2015

AMD Bets on DirectX 12 for Not Just GPUs, but Also its CPUs

In an industry presentation on why the company is excited about Microsoft's upcoming DirectX 12 API, AMD singled out the feature it considers most important, one that could affect not only its graphics business, but also potentially revive its CPU business among gamers. DirectX 12 will make its debut with Windows 10, Microsoft's next big operating system, which will be given away as a free upgrade to _all_ current Windows 8 and Windows 7 users. The OS will come with a usable Start menu, and could lure gamers who stood their ground on Windows 7.

In its presentation, AMD touched upon two key features of DirectX 12: the most important, multi-threaded command buffer recording; and asynchronous compute scheduling/execution. A command buffer is a list of commands the CPU records for the GPU to execute when drawing a 3D scene. Some elements of 3D graphics are still better suited to serial processing, and no single SIMD unit from any GPU architecture has managed to reach performance parity with a modern CPU core. DirectX 11 and its predecessors are still largely single-threaded on the CPU in the way they build and submit command buffers.
A graph from AMD showing how a DirectX 11 app spreads CPU load across an 8-core CPU reveals how badly optimized the API is for today's CPUs. The API and driver code executes almost entirely on one core, which is bad even for dual- and quad-core CPUs (in case you fundamentally disagree with AMD's "more cores" strategy). Piling the API- and driver-related serial workload onto a single core is what makes up the "high API overhead" problem that AMD believes is holding back PC graphics efficiency compared to consoles, and it has a direct and significant impact on frame rates.
DirectX 12 heralds a truly multi-threaded command buffer pathway, which scales up with any number of CPU cores you throw at it. Driver and API workloads are split evenly between CPU cores, significantly reducing API overhead and promising large frame-rate increases. How big that increase is in the real world remains to be seen. AMD's own Mantle API addresses this exact shortcoming of DirectX 11, and offers a CPU-efficient way of rendering. Its performance yields are significant in CPU-limited scenarios such as APUs, but on bigger setups (e.g., high-end R9 290 series graphics at high resolutions), the gains, though measurable, are not mind-blowing. In some scenarios, Mantle offered the difference between "slideshow" and "playable." Cynics have to give DirectX 12 the benefit of the doubt: it could end up doing an even better job than Mantle at pushing paperwork through multi-core CPUs.
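To make the new pathway concrete, here is a minimal sketch of the pattern D3D12 exposes, based on the publicly documented interfaces; this is illustrative code, not AMD's or Microsoft's sample, and error handling is omitted. Each worker thread records its slice of the frame into its own command list, and the main thread submits everything in one cheap call:

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

int main()
{
    // Default adapter, minimum feature level; HRESULT checks omitted for brevity.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC queueDesc = {}; // defaults to a DIRECT (graphics) queue
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // One command allocator + command list per core, so recording never serializes.
    const unsigned n = std::thread::hardware_concurrency();
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocs(n);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(n);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < n; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i].Get(), nullptr, IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&lists, i] {
            // ... record this thread's share of the scene here:
            // SetPipelineState / IASetVertexBuffers / DrawInstanced, etc. ...
            lists[i]->Close(); // finish recording on the worker thread
        });
    }
    for (auto& w : workers) w.join();

    // A single submission of everything the workers recorded in parallel.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}

Under DirectX 11, the equivalent of the recording loop above effectively collapses onto one core inside the runtime and driver; here, the expensive part (recording) scales with core count, and only the final ExecuteCommandLists call is serial.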

AMD's own presentation appears to agree with the way Mantle played out in the real world (big benefits for APUs vs. smaller ones for high-end GPUs). A slide highlights how DirectX 12 and its new multi-core efficiency could step up the draw-call capacity of an A10-7850K by over 450 percent. Suffice it to say, DirectX 12 will be a boon for smaller, cheaper mid-range GPUs, and will make PC gaming more attractive to the gamer crowd at large. Fine-grained asynchronous compute scheduling/execution is the other feature to look out for. It breaks down complex serial workloads into smaller, parallel tasks, and ensures that otherwise idle GPU resources are put to work on them.
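The asynchronous compute side can be pictured the same way: D3D12 exposes separate queue types, so compute work can be fed to its own queue and, on hardware that supports it (such as GCN's asynchronous compute engines), execute alongside graphics instead of behind it. Below is a rough, hypothetical continuation of the sketch above; a real application would also use an ID3D12Fence to synchronize the two queues where their results meet:

void SubmitAsyncCompute(ID3D12Device* device)
{
    // A compute-only queue, independent of the direct (graphics) queue.
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    ComPtr<ID3D12CommandAllocator>    alloc;
    ComPtr<ID3D12GraphicsCommandList> list;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_COMPUTE,
                                   IID_PPV_ARGS(&alloc));
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COMPUTE,
                              alloc.Get(), nullptr, IID_PPV_ARGS(&list));

    // ... SetComputeRootSignature / Dispatch for, say, a particle simulation ...
    list->Close();

    ID3D12CommandList* batch[] = { list.Get() };
    computeQueue->ExecuteCommandLists(1, batch); // runs while the direct queue draws
}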
So where does AMD fit into all of this? DirectX 12 support will no doubt help AMD sell GPUs. Like NVIDIA, AMD has preemptively announced DirectX 12 API support on all its GPUs based on the Graphics Core Next architecture (Radeon HD 7000 series and above). AMD's real takeaway from DirectX 12, however, is how its cheap 8-core socket AM3+ CPUs could gain tons of value overnight. The notion that "games don't use more than 4 CPU cores" will change dramatically: any DirectX 12 game will split its command buffer and API loads between however many CPU cores you throw at it. AMD sells 8-core CPUs for as low as $170 (the FX-8320). Intel's design strategy of placing fewer, stronger cores on its client processors could face its biggest challenge yet with DirectX 12.

87 Comments on AMD Bets on DirectX 12 for Not Just GPUs, but Also its CPUs

#51
Sony Xperia S
MrGenius: I'm with you there. And what most folks don't seem to realize is DX12 and Mantle just reinforce that. They take the cheaper casual gaming CPUs to the hardcore gaming level, and the cheapest entry-level CPUs to the casual gaming level. ALL of them (regardless of core count). Because they're not really about enabling the use of more cores. What they're really about is being able to use the available cores MUCH more efficiently. Making super powerful mega multicore CPUs even more of a waste of money (than they are now).
No, the purpose of DX12 is not to enable gaming on slow dual or quad core machines.

Its purpose is to shift the main performance improvement cause from almost entirely relying on IPC improvements to using all available hardware resources properly, loading all transistors as simultaneously as possible and not waiting for one thread to finish its task, so another takes the queue.

Actually, the shift to Windows 10 and DirectX 12 will be the main requirement for future 4K gaming.

Can't wait for this because this is the light at the end of the tunnel.
#52
JunkBear
What use is 4K res when you don't have the screen to drive it? We're going to be in a long period where poorer people use a 1080p TV as a computer screen.
#53
mynis01
damric: We've been promised so much with every new API. I'm not holding my breath.
You and me both.
#54
Schmuckley
I would just like to see a powerful CPU from AMD.
I don't care if new platform..not sure if DDR4 would be a good idea..
just something..
Dual-6-core CPUs on an OCing board would be OK with me.
#55
MrGenius
Sony Xperia S: No, the purpose of DX12 is not to enable gaming on slow dual or quad core machines.

Its purpose is to shift the main performance improvement cause from almost entirely relying on IPC improvements to using all available hardware resources properly, loading all transistors as simultaneously as possible and not waiting for one thread to finish its task, so another takes the queue.

Actually, the shift to Windows 10 and DirectX 12 will be the main requirement for future 4K gaming.

Can't wait for this because this is the light at the end of the tunnel.
I never said it was the purpose of DX12. What I was saying is that the performance increases will be across the board. Meaning ALL CPUs (again, regardless of core/thread count) will benefit greatly from DX12 or Mantle.

Sorry...but I have to laugh now...:laugh: Why?

Because my "casual gaming rig" with an ancient dual core(E8600) and a 280X runs anything you've got at more than playable speeds. And when I run Star Swarm, D3D vs. Mantle, it's the "slideshow" vs. "playable"(if it were) scenario. Soo...

BTW, I've already shifted to Windows 10. I've had it running on said machine for a couple months now. :pimp:
#56
Sony Xperia S
MrGenius: an ancient dual core (E8600) and a 280X runs anything
Your "casual gaming rig" can't play anything 4k. Your super ancient E8600 is super bottleneck for your videocard which is a waste of time and resources.
MrGenius: I've already shifted to Windows 10. I've had it running on said machine for a couple months now.
Windows 10 is not ready yet. You are a beta tester. Someone I do not want to be. Not yet.
#57
MrGenius
Sony Xperia S: Your "casual gaming rig" can't play anything at 4K. Your super ancient E8600 is a super bottleneck for your video card, which is a waste of time and resources.

Windows 10 is not ready yet. You are a beta tester. Someone I do not want to be. Not yet.
Bottleneck, yes. At 1080p? Not so much. So with my resources limited to that resolution, as they are, how much of a waste of time and resources that is, is a matter of opinion. Anyway, not trying to argue with you about that.

I've got some more interesting info to share about the Windows beta testing; switching from mobile, please stand by...

O.k., back with my news. I tried running Star Swarm on Windows 10 (first time) and... wait for it... Star Swarm (public version) appears to use DX12. No longer is there a significant difference between runs with Mantle or D3D on my machine. In fact... D3D is now looking slightly faster!!! Whereas with DX11, D3D was WAY slower than Mantle.

I'm going to run it a few more times to verify. Anyone else want to join in the experiment?

Alrighty then. I've seen enough. I'm going to go ahead and call it: Mantle and D3D12 are nearly the same in terms of performance. At least on my machine, running Star Swarm. But I don't see it being a much different story on any machine, frankly. ;)

UPDATE
I screwed that up massively. Too embarrassed to admit what I did. But it made a huge difference in the comparison testing, and it turns out I'm eating my words as we speak. After correcting for idiocy, Mantle is now significantly faster than D3D12 on my machine running Star Swarm. It now looks like Mantle is around 50%-75% faster in this case/scenario. No real big surprise there, I guess, considering the benchmark is meant to glorify Mantle. But, on the other hand, D3D12 is still MUCH faster than D3D11. I'd say by AT LEAST 100%, possibly more... IIRC (since I'd have to downgrade to DX11 to compare results now, and I don't want to).
#58
arbiter
scorpion_amd13: No, the fact that most of the new 300 series are rebrands wouldn't be an issue. AMD focused all their efforts on Fiji and lo and behold, they got to use HBM one generation before nVidia. They now have the time and expertise required to design new mainstream/performance cards that use HBM and put them into production once HBM is cheap enough (real mass production). Besides, where's the rush? There are no DX12 titles out yet, Windows 10 isn't even out yet. By the time you'll have a DX12 game out there that you'd want to play, you'll be able to purchase 400 series cards.
Not everyone buys a new video card every new generation.
Schmuckley: I would just like to see a powerful CPU from AMD.
I don't care if new platform..not sure if DDR4 would be a good idea..
just something..
Dual-6-core CPUs on an OCing board would be OK with me.
We already know how powerful AMD CPUs are: it takes 2 AMD CPU cores to match 1 Intel core.
#59
john_
scorpion_amd13: It is indeed likely that the R9 390X/Fiji is the only new core, just like Hawaii was when it was launched. However, judging by the "50-60% better performance than 290X" rumor, as well as the specs (4096 shaders, for starters), I don't think those are ye regular GCN shaders. If Fiji indeed ends up 50-60% faster than Hawaii, while having only about 45% more shaders, it will be a rather amazing feat. Just look at the Titan X: it has at least 50% more of everything compared to a GTX 980, but it only manages 30% better real-life performance.

And that's actually a bit above average as far as scaling goes. Due to the complexity of GPUs, increasing shader count and/or frequency will never yield the same percentage in real-life performance gains. And yet, Fiji is coming to the table with a performance gain that EXCEEDS the percentage by which the number of shaders has increased. No matter how good HBM is, it cannot explain such performance gains by itself. If you agree that Hawaii had sufficient memory bandwidth, then there are only two things that can account for the huge gain: a 2GHz shader frequency (which is extremely unlikely) or modified/new shaders. My money's on the shaders.
AMD needs a better GCN architecture, at least better than the one in Hawaii, for DX12. But if that 50-60% is at 4K and compares the water-cooled 8GB model to the reference 4GB 290X, it could be very good, but not so good as to call it amazing. Never underestimate marketing. Especially when the budget is so tight.
#60
Schmuckley
arbiter: Not everyone buys a new video card every new generation.

We already know how powerful AMD CPUs are: it takes 2 AMD CPU cores to match 1 Intel core.
That was not the case 6 years ago... or 9, or even 15.
#61
Sony Xperia S
MrGenius: Mantle and D3D12 are nearly the same in terms of performance.
I hope you are mistaken and in fact DX12 is better than Mantle. ;)

I think Mantle is not good enough.
#62
MrGenius
I'm running SS for the rest of the day to see how close they are. Right now I'm going for max GPU core clock, since my mobo doesn't OC. I figure whichever API can finish at a higher clock might be relevant. So far both are finishing runs at the same OC. I'll probably start a thread if I find anything conclusive.*

*See previous post.
#63
Serpent of Darkness
荷兰大母猪: So what will the first DX12 game be?
From what Guru3D.com reports, it seems like Forza Motorsport 6 will be the first, but it's not official.
Sony Xperia S: I don't care about APUs like the A10-7850, but if DX12 can push all 8 cores of an FX processor to 90-100%, that will hugely increase performance across all benchmarks.
For a PC game, that's a big no. If the GPUs are taking over the work of producing images from the CPU, you don't want the CPU sitting at 90% usage. You'd only want all GPUs and CPU cores at 99% if you were rendering 3D packages for a video or movie. For a PC game, you want the GPUs at 99% all the time while they render the graphics for the user. Now, what you do want is for the 1st core to be less congested, with the other cores queuing up tasks at the same time so the GPU can keep drawing images for the game. In a sense, this stops the 1st core from having to perform that task alone, or lets it perform less of it. This increases FPS because less time is spent waiting on the 1st core. That is what DX12 is going to do for gamers. Having more cores queue up tasks for the GPU, simultaneously on every core, is going to improve performance. Why? Because tasks passed between the CPU and GPUs take less time to execute. That's kind of the selling point of DX12. It doesn't fix the other problems, though: things like the CPU computing shadows, particle effects, physics, and so on. Those will still drop FPS if they are waiting on the 1st core to queue them up in the sequence of tasks, DX12 or not.
Nabarun: Is it gonna make any significant improvements in terms of FPS to FC4 and Crysis 3 and the like?
If I am not mistaken, you will probably see a small gain in FPS with DX12 in these PC games. Even though they aren't DX12 titles, that doesn't mean you won't get any benefit from the DX12 API in Win10. Take into account that DX12 is, in a sense, a CPU optimizer. AMD's Mantle was something similar, but it worked only with AMD graphics cards; DX12 works with both AMD and NVIDIA graphics cards that are optimized for it. It improves performance at the CPU level. You'll notice a lot of other members talking about AMD CPUs with high core counts: whether you're using AMD or Intel, DX12 increases the usage of all the cores for the PC game, so more cores "could" equate to better FPS performance.
xorbe: I'm sure DX12 will improve on DX11, but I never trust those graphs that suggest it's going to be twice as fast. There's just no way to validate that the game devs put the same honest effort into the DX11 and DX12 paths.
D3D12 is D3D11 plus the CPU-optimizer aspects of the API. So when you say that D3D12 will improve on D3D11, that doesn't mean D3D11 itself is going to be optimized. The train-wreck that is D3D9 and below doesn't get improved either. I'm just pointing that out...
JunkBear: For people like me there are too many cores in play there. People with everyday use can still rock a dual core and do the job, plus casual gaming. What makes computer companies fill the bank to reinvest in R&D is freaks who need the latest technology like it was their last dose of drugs.
I agree to an extent. You don't need anything more than dual-core processing for PC games or everyday tasks. On the other hand, reinvesting in the latest tech makes life a lot easier and more enjoyable. In addition, there are uses for having more than 2 cores; rendering is one of them. I think the bigger issue is two things. One, the latest tech doesn't take large leaps, so we only get the typical 15% to 30% increases in NVIDIA and Intel tech, and the usual 25% to 50% increases in performance in AMD graphics cards. The years it took between D3D9, D3D11 and D3D12 were painfully slow. In a sense, we get less and less for the investments that consumers make in better products. The second thing is that with the US economy in its current position, companies aren't eager to rush into investing in future tech, because there's no guarantee that large revenue returns will be made.
Schmuckley: I would just like to see a powerful CPU from AMD.
I don't care if new platform..not sure if DDR4 would be a good idea..
just something..
Dual-6-core CPUs on an OCing board would be OK with me.
This is a matter of perception and perspective. When you say you want a more powerful CPU from AMD, the word powerful has multiple definitions, but for the most part it is just another word for control. You want AMD to have more control of the situation, so they can provide you a better product that meets your expectations. What you mean to say is that you want AMD CPUs to produce more work for less power, with that work accomplished faster and more tasks executed at the same time. Intel focuses on single-thread speed, and AMD focuses on multi-thread throughput. Now, in video games you really don't need multiple cores, and core frequency is king. This is one of the reasons why Intel CPUs are more loved by consumers for video games. On the other hand, rendering uses all cores. Using all cores at higher speeds for a cheaper price is what AMD, sort of, provides to its customers. It's a matter of perception (input from your 5 senses) and perspective (the thoughts and opinions a person communicates outward) why you want to see a powerful CPU from AMD. Look at it this way: if AMD focused on single-thread speed and Intel focused on multi-thread throughput, would AMD be the least favorite, excluding its memory read/write performance, etc.? I would say no. AMD wouldn't be the least favorite, because I bet a larger portion of their revenue would come from PC enthusiasts, everyday users, and gamers. Demand for AMD products, in this hypothetical scenario, would go up. That's the x-factor that makes Intel wealthy and able to keep producing better products: consumers investing in their products...

Dual 6-core CPUs on an OCing board for a PC game is overkill x2... It probably won't be worth it.
john_: AMD needs a better GCN architecture, at least better than the one in Hawaii, for DX12. But if that 50-60% is at 4K and compares the water-cooled 8GB model to the reference 4GB 290X, it could be very good, but not so good as to call it amazing. Never underestimate marketing. Especially when the budget is so tight.
Just going off of what you and scorpion_amd13 said, the R9 390X is rumored to perform 50% to 60% above the R9 290X at the same TDP, and that is probably thanks to HBM. Whether it needs a better GCN architecture is somewhat irrelevant right now. What AMD needs is to release its next line of graphics cards faster, and to keep delivering the 30% to 50% increase each generation with fewer mess-ups. That in turn increases their revenue, so they have more cash in their pockets. The better GCN architecture can come down the line in the next 1 or 2 generations. AMD has a 1-to-2-generation lead over the competition here, because NVIDIA is basically only going to release Volta (with its own take on HBM) a generation or two from now. The Maxwell GTX Titan X has, overall, been a flop relative to expectations in my opinion. Someone quoted the Maxwell GTX Titan X as performing only 30% better than a GTX 980; someone else in another TPU thread was quoting 23% gains over a GTX 980. Overall, my point is AMD needs to push out graphics cards faster, and they need to stay reliable, without the GTX 970 b.s. that NVIDIA pulled recently.
#64
scorpion_amd13
arbiter: Not everyone buys a new video card every new generation.

We already know how powerful AMD CPUs are: it takes 2 AMD CPU cores to match 1 Intel core.
I don't buy a new card every generation either. Generally, I keep a graphics card for 2+ years. If the non-high-end 300 series cards are going to be rebrands, I'm just going to wait for the 400 series before I upgrade. Or see what nVidia brings to the table. The point is, you'll have plenty of graphics cards to choose from by the time DX12 is actually something worth taking into consideration (as in, actually being used by games most people want to play).

As for AMD's CPU cores, that's pretty much the situation with Bulldozer modules. AMD's next architecture will be quite different; it's going to have much more powerful cores and something along the lines of Hyper-Threading. Either way, what always matters most is what you can get for the money.
john_: AMD needs a better GCN architecture, at least better than the one in Hawaii, for DX12. But if that 50-60% is at 4K and compares the water-cooled 8GB model to the reference 4GB 290X, it could be very good, but not so good as to call it amazing. Never underestimate marketing. Especially when the budget is so tight.
Oh, believe me, I never underestimate marketing. I know they can be very "generous" with their numbers; I've seen it happen plenty of times before. That's why I was saying that IF the rumors about the 390X being 50-60% faster than the 290X on average are true, then Fiji definitely has vastly improved GCN shaders. The vastly improved bandwidth and latencies that HBM brings to the table are not enough to explain such a huge boost in performance.

And yeah, AMD does need improved GCN shaders. Just don't expect to see them in the 300 series (except for Fiji, that is). They are going to wait for a new process node to bring out the good stuff for all the market segments. The 300 series cards are going to be built on 28nm, and we won't have to wait much longer for 20nm to become available. There's no point in taping out the same cores twice (once on 28nm, and then on 20nm). Personally, unless I get a very sweet deal on Fiji, I'm going to wait for 20nm cores before I upgrade.
#65
Prima.Vera
AMD is kinda bullshitting with some of the charts.
A lot of games that use D3D11 are very well multi-core optimized.
For example, DA: Inquisition, BF, etc.
#66
Sony Xperia S
Prima.Vera: AMD is kinda bullshitting with some of the charts.
A lot of games that use D3D11 are very well multi-core optimized.
For example, DA: Inquisition, BF, etc.
If the benchmark graphs show that all cores are loaded but the load stays relatively low, for instance 40-50-60%, then that is NOT truly "well multi-core optimised". They are not.
#69
Ikaruga
While DX12 will definitely bring good changes to gaming, let's not forget that the vast majority of upcoming games are gonna be multiplatform titles, which means they're gonna be developed for the Xbone and PS4 too. So it doesn't really matter if you have 4, 6 or 8 Intel cores, because if a game runs well on the sad Jaguar cores they have in those consoles, it will surely run twice as well on any Intel CPU in a PC.
#70
Sony Xperia S
Ikaruga: While DX12 will definitely bring good changes to gaming, let's not forget that the vast majority of upcoming games are gonna be multiplatform titles, which means they're gonna be developed for the Xbone and PS4 too. So it doesn't really matter if you have 4, 6 or 8 Intel cores, because if a game runs well on the sad Jaguar cores they have in those consoles, it will surely run twice as well on any Intel CPU in a PC.
Good point.

1. We have a problem with very slow consoles, which will spoil the fun and the progress.
2. I have read somewhere that it isn't necessarily a development target to design photo-realistic gaming environments. Say goodbye to progress and hello to ugly animations.
3. AMD is in some sort of agreement with Intel and Microsoft to spoil the development. It doesn't make sense that they gave up on competing with Intel.
Maybe for the US market and economy it is better to have an active monopoly, so AMD will always be the weak underdog.

AMD has multiple opportunities to progress, but doesn't take them.
#71
REAYTH
This is how I picture AMD running DX12 on a CPU.

#72
JunkBear
Yup, that's why I always stay behind the pigeons who drop their crumbs trying to get their mouths full. What I mean is that a lot of people buy overkill parts for absolutely nothing and replace them not long after. It's like buying a car that loses value as soon as you leave the dealership. I still have a cheap setup. Look under my avatar: I did not even pay for this setup, except for the case that I bought a couple of years ago.
Serpent of Darkness: From what Guru3D.com reports, it seems like Forza Motorsport 6 will be the first, but it's not official.

I agree to an extent. You don't need anything more than dual-core processing for PC games or everyday tasks. On the other hand, reinvesting in the latest tech makes life a lot easier and more enjoyable. In addition, there are uses for having more than 2 cores; rendering is one of them. I think the bigger issue is two things. One, the latest tech doesn't take large leaps, so we only get the typical 15% to 30% increases in NVIDIA and Intel tech, and the usual 25% to 50% increases in performance in AMD graphics cards. The years it took between D3D9, D3D11 and D3D12 were painfully slow. In a sense, we get less and less for the investments that consumers make in better products. The second thing is that with the US economy in its current position, companies aren't eager to rush into investing in future tech, because there's no guarantee that large revenue returns will be made.
#73
john_
scorpion_amd13: we won't have to wait much longer for 20nm to become available.
Forget 20nm. They will go to 14nm at Samsung, and I don't expect them to be first in line. Nvidia is already planning to use Samsung, probably for Tegra. GPUs will follow later, and given AMD's financials I wouldn't be surprised to see Nvidia come out of a Samsung fab with a new GPU first.
#74
john_
Serpent of Darkness: The Maxwell GTX Titan X has, overall, been a flop relative to expectations in my opinion. Someone quoted the Maxwell GTX Titan X as performing only 30% better than a GTX 980; someone else in another TPU thread was quoting 23% gains over a GTX 980. Overall, my point is AMD needs to push out graphics cards faster, and they need to stay reliable, without the GTX 970 b.s. that NVIDIA pulled recently.
I have seen a video comparing the Titan X (the video at the end of this post) at stock speed and overclocked against two SLI setups, one with 970s and one with 980s. It was performing really well, and if the price were between $699 and $799 instead of $999, it would have been a good option for gaming. But at $999 it is questionable for anyone who doesn't have deep, deep pockets, and at 1,150 to 1,350 euros in Europe it is an overvalued joke. But looking at the GPU alone, especially when overclocked, it is pretty good.
