Monday, March 23rd 2015

AMD Bets on DirectX 12 for Not Just GPUs, but Also its CPUs

In an industry presentation on why the company is excited about Microsoft's upcoming DirectX 12 API, AMD revealed what it considers the API's most important feature, one that could affect not only its graphics business, but also potentially revive its CPU business among gamers. DirectX 12 will make its debut with Windows 10, Microsoft's next big operating system, which will be given away as a free upgrade for _all_ current Windows 8 and Windows 7 users. The OS will come with a usable Start menu, and could lure gamers who stood their ground on Windows 7.

In its presentation, AMD touched upon two key features of DirectX 12, starting with the most important, multi-threaded command buffer recording; the other is asynchronous compute scheduling/execution. A command buffer is a list of rendering commands that the CPU prepares for the GPU to execute when drawing a 3D scene. Some elements of 3D graphics are still better suited to serial processing, and no single SIMD unit from any GPU architecture has managed to reach throughput parity with a modern CPU core. DirectX 11 and its predecessors are still largely single-threaded on the CPU in the way they schedule command buffers.
A graph from AMD showing how a DirectX 11 app spreads CPU load across an 8-core CPU reveals how badly optimized the API is for today's CPUs. The API and driver code are executed almost entirely on one core, which is bad even for dual- and quad-core CPUs (even if you fundamentally disagree with AMD's "more cores" strategy). Overloading fewer cores with more API- and driver-related serial workload makes up the "high API overhead" issue that AMD believes is holding back PC graphics efficiency compared to consoles, and it has a direct and significant impact on frame rates.
DirectX 12 heralds a truly multi-threaded command buffer pathway, one that scales up with any number of CPU cores you throw at it. Driver and API workloads are split evenly between CPU cores, significantly reducing API overhead and promising big frame-rate increases. How big that increase is in the real world remains to be seen. AMD's own Mantle API addresses this exact issue with DirectX 11, and offers a CPU-efficient way of rendering. Its performance yields are significant in CPU-limited scenarios such as APUs, but on bigger setups (e.g., high-end R9 290 series graphics at high resolutions), the gains, though measurable, are not mind-blowing. In some scenarios, Mantle offered the difference between "slideshow" and "playable." Cynics have to give DirectX 12 the benefit of the doubt: it could end up doing an even better job than Mantle at pushing work through multi-core CPUs.
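
To make the contrast concrete, here is a minimal C++ sketch of the recording model DirectX 12 enables (this is not AMD's or Microsoft's sample code; the Direct3D 12 calls are real, but error handling and actual draw calls are omitted): every worker thread records into its own command list through its own allocator, and only the final submission is serialized on the queue.

#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    // Create the device and a single graphics ("direct") queue.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));
    D3D12_COMMAND_QUEUE_DESC qDesc = {};
    qDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qDesc, IID_PPV_ARGS(&queue));

    // One command allocator + command list per worker thread:
    // allocators are not thread-safe, so each thread owns its own.
    const unsigned workers = std::thread::hardware_concurrency();
    std::vector<ComPtr<ID3D12CommandAllocator>> allocs(workers);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workers);
    std::vector<std::thread> threads;
    for (unsigned i = 0; i < workers; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        threads.emplace_back([&lists, i] {
            // Each thread records its slice of the frame's draw calls
            // here, concurrently, with no global driver lock as in DX11.
            lists[i]->Close();
        });
    }
    for (auto& t : threads) t.join();

    // Submission is still serialized, but it is cheap next to recording.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}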

AMD's own presentation appears to agree with the way Mantle played out in the real world (benefits for APUs vs. high-end GPUs). A slide highlights how DirectX 12 and its new multi-core efficiency could step up the draw-call capacity of an A10-7850K by over 450 percent. Suffice it to say, DirectX 12 will be a boon for smaller, cheaper mid-range GPUs, and will make PC gaming more attractive to the gamer crowd at large. Fine-grained asynchronous compute scheduling/execution is the other feature to look out for. It breaks down complex serial workloads into smaller, parallel tasks, and ensures that otherwise idle GPU resources are put to work on them.
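
The asynchronous compute part can be sketched the same way (continuing from the snippet above, so device is assumed to already exist; fence-based synchronization is elided): Direct3D 12 exposes a separate COMPUTE queue type, so compute command lists can be fed to the GPU alongside the graphics queue instead of being serialized behind it.

// A second queue of type COMPUTE runs alongside the direct queue,
// letting idle shader units pick up compute work while graphics renders.
D3D12_COMMAND_QUEUE_DESC cDesc = {};
cDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
ComPtr<ID3D12CommandQueue> computeQueue;
device->CreateCommandQueue(&cDesc, IID_PPV_ARGS(&computeQueue));

ComPtr<ID3D12CommandAllocator> cAlloc;
device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_COMPUTE,
                               IID_PPV_ARGS(&cAlloc));
ComPtr<ID3D12GraphicsCommandList> cList;
device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COMPUTE,
                          cAlloc.Get(), nullptr, IID_PPV_ARGS(&cList));

// Record Dispatch() calls (post-processing, particles, physics) here...
cList->Close();
ID3D12CommandList* submit[] = { cList.Get() };
computeQueue->ExecuteCommandLists(1, submit);
// The two queues now execute asynchronously; an ID3D12Fence orders
// access wherever graphics and compute share resources.
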
So where does AMD fit into all of this? DirectX 12 support will no doubt help AMD sell GPUs. Like NVIDIA, AMD has preemptively announced DirectX 12 API support on all its GPUs based on the Graphics Core Next architecture (Radeon HD 7000 series and above). AMD's real takeaway from DirectX 12, though, is how its cheap 8-core socket AM3+ CPUs could gain tons of value overnight. The notion that "games don't use more than 4 CPU cores" will change dramatically: any DirectX 12 game will split its command buffer and API loads between however many CPU cores you throw at it. AMD sells you 8-core CPUs for as little as $170 (the FX-8320). Intel's design strategy of placing fewer but stronger cores on its client processors could face its biggest challenge yet with DirectX 12.

87 Comments on AMD Bets on DirectX 12 for Not Just GPUs, but Also its CPUs

#26
Lionheart
bobalazsOkay, so who cares about DX12 when the majority of games out there are still going to be DX9-11, and AMD CPUs, as we know, are lackluster in that field.


So what, we should just stick with DX9-11 based games forever and never innovate or move to a new & better API? o_O
#27
scorpion_amd13
CasecutterDid AMD push Mantle to move DX12 in this direction, as it was always a means to the true end?
Mantle was very much a tech demo affair. It showed how a modern API can vastly improve performance in and of itself. Remember that we're talking about percentages that are higher or significantly higher than what mere driver updates can achieve. It's basically the same thing that happened with tessellation. ATi had the hardware for tessellation in their GPUs starting with the Radeon 8500 back in 2001; it was called TruForm back then. But TruForm never got accepted into DirectX (or OpenGL, for that matter). Fast forward to DirectX 11, and here we go: tessellation became official in DirectX.

I don't know if you remember how things worked back in the day. ATi was never really able to get developers to use their proprietary tech, not in any meaningful way anyway. AMD have been far more successful in getting the tech they develop adopted by developers and especially Microsoft. Not really surprising considering that ATi/AMD GPUs are powering the last two console generations from Microsoft (Xbox 360 and Xbone). At any rate, AMD get their technologies (well, the big stuff anyway) implemented in DirectX, which means mass adoption by developers, and Microsoft get to spend less developing new tech to put into newer versions of DirectX. It's a win-win for everyone, really: AMD, Microsoft, the users... hell, even nVidia, since they get full access to everything.
Sony Xperia SI don't care about APUs like the A10-7850, but if DX12 can push all 8 cores of an FX processor to 90-100%, that will hugely increase performance across all benchmarks.
Believe it or not, DX12 will be most important for people using CPUs like that A10-7850. The gap between lower-end CPUs and the high end will be a whole helluva lot smaller with DX12, which means that entry-level and especially lower-mainstream CPUs will be far more appealing to the gaming crowd. Think back to the times when you could overclock your cheap CPU to the point where you'd get performance similar to high-end models. It wasn't that long ago.
john_The title is wrong. Multicore CPU performance was always the goal of a low-level API.
Mantle was meant to push Microsoft to move faster on DX12. Mantle and DX12 were, from the beginning, going to minimize the distance between Intel and AMD CPUs in poorly written games. GPU performance with Mantle was always a secondary bonus, as long as Nvidia was sticking with DX11. Now that Nvidia is benefiting from DX12, it's an open question whether AMD will gain more from GCN than Nvidia gains from Maxwell under DX12. Anandtech's DX12 benchmarks show that GCN 1.1 is not as good as Maxwell in the 900 series under DX12.
Mantle is AMD's insurance policy: they don't have to wait for a new DirectX to have an API that allows developers to use whatever technology they want to push. Furthermore, they'll have an extra advantage over nVidia the next time Nintendo and Sony need a new GPU/APU for their consoles: not having to rely on their competitor's own API or OpenGL is a pretty big selling point. As for how good AMD's current architecture is in DX12 versus nVidia's Maxwell, that doesn't really matter. AMD is going to launch a new generation of GPUs and it's very likely that said GPUs will use an improved version of the current GCN architecture. That's what matters to them. Comparing current GCN used for Radeon HD 7000 series cards with nVidia's (mostly) brand new Maxwell isn't entirely relevant.
bobalazsOkay, so who cares about DX12 when the majority of games out there are still going to be DX9-11, and AMD CPUs, as we know, are lackluster in that field.
What DX9-11 game are you playing that requires more than an 8-core FX or a Phenom II X6 for smooth gameplay? I'm asking because I don't play every game that's out there and maybe I'm missing something. Me, I'm using a Phenom II X6 1055T and I really can't find a single game I can't max out. I had a Core i7 2600K before. I sold it and bought the Phenom II X6 with about 40% of the money I got for the Core i7, and I really can't see the difference in any of the games that I play.
haswrongYeah, this looks weird for Microsoft... this could initiate a fat lawsuit. Anyway, do you know if Star Swarm is just multiple instancing and cloning of the same objects doing the same things, or is it pure realtime interaction and simulation? Because the former always looks quite bad when you look at it.
and
Rahmat SofyanI feel like I've been fooled by Microsoft and Intel with DX11 and below, so WTH has really been going on all this time?

So was there already something fishy between Microsoft and Intel before Mantle came out? And did Microsoft push out DX12 to keep DX from being shamed by Mantle?
Umm, what? No offense, but do you even know what you're talking about? What does Intel have to do with anything? And why sue Microsoft? You're really missing the big picture here. Low-level APIs are the "secret weapon" the consoles have. Basically, having low-level APIs and fixed hardware specs allows developers to create games with graphics that would normally require a lot more processing power to pull off on the PC.

Windows operating systems have always been bloated, one way or another. The same is true for DirectX. Lately, though, Microsoft has been trying to sort things out in order to create a more efficient software ecosystem. That's right, not just the operating system(s), but everything Microsoft, including DirectX. They need to be able to offer software that is viable for mobile devices (as in, phones and tablets), not just PCs, or they won't stand a chance against Android. Furthermore, they want to offer users devices that can communicate and work with each other across the board. That means they need a unified software ecosystem, which in turn means making an operating system that can work on all these devices, even if there will be device-specific versions that are optimized to some degree (ranging from UIs customised for phones/consoles/PCs to power consumption algorithms and so on).

In order to create such a Jack-of-all-trades operating system, they need it to have the best features in one package: low-level APIs from consoles, draconian power consumption profiles from phones, very light processing power requirements from both phones and consoles, excellent backwards compatibility through virtualization from PCs, the excellent diversity of software you can only get from a PC, and so on. Creating such an operating system takes a lot of money and effort, not to mention time. They've already screwed up on the UI front with Windows 8 (yeah, apparently it wasn't a no-brainer for them that desktops and phones can never use the same UI). Hopefully, they've learned their lesson and Windows 10 will turn out fine.

DX12 is a major step forward in the right direction, not some conspiracy between Microsoft and Intel to keep AMD CPUs down in the gutter. Nobody would have given this conspiracy theory a second thought if AMD had had the money they needed to stay on track with CPU development. But reality is like a kick in the teeth, and AMD CPUs are sadly not nearly as powerful as they'd need to be in order to give Intel a run for their money. Intel stands to lose more ground because they offer the highest-performing CPUs, but I think we can all agree on the fact that they didn't get here (having the best performance) by conspiring with Microsoft. Besides, gamers are by far not the only ones buying Intel CPUs. Actually, DX12 will be detrimental to AMD as well: fewer people will feel the need to upgrade their CPUs to prepare for upcoming games.
#28
GLD
Windows 10 with DX12, I will need to get it. That, an FX 83xx, and an R9 3xx(X) card! Yay for a Newegg card with a zero balance. :eek:
#29
xorbe
I'm sure DX12 will improve on DX11, but I never trust those graphs that suggest it's going to be twice as fast. There's just no way to validate that the game devs put the same honest effort into the DX11 and DX12 paths.
#30
PatoRodrigues
I would guess everyone that follows the CPU market is skeptical about these graphs.

But I'd be thrilled if they're confirmed once consumers get their hands on W10 and DX12. Especially if AMD processors stay at their current price points. I can't even imagine the new landscape that would emerge in the gaming community. "Value-oriented gaming" would change completely.
#31
costeakai
CasecutterDid AMD push Mantle to move DX12 in this direction, as it was always a means to the true end?
AMD have pushed Mantle in the right direction, and DX12 is living proof of it; they just showed the light, and MS rushed in ... The eagerly awaited revival of an old CPU, the FX, is unprecedented in gaming hardware archives ...
#32
Sony Xperia S
scorpion_amd13What does Intel have to do with anything?
Well, the theory (probably true) suggests that Intel, with its x86, is dramatically holding back industry progress. Simply because x86 as a standard is shit.

Also, there was supposedly an agreement between Microsoft and Intel to negatively influence progress.

AMD probably couldn't do anything because they were happy to have that x86 license at all.

Still a mystery why x86 still exists and why AMD doesn't jump on anything else. Maybe because in the beginning they simply copied Intel's processors.
#33
Bytales
What would happen, then, on my 24-core/48-thread machine?
#34
RejZoR
I hope Natural Selection 2 will go Direct3D 12. This game seriously needs something like this...
#35
john_
scorpion_amd13Mantle is AMD's insurance policy: they don't have to wait for a new DirectX to have an API that allows developers to use whatever technology they want to push. Furthermore, they'll have an extra advantage over nVidia the next time Nintendo and Sony need a new GPU/APU for their consoles: not having to rely on their competitor's own API or OpenGL is a pretty big selling point.
Good point.
As for how good AMD's current architecture is in DX12 versus nVidia's Maxwell, that doesn't really matter. AMD is going to launch a new generation of GPUs and it's very likely that said GPUs will use an improved version of the current GCN architecture. That's what matters to them. Comparing current GCN used for Radeon HD 7000 series cards with nVidia's (mostly) brand new Maxwell isn't entirely relevant.
It does, especially if part of the 300 series is rebrands. And in Anandtech's tests they used Hawaii, so we are talking about GCN 1.1 and a core that is used in $300 cards. At the time Anandtech was testing DX12 performance, there wasn't any DX12 driver available for the 7000 series (GCN 1.0). So, if only Fiji is a new GPU, and if Tonga performs better under DX12 than Hawaii, you have only those two GPUs probably performing well, and all the other GPUs, GCN 1.1 and GCN 1.0 based, losing to the Maxwell competition. Of course, in 2015 that wouldn't really matter much, considering that there will be few, if any, DX12 titles, and the difference in games will probably be much less noticeable than in benchmarks like Star Swarm.
#36
Ebo
I think we can all agree that we hope DX12 will be a fantastic tool to move computer gaming forward.
To me it sounds like the real benefit will be in low-end and mainstream gaming rigs.

Let's just hope the use of DX12 in games ramps up fast, so we can finally wave goodbye to DX9-11.
#37
arbiter
john_It does, especially if part of the 300 series is rebrands. And in Anandtech's tests they used Hawaii, so we are talking about GCN 1.1 and a core that is used in $300 cards. At the time Anandtech was testing DX12 performance, there wasn't any DX12 driver available for the 7000 series (GCN 1.0). So, if only Fiji is a new GPU, and if Tonga performs better under DX12 than Hawaii, you have only those two GPUs probably performing well, and all the other GPUs, GCN 1.1 and GCN 1.0 based, losing to the Maxwell competition. Of course, in 2015 that wouldn't really matter much, considering that there will be few, if any, DX12 titles, and the difference in games will probably be much less noticeable than in benchmarks like Star Swarm.
I think the bigger issue would be if a good part of the 300 series is a "re-badge" (that's the proper term for it). The issue is that those cards will more than likely lack full DX12 support: they'll support the speed part but not the graphics part, which is a bit of a letdown for a new series of cards. It would mean only the 390 cards will be DX12 cards; anything lower means having to look at NVIDIA's side, or waiting and hoping for lower-end AMD cards that have it.
EboLet's just hope the use of DX12 in games ramps up fast, so we can finally wave goodbye to DX9-11.
The graphics part of DX12 will probably take a while, but I expect the speed side, which DX11 cards will support, to catch on quickly, at least with devs that care. Rockstar, for example, would be a key example of a dev that should add it to GTA V ASAP.
scorpion_amd13Mantle is AMD's insurance policy: they don't have to wait for a new DirectX to have an API that allows developers to use whatever technology they want to push. Furthermore, they'll have an extra advantage over nVidia the next time Nintendo and Sony need a new GPU/APU for their consoles: not having to rely on their competitor's own API or OpenGL is a pretty big selling point.
john_Good point.
Yea well, the issue for AMD isn't whether they have the advantage, but whether they can make money. They aren't doing so well, and when Sony/Nintendo/MS make their next consoles, will AMD still be in business?
#38
Jermelescu
Hold your horses, ladies. It'll take at least a year before we see some good games optimized properly on DX12 and about two years for new engines.
#39
JunkBear
I have an older dual-core 1.86 GHz CPU setup, but I was thinking of going to an APU since I only want to play older games, surf the net, and watch Blu-rays. Is it worth it compared to a regular Intel setup? Should I go with a Phenom, or is an APU fine?
#40
Jermelescu
JunkBearI have an older dual-core 1.86 GHz CPU setup, but I was thinking of going to an APU since I only want to play older games, surf the net, and watch Blu-rays. Is it worth it compared to a regular Intel setup? Should I go with a Phenom, or is an APU fine?
My sister has an APU [A8-7100] powered laptop and it does Blu-ray, casual gaming and Office just fine.
#41
Bytales
BytalesWhat would happen, then, on my 24-core/48-thread machine?
Would DX12 be able to handle this many cores/threads?

It would be nice if it did; I haven't spent 3,500 euros on CPUs for nothing, as most of my friends, or followers of my build log, would like to think.
#42
Sony Xperia S
BytalesWould DX12 be able to handle this many cores/threads?
According to the presentation slides, DX12 will load every single core of the CPUs, so the answer is yes.

Theoretically there would be no problem to handle as many threads as you like.
#43
Bytales
Sony Xperia SAccording to the presentation slides, DX12 will load every single core of the CPUs, so the answer is yes.

Theoretically there would be no problem to handle as many threads as you like.
Would DX12 then usher in an era where a 4-way setup of 15-core/30-thread E7-4890 v2 CPUs (7,000 EUR each; 60 cores/120 threads in total for 28,000 EUR) would be a beast of a gaming CPU compared to a "mortal" 8-core/16-thread CPU?

Compared to that, my 2x 12-core/48-thread setup seems outdated.

Perhaps we will one day see an ASUS ROG board made for these kinds of CPUs, who knows!
#44
costeakai
JermelescuMy sister has an APU [A8-7100] powered laptop and it does Blu-ray, casual gaming and Office just fine.
I've got an A8-7600; I'm not crazy about it, but I enjoy it.
#45
scorpion_amd13
xorbeI'm sure DX12 will improve on DX11, but I never trust those graphs that suggest it's going to be twice as fast. There's just no way to validate that the game devs put the same honest effort into the DX11 and DX12 paths.
Understandable. However, you can be quite sure that said devs will use all the DX12 features they can on the Xbone, so that means they'll also use them for the PC port. That means DX12 is likely to show up pretty soon in games, and on a large enough scale: it will probably be adopted faster than DX11.
Sony Xperia SWell, the theory (probably true) suggests that Intel, with its x86, is dramatically holding back industry progress. Simply because x86 as a standard is shit.

Also, there was supposedly an agreement between Microsoft and Intel to negatively influence progress.

AMD probably couldn't do anything because they were happy to have that x86 license at all.

Still a mystery why x86 still exists and why AMD doesn't jump on anything else. Maybe because in the beginning they simply copied Intel's processors.
I agree that x86 is, as a standard, shit. But what agreement between Microsoft and Intel to slow down progress? I haven't seen anything that would suggest that. Sure, Microsoft isn't exactly fast when it comes to improvements, but that's how things work when there's no viable competition on the operating system front.

Sure, AMD started out by copying Intel's CPUs, but that was a very long time ago and it was part of the agreement between AMD, Intel and IBM. AMD hasn't copied an Intel design for well over a decade now. As for why AMD doesn't "jump on anything else", well, that's not exactly true. They've been tinkering with ARM designs for a while now, and the results are going to show up pretty soon. Developing an entirely new standard to compete with x86 would take incredible amounts of money and time, and AMD has neither of those in ample supply. Not to mention that PC software was built from the ground up for x86 machines and that you'd have better luck drawing up an entirely new GPU on a napkin during lunch break than convincing software developers to remake all their software to work with the new standard.
BytalesWhat would happen, then, on my 24-core/48-thread machine?
Well, supposedly, you'd be able to use all those cores in games. Not that you'd find graphics cards that would be powerful enough to register significantly better performance (higher than 5-10%) than ye average i7 4790K or flagship FX CPU. You also wouldn't need to upgrade the CPU for a decade or so, but then again, your e-peen would shrink big time. Ouch.
john_Good point.

It does, especially if part of the 300 series is rebrands. And in Anandtech's tests they used Hawaii, so we are talking about GCN 1.1 and a core that is used in $300 cards. At the time Anandtech was testing DX12 performance there wasn't any DX12 driver for 7000 series available(GCN 1.0). So, if only Fiji is a new GPU and if Tonga is better performing under DX12 compared with Hawaii, you have only those two GPUs performing probably good and all the other GPUs, GCN 1.1 and GCN 1.0 based, losing compared to Maxwell competition. Of course in 2015 that wouldn't really matter much, considering that there will be none, or not many, DX12 titles and probably in games the difference will be much less noticeable compared to benchmarks like Star Swarm.
It is indeed likely that the R9 390X/Fiji is the only new core, just like Hawaii was when it was launched. However, judging by the "50-60% better performance than 290X" rumor, as well as the specs (4096 shaders, for starters), I don't think those are ye regular GCN shaders. If Fiji indeed ends up 50-60% faster than Hawaii while having only about 45% more shaders, it will be a rather amazing feat. Just look at the Titan X: it has at least 50% more of everything compared to a GTX 980, but it only manages 30% better real-life performance.

And that's actually a bit above average as far as scaling goes. Due to the complexity of GPUs, increasing shader count and/or frequency will never yield the same percentage in real-life performance gains. And yet, Fiji is coming to the table with a performance gain that EXCEEDS the percentage by which the number of shaders has increased. No matter how good HBM is, it cannot explain such performance gains by itself. If you agree that Hawaii had sufficient memory bandwidth, then only two things can account for the huge gain: a 2 GHz shader frequency (which is extremely unlikely) or modified/new shaders. My money's on the shaders.
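
Spelling out the arithmetic in that comparison (the unit counts are published specs, except the Fiji figure, which was still a rumor at the time):

\[
\frac{4096}{2816} \approx 1.45 \text{ (rumored Fiji vs. Hawaii shaders)}, \qquad
\frac{3072}{2048} = 1.50 \text{ (Titan X vs. GTX 980 CUDA cores)}
\]

A rumored 1.50-1.60x Fiji speedup would thus exceed its roughly 1.45x increase in shader count, while the Titan X's measured ~1.30x gain falls well short of its 1.50x increase in units.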
arbiterI think the bigger issue would be if a good part of the 300 series is a "re-badge" (that's the proper term for it). The issue is that those cards will more than likely lack full DX12 support: they'll support the speed part but not the graphics part, which is a bit of a letdown for a new series of cards. It would mean only the 390 cards will be DX12 cards; anything lower means having to look at NVIDIA's side, or waiting and hoping for lower-end AMD cards that have it.

Yea well, the issue for AMD isn't whether they have the advantage, but whether they can make money. They aren't doing so well, and when Sony/Nintendo/MS make their next consoles, will AMD still be in business?
No, the fact that most of the new 300 series would be rebrands wouldn't be an issue. AMD focused all their efforts on Fiji, and lo and behold, they got to use HBM one generation before nVidia. They now have the time and expertise required to design new mainstream/performance cards that use HBM and put them into production once HBM is cheap enough (real mass production). Besides, where's the rush? There are no DX12 titles out yet; Windows 10 isn't even out yet. By the time there's a DX12 game out there that you'd want to play, you'll be able to purchase 400 series cards.

As for AMD's financial troubles, people have been saying they're gonna go bankrupt for a good long while now. And guess what, they're still alive and kicking. I'm thankful that AMD price their cards so that they're affordable for most people, not just the rich and silly who are more than willing to pay a thousand bucks or more for a card just because it has nVidia's logo and Titan written on it. And it also makes sense for them: a lot more people are likely to buy their cards since a lot more people can afford to do so. I'm sorry, but you'll have to live with the fact that your shiny new graphics card is overpriced beyond reason. Don't try to convince others that they should pay more for their cards just so you can feel good about yourself.
JermelescuHold your horses, ladies. It'll take at least a year before we see some good games optimized properly on DX12 and about two years for new engines.
Bingo. We have a winner here. I wholeheartedly agree.
#46
JunkBear
For people like me, there are too many cores in play there. People with everyday use can still rock a dual-core and do the job plus casual gaming. What makes computer companies fill the bank to reinvest in R&D is freaks who need the latest technology like it was their last dose of a drug.
#47
Jorge
Clearly DirectX 12 will help properly feed multi-core CPUs, which means AMD's CPU performance will increase drastically, as half or more of the cores currently sit underutilized due to bad code. That's good for gamers, if the games/software are properly written to use DirectX 12. That may be the only reason anyone would consider Win 10. The fact that Microsucks is planning to give Win 10 to existing Win 8/7 users shows how desperate they are to keep their installed base. For many people, Win 10 may still not be worth the hassles.
#48
Bytales
scorpion_amd13Well, supposedly, you'd be able to use all those cores in games. Not that you'd find graphics cards that would be powerful enough to register significantly better performance (higher than 5-10%) than ye average i7 4790K or flagship FX CPU. You also wouldn't need to upgrade the CPU for a decade or so, but then again, your e-peen would shrink big time. Ouch.

Well, that was what I was planning in the first place when I got these CPUs: to not need to change the CPU setup for a long time. Sure, the GPUs will need to be refreshed now and then, but the CPUs could stay in place for a long time, I reckoned.

It seems my logic was good, as my logic almost always is.
Now to choose the proper GPU setup.

Probably, if the apps are coded right, one would, or at least could, get a big performance boost even from a CrossFire 295X2 setup, with 4 GPUs and a unified 16 GB of RAM.

Too bad there isn't a dual-GM200 board out there; that would have been my choice as a GPU.
As it is right now, I will probably go with Titan X SLI, as I already own a G-Sync monitor; however, it's going to take, I think, two months before I actually acquire them. Until then, I hope to learn more about the 390X and see if it is worth it.
#49
64K
JorgeClearly DirectX 12 will help properly feed multi-core CPUs, which means AMD's CPU performance will increase drastically, as half or more of the cores currently sit underutilized due to bad code. That's good for gamers, if the games/software are properly written to use DirectX 12. That may be the only reason anyone would consider Win 10. The fact that Microsucks is planning to give Win 10 to existing Win 8/7 users shows how desperate they are to keep their installed base. For many people, Win 10 may still not be worth the hassles.
I'm not sure why you think MS is desperate to keep their installed base. Where else would people turn for an OS? Linux? Only a very small percentage of people run Linux compared to Windows. Most people wouldn't even know where to start with Linux. MS has had a monopoly on PC operating systems for a long time.
#50
MrGenius
JunkBearFor people like me, there are too many cores in play there. People with everyday use can still rock a dual-core and do the job plus casual gaming. What makes computer companies fill the bank to reinvest in R&D is freaks who need the latest technology like it was their last dose of a drug.
I'm with you there. And what most folks don't seem to realize is that DX12 and Mantle just reinforce that. They take the cheaper casual-gaming CPUs to the hardcore gaming level, and the cheapest entry-level CPUs to the casual gaming level. ALL of them (regardless of core count). Because they're not really about enabling the use of more cores. What they're really about is using the cores available MUCH more efficiently. Making super powerful mega-multicore CPUs even more of a waste of money (than they are now).