Friday, September 16th 2016

AMD Actively Promoting Vulkan Beyond GPUOpen

Vulkan, the new-generation cross-platform 3D graphics API governed by the people behind OpenGL, the Khronos Group, is gaining in relevance, with Google making it the primary 3D graphics API for Android. AMD says it's actively promoting the API. Responding to a question by TechPowerUp at its recent Radeon Technologies Group (RTG) first-anniversary press event, RTG chief Raja Koduri confirmed that the company is actively working with developers to add Vulkan support to their productions and to optimize them for Radeon GPUs. This, we believe, could be driven by several strategic considerations.

First, Vulkan inherently works better on AMD's Graphics Core Next (GCN) GPU architecture, because it was largely derived from Mantle, AMD's now-defunct 3D graphics API that brought many of the "close-to-metal" features that make game consoles performance-efficient over to the PC ecosystem. The proof of the pudding is the 2016 reboot of the iconic first-person shooter "Doom," in which Radeon GPUs get significant performance boosts when switching from the default OpenGL renderer to Vulkan. These boosts aren't as pronounced on NVIDIA GPUs.
Second, though this could be a long shot, the growing popularity of Vulkan could give AMD leverage over Microsoft to steer Direct3D development toward areas that AMD GPUs are inherently good at, such as asynchronous compute and tiled resources (AMD GPUs benefit from their higher memory bandwidth). AMD has been engaging aggressively with game studios working on AAA games that use DirectX 12, and thus far AMD GPUs have either gained or sustained performance better than NVIDIA GPUs when switching from DirectX 11 fallbacks to DirectX 12 renderers.

AMD has already "opened up" much of its GPU IP to game developers through its GPUOpen initiative. There, developers will find detailed technical resources on how to take advantage of not just AMD-specific GPU IP, but also some industry standards. Vulkan is among the resources AMD is promoting through the initiative.

Vulkan still has a long way to go before it becomes the primary API in AAA releases. For most gamers who don't tinker with advanced graphics settings, "Doom" still runs on OpenGL and "The Talos Principle" on Direct3D 11 by default, for example. It could be a while before a game runs on Vulkan out of the box, and how its special interest group Khronos, and more importantly AMD, promote its use, not just during game development but also through long-term support, will have a lot to do with that. A lot will also depend on NVIDIA, which holds about 70% of the PC discrete GPU market share, supporting the API. Over-customizing Vulkan would send it the way of OpenGL: too many vendor-specific extensions to keep up with drove game developers to Direct3D in the first place.
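For the curious, here is a minimal, illustrative sketch (not production code) of how an application asks a Vulkan driver at runtime which extensions it offers; cross-vendor VK_KHR_* names sit alongside vendor-specific VK_AMD_* and VK_NV_* ones:

    // Enumerate Vulkan instance extensions (core Vulkan 1.0 API).
    #include <vulkan/vulkan.h>
    #include <cstdio>
    #include <vector>

    int main() {
        uint32_t count = 0;
        vkEnumerateInstanceExtensionProperties(nullptr, &count, nullptr);
        std::vector<VkExtensionProperties> exts(count);
        vkEnumerateInstanceExtensionProperties(nullptr, &count, exts.data());

        // VK_KHR_* extensions are cross-vendor standards; VK_AMD_*/VK_NV_*
        // prefixes mark the vendor-specific extensions discussed above.
        for (const VkExtensionProperties& e : exts)
            std::printf("%s (spec version %u)\n", e.extensionName, e.specVersion);
        return 0;
    }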

111 Comments on AMD Actively Promoting Vulkan Beyond GPUOpen

#51
Captain_Tom
Ungari: Sure, because they can make even more money by delaying Volta so they could sell more of the surprise Paxwell cards, then obsolete them before owners get a single year of usage.
With the 1080 Ti coming out so late, you have to wonder if Volta will be delayed further.
All depends on how Vega does.

It seems like Nvidia is currently making a big cash grab for the high end, since they know Paxwell will lose its massive performance advantage once Vulkan/DX12 are standard. Thus they will only sell their overpriced 10xx series as long as that isn't a dominating issue. If most releases are using async compute by March, and AMD's Vega is indeed as strong as the Titan X, Nvidia will launch the 1180 by the end of spring.
#52
Captain_Tom
bug: How come? AMD has no business in the mobile space at the moment while Nvidia has Tegra. And Pascal is already way more energy efficient than Polaris.
A lot will change by the time we see Samsung ditch Mali, but right now AMD has a (disputable, imho) software advantage while Nvidia has the hardware.
Tegra chips are absolute garbage in terms of efficiency. Their powerful chips hog 25-50 W (far more than a phone can take), and their 5 W variants fail to beat their Qualcomm/Apple competition.

AMD's efficiency is totally fine; 14 nm just isn't mature for big chips yet. Furthermore, you should look at AMD's efficiency in Vulkan. Their far cheaper to produce RX 480 is roughly as efficient as the GTX 1070 (like a 10% difference).
#53
bug
Captain_Tom: Tegra chips are absolute garbage in terms of efficiency. Their powerful chips hog 25-50 W (far more than a phone can take), and their 5 W variants fail to beat their Qualcomm/Apple competition.
Samsung isn't looking to buy the whole SoC, just the GPU.
Captain_Tom: AMD's efficiency is totally fine; 14 nm just isn't mature for big chips yet.
How on earth is AMD's efficiency fine when the RX 480 eats as much power as the GTX 1070?
Captain_Tom: Furthermore, you should look at AMD's efficiency in Vulkan. Their far cheaper to produce RX 480 is roughly as efficient as the GTX 1070 (like a 10% difference).
10%? Wtf dude? www.techpowerup.com/reviews/MSI/RX_480_Gaming_X/24.html
#54
D007
I thought this was like.. old news.. lol Of course they are promoting it, and they have been for as long as I can remember..
But don't get it twisted.. This doesn't make AMD king of the hill all of a sudden.. Not by a long shot..
#55
$ReaPeR$
Captain_Tom: Anyone else remember that both AMD and Nvidia are bidding to supply the graphics in Samsung's next smartphone APUs?

By making Vulkan the standard API of Android, AMD may have just secured a massive advantage in their bidding....
Vulkan opens the door for Linux in general IMO, and that might turn out to be very interesting..
#56
efikkan
btarunr: First, Vulkan inherently works better on AMD's Graphics Core Next (GCN) GPU architecture, because it was largely derived from Mantle
This is just PR BS from AMD.
First, there is no evidence supporting the claim that it works inherently better on AMD hardware. In fact, the only "evidence" is games specifically targeting AMD hardware which have later been ported.

Secondly, Vulkan is not based on Mantle. As you can read in the specs, Vulkan is built on SPIR-V. SPIR-V is the compiler infrastructure and intermediate representation of a shader language, and it is the basis for both OpenCL (2.1) and Vulkan. The features of Vulkan are built on top of this, and this architecture has nothing in common with either Mantle or Direct3D*. What Vulkan has inherited from Mantle is not the underlying architecture, but some aspects of the front end. To claim that one piece of software is based on another for implementing similar features is obviously gibberish, just like no one claims that Chrome is based on IE for implementing similar features. Any coder will understand this.
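To make that concrete, here is a rough sketch (illustrative only; it assumes a valid VkDevice named device and a SPIR-V binary already loaded into spirv) showing that a Vulkan driver is handed pre-compiled SPIR-V words, not GLSL or HLSL source:

    // Wrap a SPIR-V binary in a VkShaderModule (core Vulkan 1.0 API).
    #include <vulkan/vulkan.h>
    #include <cstdint>
    #include <vector>

    VkShaderModule makeShaderModule(VkDevice device, const std::vector<uint32_t>& spirv) {
        VkShaderModuleCreateInfo info = {};
        info.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
        info.codeSize = spirv.size() * sizeof(uint32_t); // size in bytes
        info.pCode = spirv.data();                       // SPIR-V IR, not shader source

        VkShaderModule module = VK_NULL_HANDLE;
        if (vkCreateShaderModule(device, &info, nullptr, &module) != VK_SUCCESS)
            return VK_NULL_HANDLE; // creation failed
        return module;
    }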

AMD has no real advantage on Vulkan compared to its rivals. Nvidia was in fact the first vendor to demonstrate a working Vulkan driver, and the first to release one (on both PC and Android). AMD was the last to get certification, and had to write a driver from scratch like everyone else.

*) In fact, the next Shader Model of Direct3D will adopt a similar architecture. I would expect that you knew this, since you actually covered it on this news site.
btarunr: AMD has already "opened up" much of its GPU IP to game developers through its GPUOpen initiative. There, developers will find detailed technical resources on how to take advantage of not just AMD-specific GPU IP, but also some industry standards. Vulkan is among the resources AMD is promoting through the initiative.
Nvidia has also done the same for more than a decade. Contrary to popular belief, most of GameWorks is actually open, and it's the most extensive collection of examples, tutorials and best practices for graphics development.
Do not believe everything a PR spokesman says.
btarunr: A lot will also depend on NVIDIA, which holds about 70% of the PC discrete GPU market share, supporting the API. Over-customizing Vulkan would send it the way of OpenGL: too many vendor-specific extensions to keep up with drove game developers to Direct3D in the first place.
Nvidia is already offering excellent Vulkan support on all platforms.
Extensions have never been the problem for OpenGL; the problem has been the slow standardization process.

-----
bug: OpenGL was always problematic on Linux, for example. Even now with their new, open source driver, OpenGL performance is still poor.
Nvidia has offered superb OpenGL support on Linux for more than a decade, but they've been the only one. You are talking about the "hippie" drivers; nobody who cares about stability, features or performance uses those. The new "open" drivers are based on Gallium, which is a generic GPU abstraction layer, so just forget about optimized support for any API on those.

-----
Ungari: What Nvidia has done is ignore the new APIs until they become an issue for their customers.
Despite the planned road map for Volta in 2017 which will probably scale with DX12 and Vulkan, they released an unscheduled "new" architecture in Pascal, which is really Maxwell 3.0 that doesn't improve with these APIs.
Nvidia's philosophy is simply sell their customers a whole new architecture when the deficiencies become too problematic, making the previous generation obsolete in a very short time.
But as long as their loyal fans slavishly buy their product at their command, they will continue to be short-sighted about building their hardware for upcoming technical developments.
I don't know which fantasy world you live in, but since AMD released their last major architecture, Nvidia has released Maxwell and Pascal.
Pascal was introduced because Nvidia was unable to complete Volta by 2016, bringing some of the features of Volta forward. This was done primarily for the compute-oriented customers (Tesla).
There is no major disadvantage with Nvidia's architectures vs. GCN in terms of modern APIs.

-----
the54thvoid: Meanwhile, with even the latest and greatest AAA title shipping as DX11 and with DX12 support patched in, there is no rush to buy a DX12 card right now. Given the programmed visuals on DX11 are the same as DX12 in Deus Ex: MD, why would anyone need to move to DX12?
Yes, the good Direct3D 12 titles will come in a while, perhaps early next year. It always takes 2-3 years before the "good" games arrive.
Does anyone remember Crysis?
the54thvoid: Yes, an 8+ TFlop card matching another 8+ TFlop card... Fury X should perform this well. This is the whole point of everything I post. It's the most over-specced and (in DX11) under-performing card. Nvidia cards do what they are meant to do in their given time frame. Fury X needs DX12 and Vulkan to work, but those APIs aren't yet the normal scene. By the time DX12 and/or Vulkan is the norm and DX11 is long forgotten, we will be on what? Navi and Volta?
The under-performing GCN cards have nothing to do with the APIs.
We all know Nvidia's architectures are much more advanced, and one of their advantages is more flexible compute cores and a very powerful scheduler. AMD has a simpler approach: simpler cores and a simple scheduler. When you compare the GTX 980 Ti to the Fury X, you'll see that Nvidia is able to saturate its GPU while more than a third of Fury goes unutilized. So AMD typically has ~50% more resources for comparable performance. But are there workloads which benefit from AMD's simpler brute-force approach? Yes, of course. A number of compute workloads actually perform very well on GCN; these are workloads that are closer to a stream of independent data. AMD clearly has more computational power, so when their GPUs are saturated they can perform very well.

The problem is that rendering typically has a lot of internal dependencies. E.g., resources (textures, meshes) are reused several times in a single frame, and if 5 cores request the same data they will have to wait in turn. That's why scheduling is essential to saturating a GPU during rendering. I would actually draw a parallel with AMD Bulldozer vs. Intel Sandy Bridge and newer: AMD clearly has more computational power in competing products, but is only able to utilize it in certain (uncommon) workloads. AMD is finally bringing the necessary improvements with Zen, and they need to do a similar thing with GCN.
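Rough numbers from the reference specs bear out that ~50% figure (base clocks, ballpark math):

    Fury X:     4096 shaders x 2 FLOP/clock x 1.05 GHz ≈ 8.6 TFLOPS
    GTX 980 Ti: 2816 shaders x 2 FLOP/clock x 1.00 GHz ≈ 5.6 TFLOPS
    Ratio:      8.6 / 5.6 ≈ 1.5, i.e. ~50% more raw throughput for comparable gaming performance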

In addition, Nvidia does a number of smart implementations of rendering. E.g., Maxwell and Pascal rasterize and process fragments in tiles, while AMD processes them in screen space. This allows Nvidia to use less memory bandwidth and keep all the important data in L2 cache, ensuring the GPU stays saturated. With AMD, on the other hand, the data has to travel back and forth between GPU memory and L2 cache, causing bottlenecks and cache misses. For those who are not familiar with programming GPUs: fragment shading easily takes up 60-80% or more of rendering time, so a bottleneck here makes a huge impact. This is one of the primary reasons why Nvidia can perform better with much lower memory bandwidth.
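As a CPU-side analogy (purely my own sketch, not how either driver actually works internally), compare walking a framebuffer tile by tile against walking it row by row; the tile stays cache-resident, while the row walk streams the whole buffer through the cache:

    // Cache locality: tiled vs. screen-space (row-order) traversal.
    #include <cstdint>
    #include <vector>

    constexpr int W = 1920, H = 1080, TILE = 32;

    // Row-order pass: neighbouring rows are W*4 bytes apart, so data fetched
    // for one pixel is often evicted before the pixel below it is touched.
    void shadeScreenSpace(std::vector<uint32_t>& fb) {
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                fb[y * W + x] += 1; // stand-in for fragment work
    }

    // Tiled pass: a 32x32 tile is only 4 KiB, so it fits in L1/L2 and
    // neighbouring fragments reuse cached data instead of refetching it.
    void shadeTiled(std::vector<uint32_t>& fb) {
        for (int ty = 0; ty < H; ty += TILE)
            for (int tx = 0; tx < W; tx += TILE)
                for (int y = ty; y < ty + TILE && y < H; ++y)
                    for (int x = tx; x < tx + TILE && x < W; ++x)
                        fb[y * W + x] += 1;
    }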

We also know Nvidia has a much more powerful tessellation engine, etc.

-----
R-T-B: At this moment, people are using the parts exposed by DX12 to better optimize for AMD because, frankly, there's a lot of optimizing to do compared to their DX11 renderer. There is some valid argument that async compute IS better supported on AMD's side, but it's not a valid argument for the way you are using it, as NVIDIA also supports several things AMD doesn't:
More games are optimized for AMD this time around because of the major gaming consoles.
Async compute is fully supported by Nvidia, but the advantage depends on there being unutilized GPU resources. In many cases games try to utilize the same resources for two queues, and since Nvidia is already better at saturating its GPUs, it will see "smaller improvements".
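For reference, a minimal sketch (illustrative; the function name is made up, only core Vulkan 1.0 calls are used) of how an engine looks for a dedicated compute queue family, so async work can overlap the graphics queue instead of competing with it:

    // Find a queue family with COMPUTE but not GRAPHICS capability.
    #include <vulkan/vulkan.h>
    #include <cstdint>
    #include <vector>

    int findAsyncComputeFamily(VkPhysicalDevice gpu) {
        uint32_t count = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
        std::vector<VkQueueFamilyProperties> families(count);
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

        for (uint32_t i = 0; i < count; ++i) {
            const VkQueueFlags f = families[i].queueFlags;
            // A compute-only family runs alongside the graphics queue rather
            // than sharing its submission path.
            if ((f & VK_QUEUE_COMPUTE_BIT) && !(f & VK_QUEUE_GRAPHICS_BIT))
                return static_cast<int>(i);
        }
        return -1; // no dedicated family; fall back to the graphics queue
    }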
#57
ensabrenoir
...when BOTH companies pretty much put out new cards every year... the whole future-proofing argument sorta falls flat.
#58
Ungari
efikkan: I don't know which fantasy world you live in, but since AMD released their last major architecture, Nvidia has released Maxwell and Pascal.
Pascal was introduced because Nvidia was unable to complete Volta by 2016, bringing some of the features of Volta forward. This was done primarily for the compute-oriented customers (Tesla).
There is no major disadvantage with Nvidia's architectures vs. GCN in terms of modern APIs.
-----
Pascal is Maxwell 3.0 = Paxwell
#59
Totally
RejZoR: You apparently think designs and technology planning happen 1 month before the launch of something new... Just try to remember how long AMD was planning the APUs and why they bought ATi later on. The whole planning and execution took years, and it was all happening way before they bought ATi and before we actually got the APUs.
1 month is clearly not 2 years. Yes, they designed the cards with "close-to-metal" in mind and started laying the groundwork long before, but the reality is that when the hardware arrived, the software had not. The current APIs (DirectX had just shifted from 10 to 11, and OpenGL had incremented to 4.5) didn't take advantage of or fully utilize their hardware, so they made their own. For your theory to make sense, they'd have released Mantle at the same time as GCN 1.0, not 2-3 generations later.
#60
efikkan
Ungari: Pascal is Maxwell 3.0 = Paxwell
More like a Maxwell-Volta hybrid.
But what's your point? It's still a bigger progression than GCN 1.4.
#61
Anymal
RejZoR: There is a bit of a fallacy in saying it inherently works better on AMD because it was largely derived from Mantle. Mostly because that's not true. It works better on AMD because AMD has been pursuing a closer-to-the-metal API for years. They built the GCN architecture (hardware) around it and have been enhancing it for years. They made an architecture that is inherently better at operating via such APIs in general, be it Mantle, Vulkan or DX12. DirectX 12 has nothing to do with Mantle apart from the core idea and, behold, AMD is again way better at it than NVIDIA.
Don't be such an AMD fanboy!
#62
EarthDog
phanbuey: Nvidia's philosophy is "let's make the fastest card on the most popular APIs right now". And that's what they do. AMD simply can't do that, so their PR department harps on whatever advantage they can spin, and they do that over and over again.

They did the same thing with DX 11.1 and Mantle, and honestly it's just PR and false hope for their customers. By the time these architecture advantages become material, we will already have two more arch releases.
/thread.

Though it is great news that Android will use it, that doesn't mean the PC market will adopt it. I certainly hope it does; more competition never hurts. I won't hold my breath though.
#63
NC37
This is all fine and dandy until nVidia gets serious about async compute. Till then, AMD might as well do what it can to bridge the gap.
#64
Captain_Tom
$ReaPeR$: Vulkan opens the door for Linux in general IMO, and that might turn out to be very interesting..
Oh, I completely agree. Right now Linux builds are only viable for systems $300 or less IMO. But if we could get it so 95% of games came out on Linux... it would keep Windows on its toes when it comes to gaming support, and open up PC gaming to lower-income people.
#66
Captain_Tom
phanbuey: Nvidia's philosophy is "let's make the fastest card on the most popular APIs right now". And that's what they do. AMD simply can't do that, so their PR department harps on whatever advantage they can spin, and they do that over and over again.

They did the same thing with DX 11.1 and Mantle, and honestly it's just PR and false hope for their customers. By the time these architecture advantages become material, we will already have two more arch releases.
That is just flat-out not true. I got a 7970 in 2012, and that thing stayed on top until the 290X came out. Sure, there were periods where the Titan was like 35% stronger, but soon enough I had a card playing as well as my friend's 780 (and then 780 Ti).

It was only when Maxwell and the Fury X came out that I finally felt like I no longer had an enthusiast card. Meanwhile, I had been maxing out prettier and prettier games while my friend with the 680 had to continually turn more and more settings down because he didn't have enough VRAM or shaders.

The same situation is happening now. The Fury is selling for $300 and competing with the 1070 and 1080. That means people who bought it over the 980 Ti are laughing all the way to the bank as they watch a more expensive Nvidia card start losing to the 390X! Furthermore, people building NEW PCs are buying up a lot of one-year-old Furys, because apparently they are beating 1070s in the latest games for two-thirds the cost.
#67
$ReaPeR$
Captain_Tom: Oh, I completely agree. Right now Linux builds are only viable for systems $300 or less IMO. But if we could get it so 95% of games came out on Linux... it would keep Windows on its toes when it comes to gaming support, and open up PC gaming to lower-income people.
Well yes, but that will take a long time, supposing it can be done at all.
#68
phanbuey
Captain_Tom: That is just flat-out not true. I got a 7970 in 2012, and that thing stayed on top until the 290X came out. Sure, there were periods where the Titan was like 35% stronger, but soon enough I had a card playing as well as my friend's 780 (and then 780 Ti).

It was only when Maxwell and the Fury X came out that I finally felt like I no longer had an enthusiast card. Meanwhile, I had been maxing out prettier and prettier games while my friend with the 680 had to continually turn more and more settings down because he didn't have enough VRAM or shaders.

The same situation is happening now. The Fury is selling for $300 and competing with the 1070 and 1080. That means people who bought it over the 980 Ti are laughing all the way to the bank as they watch a more expensive Nvidia card start losing to the 390X! Furthermore, people building NEW PCs are buying up a lot of one-year-old Furys, because apparently they are beating 1070s in the latest games for two-thirds the cost.
Interesting post for a variety of reasons:

The 680 ended up running out of RAM. The 680 4GB is still generally faster than the 7970 (but yes, the 7970 ended up being faster once the framebuffer ran out) over the course of 4 years. There have been 2-3 generations of enthusiast cards since then.

Couple of things:
- "Soon enough I had a card playing as well as my friends 780 and 780ti" - sorry, no matter what you did to that 7970 it did not play as well as a 780(ti) as it was/is 15-20% slower at stock. If you consider overclocking then the 780's really pull away.

- There is no card competing with the 1080 (I wish there was); even with all the boosts, it's still 15-20% faster than the Fury X.
Captain_Tom: ...Furthermore, people building NEW PCs are buying up a lot of one-year-old Furys, because apparently they are beating 1070s in the latest games for two-thirds the cost.
Is this the same friend that bought the 2GB 680? Because he didn't learn his lesson... that 4GB framebuffer isn't going to be enough if he wants to hold onto it for 4 years...

Bottom line:
Hardware will go obsolete. Buy a card based on overall performance. The whole idea that "ALL THE GAMES ARE GOING TO COME OUT ON DX<whatever> AND YOU NEED TO FUTURE-PROOF" is trash. No, your Fury X isn't going to beat a 1080; yes, you will need to sell a kidney to afford a high-end card; and yes, it will drop in value faster than that brand-new yellow Corvette you just drove off the lot.
#69
Camm
Oh, it's amazing watching the Nvidia fanbois come out of the woodwork as their cards aren't performing :p.

Trolling aside though, all but the most strident knew this was going to happen. Kepler and up have been heavily optimised for DX11/OpenGL pathways and are missing a good chunk of compute (which simply wasn't needed for gaming back in 2011). Pascal is simply on the cusp of the change back to compute-oriented architectures being preferential. The more interesting card will be Volta, to see if Nvidia can realign itself to the new paradigm.

That being said, the wind is certainly blowing against Nvidia. If it doesn't get a design win soon and regain market share (remembering consoles here), it will see its influence on pathways continue to decrease as developers target the GCN architecture as a commonality between platforms, in a sort of reverse of how AMD got fucked over with GameWorks.

(For the record, I own a 1080. Can't argue with it being the fastest card out with no competition from AMD, but IMO it'll date pretty quickly. And with most AAA titles this season coming out with Vulkan/DX12, Nvidia better hope AMD keeps dragging its feet on its high-end cards, as going off the RX 480, it continues to look like the better buy over the 1060 every time a new AAA title comes out.)
#70
Nkd
bug: Samsung isn't looking to buy the whole SoC, just the GPU.

How on earth is AMD's efficiency fine when the RX 480 eats as much power as the GTX 1070?

10%? Wtf dude? www.techpowerup.com/reviews/MSI/RX_480_Gaming_X/24.html
Reading comprehension. Did you pay attention to the Vulkan part? Or did you totally ignore it to prove your point?
#71
Captain_Tom
phanbuey: Interesting post for a variety of reasons:

Couple of things:
- "Soon enough I had a card playing as well as my friend's 780 and 780 Ti" - sorry, no matter what you did to that 7970, it did not play as well as a 780 (Ti), as it was/is 15-20% slower at stock. If you consider overclocking, then the 780s really pull away.
Uhhh have you not looked at benchmarks for the past year? I genuinely encourage you to go read them and then come back.


Ok you back? Good.

1) The 7970 overclocks better than anything that has been released since then. My 7970 ran at 1220/1840, my brother's 7950 ran at 1120/1800, and all of my crypto-mining 7950s ran at 1100+/1800+. Those are 40% overclocks lmao! My 7970 benches as well as a 980 in Deus Ex: MD and BF1. So drop that argument here.

2) 2-3 generations? You completely missed what I was saying. I said that within a year of the 7970's launch it was ALREADY beating the 680 by 10-20% on average. Most people keep their cards for 2-3 years in my experience.

Furthermore, just because something is 1-2 generations newer doesn't make a difference. Everyone CONSTANTLY complains about AMD's recent trend of re-branding old GPUs. I will admit that I think it is stupid too, but can you blame them? Radeon is a fraction of Nvidia's size. If they can sell the 7970 two years later and have it compete with the 970, they will lmao. Hence why I just bought a Fury for $310: it beats the 1070 in TODAY's games. That's just stupid.
#73
eidairaman1
The Exiled Airman
This is good; it lets devs realize AMD is out there.
#74
RejZoR
Anymal: Don't be such an AMD fanboy!
That's the most hilarious shit, when you call someone an AMD fanboy and they own an NVIDIA card... Epic fail.
#75
eidairaman1
The Exiled Airman
RejZoR: That's the most hilarious shit, when you call someone an AMD fanboy and they own an NVIDIA card... Epic fail.
FNGs-smh