Friday, June 26th 2015

AMD Didn't Get the R9 Fury X Wrong, but NVIDIA Got its GTX 980 Ti Right

This has been a roller-coaster month for high-end PC graphics. The timing of NVIDIA's GeForce GTX 980 Ti launch had us putting the finishing touches on its review with our bags for Taipei still not packed. When it launched, the GTX 980 Ti set AMD both a performance target and a price target. Then began a three-week wait for AMD to launch its Radeon R9 Fury X graphics card. The dance is done, the dust has settled, and we know who has won: nobody. AMD didn't get the R9 Fury X wrong, but NVIDIA got its GTX 980 Ti right. At best, this stalemate yielded a 4K-capable single-GPU graphics option from each brand at $650. You already had those in the form of the $650-ish Radeon R9 295X2, or a pair of GTX 970 cards. Those with no plans for a 4K display already had great options in the form of the GTX 970 and the price-cut R9 290X.

The Radeon R9 290 series launch from Fall 2013 stirred up the high-end graphics market in a big way. The $399 R9 290 made NVIDIA look comically evil for asking $999 for the card it beat, the GTX TITAN; the R9 290X, meanwhile, remained the fastest single-GPU option, at $550, until NVIDIA launched the $699 GTX 780 Ti to get people back to paying through their noses for the extra performance. Then there were two UFO sightings in the form of the GTX TITAN Black and the GTX TITAN-Z, which made no tangible contributions to consumer choice. Sure, they gave you full double-precision floating point (DPFP) performance, but DPFP is of no use to gamers. So what could the calculation at AMD and NVIDIA have been as June 2015 approached? Here's a theory.
Image credit: Mahspoonis2big, Reddit

AMD's HBM Gamble
The "Fiji" silicon is formidable. It made performance/Watt gains over "Hawaii," despite a lack of significant shader architecture performance improvements between GCN 1.1 and GCN 1.2 (at least nowhere of the kind between NVIDIA's "Kepler" and "Maxwell.") AMD could do a 45% increase in stream processors for the Radeon R9 Fury X, at the same typical board power as its predecessor, the R9 290X. The company had to find other ways to bring down power consumption, and one way to do that, while not sacrificing performance, was implementing a more efficient memory standard, High Bandwidth Memory (HBM).

Implementing HBM, right now, is not as easy as GDDR5 was when it was new. HBM is more efficient than GDDR5, but it trades clock speed for bus width, and a wider bus entails more pins (connections), which would have meant an insane amount of PCB wiring around the GPU in AMD's case. The company had to co-develop the industry's first mass-producible interposer (a silicon die that acts as a substrate for other dies), relocate the memory to the GPU package, and still make do with the design limitation of first-generation HBM capping out at 8 Gb (1 GB) per stack, or 4 GB across the four stacks feeding its 4096-bit wide memory bus. This was a bold move.
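To put the clock-speed-for-bus-width trade in numbers, here is a minimal sketch; the per-pin data rates and stack configuration are assumptions based on first-generation HBM and "Hawaii"-era GDDR5 specifications rather than anything stated above.

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s: bus width x per-pin data rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# Assumed figures: first-gen HBM runs ~1 Gbps per pin over four 1024-bit stacks
# (1 GB each), while "Hawaii"-class GDDR5 runs ~5 Gbps per pin on a 512-bit bus.
hbm_bw   = peak_bandwidth_gb_s(4 * 1024, 1.0)  # ~512 GB/s, 4 GB total capacity
gddr5_bw = peak_bandwidth_gb_s(512, 5.0)       # ~320 GB/s

print(f"First-gen HBM (4096-bit @ 1 Gbps/pin): {hbm_bw:.0f} GB/s")
print(f"GDDR5 (512-bit @ 5 Gbps/pin):          {gddr5_bw:.0f} GB/s")
```

Even at roughly a fifth of the per-pin data rate, the 4096-bit bus comes out well ahead on bandwidth, and it does so at far lower memory clocks, which is where the efficiency argument comes from.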

Reviews show that 4 GB of HBM isn't Fiji's Achilles' heel. The card competes in the same league as the 6 GB GTX 980 Ti at 4K Ultra HD (the resolution most taxing on video memory), coming in just 2% slower at that resolution, and its performance/Watt is significantly higher than the R9 290X's. We reckon this outcome would have been impossible had AMD never gambled on HBM and instead stuck to the 512-bit GDDR5 interface of "Hawaii," just as it stuck to a similar front-end and render back-end configuration (the front-end resembles that of "Tonga," while the ROP count is the same as "Hawaii").

NVIDIA Accelerated GM200
NVIDIA's big "Maxwell" silicon, the GM200, wasn't expected to come out as soon as it did. The GTX 980 and the 5 billion-transistor GM204 silicon are just 9 months old in the market, NVIDIA has sold a lot of these; and given how the company milked its predecessor, the GK104, for a year in the high-end segment before bringing out the GK110 with the TITAN; something similar was expected of the GM200. Its March 2015 introduction - just six months following the GTX 980 - was unexpected. What was also unexpected, was NVIDIA launching the GTX 980 Ti, as early as it did. This card has effectively cannibalized the TITAN X, just 3 months post its launch. The GTX TITAN X is a halo product, overpriced at $999, and hence not a lot of GM200 chips were expected to be in production. We heard reports throughout Spring, that launch of a high-volume, money-making SKU based on the GM200 could be expected only after Summer. As it turns out, NVIDIA was preparing a welcoming party for the R9 Fury X, with the GTX 980 Ti.

The GTX 980 Ti was more likely designed with R9 Fury X performance, rather than a target price, as the pivot. The $650 price tag is likely something NVIDIA came up with later, after achieving a performance lead over the R9 Fury X by stripping the GM200 down as far as it could while still getting there. How NVIDIA figured out R9 Fury X performance in advance is anybody's guess. It's more likely that the R9 Fury X would have been priced differently if the GTX 980 Ti weren't around than the other way around.

Who Won?
Short answer: nobody. The high-end graphics card market isn't as shaken up as it was right after the R9 290 series launch. The "Hawaii" twins held their own, and continued to offer great bang for the buck, until NVIDIA stepped in with the GTX 970 and GTX 980 last September. $300 doesn't get you much more than it did a month ago. At least now you have a choice between the GTX 970 and the R9 390 (which appears to have caught up); at $430, the R9 390X offers competition to the $499 GTX 980; and then there are leftovers from the previous generation, such as the R9 290 series and the GTX 780 Ti, but these aren't really the high-end we were looking for. It was fun to watch the $399 R9 290 dethrone the $999 GTX TITAN in Fall 2013, as people upgraded their rigs for Holiday 2013. We didn't see that kind of spectacle this month. There is a silver lining, though: there is a rather big gap between the GTX 980 and GTX 980 Ti just waiting to be filled.

Hopefully, July will churn out something exciting (and bona fide high-end) around the $500 mark.

223 Comments on AMD Didn't Get the R9 Fury X Wrong, but NVIDIA Got its GTX 980 Ti Right

#201
Casecutter
HumanSmoke: Well, I did mention it in passing, but most people are fixated on gaming rather than the industry as a whole. Which is strange considering that the larger picture dictates what products, how many, and in what timeframe we see future Radeon branded cards.
Exactly. This thread went off the rails within the first page. Honestly, I hadn't dug through all these posts; I skipped to the end by about the second page. But yes, the issue is that neither wants to be stuck with inventory, and to accomplish that, having other professional SKUs does add to the profit margins to obtain a payback.
HumanSmoke: You seem to have forgotten the Quadro line entirely. The M6000 is currently in the channel, while it seems it will be joined by further Maxwell 2 variants when this year's SIGGRAPH rolls around.
You stay up on that side of the Green team; it's hard to forget something that I've never read about. It appears to have been brushed over in its mid-March announcement, bowled over by the Titan X, so I suppose I see how I missed that. The AnandTech article was one of a handful that mentioned it in any detail. Come on... SIGGRAPH was from an article yesterday, super big news on the lower end, not about these big chips. Sidetrack much?
HumanSmoke: If history is any indicator, the Nvidia card at a lower price will be the 980 Ti. Nvidia still has to release a full-die GM200 GTX-numbered series card (as seen with the 780 Ti / Titan Black). I would assume that Nvidia's product release cadence (in the face of minimal urgency now that the Fury X has been realized) would be to ready it for the high-sales-volume fourth-quarter holiday season, since both companies have shot their wad, architecturally speaking, for the foreseeable future.
So you'd say they run the 980 Ti MSRP down in the stank and work a full-fledged GM200 into a new numbered GTX above it. Then there are surely GM200s that fall below the current 980 Ti gelding's spec; do we see how they perhaps need to use those?
HumanSmoke: The enthusiast sector, whether single card or multi-GPU, has never been a significant proportion of the buying public. Tech sites tend to give a distorted view of uptake. Go to a mainstream site and uptake looks better than in any random gathering of people; go to a specialist enthusiast/OC/benchmark/hard-mod site and you'd swear that entry level is a (minimum of one) top-tier card and multiple displays and/or 4K. Sales of ultra-enthusiast (say $600+) cards might be on the order of tens of thousands, maybe six figures if you're lucky, balanced against a total market of fifty million in the year.
Exactly, a lot of this engineering progression hinges on understanding the market expansion. Not being at the right point of the curve to offer enough of the right product, or offering too many of the wrong ones, can stick you with costly inventory (aka Tahiti). I do say Nvidia is good at reading the tea leaves and getting the mix right.

While we might be seeing some enthusiasts from just a year or so ago starting to feel the "drum-beat" of 4K, I see it becoming more of a reality at the next node shrink. I think there are perhaps gamers who might try 3x 1440p for Eyefinity/Surround if the price were right; that's perhaps a better hook to sell these top-end cards.

What I was postulating is that mainstream gamers need to see the path for transitioning from 1080p. It seems that right now panel manufacturers aren't, or can't, move higher resolutions and their subsequent tech down in price, so the whole thing goes stagnant. Sure, the GPU side can offer the hardware (what they can within 28 nm); however, folks aren't going to buy into that if they can't see a monitor upgrade as obtainable in the near term. I really thought panel manufacturers would've run with the whole Adaptive-Sync thing to aid in some new market push, but it (or they) don't seem to be generating that momentum. A mainstream gamer (at 1080p) should be able, right now, to pick up a card in the $280-400 range, and then see in the relatively near future a decent, though generic, 1440p Adaptive-Sync panel for $350-400. From where I'm watching, I'm not sensing a move in that direction.
#202
HumanSmoke
Casecutter: You stay up on that side of the Green team; it's hard to forget something that I've never read about. It appears to have been brushed over in its mid-March announcement, bowled over by the Titan X, so I suppose I see how I missed that. The AnandTech article was one of a handful that mentioned it in any detail. Come on... SIGGRAPH was from an article yesterday, super big news on the lower end, not about these big chips. Sidetrack much?
Meanwhile, the M6000 was the first actual hard evidence regarding GM200 to surface (that was a fun thread; it had it all: bogus charts claimed as real, claims of Fiji being on GloFo's 20nm low-power process, Fiji stomping over everything), and the first pictures and specs. As for side-tracking, I believe all I was doing was correcting a factual error on your part, as you were the one that brought up GM200's supposed lack of pro-graphics parts.
#203
Casecutter
HumanSmoke: As for side-tracking, I believe all I was doing was correcting a factual error on your part, as you were the one that brought up GM200's supposed lack of pro-graphics parts.
On the GM200, it's nice to be enlightened. As to side-tracking, your bringing up the whole SIGGRAPH thing seemed like it was in there to send this back down some rabbit hole.
Casecutter: SIGGRAPH was from an article yesterday, super big news on the lower end, not about these big chips. Sidetrack much?
#204
Captain_Tom
Frick: In 12 of the 22 games of W1z's charts it is actually faster than the 980 Ti. In that chart I count 12 games; I haven't bothered to see if they are the same games.

www.guru3d.com/articles_pages/amd_radeon_r9_fury_x_review,11.html

I haven't watched the video, but it is the fastest in cherry-picked tests, which is the point of PR, and as far as I'm concerned they haven't told lies. Picking on a company for PR speak is like hating the sky because it's blue, or the sun because it shines.
TechPowerUp's benchmarks always skew towards Nvidia since they include Batman/AC GameWorks garbage. In the Tom's Hardware review, the Titan X and Fury X benched exactly the same: the Titan X won by 1% at 1440p, and the opposite happened at 4K.
#205
newtekie1
Semi-Retired Folder
Captain_Tom: TechPowerUp's benchmarks always skew towards Nvidia since they include Batman/AC GameWorks garbage. In the Tom's Hardware review, the Titan X and Fury X benched exactly the same: the Titan X won by 1% at 1440p, and the opposite happened at 4K.
TPU reviews also include games like BioShock Infinite and Far Cry 4, which are notoriously AMD-optimized. So I'd say the TPU reviews are the most balanced.
#206
rtwjunkie
PC Gaming Enthusiast
Captain_Tom: TechPowerUp's benchmarks always skew towards Nvidia since they include Batman/AC GameWorks garbage. In the Tom's Hardware review, the Titan X and Fury X benched exactly the same: the Titan X won by 1% at 1440p, and the opposite happened at 4K.
Do you read the reviews? W1zzard routinely mentions, "in the interest of fairness," that he turns off GameWorks and other stuff that would be preferential.
#207
R-T-B
rtwjunkie: Do you read the reviews? W1zzard routinely mentions, "in the interest of fairness," that he turns off GameWorks and other stuff that would be preferential.
Devil's advocate here...

It's still probably fair to assume the game was better tested/optimized on NVIDIA hardware, though, if they include that at all.
#208
EarthDog
There really isn't anything a website can do about it, honestly. You're juggling a choice between AAA/relevant titles, making sure you have a decent cross-section of genres, and what people want to see (after all, this is about hits). Do you add making sure to get a fair balance of TWIMTBP games and AMD-sponsored ones? If you do that, can you still get a good cross-section of genres, AAA titles, and games people care about?
#209
moproblems99
I want to see benches of games that I play. I will be mad at the devs for choosing to make them GameWorks or whatever.
#210
arbiter
Captain_Tom: TechPowerUp's benchmarks always skew towards Nvidia since they include Batman/AC GameWorks garbage. In the Tom's Hardware review, the Titan X and Fury X benched exactly the same: the Titan X won by 1% at 1440p, and the opposite happened at 4K.
rtwjunkie: Do you read the reviews? W1zzard routinely mentions, "in the interest of fairness," that he turns off GameWorks and other stuff that would be preferential.
If people were to read most reviews instead of skipping everything, they'd see that all sites turn off GameWorks, PhysX, TressFX, etc. So really, your attempt to make excuses for AMD being slower is just that: an excuse.
newtekie1: TPU reviews also include games like BioShock Infinite and Far Cry 4, which are notoriously AMD-optimized. So I'd say the TPU reviews are the most balanced.
BF4 and Alien are AMD games as well; BF4 has AMD's proprietary, locked-up Mantle API that isn't even allowed to run on Nvidia cards. Hrm, reminds me of people complaining about PhysX, doesn't it?
#211
rtwjunkie
PC Gaming Enthusiast
moproblems99: I want to see benches of games that I play. I will be mad at the devs for choosing to make them GameWorks or whatever.
That is hilarious. To actually be mad at devs because not all choose to work with AMD? To be sure, in a perfect world none would work with either camp. But that's reality, and you being "mad" isn't going to change that. There are real things in the world to be mad about.
#212
moproblems99
rtwjunkie: That is hilarious. To actually be mad at devs because not all choose to work with AMD? To be sure, in a perfect world none would work with either camp. But that's reality, and you being "mad" isn't going to change that. There are real things in the world to be mad about.
Notice the "or whatever." I don't give a rat's ass if it is GameWorks or whatever pile of shit program AMD has. It should be neither. We don't need more segregation in the PC gaming community. We already get shitty console ports half the time; the last thing we need is to put down a pretty penny on a GPU and have it handicapped in games because it isn't red- or green-branded. So I will be "mad" at the devs; forgive me for using a figure of speech. Personally, I don't give a flying fuzz about whose brand is whose. They (AMD and Nvidia) both think we (gamers) are stupid. Nvidia lies to our faces and expects us to just swallow it and ask for more, and AMD puts out subpar products and expects us to be happy about it.
#213
newtekie1
Semi-Retired Folder
arbiter: BF4 and Alien are AMD games as well; BF4 has AMD's proprietary, locked-up Mantle API that isn't even allowed to run on Nvidia cards. Hrm, reminds me of people complaining about PhysX, doesn't it?
Oh, and don't forget Tomb Raider with TressFX.
moproblems99: Notice the "or whatever." I don't give a rat's ass if it is GameWorks or whatever pile of shit program AMD has. It should be neither. We don't need more segregation in the PC gaming community. We already get shitty console ports half the time; the last thing we need is to put down a pretty penny on a GPU and have it handicapped in games because it isn't red- or green-branded. So I will be "mad" at the devs; forgive me for using a figure of speech. Personally, I don't give a flying fuzz about whose brand is whose. They (AMD and Nvidia) both think we (gamers) are stupid. Nvidia lies to our faces and expects us to just swallow it and ask for more, and AMD puts out subpar products and expects us to be happy about it.
I have never once, not one time ever, started to play a game and felt like it was handicapped because I was using a card from the wrong GPU maker. If I can't turn on some eye candy that has no bearing on the actual gameplay, I really don't give a shit.

At the same time, game devs that do work with one team or the other still make their games completely playable on the other's cards. They'd be stupid not to. The other team's cards might get a few FPS lower, but it has never been to the point that I said "damn, this game is crippled because my card is made by team XYZ." It just doesn't happen.

Finally, don't even try to say nVidia is the only one outright lying to our faces. AMD has lied a lot more, and said much worse lies, than nVidia. But for some reason they just seem to get a pass...
#214
HumanSmoke
Casecutter: On the GM200, it's nice to be enlightened. As to side-tracking, your bringing up the whole SIGGRAPH thing seemed like it was in there to send this back down some rabbit hole.
No, it was to demonstrate that the M6000 won't be the only Maxwell pro-graphics card, because, as you well know, if I only supply a single example, the almost inevitable retort (not necessarily from you, but once burned, twice shy, as they say) is: "But that's ONLY 1 CARD!!! That's only ONE away from ZERO!!!"
#215
john_
arbiter: BF4 and Alien are AMD games as well; BF4 has AMD's proprietary, locked-up Mantle API that isn't even allowed to run on Nvidia cards. Hrm, reminds me of people complaining about PhysX, doesn't it?
Typical fanboy comment.

Creating physics effects with PhysX usually means that there is NOT going to be a second physics engine available that can offer the same quality of effects using the CPU and/or the GPU. That means that a person who does not use an Nvidia GPU sees something inferior on his screen.

Even if that person has a secondary Nvidia card, he can't use it, because Nvidia forbids it. Patches that removed the lock Nvidia implements in its drivers showed that using a secondary Nvidia card for physics was 100% working.

On the other hand, Mantle does NOT affect the quality for someone running the game with DX11. What the person running the DX11 version sees on his screen is the same as what the person running Mantle sees on his screen. No one is treated as a second-class customer.

And of course Nvidia would never have agreed to support an AMD tech with AMD's logo on it. Instead it does support DX12 and, I believe, Vulkan, so in a way it does have an equivalent to Mantle. Nvidia, on the other hand, never promoted or supported the idea of a physics engine that would take advantage of what PhysX offers today and let someone else make it run on all GPUs. Instead they create more proprietary tech through GameWorks that can guarantee that whoever is not paying for Nvidia hardware will be treated as a second-class customer, with inferior quality and/or performance in the game.
newtekie1: Oh, and don't forget Tomb Raider with TressFX.
Runs perfectly on Nvidia GPUs.
#216
rtwjunkie
PC Gaming Enthusiast
john_: Runs perfectly on Nvidia GPUs.
Not really. I had to turn that "feature" off just to adequately play the game, just as surely as I have to turn off Nvidia HairWorks.
#217
john_
It runs the same on Nvidia or AMD GPUs; that's what I meant by "perfectly."
#218
rtwjunkie
PC Gaming Enthusiast
But it doesn't run perfectly. Maybe that's what AMD PR says, but it simply isn't true.
#219
john_
rtwjunkie: But it doesn't run perfectly. Maybe that's what AMD PR says, but it simply isn't true.
Stop being a fanboy, just for a moment. It will do you good. Believe me.
#220
rtwjunkie
PC Gaming Enthusiast
john_: Stop being a fanboy, just for a moment. It will do you good. Believe me.
Actually, not even close to a fanboy here. And were you a more regular contributor throughout the forums, you would know that. You "claim" that TressFX works great on Nvidia. My ACTUAL use says it doesn't. Does that bother me? Not in the least. NVIDIA HairWorks doesn't work great either, so I don't use THAT. Doesn't bother me in the least.

I merely object when people like you spout a PR line about things being available for all equally, when they really aren't. However, ALL those effects from both sides are gimmicks, and not necessary for good, fun gameplay, so not having them doesn't bother me. I just object to the statement you made about TressFX being just as usable on Nvidia cards.
#221
moproblems99
newtekie1: I have never once, not one time ever, started to play a game and felt like it was handicapped because I was using a card from the wrong GPU maker. If I can't turn on some eye candy that has no bearing on the actual gameplay, I really don't give a shit.

At the same time, game devs that do work with one team or the other still make their games completely playable on the other's cards. They'd be stupid not to. The other team's cards might get a few FPS lower, but it has never been to the point that I said "damn, this game is crippled because my card is made by team XYZ." It just doesn't happen.

Finally, don't even try to say nVidia is the only one outright lying to our faces. AMD has lied a lot more, and said much worse lies, than nVidia. But for some reason they just seem to get a pass...
Holy crap, why does everyone have to get so uptight about GPU makers and take everything so literally? Just because I didn't list AMD PR lies in my post doesn't mean I don't think they do or have. Stop drinking the Kool-Aid.
#222
arbiter
john_: On the other hand, Mantle does NOT affect the quality for someone running the game with DX11. What the person running the DX11 version sees on his screen is the same as what the person running Mantle sees on his screen. No one is treated as a second-class customer.
You can't say that it DIDN'T, since some of the game devs had to be dedicated to working on that instead of other things, so you can't say for sure it didn't affect DX11 users.
john_: Runs perfectly on Nvidia GPUs.
john_: It runs the same on Nvidia or AMD GPUs; that's what I meant by "perfectly."
I remember when that game was first released, TressFX ran like complete dog sh**, no thanks in part to Nvidia not getting an early copy of the game to work on the issue (if this happened to AMD, we all know where they would put the blame). TressFX used OpenCL, which AMD is better in, while HairWorks uses DX11 tessellation, which Nvidia is miles ahead of AMD in. Both used standards for it, so stop whining about it; Nvidia used standards, which people complained about Nvidia not using. Sad they get attacked for using a standard now.
#223
john_
arbiter: I remember when that game was first released, TressFX ran like complete dog sh**, no thanks in part to Nvidia not getting an early copy of the game to work on the issue (if this happened to AMD, we all know where they would put the blame).
Nvidia had a working driver 3 days after TressFX. The nature of TressFX gave them the possibility to optimize in no time. On the other hand, it is convenient that whatever Nvidia creates runs like crap on anything else. It took them years (I wonder why) to offer a PhysX CPU version that doesn't shoot itself in the foot. The first versions were not even using SSE.