Friday, June 26th 2015
AMD Didn't Get the R9 Fury X Wrong, but NVIDIA Got its GTX 980 Ti Right
This has been a roller-coaster month for high-end PC graphics. The timing of NVIDIA's GeForce GTX 980 Ti launch had us putting the finishing touches on its review with our bags for Taipei still unpacked. When it launched, the GTX 980 Ti set AMD both a performance target and a price target. Then began a three-week wait for AMD to launch its Radeon R9 Fury X graphics card. The dance is done, the dust has settled, and we know who has won: nobody. AMD didn't get the R9 Fury X wrong, but NVIDIA got its GTX 980 Ti right. At best, this stalemate yielded a 4K-capable single-GPU graphics option from each brand at $650. You already had 4K-capable options at that price, in the form of the $650-ish Radeon R9 295X2 or a pair of GTX 970 cards. Those with no plans for a 4K display already had great options in the GTX 970 and the price-cut R9 290X.
The Radeon R9 290 series launch from Fall 2013 stirred up the high-end graphics market in a big way. The $399 R9 290 made NVIDIA look comically evil for asking $999 for the card it beat, the GTX TITAN; the R9 290X, meanwhile, remained the fastest single-GPU option at $550, until NVIDIA launched the $699 GTX 780 Ti to get people back to paying through their noses for the extra performance. Then there were two UFO sightings in the form of the GTX TITAN Black and the GTX TITAN-Z, which made no tangible contributions to consumer choice. Sure, they gave you full double-precision floating point (DPFP) performance, but DPFP is of no use to gamers. So what could have been the calculation at AMD and NVIDIA as June 2015 approached? Here's a theory.

Image credit: Mahspoonis2big, Reddit
AMD's HBM Gamble
The "Fiji" silicon is formidable. It made performance/Watt gains over "Hawaii," despite a lack of significant shader architecture performance improvements between GCN 1.1 and GCN 1.2 (at least nowhere of the kind between NVIDIA's "Kepler" and "Maxwell.") AMD could do a 45% increase in stream processors for the Radeon R9 Fury X, at the same typical board power as its predecessor, the R9 290X. The company had to find other ways to bring down power consumption, and one way to do that, while not sacrificing performance, was implementing a more efficient memory standard, High Bandwidth Memory (HBM).
Implementing HBM, right now, is not as easy as GDDR5 was when it was new. HBM is more efficient than GDDR5, but it trades clock speed for bus width, and a wider bus entails more pins (connections), which would have meant an insane amount of PCB wiring around the GPU in AMD's case. The company had to co-develop the industry's first mass-producible interposer (a silicon die that acts as a substrate for other dies), relocate the memory onto the GPU package, and still make do with the design limitation of first-generation HBM capping out at 8 Gb (1 GB) per stack, or 4 GB across the four stacks on AMD's silicon, after having laid down a 4096-bit wide memory bus. This was a bold move.
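To put that clock-for-width trade in rough numbers, here's a minimal back-of-the-envelope sketch (ours, not part of the original piece), assuming the published figures of roughly 1 Gbps per pin for Fiji's HBM and 5 Gbps per pin for Hawaii's GDDR5:

```python
# Back-of-the-envelope peak-bandwidth comparison (not from the article).
# Assumed figures: Fiji's HBM at ~1 Gbps per pin on a 4096-bit bus,
# Hawaii's GDDR5 at 5 Gbps per pin on a 512-bit bus.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s: (pins * per-pin bit rate) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

fiji_hbm = bandwidth_gb_s(4096, 1.0)      # ~512 GB/s at a modest 500 MHz DDR clock
hawaii_gddr5 = bandwidth_gb_s(512, 5.0)   # ~320 GB/s despite a far higher memory clock

print(f"Fiji HBM:     {fiji_hbm:.0f} GB/s")
print(f"Hawaii GDDR5: {hawaii_gddr5:.0f} GB/s")
```

Even at a fraction of the memory clock, the 4096-bit bus comes out well ahead of Hawaii's 320 GB/s, which is the whole point of trading clock speed for width.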
Reviews show that 4 GB of HBM isn't Fiji's Achilles' heel. The card still competes in the same league as the GTX 980 Ti and its 6 GB of memory at 4K Ultra HD (the resolution most taxing on video memory), where it is just 2% slower. Its performance/Watt is also significantly higher than the R9 290X's. We reckon this outcome would have been impossible had AMD not gambled on HBM and instead stuck with the 512-bit GDDR5 interface of "Hawaii," just as it stuck with a similar front-end and render back-end configuration (the front-end is similar to that of "Tonga," while the ROP count is the same as "Hawaii's").
NVIDIA Accelerated GM200
NVIDIA's big "Maxwell" silicon, the GM200, wasn't expected to come out as soon as it did. The GTX 980 and the 5 billion-transistor GM204 silicon are just 9 months old in the market, NVIDIA has sold a lot of these; and given how the company milked its predecessor, the GK104, for a year in the high-end segment before bringing out the GK110 with the TITAN; something similar was expected of the GM200. Its March 2015 introduction - just six months following the GTX 980 - was unexpected. What was also unexpected, was NVIDIA launching the GTX 980 Ti, as early as it did. This card has effectively cannibalized the TITAN X, just 3 months post its launch. The GTX TITAN X is a halo product, overpriced at $999, and hence not a lot of GM200 chips were expected to be in production. We heard reports throughout Spring, that launch of a high-volume, money-making SKU based on the GM200 could be expected only after Summer. As it turns out, NVIDIA was preparing a welcoming party for the R9 Fury X, with the GTX 980 Ti.
The GTX 980 Ti was more likely designed with R9 Fury X performance, rather than a target price, as the pivot. The $650 price tag is likely something NVIDIA came up with later, after achieving a performance lead over the R9 Fury X by stripping down the GM200 as far as it could while still getting there. How NVIDIA figured out the R9 Fury X's performance in advance is anybody's guess. It is more likely that the price of the R9 Fury X would have been different had the GTX 980 Ti not been around, than the other way around.
Who Won?
Short answer: nobody. The high-end graphics card market isn't as shaken up as it was right after the R9 290 series launch. The "Hawaii" twins held their own and continued to offer great bang for the buck until NVIDIA stepped in with the GTX 970 and GTX 980 last September. $300 doesn't get you much more than it did a month ago. At least now you have a choice between the GTX 970 and the R9 390 (which appears to have caught up); at $430, the R9 390X offers competition to the $499 GTX 980; and then there are leftovers from the previous generation, such as the R9 290 series and the GTX 780 Ti, but these aren't really the high-end we were looking for. It was a gleeful sight to watch the $399 R9 290 dethrone the $999 GTX TITAN in Fall 2013, as people upgraded their rigs for the holidays. We didn't see that kind of spectacle this month. There is a silver lining, though: a rather big gap between the GTX 980 and GTX 980 Ti is just waiting to be filled.
Hopefully, July will churn out something exciting (and bona fide high-end) around the $500 mark.
223 Comments on AMD Didn't Get the R9 Fury X Wrong, but NVIDIA Got its GTX 980 Ti Right
Also, 4K gaming is still too early (IMHO), and I am guessing most PC gamers play their games on a 21-27" desktop monitor, 2 feet from their face, instead of sitting on a couch 10 feet from the TV. 4K makes no sense unless you're playing games on a 50"+ screen.
The GTX 980 Ti is nothing to brag about, because it doesn't have all its processors enabled (it's salvage: GM200 dies filtered out of TITAN X and Quadro M6000 production). The TITAN X is competitive, but it is made for DirectX 11 and is an excessively expensive show-off. Sorry NVIDIA, you're so far behind in technology and so far ahead on price. :shadedshu:
I want to see a dual-GPU Fury X card option on the market as soon as possible, at a normal price. :)
$650 in 2006 is equal to $766
$830 in 2006 is equal to $979
This is according to the Bureau of Labor Statistics.
So... not all that bad. :)
The point I have long (going on three years) contended is that 28nm was never going to get us to any sustainable level of 4K, nor 1080p120; it was always going to be about a good 1440p experience (and it has pissed me off to no end that these solutions are being sold touting 4K). It's a combination of many factors: bandwidth, buffer size, the power requirements of units/clocks needed to make it feasible to scale throughout the life of this console generation... etc. We were always destined to get painfully close, although I very much thought both companies would land closer to what AMD achieved (with Hawaii/Fiji) than to what nvidia pulled off with GM204/GM200 (overclocked; absolute performance). There is certainly an argument to be made for those products as we wait for the next generation, but the fact remains they don't quite reach the next threshold per segment.
That all said, that next generation is where things get real: 4K30 or 4K60 sustainable on a single card without the fuss of multiple cards. Large buffers. Small card sizes; the performance of GPUs lining up closer to the RAM configurations in consoles, scaled across resolutions (even if it shouldn't have to be that way), per segment... power and price.
For geeks like me, I am so completely excited by the amount of bandwidth/floating-point performance these cards should bring. Why, beyond gaming? madVR.
I'm (within a realistic budget) an image snob... I want my media to look good. The idea of what these cards should be able to achieve (remember, multi-card doesn't work for real-time image processing) is tremendously exciting, and should be leaps and bounds better than what is currently possible with most consumer cards. Piled on top of video blocks that actually decode highly compressed (i.e. realistically streamable) VP8/9 (plastic) and H.264/H.265 (noisy) codecs, the idea remains in my head that scaling/processing many different aspects of video, even with FRC, should be possible while all looking good.
It's all about the WHOLE package, and this generation just didn't get there compared to what they want us to believe. It's not AMD's nor nvidia's fault; it's a process limitation. If this month of launches has me grateful for anything, it's simply because we're that much closer to 14/16nm. I've said it before, but I'll say it again: it doesn't matter what kind of user you are; high or low rez, high or low framerate, simple IoT or hardcore gamer/video snob; 2016 is going to be one hell of a year. The convergence of technology that should be possible is going to be nothing short of amazing.
AMD have released a card at the same price point which ultimately isn't any faster than a stock reference 980 Ti, using a huge chip with cutting-edge HBM and an AIO cooler, while offering less VRAM in the process. Maxwell is still more efficient with GDDR5, and has the benefits of HBM's power savings still to come, let alone the new process node.
If the rumours of an $800+ price tag were even remotely true before the 980 Ti dropped, then it's Nv that have done consumers a favour; whether that really is at AMD's expense, of course, remains to be seen.
thanks @btarunr :toast:
:lovetpu:
They flat-out over-hyped this card, to the point of flat-out lying about it. They put out so much hype and false information that there was no way it could be anything but disappointing.
Also, their ego is making the price of this card higher than it should be, and the card disappoints at this price.
Their marketing killed them. If they had said from the beginning that they were putting out a card that performs almost exactly the same as the 980 Ti at 4K, and had launched the Fury X at a price $50 cheaper, it would have been a lot better received. A product's first impression sticks with it, and if you put a bad taste in people's mouths at launch, it is hard to turn things around. And the Fury X's launch put a bad taste in a lot of people's mouths. Even with its flaws at lower resolutions (which I believe will be worked out with driver updates), the 4K performance is there. But instead they flat-out said the Fury X outperformed the 980 Ti at 4K, and they also said typical gaming temperatures were under 50°C, which isn't true either. The marketing was bad; the card is solid, though.
I think Summit Ridge arrives first, since AMD desperately needs to re-establish a server presence, but the introduction date has been pushed out from Q1 2016 to Q3 2016 - probably the reason Intel moved out the 10nm Cannonlake architecture and decided to introduce another 14nm series (Kaby Lake) in its place.
It only works up to 95 Hz ... (brain fart)²
That's why Nvidia needs to become a monopoly in discrete GPUs, through hardware and mostly through proprietary tech (software or hardware). That's why it tries to create problems for AMD through aggressive pricing, first with the 970 and now with the 980 Ti.
It's not that I'm disappointed by what AMD showed, quite the opposite. It's just that the hype was too high and I wanted AMD to kick some green arse again.
I can't say if AMD is back in the game yet... It will take a generation or two to really see if they can stay afloat against the competition.
I'm really eager to see what will come from AMD on the CPU side.
Better core performance at the same TDP, matching Intel's offerings?! I'll take it now!
In a mildly modding way, Nvidia has the edge for now with a far more flexible overclocking device (core and memory), but either device would work in my rig. I'm waiting on blocks regardless, so I will watch pricing and driver maturity - though I doubt the optimism of some people will be realised with dramatic improvements. I think they released the Fury X running about as fast as it will get, given the reviews so far. Any changes may need to be BIOS revisions.
I'm still holding out for a 980 Ti 'Metal' (mythical unicorn card) that is a full GM200 core with 6 GB and an unlocked TDP. Hell, if AMD does work wonders making the Fury X even better, I dare say Nvidia would do this for the rep.
It's funny: back when ATI/NVidia were accused of price fixing, cards topped out at $500. After an anti-competition lawsuit, suddenly $500 seems cheap.
Interest rates, in a "healthy" economy, are somewhere between 2 and 5 percent depending upon which model you prefer. Less than that, and the economy is stagnating. More than that and buying power is severely limited.
Interest compounds as a power function. This means the dollar value gets multiplied by (1 + the rate) raised to the power of the number of years you've been tracking, i.e. adjusted dollars = original dollars × (1 + inflation rate)^years.
A little searching, and it seems like the GeForce 7800 GTX 512 came out in 2005 at $650 MSRP.
To those with a calculator, that means its cost in adjusted dollars, assuming a 3% inflation rate, is $873.60. These cards are currently sitting in the $600-$700 range, depending upon the seller.
That's right, kiddies: a decade of 3% inflation means a 34% greater cost in adjusted dollars.
If we go back to 2000, the inflation increase amounts to 56%. In round numbers, a $500 card then costs more than $750 in today's dollars.
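For anyone who wants to check the arithmetic, here's a quick sketch (ours, not the commenter's) of the compound-inflation formula described above:

```python
# Compound-inflation adjustment, as described above:
# adjusted = original * (1 + rate) ** years

def adjust_for_inflation(original: float, rate: float, years: int) -> float:
    return original * (1 + rate) ** years

# The 7800 GTX 512's $650 MSRP from 2005, carried forward a decade at 3%/year
print(round(adjust_for_inflation(650, 0.03, 10), 2))   # ~873.55, i.e. ~34% more
# A $500 card from 2000, carried forward 15 years at 3%/year
print(round(adjust_for_inflation(500, 0.03, 15), 2))   # ~778.98, i.e. ~56% more
```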
So, BS detector time.
1) Is the current price out of line? Demonstrably not.
2) Are either Nvidia or AMD charging too much? If it sells in a free economy, obviously not.
3) Who wins? This is a mixed bag. I feel that performance doesn't justify an upgrade. If you go into this cold, the consumer wins by getting a better card for a lower (adjusted) price. Neither AMD nor Nvidia wins, because there's no reason for the majority of users to upgrade. The real winner here is TSMC. They managed to fabricate with the same technology for 5 years. That's insane. If I were Nvidia or AMD right now, I'd be making sure the TSMC boss could taste my shoe leather, despite it having entered from the opposite end of the digestive tract.
People should have been capping/syncing their framerate in the first place for years; I know I am. Why should any hardware uselessly render more fps (excluding competitive play)? Or worse, why should anything at any time run at 3,000 fps in a menu!?
Anyway, you don't "bring down power consumption" if many sites are testing at 4K with fps in the 40s; you only bring it down if you're originally going past 60 for sustained periods (or whatever refresh rate your monitor has). Proper enthusiast right here :respect:
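For illustration, here's a minimal sketch of the frame-capping idea the commenter describes (ours, with a hypothetical render_frame() standing in for the real work): sleep away the remainder of each frame's budget so the GPU isn't producing frames the display can never show.

```python
import time

TARGET_FPS = 60                    # cap to the display's refresh rate
FRAME_BUDGET = 1.0 / TARGET_FPS    # seconds available per frame

def render_frame():
    """Hypothetical placeholder for the actual rendering work."""
    pass

for _ in range(600):               # e.g. ten seconds' worth of frames
    start = time.perf_counter()
    render_frame()
    # Sleep away whatever is left of the frame budget, so the GPU isn't
    # burning power on frames the display can never show.
    remaining = FRAME_BUDGET - (time.perf_counter() - start)
    if remaining > 0:
        time.sleep(remaining)
```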