Thursday, September 15th 2016
NVIDIA GeForce GTX 1080 Ti Specifications Leaked, Inbound for Holiday 2016?
NVIDIA is putting the finishing touches on its next enthusiast-segment graphics card based on the "Pascal" architecture, the GeForce GTX 1080 Ti. Its specifications were allegedly screengrabbed by a keen-eyed enthusiast snooping around NVIDIA's website before they were redacted. The spec sheet reveals that the GTX 1080 Ti is based on the same GP102 silicon as the TITAN X Pascal, but is further cut down from it. Given that the GTX 1080 is unflinching from its $599-$699 price point, with some custom-design cards even selling for over $800, the GTX 1080 Ti could either be positioned around the $850 mark, or be priced lower, disrupting currently overpriced custom GTX 1080 offerings. By pricing the TITAN X Pascal at $1,200, NVIDIA appears to have given itself headroom to price the GTX 1080 Ti in a way that doesn't cannibalize premium GTX 1080 offerings.
The GTX 1080 Ti is carved out of the GP102 silicon by disabling 4 of its 30 streaming multiprocessors, resulting in 3,328 CUDA cores. The resulting TMU count is 208. The card could retain the chip's ROP count of 96. The card will be endowed with 12 GB of GDDR5 memory across the chip's 384-bit wide memory interface, instead of the GDDR5X found on the TITAN X Pascal. This should yield 384 GB/s of memory bandwidth, significantly less than the 480 GB/s the TITAN X Pascal enjoys with its 10 Gbps memory chips. The GPU is clocked at 1503 MHz, with a 1623 MHz GPU Boost. The card's TDP is rated at 250 W, the same as the TITAN X Pascal.
Source:
OC3D
GeForce GTX 1080 Ti Specifications:
- 16 nm GP102 silicon
- 3,328 CUDA cores
- 208 TMUs
- 96 ROPs
- 12 GB GDDR5 memory
- 384-bit GDDR5 memory interface
- 1503 MHz core, 1623 MHz GPU Boost
- 8 GHz (GDDR5-effective) memory
- 384 GB/s memory bandwidth
- 250W TDP
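The leaked numbers are internally consistent, and a quick back-of-the-envelope check shows how they fall out of the configuration (assuming GP102's 128 CUDA cores per streaming multiprocessor, which is Pascal's standard layout):

```python
# Sanity check of the leaked GTX 1080 Ti specs (illustrative arithmetic only).

# GP102 carries 30 SMs of 128 CUDA cores each; the leak says 4 SMs are disabled.
sms_total, sms_disabled, cores_per_sm = 30, 4, 128
cuda_cores = (sms_total - sms_disabled) * cores_per_sm
print(cuda_cores)  # 3328, matching the leaked core count

# Memory bandwidth = bus width in bytes x effective data rate.
bus_width_bits = 384
gddr5_rate_gbps = 8              # 8 GHz GDDR5-effective
bandwidth_gbs = bus_width_bits / 8 * gddr5_rate_gbps
print(bandwidth_gbs)             # 384.0 GB/s

# TITAN X Pascal for comparison: 10 Gbps GDDR5X on the same 384-bit bus.
print(bus_width_bits / 8 * 10)   # 480.0 GB/s
```

The same formula explains the 20% bandwidth deficit versus the TITAN X Pascal: with an identical 384-bit bus, the gap comes entirely from the 8 Gbps vs. 10 Gbps memory chips.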
176 Comments on NVIDIA GeForce GTX 1080 Ti Specifications Leaked, Inbound for Holiday 2016?
The problem with GCN is architectural inefficiencies, and cases such as Doom do nothing to fix that:
- Vendor-"optimized" pipelines (e.g. console ports) just make the competition less efficient; they don't actually make GCN that much better.
- Vendor-specific extensions might include tricks, but they still don't help the architectural inefficiencies. Most such tricks do not apply in most cases. Similar to what? Doom Vulkan vs. OpenGL? Do I need to remind you that AMD's OpenGL support is extremely unstable?
Why would "the new paradigm" suddenly dissipate the architectural problems of GCN? Don't you know that the APIs have nothing to do with internal GPU scheduling (at the level of each GPU thread), GPU memory fetches, etc.? And if the APIs were holding AMD back all these years, how come Nvidia was not held back? You had better explain yourself. Oh, conspiracies!
No one "ever" intentionally "hampers the competition". The real problem is when a game is developed with little or no testing on the other vendor throughout the whole development cycle. If the day-to-day development and testing is all done on one vendor, then it's easy to make design choices which put the other vendor at a disadvantage. This typically results in bottlenecks and scaling issues for the other vendor, and it might not be easy to fine-tune this later. We have always had some AMD(/ATI)- and Nvidia-biased games, but the number of AMD-biased games has increased because both the PS4 and Xbox One are AMD based. Please explain precisely what will make GCN suddenly grow past its design faults?
This kind of reminds me of the good old Bulldozer days, when all the fans were screaming that new software would make AMD overcome all the issues. :p It of course never happened, and AMD has finally discarded the inefficient architecture in favor of an architecture more similar to the competition's.
Even that OG Titan vs 7970 example isn't as big as you're describing: the OG Titan was only 35-40% stronger than the 7970 lol.
1080 Ti is a mass market chip so I actually think it is a very good decision considering they will sell just as many to the lemmings either way.
Probably that greedy Nvidia wants to make a profit so that they can stay in business, unlike the saints at AMD who only want to sell everything too cheap to make a decent profit and go bankrupt. :)
Even if you moderate yourself to "only" 25% now, how will Fury X be able to achieve that?
It's not quite the card to master 4K/60 although it will get very close. The next x80 card NV launches will get it done but by then 4K/60Hz+ will be a thing.
This card will be 15-20% stronger than the 1080 and cost $750, with $850 for the Founders Edition. So $800 in reality.
Remember how the GTX 770 was released in May 2013 and the GTX 970 in September 2014, exactly 60% faster. Much like the 1080 Ti is supposed to be exactly 60% faster than the GTX 1070.
The 1080 Ti can't be more expensive than the SLI setup it replaces. The 1070 will probably drop to GTX 1160 level, just like the GTX 670 did once the GTX 760 came very close.
The lesson: GTX 970 SLI couldn't be made to work for longevity; it was replaced by the 980 Ti soon after, and the reference 980 Ti was replaced by a GTX 1060 @ 2.2 GHz.
The GTX 1070 can't be made to work either; it will be replaced by the 1080 Ti, and the reference 1080 Ti will be replaced by....
The rise in pricing as Nvidia captured market share has led me to approach upgrades differently:
I just bought my second 980 Ti for $300. An SLI setup can be had for $600, 50% of the Pascal Titan's price, with 25%-33% more performance than a $1,200 Pascal Titan.
References for the numbers are here: www.guru3d.com/articles-pages/nvidia-geforce-titan-x-pascal-review,26.html and in my system specs.
The Pascal Titan gets a 9K graphics score; my 980 Ti SLI does 12K: www.3dmark.com/3dm/14930947
A good deal right now.
And yeah, I got lucky a couple years ago when I was crypto mining. Found a bunch of 7950s for $100 each lol (this was in 2014; even today that would be an insane deal). They all overclocked to 1150/1800!
Only paid $390 for it too.
Speaking of old cards, my 970 was nothing special, but the previous card is unbeatable in terms of value over time: a 7950 MSI TF3. Got it on Amazon for $309 as soon as they launched. It paid for itself via bitcoin and then some. Eventually I upgraded to the 970, put a system together from old parts with the 7950, and gave it to a friend; it's still going strong. It had a great ASIC score too, needed little voltage, and even on air it overclocked to 1250/1850, while a stock card comes in at 880 on the core. This is a link to the tests I ran: forums.anandtech.com/threads/radeon-hd-7950-owners-thread.2259333/page-10#post-33822143
www.babeltechreviews.com/hd-7970-vs-gtx-680-2013-revisited/3/
They're about equal, with the 7970 only winning in the heavily AMD-favored DiRT and at 4K, where both cards produce unplayable frame rates. Meanwhile, in Dying Light the 680 scores a good 15 fps more. Overall, the 680 can definitely be considered the better choice, as it OCs better, and at the time 3 GB was overkill for most games and resolutions.
At launch, the 680 was overall 10% faster than the 7970. They both aged well, to be honest. The reason you wouldn't buy the 680 was a different one: price. The much cheaper 670 could do almost as well as the 680.
AC Unity was the first game that demanded heavy compromises for my 7950 and other AMD cards to run at playable frame rates; later patches improved it a bit. The 970 after that wasn't as impressive: the price was in a similar range, but the value quickly evaporated.
But Idk what you are talking about with regards to overclocking. A 7970 at 1250/1850 is a monstrous 35% overclock; it was so high that my card was trading blows with a 970 (when the 970 came out; now it would roflstomp it in Vulkan/DX12 games).
I also checked a benchmark for Dying Light: The Following. The 680, 7970, 780, and 7970 GHz all get about the same framerate. But let's not go there, because I can find A LOT of benchmarks from the past 2 years that show a 7970 destroying a 680 (in fact, the 7870 roughly matches the 680).