
NVIDIA GeForce GTX 1080 Ti Specifications Leaked, Inbound for Holiday 2016?

it isn't insane to think it's possible the Fury X will come close to the new Titan
As you can see, in a DX12 game built around GCN's asynchronous compute engines (the best case for AMD), they're not even close
[Rise of the Tomb Raider DX12 benchmarks: 2560×1440 and 3840×2160]
 
As you can see, in a DX12 game built around GCN's asynchronous compute engines (the best case for AMD), they're not even close
[Rise of the Tomb Raider DX12 benchmarks: 2560×1440 and 3840×2160]
right..
[screenshot: DX12 benchmark results]


as for performance in Vulkan, which is much closer to what AMD hoped for in an API

[screenshot: Vulkan benchmark results]


you can see that the Fury X has 2/3 of the performance for half the price.
to the point though, all this conversation is a pointless red vs green fight. we are the consumers, and as such we should be outraged by the insane prices of both Nvidia and Intel. I suggest voting with our wallets and leaving aside any personal feelings of misguided loyalty to whatever corporation.

http://www.overclock3d.net/reviews/...e_tomb_raider_directx_12_performance_review/6
http://www.guru3d.com/articles_pages/gigabyte_radeon_rx_480_g1_gaming_review,16.html
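
To put that price/perf point in concrete terms, here's a minimal sketch; the fps and price figures are assumed round numbers for illustration, not taken from either linked review:

```python
# Assumed round numbers: Fury X at roughly 2/3 the fps of a Titan X Pascal
# for roughly half the street price. Purely illustrative.
cards = {
    "Fury X":         {"fps": 66,  "price": 600},
    "Titan X Pascal": {"fps": 100, "price": 1200},
}
for name, c in cards.items():
    print(f"{name}: {c['fps'] / c['price'] * 100:.1f} fps per $100")
# -> Fury X: 11.0 fps per $100; Titan X Pascal: 8.3 fps per $100
```

On those assumed numbers the Fury X delivers about a third more frames per dollar, which is the core of the argument above.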
 
Maybe this will be what the 780Ti was for the first Titan, instead of what the 980Ti was for the next Titan ...
Knowing nVidia, this would hardly be called a surprise.
 
Maybe this will be what the 780Ti was for the first Titan, instead of what the 980Ti was for the next Titan ...
Knowing nVidia, this would hardly be called a surprise.

I was wondering the same thing, i.e. maybe the 1080Ti will have 3840 cores, fully enabled like the 780Ti was.
 
...methinks Captain Tom has been in space too long, must be Major Tom from Space Oddity/David Bowie.
 
as for performance in Vulkan, which is much closer to what AMD hoped for in an API



you can see that the Fury X has 2/3 of the performance for half the price.
to the point though, all this conversation is a pointless red vs green fight. we are the consumers, and as such we should be outraged by the insane prices of both Nvidia and Intel. I suggest voting with our wallets and leaving aside any personal feelings of misguided loyalty to whatever corporation.


I removed the graphs; they don't seem necessary since everyone can go back and see the obvious, plus you state it. The point for me was simply to say that I have no idea how Tom's original statement is even vaguely close to reality. That was really it; it's not pro-green or anti-red, it's simply me looking at the facts as I have them and how the cards perform even in best-case scenarios. You proved that with your graph, which is arguably the best you can do for ATI at this point in time and the worst you can do for team green. Even with that you still get, as you said, 2/3 of the performance... hardly "close" or "even more" as Tom's original post said, and that is what I was addressing.
 
Maybe this will be what the 780Ti was for the first Titan, instead of what the 980Ti was for the next Titan ...
Knowing nVidia, this would hardly be called a surprise.

Just picked up a 980 Ti actually :). It seemed to be the best performance I could get at a relatively reasonable price, especially second hand. Truthfully, other than efficiency gains (massive, granted), I'm not at all impressed with the performance of the 1000 series; you can basically overclock a 980 Ti and a 1080 and the Ti nips at its heels, or at least stays closer than 2/3 :). Regardless, it's good enough that at first they didn't include Ti numbers in the 1000 series reviews, because it would look too good compared to their new GPUs' performance. As I said, the only real hit on last gen vs. this gen of Nvidia is that the efficiency is greatly improved.
 
Nice cherry picking. Tomb Raider got its DX12 support LONG after launch, and it was more or less a half implementation.

Ok, what about Reaper, who argued both camps suck on pricing and railed against us taking any side... is he cherry picking? Doesn't sound like it, based on his own words and sentiments. He picked figures a bit more favorable than BiggieShady's, but as he pointed out, AMD still only got to about 2/3 of the Titan's performance. If the Fury X was so wonderful, and AMD was half as confident as you that ANY, and I mean ANY, of their cards could vaguely compete with the Titan for obviously way less cash, I think they'd be touting it to the hills, which they obviously aren't.
 
Dude it's already close to the 1080 in Vulkan/DX12, and lol it smokes the old Titans.


Call me crazy all you want, but the 7970 is stronger than the original Titan, and thus it isn't insane to think it's possible the Fury X will come close to the new Titan in a while. Seems to happen to all of Nvidia's cards.
Crazy is indeed the word.
There is nothing in Direct3D 12 or Vulkan that will greatly benefit GCN more than Pascal. The primary reason AMD shows greater relative gains in some games is that Nvidia brought most of the Direct3D 12 improvements to all APIs.
All the games shown so far favoring GCN have been AMD exclusives and are clearly biased. And there will be a handful more of these, as there are many console ports ahead.
Even in these biased games, it still wouldn't make a GPU twice as fast. That's just a crazy idea spread by fans.
 
Nice cherry picking. Tomb Raider got its DX12 support LONG after launch, and it was more or less a half implementation.
Look, it's so easy for you to put @BiggieShady in his place: these are objective measurements, so just show some graphs of AMD beating or even equaling NVIDIA in DX12 and include a link to their origin. You'll then have won your argument hands down and he'll look a fool.

I predict a deathly silence or more strawman arguments will follow. Place your bets!
 
Look, it's so easy for you to put @BiggieShady in his place: these are objective measurements, so just show some graphs of AMD beating or even equaling NVIDIA in DX12 and include a link to their origin. You'll then have won your argument hands down and he'll look a fool.

I predict a deathly silence or more strawman arguments will follow. Place your bets!

http://www.guru3d.com/articles_page..._graphics_performance_benchmark_review,9.html

https://www.techpowerup.com/reviews/ASUS/GTX_1060_STRIX_OC/12.html

Those are the latest well-built new-API games. BF1 will get DX12 and then we will have another good comparison.

I am saying that in a year the Fury X will match the 1080 in most of the latest games. If I am wrong, you can say so :D
 
Nice cherry picking. Tomb Raider got its DX12 support LONG after launch, and it was more or less a half implementation.
No problem, let's cherry pick from the cherry-picked scenarios ... meaning the good-case scenarios for AMD (DX12 or Vulkan). What we get: in AMD's best case the Titan XP is 1.5 times faster than the Fury X, and in the worst case it's double the performance.
Interestingly enough, there are several newer DX11 titles where the Titan XP is only 1.5 times faster. (mindblown, I know, seems like you can optimize for GPU architecture even in DX11)
The point is that the gap is way too big, and the Fury X is a 28 nm chip ffs :laugh:
you can see that the Fury X has 2/3 of the performance for half the price.
Yeah, the price is a completely different argument here, because every company sets a product's price at the highest amount consumers are ready to pay given the market at the time. Price changes much more than relative performance does, and high fps in games isn't the only thing that makes this kind of product desirable ;)
 
No problem, let's cherry pick from the cherry-picked scenarios ... meaning the good-case scenarios for AMD (DX12 or Vulkan). What we get: in AMD's best case the Titan XP is 1.5 times faster than the Fury X, and in the worst case it's double the performance.
Interestingly enough, there are several newer DX11 titles where the Titan XP is only 1.5 times faster.
The point is that the gap is way too big, and the Fury X is a 28 nm chip ffs :laugh:

The Fury is a $300 28nm chip. The Titan is a $1200 16nm chip. It is 50% stronger. That is pathetic.
 
The Fury is a $300 28nm chip. The Titan is a $1200 16nm chip. It is 50% stronger. That is pathetic.
Oh Captain my Captain, aren't you repeating what I just said? Why are you comparing them in the first place then ;) Also, don't you know the Maxwell Titan is faster than the Fury? Additionally, don't you know the difference between price and value?
You see, the way you value graphics cards is somewhat limited ... and that also goes for all the people who use a Titan for gaming.
If you ask Nvidia, having a GPU the market is willing to pay $1200 for is the exact opposite of pathetic. (How is this possible, haven't people heard of how good the Fury X is? How could Nvidia brainwash so many people at once, have they been putting chemicals into the water supply? I wonder how well the Radeon Pro Duo sells ... but at least you don't see people gaming on those :laugh:)
 
http://www.guru3d.com/articles_page..._graphics_performance_benchmark_review,9.html

https://www.techpowerup.com/reviews/ASUS/GTX_1060_STRIX_OC/12.html

Those are the latest well-built new-API games. BF1 will get DX12 and then we will have another good comparison.

I am saying that in a year the Fury X will match the 1080 in most of the latest games. If I am wrong, you can say so :D
Ok, I'm pleasantly surprised. :)

Those Guru3D results clearly show it comfortably beating a GTX 1080, which is what we wanna see. Perhaps it should actually be beating the TITAN X Pascal, if we are comparing the top models of both brands? Not sure on this one, but it's still a really good result and the kind of competition that I wanna see. Just imagine, a reasonably priced high-end NVIDIA card that doesn't sport a crippled GPU, lol.

Ideally we need all new games to perform like this and keep the two companies head-to-head for the best deals. But then they'd get into a little cartel... No, let's not go there lol.

The TPU graph isn't really valid, as the best NVIDIA card there is only a GTX 1070, which is some way behind the 1080.
 
Ok, I'm pleasantly surprised. :)

Those Guru3D results clearly show it comfortably beating a GTX 1080, which is what we wanna see. Perhaps it should actually be beating the TITAN X Pascal, if we are comparing the top models of both brands? Not sure on this one, but it's still a really good result and the kind of competition that I wanna see. Just imagine, a reasonably priced high-end NVIDIA card that doesn't sport a crippled GPU, lol.

Ideally we need all new games to perform like this and keep the two companies head-to-head for the best deals. But then they'd get into a little cartel... No, let's not go there lol.

The TPU graph isn't really valid, as the best NVIDIA card there is only a GTX 1070, which is some way behind the 1080.



LOL I am so tired of feeling like an AMD fanboy when I just flat out am not. I have owned plenty of Nvidia cards, and some of them I liked a lot.

But the fact is that it is obvious to me that these Paxwell cards will fall off a cliff in performance by spring.


When it comes to actual final performance numbers (once the dust settles), I think the best indicators you can look at are a combination of TFLOPS and bandwidth.

-Fury OC / Fury X will = 1080

-480 will be like 10% behind the 1070

-470 will beat the 1060 by at least 20%

-460 will probably equal the 1050


What you really need to think about is that Vega should easily be 50% faster than the Fury X, and that will likely put it a tad above the Titan XP. Then Nvidia will launch the 1180 with HBM in July 2017 ;). The real question is whether Nvidia can get Volta (with true DX12 support) out before 2018. If not... I am not so sure the 1180 will be able to beat Vega.
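
For what it's worth, here is a minimal sketch of that "TFLOPS and bandwidth" indicator idea; the geometric-mean scoring rule is an assumption made for illustration, not an established predictor, and the spec figures are approximate public numbers:

```python
# Score each card by the geometric mean of peak FP32 TFLOP/s and memory
# bandwidth (GB/s). Both the scoring rule and the exact spec numbers are
# assumptions made purely for illustration.
from math import sqrt

specs = {  # name: (peak TFLOP/s, bandwidth GB/s)
    "Fury X":   (8.6, 512),
    "GTX 1080": (8.9, 320),
    "GTX 1070": (6.5, 256),
    "RX 480":   (5.8, 256),
}
base = sqrt(specs["GTX 1080"][0] * specs["GTX 1080"][1])
for name, (tflops, bw) in specs.items():
    print(f"{name}: {sqrt(tflops * bw) / base:.2f}x the GTX 1080 score")
```

By this naive score the Fury X lands above the 1080 and the 480 sits a bit behind the 1070, roughly in line with the predictions above; whether actual rendering ever tracks such a score is exactly what the next reply disputes.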
 
Fury X = 1080 "after/will be" etc., so not a totally known quantity, that's #1. And #2, you said the Fury was essentially on par with the Titan... now you're saying the 1080; there's a big difference between those two chips, even with the crap stock cooler that limits the Titan. Anyway, all good, I want AMD competitive, but atm, with games and DirectX/Vulkan etc. as they are, that just isn't the case. Will it be? Well, maybe what you said is accurate, but again, we don't know for sure exactly how it will shake out; you're guesstimating, however well educated the guess is based on the facts.
 
But the fact is that it is obvious to me that these Paxwell cards will fall off a cliff in performance by spring.

When it comes to actual final performance numbers (once the dust settles), I think the best indicators you can look at are a combination of TFLOPS and bandwidth.

-Fury OC / Fury X will = 1080

-480 will be like 10% behind the 1070

-470 will beat the 1060 by at least 20%

-460 will probably equal the 1050
You are talking about peak FLOP/s, which is computational power, not rendering performance.
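For concreteness, peak FP32 FLOP/s is just shader cores × clock × 2 (one fused multiply-add per core per cycle); a minimal sketch with approximate public spec figures:

```python
# Peak FP32 throughput = shader cores * clock (GHz) * 2 FMA ops/cycle / 1000.
# Core counts and boost clocks below are approximate public figures.
def peak_tflops(cores: int, clock_ghz: float) -> float:
    return cores * clock_ghz * 2 / 1000.0

for name, cores, clock in [
    ("Fury X",   4096, 1.050),  # ~8.6 TFLOP/s
    ("GTX 1080", 2560, 1.733),  # ~8.9 TFLOP/s at boost
    ("RX 480",   2304, 1.266),  # ~5.8 TFLOP/s
    ("GTX 1060", 1280, 1.708),  # ~4.4 TFLOP/s at boost
]:
    print(f"{name}: {peak_tflops(cores, clock):.1f} TFLOP/s")
```

The Fury X and GTX 1080 are nearly tied on paper yet render very differently, which is the whole point.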
For an AMD GPU to scale as well as Pascal they need to overcome the following:
1) Saturate the GPU
Computational power is useless unless your scheduler is able to feed it, analyze data dependencies and avoid stalls. Nvidia is excellent at this, while GCN is not. Nothing in Mantle, Direct3D 12, or Vulkan exposes these features, so no such API will have any impact on this.
2) Efficient rendering avoiding bottlenecks
One of the clearest examples of Nvidia choosing a more efficient path is rasterization and fragment processing. AMD processes these in screen space, which means the same data has to travel back and forth between GPU memory and L2 cache multiple times during one frame, so memory bandwidth, cache misses and data dependencies become an issue. Nvidia, on the other hand, has since Maxwell rasterized and processed fragments in regions/tiles, so the data can mostly be kept in L2 cache until it's done, keeping the GPU at peak performance throughout rasterization and fragment processing, which after all is most of the load when rendering.
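As a toy model of the bandwidth difference this describes (the overdraw factor is an assumed number, purely illustrative):

```python
# Toy model: screen-space rasterization re-touches framebuffer memory once per
# covering triangle, while tile-based rasterization keeps each tile resident
# in L2 cache until every triangle touching it is done, flushing it once.
WIDTH, HEIGHT   = 1920, 1080
BYTES_PER_PIXEL = 4
OVERDRAW        = 8  # assumed average triangles covering each pixel

pixels = WIDTH * HEIGHT
screen_space_traffic = pixels * OVERDRAW * BYTES_PER_PIXEL  # write per triangle
tiled_traffic        = pixels * BYTES_PER_PIXEL             # one flush per tile

print(f"screen-space: {screen_space_traffic / 1e6:.0f} MB framebuffer traffic/frame")
print(f"tiled:        {tiled_traffic / 1e6:.0f} MB framebuffer traffic/frame")
```

Same frame, same math, roughly an 8x difference in memory traffic under these assumptions; that is the headroom tiling buys.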

If AMD were to achieve their peak computational power during rendering, they would need to overhaul their architecture; only then could this performance level be reached. It doesn't matter if you have the most theoretical power in the world if you are not able to utilize it.

So the RX 480 will always perform close to the GTX 1060; it will never rise above it.

What you really need to think about is that Vega should easily be 50% faster than the Fury X, and that will likely put it a tad above the Titan XP. Then Nvidia will launch the 1180 with HBM in July 2017 ;). The real question is whether Nvidia can get Volta (with true DX12 support) out before 2018. If not... I am not so sure the 1180 will be able to beat Vega.
Both Maxwell and Pascal have more complete Direct3D 12 support than any competing architecture. Stop spinning the lie of a "missing feature" when everybody knows it has been proven that Nvidia supports it.

LOL I am so tired of feeling like an AMD fanboy when I just flat out am not. I have owned plenty of Nvidia cards, and some of them I liked a lot.
The problem is that you are clearly misguided and biased when discussing the subject. A person can own something and still be biased against them ;)
 
as for performance in Vulkan, which is much closer to what AMD hoped for in an API



you can see that the Fury X has 2/3 of the performance for half the price.
to the point though, all this conversation is a pointless red vs green fight. we are the consumers, and as such we should be outraged by the insane prices of both Nvidia and Intel. I suggest voting with our wallets and leaving aside any personal feelings of misguided loyalty to whatever corporation.


I removed the graphs; they don't seem necessary since everyone can go back and see the obvious, plus you state it. The point for me was simply to say that I have no idea how Tom's original statement is even vaguely close to reality. That was really it; it's not pro-green or anti-red, it's simply me looking at the facts as I have them and how the cards perform even in best-case scenarios. You proved that with your graph, which is arguably the best you can do for ATI at this point in time and the worst you can do for team green. Even with that you still get, as you said, 2/3 of the performance... hardly "close" or "even more" as Tom's original post said, and that is what I was addressing.

indeed, but my point was mainly a price/perf argument, meaning the Titan is clearly overpriced even when one takes the extra perf into account. based on perf versus the Fury X, the Titan should cost less than $1000. as for the "Fury X vs Titan X" argument, for me it's a non-starter since both cards are in different price segments and their perf is differentiated by a large margin (as expected), so there's no point in comparing them directly, unless one makes the comparison relative to their architectures and how they perform in different APIs.

Ok, what about Reaper, who argued both camps suck on pricing and railed against us taking any side... is he cherry picking? Doesn't sound like it, based on his own words and sentiments. He picked figures a bit more favorable than BiggieShady's, but as he pointed out, AMD still only got to about 2/3 of the Titan's performance. If the Fury X was so wonderful, and AMD was half as confident as you that ANY, and I mean ANY, of their cards could vaguely compete with the Titan for obviously way less cash, I think they'd be touting it to the hills, which they obviously aren't.

indeed they aren't, because yes, it is not faster than the Titan, but that wasn't their goal to begin with. for a card one gen behind it's doing pretty well IMO. also, I think it's a bit pointless for a company to brag about how well their older gen card is ageing.
 
indeed, but my point was mainly a price/perf argument, meaning the Titan is clearly overpriced even when one takes the extra perf into account. based on perf versus the Fury X, the Titan should cost less than $1000. as for the "Fury X vs Titan X" argument, for me it's a non-starter since both cards are in different price segments and their perf is differentiated by a large margin (as expected), so there's no point in comparing them directly, unless one makes the comparison relative to their architectures and how they perform in different APIs.

Yes, I know, but that isn't what Tom was saying; he didn't mention prices at all, just started with the idea that a Fury X was as good as or even better than a Titan, by suggesting that if we wanted the performance a Titan might deliver in future (I assume with driver updates etc.) we should actually get a Fury X... so obviously that implies the Fury X is as good as or even better than the Titan and we should buy it. I agree, I won't ever get a Titan; the price is way above what I'd ever pay for a card. Yes, something else I didn't say but should have is that they aren't even in the same class/price bracket, so that is another reason why I thought it was a joke.



indeed they aren't, because yes, it is not faster than the Titan, but that wasn't their goal to begin with. for a card one gen behind it's doing pretty well IMO. also, I think it's a bit pointless for a company to brag about how well their older gen card is ageing.

Yes, not only is it not faster than a Titan, it can't even come close to tying a Titan. True, it is an older gen card, but at this point it's all they've got, literally. So yeah, maybe they wouldn't be touting older cards, but I was simply making the point that maybe they would if it showed the card favorably and minimized the Titan's value/relative performance etc. So yeah, a non-starter is the best way to put it, for all the reasons you cited as well as I and others did. For now we only have Nvidia in high-end new cards, and we have to wait for AMD to get the fork out of its ass and make something vaguely comparable.
 
You are talking about peak FLOP/s, which is computational power, not rendering performance.
For an AMD GPU to scale as well as Pascal they need to overcome the following:
1) Saturate the GPU
Computational power is useless unless your scheduler is able to feed it, analyze data dependencies and avoid stalls. Nvidia is excellent at this, while GCN is not. Nothing in Mantle, Direct3D 12, or Vulkan exposes these features, so no such API will have any impact on this.
2) Efficient rendering avoiding bottlenecks
One of the clearest examples of Nvidia choosing a more efficient path is rasterization and fragment processing. AMD processes these in screen space, which means the same data has to travel back and forth between GPU memory and L2 cache multiple times during one frame, so memory bandwidth, cache misses and data dependencies become an issue. Nvidia, on the other hand, has since Maxwell rasterized and processed fragments in regions/tiles, so the data can mostly be kept in L2 cache until it's done, keeping the GPU at peak performance throughout rasterization and fragment processing, which after all is most of the load when rendering.

If AMD were to achieve their peak computational power during rendering, they would need to overhaul their architecture; only then could this performance level be reached. It doesn't matter if you have the most theoretical power in the world if you are not able to utilize it.

So the RX 480 will always perform close to the GTX 1060; it will never rise above it.


Both Maxwell and Pascal have more complete Direct3D 12 support than any competing architecture. Stop spinning the lie of a "missing feature" when everybody knows it has been proven that Nvidia supports it.


The problem is that you are clearly misguided and biased when discussing the subject. A person can own something and still be biased against them ;)


I don't inherently disagree with the points you are making, but I have to say that your counter-argument is deeply flawed.

Everything you just said is based on the idea that what I am saying is theoretical. But it isn't: look at some bloody benchmarks of the latest games. In DX12/Vulkan it seems like AMD is indeed taking full advantage of the computational power of their GPUs. In fact, your 1060 vs 480 argument is a perfect example: the 480 is already "rising above the 1060", and in fact at launch they were already trading blows.

Furthermore, it seems like you haven't noticed that once games get harder to run, they do in fact saturate AMD's hardware. Just look at how the 7970 beat the 680, then the 780, and now the 780 Ti / 970. Also, the 290X crushes the 780 Ti now, and the 390X is close to matching the 980 Ti. There is a pattern of AMD cards rising FAR above their initial competition a year after launch, and it isn't because Nvidia is gimping anything.
 
Seems to be a good 30% boost over the 1080, but I wish NVidia would stick to GDDRx; the GTX 1080 still has an outstanding 320 GB/s.
 
Everything you just said is based on the idea that what I am saying is theoretical. But it isn't: look at some bloody benchmarks of the latest games. In DX12/Vulkan it seems like AMD is indeed taking full advantage of the computational power of their GPUs. In fact, your 1060 vs 480 argument is a perfect example: the 480 is already "rising above the 1060", and in fact at launch they were already trading blows.

Furthermore, it seems like you haven't noticed that once games get harder to run, they do in fact saturate AMD's hardware. Just look at how the 7970 beat the 680, then the 780, and now the 780 Ti / 970. Also, the 290X crushes the 780 Ti now, and the 390X is close to matching the 980 Ti. There is a pattern of AMD cards rising FAR above their initial competition a year after launch, and it isn't because Nvidia is gimping anything.
What you are describing is totally impossible. The new APIs will not and cannot counter the inefficiencies in the GCN architecture, and will not result in a 50% relative gain for AMD vs Nvidia. The architectural inefficiencies in GCN are not software; they are hardware design.

The only path forward is an architectural overhaul. Volta is going to be a bigger architectural change than Pascal, while AMD has stuck with GCN since the Kepler days of Nvidia.
 