
NVIDIA GeForce GTX 1080 8 GB

Pascal's async is no better than Maxwell's. Some reviews have done a DX11 versus DX12 comparison and the score doesn't improve.
What should worry AMD is exactly that: without dedicated hardware built specifically for async compute, Pascal is so fast that it still beats the async specialist Fiji. I don't know if AMD can throw any more hardware at async, but given that Pascal beats Fury in practically all DX12 tests despite not having great hardware for it, I'd say the pressure is on AMD.
 
That is utter nonsense.

First of all, it's a total myth that AMD has better hardware for Direct3D 12. There is no such thing as specialized hardware for each API. Nvidia chose to bring the driver-level changes of Direct3D 12 to all APIs, which of course leaves them with a lower relative gain from Direct3D 11 to 12. (1)(2)

The whole point of async shaders is to let shaders utilize different resources in the GPU. Yet AMD uses async shaders to compensate for the huge inefficiencies in their scheduler, as shown by their performance gain from doing compute and rendering at the same time. The reason Nvidia doesn't even bother isn't that their architecture can't handle it; it's that it wouldn't gain anything, as both the Maxwell and Pascal schedulers are able to achieve optimal utilization. This is why Nvidia is prioritizing general optimizations rather than a feature that would give them less than 1% gain.
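The scheduler argument above can be illustrated with a toy timeline model (purely illustrative numbers, not measurements of any real GPU): async compute can only hide work inside the time the graphics workload leaves execution units idle, so a scheduler that is already near-fully utilized has almost nothing to gain.

```python
# Toy model (not real GPU code): frame time with and without async overlap.
# All numbers are made-up illustrations, not measured values.

def frame_time(graphics_ms, compute_ms, utilization, async_on):
    """Serial cost vs. overlapped cost, given how busy the graphics
    workload keeps the execution units (utilization in 0..1)."""
    if not async_on:
        return graphics_ms + compute_ms          # queues run back to back
    # Async can only hide compute inside the *idle* fraction of graphics time.
    idle_ms = graphics_ms * (1.0 - utilization)
    hidden = min(compute_ms, idle_ms)
    return graphics_ms + (compute_ms - hidden)

# A GPU whose scheduler already achieves ~98% utilization gains almost nothing:
print(frame_time(14.0, 2.0, 0.98, False))  # 16.0
print(frame_time(14.0, 2.0, 0.98, True))   # ~15.72

# One that leaves ~30% of its units idle hides the whole compute pass:
print(frame_time(14.0, 2.0, 0.70, True))   # 14.0
```

Under this sketch, the same async feature is worth a fraction of a millisecond on one architecture and two full milliseconds on the other, which is the crux of the disagreement in this thread.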
 

Thank you for correcting me. So AMD should worry then and everyone should stop talking up async like it's the best thing in the world?
 
AMD is in panic mode, and if the rumors of Polaris are true 2016 is going to be their worst year yet. Remember that Nvidia has still not brought their "big guns".

Async shaders will be useful for what they're intended for, but most people on the forums have no idea what their purpose really is. During rendering of a single frame, most of the work is computationally intensive (typically >95% of the time), but some of the smaller tasks are not, such as texture compression, data transfers from system memory, video encoding/decoding, etc. Without async shaders, most of the GPU sits idle during these tasks, so the sole purpose of async shaders is to run different workloads simultaneously on different GPU resources. Async shaders can be used for things like streaming textures without a performance penalty, but we are still talking about a 1-3% performance gain.

The reason why AMD is getting a "big" performance boost in games like Ashes of the Singularity is because their inefficient architecture has over 30% of the GPU idle during a single intensive task. So the game is actually compensating for the GPU's inefficiency, and it's not a testament to AMD's ability to utilize async shaders. It's used as a solution to a problem of AMD's own making...
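The two figures in this post (a 1-3% ceiling on one side, a "big" boost from 30% idle hardware on the other) follow from the same Amdahl-style arithmetic. A back-of-envelope sketch, with the busy fractions as illustrative assumptions:

```python
# Back-of-envelope bound (illustrative, not measured): if only the
# fraction of a frame that is NOT compute-bound can be overlapped,
# the best-case speedup from async is 1 / busy_fraction (Amdahl-style).

def max_async_gain(busy_fraction):
    """Best-case percentage gain when everything outside the busy
    portion of the frame is fully hidden behind it."""
    return (1.0 / busy_fraction - 1.0) * 100.0

print(round(max_async_gain(0.97), 1))  # ~3.1  -> the "1-3%" ballpark
print(round(max_async_gain(0.70), 1))  # ~42.9 -> why 30% idle shows big gains
```

So a large async win is, by this arithmetic, evidence of a large idle fraction to begin with, which is exactly the claim being made about Ashes of the Singularity here.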
 

Hahaha, still preemption.
NV-Preemption.png
 
Good grief, man. You're the one saying wait till Vega, when there is a card out now with unequivocally higher performance. My point is neither ignorant nor moot. Vega will bring HBM2; so what? So will consumer GP100 and big Vega. So saying "ignore GP104, wait for the real next gen" is hypocritical.
No it's not; I see you're not getting my point. To put it to you directly: the GTX 1080 isn't an enthusiast card, because the performance gain over a 980 Ti or Titan X is too small (it's also the successor of the GTX 980, a high-end card). Exactly that would change with a bigger chip like Vega, which is probably better suited for a real upgrade than the GTX 1080. You're hyping the GTX 1080, nothing more. I'm not; that's the difference between us. Read some reviews pitting the GTX 1080 (with and without OC) against custom or overclocked 980 Ti/Titan X cards (I like this one: http://www.pcgameshardware.de/Nvidi...5598/Specials/Benchmark-Test-Video-1195464/2/ ). I already did, and the result is that 980 Ti / Titan X users can ignore this card and wait for the next big thing: Vega, GTX 1080 Ti, Titan Pascal, something like that.

I know what you are implying, that GP104 isn't proper next gen but frankly, we know so little about Vega that it may not be hugely different from Fiji. HBM is no longer new tech. HBM2 is to HBM as GDDR5X is to GDDR5.
HBM isn't really important to me; even for Fiji it was mainly there to reduce power consumption so a card with a <300 W TDP was possible. The bandwidth isn't really needed (two R9 380X in CrossFire have the same specs as a Fury X and get by with much less bandwidth, for example). The same holds true for Vega if they decide to ship it with HBM2 instead of GDDR5X; I think GDDR5X would be sufficient. As for the architecture, what I know is that it will be something really new, whereas Polaris was along the lines of Fiji: small or no architectural changes, with everything else getting reworked so the shaders get better utilization, plus HDMI 2.0a, DP 1.3, etc.
AMD-Polaris-Architecture-7.jpg


New Command Processor: better utilization in APIs that have suboptimal usage, like DX11.

Long story short, if you pass on GP104 because you want true next gen, you'll be upset when the proper (not crippled; again, rumours) Vega part is released a few months after. Moral of the story: buy when you want to buy. And also, nobody needs any of these powerful desktop gfx cards except for work.
This is exactly what I said earlier, just in other words: buy when you want, or need, it. There is no fixed "plan" for buying something; I don't think anyone waits five months if he needs something new now. My point with the GTX 1080 is that it's not exactly a big upgrade over a GTX 980 Ti or Titan X, so it's better to wait for something like Vega. Why should I be upset? I'm not an enthusiast user; I tend to buy GPUs at 300€ maximum, which I think is the best price-to-performance point. I only know that if I wanted an enthusiast GPU now, I would certainly not buy a GTX 1080; with prices of the GTX 980 Ti that low, I'd get one of those instead and have my fun with it.
 
P.S. and what's with that cooler design? Is that Nvidia's idea of trying to be "edgy"?

I think it's more meant to improve airflow when cards are in SLI and in close quarters. That's been a big issue with running two cards in mATX boards; the top card severely overheats from lack of airflow. Think about the parts of the backplate that can be removed... why?
 
I see a lot of RGB lighting marketing going around among the board partners; I sure hope this doesn't mean the $599 bracket will be destroyed for a tiny bit of shiny LED lights. I just hope we can turn the lights off.
 
I meant the random sharp edges added to their stock silver/vapor chamber heatsink, not the slimmer backplate... the backplate is more than welcome.
 
the backplate is more than welcome

The backplate is superfluous on the reference (FE) model, as all the PCB support it needs comes from the tightly affixed magnesium alloy shroud.
 

I understand everything you are saying and trust me, I own a Kingpin, I know high end...... but....

there is no point

Wait for the Vega 10 GPU, the first true 14 nm enthusiast GPU, coming to you in October. ;) Jokes aside, I think the Nano is still very good, but if you need something better, wait for Vega.

Vega 10 is being rushed forward to combat the 1080. It might not even beat it. Vega 11 is the proper chip. My objection to your post is that Vega 10 won't be the leap from a 980 Ti either. You need to wait for Vega 11 OR big Pascal.
 
That's exactly what I was talking about... those edges create pathways for air to flow into the fan. You might notice how they pretty much all radiate out from it...
 
If Vega is an HBM2 card it may very well beat the GTX 1080, as the GTX 1080 isn't that fast compared to the 980 Ti, and Vega will easily be faster than the Fury X / 980 Ti. And if all that is true, the next big thing is Vega 10, not 11; Vega 11 only if you want to wait even longer, yes, again X months. I already told you waiting is not an option for a lot of people, and that it's a moot point if you can have higher performance now. A cut-down Vega 11 could very well be fast enough to warrant a purchase.
 
I just noticed my GTX 760 is still in the charts! :D
 
Just skimmed through the review, and this card is good, but I feel the numbers for core and memory clocks were switched.
 
So, just as I thought: very limited availability, Nvidia cashing in on early adopters, and perhaps only partial availability after the 16th of next month, according to many sites.
 
Meanwhile, AMD's stock continues to grow, now at 4.60. That's a 5% gain today. It's funny, because launching the 1080 didn't seem to do much good for NVIDIA's stock. Maybe the market knows something we don't?

Edit: In fact, AMD's stock gained when the NDA date for Polaris 10 reviews was announced; the magic date appears to be June 29th. Maybe there really is something we don't know, but that could be me being optimistic, or maybe it's me being optimistic because the market is optimistic. Let's see how the market looks in the next week or two. A few days of optimism isn't enough to say it'll be sliced-bread good.
 
It appears to be official that Pascal sucks at async shaders. It doesn't suck as badly as Maxwell, but it still sucks:
http://wccftech.com/nvidia-gtx-1080-async-compute-detailed/
GTX-1080-Ashes-Of-The-Singularity-DirectX-12-Async-Compute-Performance-4K-Extreme.jpg

Pascal, like Maxwell, is still very much a DX11 card. GCN is a DX12/Mantle/Vulkan card.


AMD's stock is up because they'll soon have a product they can sell to the masses at prices and in quantities NVIDIA can't compete with. Additionally, news of the new Xbox likely bolstered it, because it translates to millions of orders for AMD.
 
Are there any cards out there that can push more than a 25% OC, like NVIDIA advertised in their Doom demo?
So far all the cards released from various vendors are uber expensive and OC to a maximum of +20%...
 
Bring back those days when W1zzard blocked the air vents of the card (the 290X, I think) to see what it does, and the video comparisons of fan noise.

I agree; we should stick with closed systems and warm up prior to benching.
 
Does anyone know when we can see an SLI review for 1080 cards?
 

AFAIK async compute is disabled by the game engine when an NVIDIA card is detected; AotS needs an update to really support the GTX 1080/1070.
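For anyone wondering what "disabled when an NVIDIA card is detected" could look like in practice, here's a hypothetical sketch of a vendor-gated feature toggle. All names and flags are made up for illustration; AotS's actual logic isn't public.

```python
# Hypothetical vendor whitelist for an engine's async compute path.
# Vendor names and flag strings are invented for this sketch.

KNOWN_GOOD_ASYNC = {"amd"}   # vendors where the async path is enabled by default

def async_compute_enabled(vendor, driver_flags=()):
    """Decide whether to submit work on a separate compute queue.
    Override flags let a later patch enable new GPUs without re-testing."""
    if "force_async_off" in driver_flags:
        return False
    if "force_async_on" in driver_flags:   # e.g. after a game update whitelists a card
        return True
    return vendor.lower() in KNOWN_GOOD_ASYNC

print(async_compute_enabled("AMD"))                          # True
print(async_compute_enabled("NVIDIA"))                       # False
print(async_compute_enabled("NVIDIA", ("force_async_on",)))  # True
```

A whitelist like this is why benchmark results can change overnight: the hardware is the same, but a patch flips the gate.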
 
Great review. I always enjoy hi-res images of the PCB. That Radeon R9 295X2 still performs very well against the latest 1080.
 