Looks worse than expected, actually. The 1080 was at least 25% faster than the 980 Ti, and the 1070 was about 5-7% faster than the 980 Ti. So Turing is a much smaller jump in performance relative to previous lineups.
It's hard to judge the increase when you bounce outside the tiers ... this is due in part to NVIDIA's goal of setting up the price/performance ratio of each tier such that a top-tier card is more attractive than a pair of cards from the tier below.
The 1080 Ti was 73% faster than the 980 Ti
The 1080 was 69% faster than the 980, 37% faster than the 980 Ti
The 1070 was 63% faster than the 970, 14% faster than the 980 Ti
I don't think we can always count on the two-tier advantage.
I was expecting even numbers on the memory; I'd have expected to see:
3-4 GB for cards targeted at 1080p
6-8 GB for cards targeted at 1440p
12-16 GB for cards targeted at 2160p
We've seen games dangerously approach that 8 GB.
As far as I know, there's no utility as yet that measures actual in-game usage, so we have no way of determining this. As has been described in various articles, these utilities measure memory allocation, not usage. The best analogy here: when you buy a $700 GFX card on your Visa CC with a $5,000 limit, you have a liability of $700 and $4,300 of credit available. However, when you subsequently apply for a car loan, the number reported to the credit agencies is $5,000. These utilities are pretty much doing the same thing.
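To make that concrete, here's a minimal sketch using the real NVML Python bindings (pynvml). The key point is that the figure it returns is allocated VRAM, the "credit limit" side of the analogy, not the working set the game is actually touching:

```python
# Minimal sketch: reading what monitoring utilities actually report.
# Requires the NVML bindings (pip install nvidia-ml-py). Note that
# nvmlDeviceGetMemoryInfo returns *allocated* VRAM, not the memory a
# game is actively using frame to frame.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system
info = pynvml.nvmlDeviceGetMemoryInfo(handle)   # total / free / used, in bytes

print(f"total:     {info.total / 2**30:.1f} GiB")
print(f"allocated: {info.used  / 2**30:.1f} GiB  <- what utilities call 'usage'")
print(f"free:      {info.free  / 2**30:.1f} GiB")

pynvml.nvmlShutdown()
```

Every overlay and monitoring tool I'm aware of is reading this same counter (or its driver-level equivalent), which is why none of them can tell you how much of that allocation the game actually needs.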
When you install a game, it says "Oh ... he has 8 GB of VRAM, so let's allocate 6 GB because our game is awesome." This is what the utilities report: how much is "allocated", not used. This is evidenced by numerous tests whereby cards with the same GPU but different VRAM amounts are swapped out, and the card with less VRAM uses less than the utility said was "allocated" when the larger card was installed (I'll sketch that behavior in code after the list below). In every test I have seen, outside of poor console ports and other oddities, one of the following happened:
a) The cards ran within 1 or 2 fps of one another, with each card showing small advantages and disadvantages depending on the game. In one instance the game would not install on the smaller-VRAM card, so they installed it with the larger card, tested it, and then swapped back ... the cards performed identically.
b) The only instances where they were able to see a significant difference were at resolutions and settings which made the game unplayable. Of what significance is it if the 3 GB card delivers 13 fps and the 6 GB card delivers 31% more? The game is unplayable at either 13 or 17 fps.
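Here's the toy sketch promised above. Everything in it is invented for illustration (the function name, the 75% grab, the 2.5 GB "real need"), but it shows the kind of heuristic being described: the game sizes its allocation from whatever VRAM it detects, not from what it needs, so the reported number grows with the card:

```python
# Toy illustration (all numbers and names invented): a game sizing its
# memory pool from detected VRAM rather than from what it actually needs.
# Utilities then report the whole pool as "used" even if much of it is
# never touched.
def plan_texture_pool(detected_vram_gb: float, needed_gb: float) -> float:
    # Grab a generous slice of whatever is present (hypothetical 75%),
    # but never less than what the game genuinely needs.
    budget = 0.75 * detected_vram_gb
    return max(needed_gb, min(budget, detected_vram_gb))

for card_gb in (3, 6, 8, 11):
    pool = plan_texture_pool(card_gb, needed_gb=2.5)
    print(f"{card_gb} GB card -> {pool:.2f} GB 'allocated', ~2.5 GB actually needed")
```

On the hypothetical 8 GB card this "allocates" 6 GB, on the 3 GB card only 2.5 GB, and both run fine ... which is exactly the pattern the swap tests showed.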
Another instance is the 3 GB and 6 GB 1060s tested here on TPU. This comparison is somewhat swayed by the fact that the cards have different GPUs, with the 6 GB version having 10% more shaders. When we look at the performance summary at 1080p, the 6 GB card is 6% faster than the 3 GB card, presumably due to the extra shaders ... But logic dictates that if VRAM were an issue, we should see a significant widening of the gap when we move to 1440p, right? It doesn't happen ... same 6%.
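That reasoning reduces to a one-line check. A quick sketch, using only the ~6% gaps cited from the TPU review (the 2-point threshold is my own arbitrary choice):

```python
# Sanity check of the argument above: if VRAM were the bottleneck, the
# 6 GB card's lead should grow as resolution rises; a flat gap points at
# the 10% shader-count difference instead. Gap values are the ~6% figures
# from TPU's 1060 review; the threshold is a made-up illustration.
gap_pct = {"1080p": 6.0, "1440p": 6.0}   # 6 GB card's lead over the 3 GB card

if gap_pct["1440p"] > gap_pct["1080p"] + 2.0:
    print("Gap widens with resolution -> consistent with a VRAM limit")
else:
    print("Gap is flat -> consistent with the shader-count difference")
```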