Generation to generation is a hard yardstick because the performance increase per tier is not consistent.
As for VRAM, ... much ado about nothing. The reason most folks get fixated on VRAM is not that they haven't researched, but that they have relied on tools that don't deliver what they think they deliver. For example, the most common argument is ... "when I use [insert favorite utility here] with my 8 GB card, it clearly shows I am using 5 GB, so 4 GB is clearly inadequate". The fact is, there is no utility that actually measures VRAM usage; what it measures is VRAM "allocation". When you have an 8 GB card, the installation routine says "Oh, we have an 8 GB card, so let's **allocate** up to 5 GB of VRAM for our usage". When you have a 4 GB card, the installation routine says "Oh, we have a 4 GB card, so let's **allocate** up to 2.5 GB of VRAM for our usage". In reality, the game may never break 2 GB.
You can compare this to your credit card ... you have a limit of $5,000 with $500 charged because you bought everybody drinks at the bar when your team won the Super Bowl. Now you apply for a car loan; the "liability" reported to the bank is not the $500 actually owed, but the $5,000 credit limit.
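The allocation-versus-usage distinction can be sketched in a few lines of Python. This is a toy model, not any real driver or engine API; the 62.5% budget fraction and the asset sizes are made-up numbers chosen to mirror the 5 GB / 2.5 GB example above:

```python
class VramPool:
    """Toy engine allocator: reserves a budget proportional to total VRAM
    at startup, while actual usage is driven only by the loaded assets."""

    def __init__(self, total_vram_mb):
        # Hypothetical engine rule: grab ~62.5% of whatever the card has.
        self.allocated_mb = int(total_vram_mb * 0.625)
        self.peak_used_mb = 0

    def load_assets(self, sizes_mb):
        # Real usage depends on the assets, not on the card's capacity.
        self.peak_used_mb = max(self.peak_used_mb, sum(sizes_mb))


assets = [900, 600, 400]  # textures, meshes, buffers (MB) - illustrative

card_8gb = VramPool(8192)
card_4gb = VramPool(4096)
for card in (card_8gb, card_4gb):
    card.load_assets(assets)

# A utility reading the pool size reports 5120 MB "used" on the 8 GB card,
# but the same assets fit just as comfortably in the 4 GB card's budget.
print(card_8gb.allocated_mb, card_8gb.peak_used_mb)  # 5120 1900
print(card_4gb.allocated_mb, card_4gb.peak_used_mb)  # 2560 1900
```

Note that both cards peak at the same 1900 MB of actual usage; only the reported "allocation" differs, which is exactly the credit-limit effect described above.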
This has been shown time and time again in actual testing ... alienbabeltech was one of the first to really blow this mindset out of the water ... they tested some 45 games at multiple resolutions up to 5760 x 1080. There was no significant difference in fps between the 2 GB 770 and the 4 GB 770 in most of the games, and the 2 GB was often faster. And while there was a difference in a few games, it didn't matter. In those instances, does it really matter if the extra 2 GB of VRAM gives you 30% more fps when the game is unplayable in either case? Are you really going to invest the extra $ in a 4 GB version of a card to take you from 13 fps to 17 fps? The most jarring example of this is that Max Payne 2 would not install on the 2 GB card at 5760 x 1080 ... so when they installed the 4 GB card, they finally expected to see an advantage from the extra 2 GB .... surprise, it didn't happen. As it turned out, after getting it installed with the 4 GB card, they swapped it out for the 2 GB card. Having fooled the install utility's demand for 4 GB, the game played at the same fps (within margin of error), same quality, same user experience.
You can see this at the link below, tho the original site is down (not in English, but you can see the data) ....
Puget Systems did it with the 690, Guru3D did it with the 960, and ExtremeTech did it with the 980 Ti:
https://www.pugetsystems.com/labs/articles/Video-Card-Performance-2GB-vs-4GB-Memory-154/
http://www.guru3d.com/articles_pages/gigabyte_geforce_gtx_960_g1_gaming_4gb_review,12.html
https://www.extremetech.com/gaming/...y-x-faces-off-with-nvidias-gtx-980-ti-titan-x
This is from the last link:
First, there’s the fact that out of the fifteen games we tested, only four could be forced to consume more than 4GB of RAM. In every case, we had to use high-end settings at 4K to accomplish this. .... While we do see some evidence of a 4GB barrier on AMD cards that the NV hardware does not experience, provoking this problem in current-generation titles required us to use settings that rendered the games unplayable on any current GPU.
Yes, there are exceptions to this rule ... poor console ports, for example, are notorious for eating up VRAM.
If you look here at TPU's tests on the 3 GB and 6 GB 1060s, you see the same thing. Yes, the 1060 6 GB is notably faster than its 3 GB counterpart (about 6% at 1080p). But let's not forget that the 6 GB is a different card ... it has 10% more shaders, so it's going to be faster no matter what. Now, **if** the VRAM were an issue, we should see the performance advantage grow at 1440p ... it does not.
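The logic of that test can be written out explicitly. A quick sketch, using the approximate figures from the paragraph above (the 2% tolerance is my own arbitrary threshold, not anything from the review):

```python
def gap_widens(gap_low_res, gap_high_res, tolerance=0.02):
    """True if the faster card's lead grows meaningfully at the higher
    resolution - the signature you'd expect from a VRAM limitation."""
    return gap_high_res - gap_low_res > tolerance


shader_advantage = 0.10  # the 6 GB model has ~10% more shaders
gap_1080p = 0.06         # ~6% faster at 1080p
gap_1440p = 0.06         # roughly the same lead at 1440p

# The lead is within what the extra shaders alone explain ...
print(gap_1080p <= shader_advantage)        # True
# ... and it does not widen at 1440p, so no sign of a VRAM wall.
print(gap_widens(gap_1080p, gap_1440p))     # False
```

If the 3 GB card were genuinely running out of VRAM, the second check would come back `True` as resolution (and thus memory pressure) climbed.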
https://www.techpowerup.com/reviews/MSI/GTX_1060_Gaming_X_3_GB/26.html
This same argument was used on the 970 with its alleged 3.5 GB deficiency ... yes, if you did some really weird things and worked hard to create a problem, you could ... but not at 1080p or 1440p ... and if you bought a 970 to run 2160p, you already made a mistake. Try as they might, various websites tried to duplicate the reported issues but were simply unable to do so without "doing some really weird things" ... and when they could, the 980 exhibited the same behavior.
Getting back to the generational thing ....
The 970 was 39% faster than its predecessor and just as fast as the 780 Ti @ 1440p
The 980 was 38% faster than its predecessor
The 960 was 10% faster than its predecessor
The 980 Ti was 40% faster than its predecessor
The 1070 was 63% faster than its predecessor and 14% faster than the 980 Ti @ 1440p
The 1080 was 30% faster than its predecessor
The 1060 was 98% faster than its predecessor
The 1080 Ti was 73% faster than its predecessor
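Those per-tier percentages compound across generations, which is why skipping a generation feels so dramatic. A quick back-of-the-envelope sketch using the 970 and 1070 figures from the list above:

```python
# Per-generation gains from the list above, expressed as multipliers.
gain_970_over_770 = 1.39    # 970 was 39% faster than its predecessor
gain_1070_over_970 = 1.63   # 1070 was 63% faster than its predecessor

# Gains multiply, they don't add: upgrading across two generations
# compounds the two speedups.
combined = gain_970_over_770 * gain_1070_over_970
print(f"1070 vs 770: {combined:.2f}x, i.e. ~{combined - 1:.0%} faster")
# -> 1070 vs 770: 2.27x, i.e. ~127% faster
```

Note that naively adding the percentages (39% + 63% = 102%) understates the real two-generation jump.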
Another thing to consider ... the difference between the reference and AIB cards also changes between generations and model lines.