Monday, January 20th 2020

Rumor: NVIDIA's Next Generation GeForce RTX 3080 and RTX 3070 "Ampere" Graphics Cards Detailed

NVIDIA's next generation of graphics cards, codenamed Ampere, is set to arrive sometime this year, presumably around GTC 2020, which kicks off on March 22nd. Before NVIDIA CEO Jensen Huang officially reveals the specifications of these new GPUs, we have the latest round of rumors coming our way. According to VideoCardz, which cites multiple sources, the die configurations of the upcoming GeForce RTX 3070 and RTX 3080 have been detailed. Reportedly built on Samsung's 7 nm manufacturing process, this generation of NVIDIA GPUs is said to offer a big improvement over the previous one.

For starters, the two dies that have surfaced carry the codenames GA103 and GA104, corresponding to the RTX 3080 and RTX 3070 respectively. Perhaps the biggest surprise is the Streaming Multiprocessor (SM) count. The smaller GA104 die features as many as 48 SMs, resulting in 3072 CUDA cores, while the bigger, oddly named GA103 die packs up to 60 SMs for a total of 3840 CUDA cores. These increases in SM count should translate into a notable performance uplift across the board. Alongside the higher SM counts, the memory buses are also wider. The smaller GA104 die, which should end up in the RTX 3070, uses a 256-bit memory bus allowing for 8 or 16 GB of GDDR6 memory, while its bigger brother, the GA103, has a 320-bit bus that allows the card to be configured with either 10 or 20 GB of GDDR6 memory. In the images below you can check out the alleged diagrams for yourself and judge how plausible they look; as always, take this rumor with a grain of salt.
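As a quick sanity check on the rumored numbers, here is a minimal back-of-the-envelope sketch in Python. It assumes 64 CUDA cores per SM (the ratio implied by the 48 SM / 3072-core figures, and the same as Turing) and one 1 GB or 2 GB GDDR6 chip per 32-bit memory channel; the 14 Gbps data rate used for the bandwidth estimate is an illustrative assumption, not part of the leak.

```python
# Back-of-the-envelope check of the rumored Ampere configurations.
# Assumptions (not from the leak): 64 CUDA cores per SM, 14 Gbps GDDR6,
# one 1 GB or 2 GB GDDR6 chip per 32-bit channel.

CORES_PER_SM = 64
GDDR6_GBPS = 14  # effective data rate per pin, illustrative only

def cuda_cores(sm_count):
    """CUDA cores implied by an SM count."""
    return sm_count * CORES_PER_SM

def memory_options_gb(bus_width_bits):
    """Possible capacities with 1 GB or 2 GB chips on each 32-bit channel."""
    channels = bus_width_bits // 32
    return channels * 1, channels * 2

def bandwidth_gbs(bus_width_bits, data_rate_gbps=GDDR6_GBPS):
    """Peak memory bandwidth in GB/s: bus width (bits) * rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

for name, sms, bus in [("GA104 (RTX 3070?)", 48, 256), ("GA103 (RTX 3080?)", 60, 320)]:
    lo, hi = memory_options_gb(bus)
    print(f"{name}: {cuda_cores(sms)} CUDA cores, "
          f"{lo}/{hi} GB GDDR6 on a {bus}-bit bus, "
          f"~{bandwidth_gbs(bus):.0f} GB/s at {GDDR6_GBPS} Gbps")
```

Under those assumptions the arithmetic lands exactly on the rumored 3072/3840 CUDA cores and the 8/16 GB and 10/20 GB memory configurations mentioned above.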
Source: VideoCardz

173 Comments on Rumor: NVIDIA's Next Generation GeForce RTX 3080 and RTX 3070 "Ampere" Graphics Cards Detailed

#76
Razrback16
CrAsHnBuRnXp
Why? Dual card systems are basically dead these days. Yes, there is still support for SLI, but it's at the point where it's unneeded and troublesome.
I've had pretty good luck with multi-GPU. I only have maybe 2-3 games that don't utilize it. I play exclusively at 4K, so for me multi-GPU is pretty much a necessity if I want games to hold nice steady framerates with the image quality settings cranked up.
Posted on Reply
#77
EarthDog
rtwjunkie
For practical, consumer and gaming use, I see GDDR6 being used on GA102, 103, 104. GDDR6 has proven itself. HBM not so much. I’m with @EarthDog on this.
This... still waiting to see its benefits at this (consumer) level. I get the super high bandwidth, but clearly it isn't needed, and it seems (someone correct me on this if needed) that GDDR6 is cheaper to produce anyway? So... while it can be beneficial for high-bandwidth applications, we aren't seeing it needed for gaming or general consumer use.
Posted on Reply
#78
64K
The people claiming that Ampere will bring only a trivial increase in performance aren't paying attention to Nvidia's track record of performance increases with each successive generation.

Average performance increase over previous generation:

RTX 2080 Ti over GTX 1080 Ti 33% (low due to the introduction of RT Cores and Tensor cores on the 2080 Ti instead of just a lot more CUDA cores)
GTX 1080 Ti over GTX 980 Ti 75%
GTX 980 Ti over GTX 780 Ti 41%
GTX 780 Ti over GTX 580 204%
GTX 580 over GTX 285 70%
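
Those per-generation figures compound multiplicatively rather than adding up. A minimal Python sketch, using only the percentages listed above:

```python
# Compound the per-generation gains listed above (multiplicative, not additive).
gains = [
    ("GTX 580 over GTX 285", 0.70),
    ("GTX 780 Ti over GTX 580", 2.04),
    ("GTX 980 Ti over GTX 780 Ti", 0.41),
    ("GTX 1080 Ti over GTX 980 Ti", 0.75),
    ("RTX 2080 Ti over GTX 1080 Ti", 0.33),
]

cumulative = 1.0
for step, gain in gains:
    cumulative *= 1 + gain
    print(f"{step}: +{gain:.0%}  (cumulative {cumulative:.1f}x over GTX 285)")
```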

Posted on Reply
#79
Aerpoweron
A 320-bit memory bus. I really hope we get more details on how that bus is connected to the memory controllers. I just want to rule out a mess like the GTX 970 was.

@64K
You skipped a few generations. The GTX 480 and GTX 680 are missing. ;)
Posted on Reply
#80
cucker tarlson
64K
RTX 2080 Ti over GTX 1080 Ti 33% (low due to the introduction of RT Cores and Tensor cores on the 2080 Ti instead of just a lot more CUDA cores)
I think it's a scaling problem
Posted on Reply
#81
dicktracy
64K
The people claiming that Ampere will bring only a trivial increase in performance aren't paying attention to Nvidia's track record of performance increases with each successive generation.

Average performance increase over previous generation:

RTX 2080 Ti over GTX 1080 Ti 33% (low due to the introduction of RT Cores and Tensor cores on the 2080 Ti instead of just a lot more CUDA cores)
GTX 1080 Ti over GTX 980 Ti 75%
GTX 980 Ti over GTX 780 Ti 41%
GTX 780 Ti over GTX 580 204%
GTX 580 over GTX 285 70%
980 Ti to 1080 Ti is pretty much a good example of what to expect from Ampere. New node (7nm EUV this time, not just 7nm!) and new arch + not wanting Intel to ever catch up to them = yuuge increase!
Posted on Reply
#82
eidairaman1
The Exiled Airman
Zmon
Sure there's plenty of research, but AMD can currently only compete with Nvidia at the low-to-mid tier. They are of course planning a high-end GPU, but how it performs remains to be seen. We can talk all day about the current die sizes and prices, which will most likely stay the same for Ampere regardless of what AMD does.
5800/5900 series
Posted on Reply
#83
Razrback16
kapone32
Are you sure about that? There is absolutely no stuttering, and the money I spent on my two Vega 64s was less than buying a brand new 2080 Ti here in Canada (including the water blocks).
Ya, I remember years back, before I tried my first SLI setup (2x 780 Ti), I was scared to death about the stuttering people kept talking about. I've run 780 Ti SLI, Titan X (Maxwell) SLI, and now 1080 Ti SLI... I haven't had a lick of stuttering in any games I play. I mean zippo. I generally run vsync @ 60 fps in 4K and my games have been butter smooth. If I ever feel like a game is running right around that 60 fps limit for my cards and may fluctuate (which can cause input lag in those rare situations), I switch out of true vsync and enable adaptive vsync at the driver level, and that takes care of any issues.

My experiences with multi-GPU have been great. It's obviously not for everyone, given that the scaling is never 1:1 and in some cases not even close, but if you have tons of cash and/or are just a hardware enthusiast who wants maximum image quality and/or framerates, it's something I'd recommend people try.

Berfs1
Cus 20 series was extremely overpriced from the beginning and everyone knew that.
Ya, I'd like to think they'll get the prices back down into the realm of reason, but I am skeptical with Nvidia. :) I may need to just plan on buying second-hand Turing once the prices get down into reasonable territory.
Posted on Reply
#84
64K
Aerpoweron
@64K
You skipped a few generations. The GTX 480 and GTX 680 are missing. ;)
I left out the GTX 480 (Fermi) because there weren't benches here to compare it directly to a GTX 780 Ti (Kepler), and in any case the GTX 580 (Fermi) was the same generation as the GTX 480, just with a vapor chamber for lower temps and a few more shaders.

The GTX 680 has nothing to do with the comparisons that I was making. It was a midrange Kepler. It wasn't the high end. That was some shenanigans that Nvidia pulled on the uninformed. The GTX 780 Ti which came later was the high end Kepler.
Posted on Reply
#85
EarthDog
64K
That was some shenanigans that Nvidia pulled on the uninformed.
To be clear, this is the same shenanigans as AMD with Polaris, right?

Edit: cat got your tongue? :p
Posted on Reply
#86
efikkan
I would advise against trying to estimate the performance when we don't know a single thing about its performance characteristics. Back when Turing's details were pretty much confirmed, most predicted something like a ~10% performance increase, and there was an outcry from many claiming it would be a huge failure. Then it surprised "everyone" by offering significant performance gains anyway.

We still don't know what Nvidia's next gen even is at this point. In the worst case, we're looking at a shrink of Turing with some tweaks and higher clocks, but it could also be a major architectural improvement. While I'm not sure I believe the details of this "leak", I do think future improvements will come from architectural changes rather than just "doubling" of SMs every node shrink.
Posted on Reply
#87
moproblems99
kapone32
Multi-GPU is for anyone who wants it. I have not replaced my Vega 64s with 5700 XTs for exactly that reason. If multi-GPU were really dead, motherboard makers would not be touting it and bundling 2-, 3- and 4-way SLI bridges. As much as people complain about it being troublesome, it is not as bad as people make it out to be. There are plenty of games that support multi-GPU anyway. As an example, I get 107 FPS average at 4K playing Jedi Fallen Order @ Ultra settings.
Multi gpu is great for people that like to waste 50% of their money more than 50% of the time. I'll never do it again. Nothing like sitting there waiting for a patch or driver update to get the other card working. Meanwhile, I finished the game already.
Posted on Reply
#88
cucker tarlson
I think they'll wanna:

1. use the performance of the PS5 as a reference, so expect a $350-400 RTX 3060 card trading blows with the 2070 Super/original 2080
2. have a lot of SKU options from the very beginning to be able to respond to changing price points fluidly, hence the new GA103 die
3. jack up the prices in the ~2080 Super/2080 Ti (RTX 3070) territory and upwards. Expect $600 3070 cards.
Posted on Reply
#89
Assimilator
LOL @ HBM. It's dead on consumer cards and it's not coming back. The only reason AMD ever used it was because Fiji was a disgusting power hog, and if they'd added a GDDR controller and memory on top of that, it would've been even more of a TDP turd. I assume they continued with it on Vega because by that time they were invested (and Vega was not without TDP and memory bandwidth issues of its own), but it's pretty significant that with Navi, their first mid- to high-end GPU in a while that doesn't suck power like a sponge, they've ditched HBM entirely.

HBM's higher bandwidth, lower power consumption and increased cost only make sense in compute environments where the memory subsystem is completely maxed out transferring data between the host system and the other cards in that system.
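
To put rough numbers on that bandwidth gap, here is a minimal Python sketch. The example configurations (a 320-bit GDDR6 card at 14 Gbps versus a 4096-bit, four-stack HBM2 setup at 2 Gbps per pin, roughly Radeon VII territory) are illustrative assumptions, not figures from the article.

```python
# Peak bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8 bits per byte.
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

# Illustrative configurations, not from the article:
configs = {
    "GDDR6, 320-bit @ 14 Gbps": (320, 14.0),
    "HBM2, 4 stacks (4096-bit) @ 2 Gbps": (4096, 2.0),
}

for name, (width, rate) in configs.items():
    print(f"{name}: {peak_bandwidth_gbs(width, rate):.0f} GB/s")
```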
Posted on Reply
#90
moproblems99
64K
The GTX 680 has nothing to do with the comparisons that I was making. It was a midrange Kepler. It wasn't the high end. That was some shenanigans that Nvidia pulled on the uninformed. The GTX 780 Ti which came later was the high end Kepler.
While generally I agree, I believe that NV's mid-range 680 was pretty competitive against AMD's high-end...
Posted on Reply
#91
Minus Infinity
Otonel88
10-15% over top-of-the-range Nvidia cards like the 2080 Super or 2080 Ti.
If they are running tests at the moment and the new Ampere generation shows an improvement of 50% over the current high-end cards, then the GPUs to come in the following years could improve over the current generation as follows:
3080 (+15% improvement over 2080)
4080 (+30% improvement over 2080) .. and so on.

They will release the performance gains in batches over the coming years' cards.

(I am just speculating on percentages, but I guess you got my point)
No, I disagree. The next cards after Ampere are a clean-sheet new architecture, called Hopper or Harper, named after a female computer scientist IIRC. IMO 3xxx cards will see around 30% at a minimum compared to 2xxx cards, with the 3080 at least as fast as the 2080 Ti and the 3070 faster than the 2080 Super, but RT will be much faster for all cards. I honestly expect a 100% improvement; they have no choice if they want to make it a feature you want to enable, it's so lame right now. I'd expect 4xxx cards to be 70%+ faster than 2xxx cards and 200% faster in RT.

Hard to recall Nvidia ever doing a lame 10-15% improvement on a new(er) architecture.
Posted on Reply
#92
kapone32
moproblems99
Multi gpu is great for people that like to waste 50% of their money more than 50% of the time. I'll never do it again. Nothing like sitting there waiting for a patch or driver update to get the other card working. Meanwhile, I finished the game already.
Indeed, I have only been running multi-GPU since the GTS 450 days. It is your opinion that it does not seem good, and maybe you had a bad experience. With my Vega 64 Crossfire I do not see a waste of money or time. I can't speak for Nvidia, but every time I update my GPU drivers both cards work perfectly. Maybe I am just lucky.
Posted on Reply
#94
64K
moproblems99
While generally I agree, I believe that NV's mid-range 680 was pretty competitive against AMD's high-end...
It was. The GTX 680 was a little faster than the HD 7970 and about $50 cheaper, until AMD released the HD 7970 GHz Edition high-end GPU, which caught up with Nvidia's upper-midrange GTX 680.
Posted on Reply
#95
moproblems99
64K
It was. The GTX 680 was a little faster than the HD 7970 and about $50 cheaper, until AMD released the HD 7970 GHz Edition high-end GPU, which caught up with Nvidia's upper-midrange GTX 680.
So, adding 1 + 1, it really doesn't matter that it was a mid-range card, does it? In most cases, people are buying for performance, not chip size, transistor count, board label, or anything else. It is pretty much performance. Would anything have really changed if the board was labeled a 100 or 102?

Assimilator
the only reason AMD ever used it was because Fiji was a disgusting power hog
This is exactly the only reason it ever made it to consumer gpus.

kapone32
Indeed, I have only been running multi-GPU since the GTS 450 days. It is your opinion that it does not seem good, and maybe you had a bad experience. With my Vega 64 Crossfire I do not see a waste of money or time. I can't speak for Nvidia, but every time I update my GPU drivers both cards work perfectly. Maybe I am just lucky.
Indeed. The one and only time I used XFire was during its 'heyday': the HD 6800 series. It was total trash. Never again.

Edit: That said, I am half tempted to pick up a second V56 and see what happens. For science.
Posted on Reply
#96
EarthDog
kapone32
Indeed, I have only been running multi-GPU since the GTS 450 days. It is your opinion that it does not seem good, and maybe you had a bad experience. With my Vega 64 Crossfire I do not see a waste of money or time. I can't speak for Nvidia, but every time I update my GPU drivers both cards work perfectly. Maybe I am just lucky.
I wouldn't call you lucky, but I would say you are in the minority. :)

They just need to either REALLY make it work, so scaling is better and consistent across a lot more titles, or abort altogether.
Posted on Reply
#97
kapone32
moproblems99
So, adding 1 + 1, it really doesn't matter that it was a mid-range card, does it? In most cases, people are buying for performance, not chip size, transistor count, board label, or anything else. It is pretty much performance. Would anything have really changed if the board was labeled a 100 or 102?



This is exactly the only reason it ever made it to consumer gpus.



Indeed. The one and only time I used XFire was during its 'heyday': the HD 6800 series. It was total trash. Never again.

Edit: That said, I am half tempted to pick up a second V56 and see what happens. For science.
Well, I have to be honest, the only games I played in those days were Total War Medieval 2 and Total War Shogun 2, plus some Torchlight and Titan Quest, all of which have full multi-GPU support. Then I got into Just Cause 2, Batman Arkham and Deus Ex HR, which all support Crossfire. Then I discovered Sleeping Dogs and Shadow of Mordor. Then Rome 2 was launched and, after that, Attila, which again fully supports Crossfire. Just Cause 3 too, and I will end it with Total War Warhammer 1 & 2, which both support multi-GPU. Even though 3 Kingdoms does not support multi-GPU, I fully expect Warhammer 3 to continue Crossfire support (as long as you can edit the script) as it will be an expansion of the existing game.
Posted on Reply
#98
EarthDog
kapone32
(as long as you can edit the script)
Exactly the stuff most users simply do not want to deal with, and one of the hassles. Most people can't figure out how to install a second card, let alone rename .exe files to get things working. People just want it to work... :)

Unless you are running 4K and don't want to drop $1K on a 60 fps-capable card, I suppose it's viable... but the writing has been on the wall for years now; it is, and rightfully so, a dying breed.
Posted on Reply
#99
Prima.Vera
Like every generation, most likely, but not guaranteed, the performance for the 30xx series will be something like:
3070 = RTX 2080
3080 = RTX 2080 Ti
3080 Ti = Titan RTX

So there shouldn't be any surprises here, tbh...
Posted on Reply
#100
Hyderz
my wallet ran away when it saw the RTX 2080 Ti launch price; to this day it still shivers
Posted on Reply