Monday, January 20th 2020

Rumor: NVIDIA's Next Generation GeForce RTX 3080 and RTX 3070 "Ampere" Graphics Cards Detailed

NVIDIA's next generation of graphics cards, codenamed Ampere, is set to arrive sometime this year, presumably around GTC 2020, which kicks off on March 22nd. Before NVIDIA CEO Jensen Huang officially reveals the specifications of these new GPUs, we have the latest round of rumors coming our way. According to VideoCardz, which cites multiple sources, the die configurations of the upcoming GeForce RTX 3070 and RTX 3080 have been detailed. Built on Samsung's latest 7 nm manufacturing process, this generation of NVIDIA GPUs reportedly offers a big improvement over the previous one.

For starters, the two dies that have appeared carry the codenames GA103 and GA104, corresponding to the RTX 3080 and RTX 3070 respectively. Perhaps the biggest surprise is the Streaming Multiprocessor (SM) count. The smaller GA104 die has as many as 48 SMs, resulting in 3072 CUDA cores, while the bigger, oddly named GA103 die has as many as 60 SMs, for a total of 3840 CUDA cores. These increases in SM count should translate into a notable performance uplift across the board. Alongside the higher SM counts, there are also new memory bus widths. The smaller GA104 die that should end up in the RTX 3070 uses a 256-bit memory bus allowing for 8/16 GB of GDDR6 memory, while its bigger brother, the GA103, has a 320-bit wide bus that allows the card to be configured with either 10 or 20 GB of GDDR6 memory. In the images below you can check out the alleged diagrams for yourself and judge whether they look fake; as always, take this rumor with a grain of salt.
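If the leak is internally consistent, the core counts and memory capacities should fall straight out of the SM counts and bus widths. Here is a minimal Python sketch of that arithmetic, assuming Ampere keeps Turing's 64 CUDA cores per SM and one GDDR6 module (1 GB or 2 GB) per 32-bit channel; both assumptions, and the die-to-card mapping, come from the rumor rather than anything confirmed.

```python
# Sanity-checking the rumored configurations. Assumes Ampere keeps
# Turing's 64 CUDA cores per SM and one GDDR6 module (1 GB or 2 GB)
# per 32-bit memory channel; the die-to-card mapping is the rumor's.
CORES_PER_SM = 64

rumored_dies = {
    "GA104 (RTX 3070?)": {"sms": 48, "bus_bits": 256},
    "GA103 (RTX 3080?)": {"sms": 60, "bus_bits": 320},
}

for name, cfg in rumored_dies.items():
    cuda_cores = cfg["sms"] * CORES_PER_SM
    channels = cfg["bus_bits"] // 32      # one GDDR6 chip per 32-bit channel
    lo_cap, hi_cap = channels * 1, channels * 2
    print(f"{name}: {cuda_cores} CUDA cores, {lo_cap}/{hi_cap} GB "
          f"on a {cfg['bus_bits']}-bit bus")
# -> 3072 cores with 8/16 GB, and 3840 cores with 10/20 GB, matching the leak
```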
Source: VideoCardz

173 Comments on Rumor: NVIDIA's Next Generation GeForce RTX 3080 and RTX 3070 "Ampere" Graphics Cards Detailed

#51
kapone32
Razbojnik: For the money you spent... I can't say I'm impressed with those frames... and I can only imagine how awful the stuttering in 4K is lol
Are you sure about that? There is absolutely no stuttering, and the money I spent on my two Vega 64s was less than buying a brand new 2080 Ti here in Canada (including the water blocks).
#52
Xaled
64K: What are you basing this on?

Let's look at past performance increases to get some clarity about what we can probably expect from Ampere with the lower process node and new architecture.

RTX 2080 (non Super) showed an increase of 37% in average performance over the GTX 1080

GTX 1080 showed an increase of 67% in average performance over the GTX 980

Why in the world are you expecting a lousy 15% increase in performance from Ampere over Turing?
Nah, even that is not correct; Titan to Titan would be the best comparison.
#53
Berfs1
kapone32: Depending on pricing I may make the jump to Nvidia if this holds true. I cringe at the potential price point for these particular cards.
It was rumored that they would be around half the price, though I'm not sure of the accuracy of that statement. But where the 2080 Ti MSRP was $999, I think the 3080 Ti may be $899, the 3080 $599, and the 3070 $379. Just my guess though...
#54
TheoneandonlyMrK
Would also be nice to hear some rumours about the improvements to the RTX hardware and tensor cores too; shader numbers in isolation are not going to tell us much about performance.

@Berfs1 I can't see any of that panning out. Since when did Nvidia improve their best and then undercut it? So I think the prices we have now plus 50 dollars minimum, at least at launch, with Supers or something coming out later once stocks of the 2xxx series run out, makes sense tbf.
#55
ppn
If Samsung's 7 nm EUV provides 40 Mtr/mm² density, just like TSMC's 7 nm DUV, then we get the 3072-CUDA 2080 Super chip shrunk from 545 to ~340 mm². 2070 -> 3060, 2080S -> 3070; that's a 20% performance increase for 2060 -> 3060 and 33% for 2070 -> 3070, combined with a price drop. And because the EUV tools are limited to 429 mm², a 3080 only 25% bigger than the 3070 means ~420 mm². There could be no 3080 Ti at all; this is where the next-gen Hopper comes into play with multi-chip. Now, if 7 nm EUV really is 77 Mtr/mm², or comes close to 64 Mtr/mm² real density, it will be mind-blowing, as a 2080 Ti would shrink into a ~300 mm² die. But I doubt it can be done, since Navi is barely 41 Mtr/mm² real, with 7 nm DUV being 91 Mtr/mm² on paper.
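The die-shrink arithmetic in this comment is easy to reproduce. A minimal sketch, using TU104's public numbers (13.6 B transistors, 545 mm² on 12 nm) and treating the quoted 7 nm density figures as assumptions rather than confirmed Samsung specs:

```python
# Die area at a given logic density: area = transistors / density.
# TU104 figures are public; the 7 nm densities are the assumptions
# from the comment above, not confirmed Samsung numbers.
TU104_TRANSISTORS_M = 13_600  # millions of transistors (RTX 2080 Super die)
TU104_AREA_MM2 = 545

scenarios = [
    ("12 nm, actual", TU104_TRANSISTORS_M / TU104_AREA_MM2),  # ~25 Mtr/mm^2
    ("7 nm EUV @ 40 Mtr/mm^2", 40),
    ("7 nm EUV @ 64 Mtr/mm^2", 64),
]
for label, density in scenarios:
    print(f"{label}: ~{TU104_TRANSISTORS_M / density:.0f} mm^2 "
          f"for a TU104-class chip")
# -> 545, 340 and 213 mm^2 respectively
```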
#56
Berfs1
theoneandonlymrk: Would also be nice to hear some rumours about the improvements to the RTX hardware and tensor cores too; shader numbers in isolation are not going to tell us much about performance.

@Berfs1 I can't see any of that panning out. Since when did Nvidia improve their best and then undercut it? So I think the prices we have now plus 50 dollars minimum, at least at launch, with Supers or something coming out later once stocks of the 2xxx series run out, makes sense tbf.
Cus the 20 series was extremely overpriced from the beginning and everyone knew that.
#57
eidairaman1
The Exiled Airman
Berfs1: Cus the 20 series was extremely overpriced from the beginning and everyone knew that.
Green are space cases in price.
#58
TheoneandonlyMrK
Berfs1: Cus the 20 series was extremely overpriced from the beginning and everyone knew that.
They are not cheap now either, and they have been out a while. Do you think they will drop the 2080 Ti another couple of hundred to fit the 3xxx lineup in? I don't. People did buy them; sure, they didn't fly off the shelf, but weigh that against 7 nm yield issues and this could be the best way for Nvidia to play it out.

They are not known for launching a full lineup in a day; they will introduce their next GPUs slowly and deliberately to maximize profits. Early birds will pay, and then, as yields improve, a Super or something with a price cut, ish, sort of.
#59
64K
medi01: It sounds like it's more than what it actually is.

I have no idea what you are saying. It is what it is. And it is a 37% average increase in performance for an RTX 2080 (non Super) over a GTX 1080 at 1440p in a 23-game suite benched on this site. Have a look at the benches if you want to:

www.techpowerup.com/review/nvidia-geforce-rtx-2080-founders-edition/
#60
ppn
1080 -> 2080 is 1.45x in 4K, but look at the transistor count: 13,600M/7,200M = 1.88x. So in the name of new features like async compute, variable rate shading, RTRT and tensor cores, we did not get the full ~90% improvement implied by the transistor count right away, but 45% instead. The 90% will come later as games start to use those features. But the next gen should provide linear scaling, and that means an RTX 3080 at ~17,000M transistors would be ~25% faster than the 2080S. If we take the clock increases into account (18 Gbps memory and a 2.5 GHz GPU) for another 20%, we get the 50% number at half the power.
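The transistor-budget argument is straightforward to put into numbers. A small sketch using the public transistor counts for GP104 and TU104 and the 1.45x figure quoted in the comment:

```python
# Transistor growth vs. measured uplift, GTX 1080 (GP104) -> RTX 2080 (TU104).
# The 1.45x 4K speedup is the figure quoted in the comment above.
GP104_TRANSISTORS_M = 7_200
TU104_TRANSISTORS_M = 13_600
MEASURED_4K_SPEEDUP = 1.45

transistor_ratio = TU104_TRANSISTORS_M / GP104_TRANSISTORS_M
budget_not_seen = transistor_ratio / MEASURED_4K_SPEEDUP

print(f"Transistor budget grew {transistor_ratio:.2f}x")   # ~1.89x
print(f"Measured 4K uplift: {MEASURED_4K_SPEEDUP:.2f}x")
print(f"Budget spent on RT/tensor/new features rather than "
      f"raster throughput: ~{budget_not_seen:.2f}x")       # ~1.30x
```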
#61
medi01
64K: I have no idea what you are saying. It is what it is. And it is a 37% average increase in performance for an RTX 2080 (non Super) over a GTX 1080 at 1440p in a 23-game suite benched on this site. Have a look at the benches if you want to:

www.techpowerup.com/review/nvidia-geforce-rtx-2080-founders-edition/
We are talking about a card that is 17% more expensive, pulling roughly 1/3rd of the performance ahead, give or take, one (or was it more?) year later than the original.
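Putting the two halves of this exchange together as a perf-per-dollar figure, a minimal sketch; the $599/$699 MSRPs here are the launch prices implied by the "17%" figure (an assumption on my part), and the 1.37x uplift is the TPU number cited above:

```python
# Perf/dollar: did the RTX 2080's uplift outpace its price increase?
GTX_1080_MSRP = 599    # USD launch MSRP implied by the "17%" figure
RTX_2080_MSRP = 699
PERF_UPLIFT = 1.37     # 1440p average from the TPU review cited above

price_ratio = RTX_2080_MSRP / GTX_1080_MSRP
perf_per_dollar_gain = PERF_UPLIFT / price_ratio

print(f"Price: {price_ratio:.2f}x, performance: {PERF_UPLIFT:.2f}x")
print(f"Net perf/dollar improvement: {perf_per_dollar_gain:.2f}x")  # ~1.17x
```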
#62
Space Lynx
Astronaut
Razbojnik: Yeah... pity there are no games worthy of such power. I mean, when these cards come, an average $250 card will be able to rock 1440p and ray tracing on ultra in all games... plus games are being delayed like hell lately. We're getting to the point where there will be no reason to buy expensive GPUs for gaming, just like there's no point in buying expensive CPUs for gaming... a $250 CPU maxes everything. I'm not complaining... I guess? Right? Hmm, there is a gap here. GPUs are getting insanely powerful... while we are getting games of lesser quality. Come to think of it, there's a lot to complain about here.
Huh? I can't even play The Witcher 3 maxed out at 1440p 144 Hz / 144 fps yet, even with an RTX 2080 Super...

Sorry if you don't enjoy high refresh rates, but I do; GPUs have a long way to catch up before I can truly enjoy gaming.
#63
64K
medi01: We are talking about a card that is 17% more expensive, pulling roughly 1/3rd of the performance ahead, give or take, one (or was it more?) year later than the original.
I am not talking about the price increase, and neither was the guy I was replying to. We were talking about the performance increase. If you want to talk about price increases, then yes, I think Nvidia did jack up Turing prices somewhat more than they needed to.
ppn: 1080 -> 2080 is 1.45x in 4K, but look at the transistor count: 13,600M/7,200M = 1.88x. So in the name of new features like async compute and variable rate shading, we did not get the full ~90% improvement implied by the transistor count right away, but 45% instead. The 90% will come later as games start to use those features. But the next gen should provide linear scaling, and that means an RTX 3080 at ~17,000M transistors would be ~25% faster than the 2080S.
A good bit of the transistor increase on Turing is due to the RT cores and tensor cores. If Turing had just stuck with a lot more CUDA cores, the performance increase over Pascal would have been much better, but Nvidia wanted to push RTRT forward.

The bottom line is that Ampere will be on a 7 nm process node, compared to Turing on 12 nm, so it should be a good bit more efficient; there should be room to add a good bit more cores and faster clocks for the same wattage.

IMO the performance increase with Ampere will be somewhere between 35% and 50% over Turing.
#64
dicktracy
Big. Navi. Is. Dead. Don't even bother releasing that expensive chip just to get jebaited by midrange Ampere.
#65
Vayra86
xkm1948: I do wonder how different the overall design will be from the Turing uArch. Also, I hope they don't cut down the tensor cores. It has been really nice to use consumer-level GPUs for DL/ML acceleration.
Looking at the shader counts, I don't think it's going to be a huge change; more like a small refinement of Turing with similar die sizes. 500 additional shaders for the x80.
dicktracy: Big. Navi. Is. Dead. Don't even bother releasing that expensive chip just to get jebaited by midrange Ampere.
Business as usual. AMD still hasn't caught up, and I reckon they might not get there anytime soon either. You don't simply make two generational jumps in one go. So guess what: they will compete on price in the upper midrange, and that is all... once more.
ppn: 1080 -> 2080 is 1.45x in 4K, but look at the transistor count: 13,600M/7,200M = 1.88x. So in the name of new features like async compute, variable rate shading, RTRT and tensor cores, we did not get the full ~90% improvement implied by the transistor count right away, but 45% instead. The 90% will come later as games start to use those features. But the next gen should provide linear scaling, and that means an RTX 3080 at ~17,000M transistors would be ~25% faster than the 2080S. If we take the clock increases into account (18 Gbps memory and a 2.5 GHz GPU) for another 20%, we get the 50% number at half the power.
I think that's realistic; 25% faster is kind of what they need to differentiate from Turing. And as a bonus, they even keep the 2080 Ti performance level somewhat 'premium' for a while longer until they bring out a new big die. I do think a 3080 Ti could remain at Turing's 4352 shaders; it would still obliterate everything (still ~25% above the 3080), and it would mean that this time the card potentially won't cost upwards of $1K. As it should.
#66
TheoneandonlyMrK
ppn: 1080 -> 2080 is 1.45x in 4K, but look at the transistor count: 13,600M/7,200M = 1.88x. So in the name of new features like async compute, variable rate shading, RTRT and tensor cores, we did not get the full ~90% improvement implied by the transistor count right away, but 45% instead. The 90% will come later as games start to use those features. But the next gen should provide linear scaling, and that means an RTX 3080 at ~17,000M transistors would be ~25% faster than the 2080S. If we take the clock increases into account (18 Gbps memory and a 2.5 GHz GPU) for another 20%, we get the 50% number at half the power.
That would be nice. I think you're being optimistic, though. But hopefully.
#67
ZoneDymo
Be sure to empty your wallets, loyal customers.
#68
Vayra86
ZoneDymo: Be sure to empty your wallets, loyal customers.
The more you buy...
#69
Zmon
It's going to be a bit ridiculous if Nvidia keeps its current price hike for the 3xxx series. It was already ridiculous that the 2080 Ti had an MSRP of $1,199, a massive 72% increase over the 1080 Ti's MSRP of $699. If we go back to Kepler and Maxwell, Maxwell even had a price decrease over Kepler of about $50. There isn't really any good justification for the current price gouging besides the "massive die" argument. I hope Nvidia fixes this, but I highly doubt it given the current lack of competition from AMD.
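A small sketch of the flagship launch-price history this comment refers to (using the $1,199 Founders Edition price for the 2080 Ti; the older MSRPs are the cards' launch prices):

```python
# Flagship launch MSRPs, Kepler -> Turing (2080 Ti at its $1,199 FE price).
flagship_msrps = [
    ("GTX 780 Ti (Kepler)", 699),
    ("GTX 980 Ti (Maxwell)", 649),
    ("GTX 1080 Ti (Pascal)", 699),
    ("RTX 2080 Ti (Turing)", 1199),
]

prev_price = None
for card, price in flagship_msrps:
    if prev_price is None:
        print(f"{card}: ${price}")
    else:
        change = (price - prev_price) / prev_price * 100
        print(f"{card}: ${price} ({change:+.0f}% vs. previous flagship)")
    prev_price = price
# Maxwell's flagship came in $50 under Kepler's; Turing's jump is ~+72%.
```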
#70
Vayra86
Zmon: It's going to be a bit ridiculous if Nvidia keeps its current price hike for the 3xxx series. It was already ridiculous that the 2080 Ti had an MSRP of $1,199, a massive 72% increase over the 1080 Ti's MSRP of $699. If we go back to Kepler and Maxwell, Maxwell even had a price decrease over Kepler of about $50. There isn't really any good justification for the current price gouging besides the "massive die" argument. I hope Nvidia fixes this, but I highly doubt it given the current lack of competition from AMD.
But there is a good argument for keeping Turing levels of pricing this time: a similar die size on a smaller node. Effectively, you're looking at a more complicated die here on a node that is new, so yields are not optimal yet. Maxwell was cheap for good reasons: the 970, for example, had a cut-down die and was the real price king, and it was on a very easy and familiar node with very high yields. Even so, the 980 wasn't exactly a bang-per-buck card, and neither was the 980 Ti. Still good though, I agree.

But that doesn't mean the 3080 Ti will keep its price point; see my post above.

Nvidia will want to keep its (royal) margin, but most of its price bumps are actually quite understandable within that rationale. Those 'expensive' Turing dies are effin' huge, and it's only the second gen on 16~12 nm.
#71
eidairaman1
The Exiled Airman
Zmon: It's going to be a bit ridiculous if Nvidia keeps its current price hike for the 3xxx series. It was already ridiculous that the 2080 Ti had an MSRP of $1,199, a massive 72% increase over the 1080 Ti's MSRP of $699. If we go back to Kepler and Maxwell, Maxwell even had a price decrease over Kepler of about $50. There isn't really any good justification for the current price gouging besides the "massive die" argument. I hope Nvidia fixes this, but I highly doubt it given the current lack of competition from AMD.
Do your research; AMD answered last year, and they aren't finished either.
#72
Zmon
eidairaman1: Do your research; AMD answered last year, and they aren't finished either.
Sure, there's plenty of research. AMD can currently only compete with Nvidia at the low-to-mid tier. They are of course planning a high-end GPU, but how that performs remains to be seen. We can talk all day about current die sizes and prices, which will most likely remain the same for Ampere regardless of what AMD does.
#73
cucker tarlson
Looks like a Turing copy-paste.
eidairaman1: AMD answered last year
They answered TU106, with the same performance but a lack of features.
#74
Nihilus
Looks like Nvidia is going for more even performance steps next time around. This gen, the 2070 Super, 2080, and 2080 Super were all closer together than the 2080 Super was to the 2080 Ti.

Next gen, the 3070 should be 2080 Super-like, and the 3080 could be at 2080 Ti levels.

Someone mentioned 16 GB for the 3080 Ti, but I find that highly doubtful unless it is using HBM2. No way they are running a 512-bit bus on GDDR6.

Best guess is 12 GB of GDDR6 at 16 Gbps or more.
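The bus-width reasoning here follows from the standard GDDR6 bandwidth formula. A quick sketch, where the 14 Gbps rate on the rumored cards is my assumption (Turing's common speed) and the 3080 Ti line is the poster's guess, not a leak:

```python
# GDDR6 bandwidth: bus width (bits) / 8 * per-pin data rate (Gbps) = GB/s.
# Each 32-bit channel also implies 1 GB (or 2 GB) of capacity per chip,
# which is why a 12 GB card points to a 384-bit bus.
def gddr6_bandwidth_gbs(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits / 8 * data_rate_gbps

configs = [
    ("RTX 3070 rumor (256-bit @ 14 Gbps)", 256, 14.0),
    ("RTX 3080 rumor (320-bit @ 14 Gbps)", 320, 14.0),
    ("3080 Ti guess (384-bit @ 16 Gbps)", 384, 16.0),
]
for name, bus, rate in configs:
    print(f"{name}: {gddr6_bandwidth_gbs(bus, rate):.0f} GB/s")
# 448, 560 and 768 GB/s -- the last already approaching HBM2 territory
```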
#75
rtwjunkie
PC Gaming Enthusiast
Gungar: HBM is by far the most powerful and efficient graphics memory out there right now. AMD has just been incapable of producing any good GPU for some time now.
For practical, consumer and gaming use, I see GDDR6 being used on GA102, GA103, and GA104. GDDR6 has proven itself; HBM, not so much. I'm with @EarthDog on this.
Posted on Reply