Friday, August 28th 2020

NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked

Just ahead of the September launch, specifications of NVIDIA's upcoming RTX Ampere lineup have been leaked by industry sources over at VideoCardz. According to the website, three alleged GeForce SKUs are being launched in September - RTX 3090, RTX 3080, and RTX 3070. The new lineup features major improvements: 2nd generation ray-tracing cores and 3rd generation tensor cores made for AI and ML. When it comes to connectivity and I/O, the new cards use the PCIe 4.0 interface and have support for the latest display outputs like HDMI 2.1 and DisplayPort 1.4a.

The GeForce RTX 3090 comes with 24 GB of GDDR6X memory running on a 384-bit bus at 19.5 Gbps. This gives a memory bandwidth of 936 GB/s. The card features the GA102-300 GPU with 5,248 CUDA cores running at 1695 MHz, and is rated for 350 W TGP (board power). While the Founders Edition cards will use NVIDIA's new 12-pin power connector, non-Founders Edition cards from board partners like ASUS, MSI, and Gigabyte will be powered by two 8-pin connectors. Next up are the specs for the GeForce RTX 3080, a GA102-200 based card that has 4,352 CUDA cores running at 1710 MHz, paired with 10 GB of GDDR6X memory running at 19 Gbps. The memory is connected over a 320-bit bus that achieves 760 GB/s of bandwidth. The board is rated at 320 W, and the card is designed to be powered by dual 8-pin connectors. And finally, there is the GeForce RTX 3070, which is built around the GA104-300 GPU with an as-yet-unknown number of CUDA cores. We only know that it has the older non-X GDDR6 memory running at 16 Gbps on a 256-bit bus. The GPUs are supposedly manufactured on TSMC's 7 nm process, possibly the EUV variant.
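The bandwidth figures quoted above follow directly from bus width and per-pin data rate. A minimal sketch of that arithmetic in Python (the function name is just for illustration):

```python
def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    # Bus width in bytes (bits / 8) times the per-pin data rate in Gbps
    # gives peak memory bandwidth in GB/s.
    return bus_width_bits / 8 * data_rate_gbps

# RTX 3090: 384-bit bus at 19.5 Gbps
print(memory_bandwidth_gbs(384, 19.5))  # 936.0

# RTX 3080: 320-bit bus at 19 Gbps
print(memory_bandwidth_gbs(320, 19.0))  # 760.0
```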
Source: VideoCardz
Add your own comment

216 Comments on NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked

#151
Hotobu
TechLurker
Something I noticed not being mentioned is that they seem to be settling on advertising as PCIe 4.0 ready (whether or not it's really necessary yet), which likely answers the question/dilemma GamersNexus touched upon in regards to how NVIDIA would advertise their cards. While it's an insignificant element for those in the know, it's a big thing for the masses who'd just look at it and then panic because their Intel mobo doesn't have PCIe 4.0 capability, and the only platforms on the market with PCIe 4.0 at the time of release are AMD's.

I'm actually hoping NVIDIA decides to completely follow through on marketing, if only to see how Intel would spin things in order to stop would-be RTX 3000 series buyers from panic buying an AMD build to go along with their new GPU (which would hilariously also benefit the same mobo makers frustrated with Intel's failure on intended PCIe 4.0).
I definitely thought about this. There will certainly be some people that go with AMD/X570 just to get PCIe 4.0 (unnecessarily), which is funny because this launch could be a bit of a boost to AMD solely because of that. I'm kind of in that same boat because I'm on X370 now, and while I'd like to stay with it, I am probably going to end up getting the 3090 after the dust settles, depending on benchmarks, RDNA2, etc. But one thing I need to know first is whether there's any benefit to running it on PCIe 4.0. There may not be, but I want a definitive answer on that before I do anything.
Posted on Reply
#152
Caring1
I want to see some funky fish tailed cards.
Posted on Reply
#153
sYn
So 7 nm, woop woop, and the new Tensor cores are able to calculate FP32, and we will have a GPU with triple the TFLOPS xD. Let's see.
Posted on Reply
#154
GhostRyder
Well, seems we were right to be skeptical of the pricing rumors before; while the prices are still high, they are not as big of a jump at the top as thought. Now I only have one major concern...

That 12-pin. I intend to water cool this card, but it seems I am going to be stuck one of two ways:
1: Buy the reference card, but have to purchase a new PSU due to the 12-pin.
2: Hope to god I pick a non-reference design that gets a water block and uses three 8-pin connections.

Guess I'll have to wait and see.
Posted on Reply
#155
Hotobu
GhostRyder
Well, seems we were right to be skeptical of the pricing rumors before; while the prices are still high, they are not as big of a jump at the top as thought. Now I only have one major concern...

That 12-pin. I intend to water cool this card, but it seems I am going to be stuck one of two ways:
1: Buy the reference card, but have to purchase a new PSU due to the 12-pin.
2: Hope to god I pick a non-reference design that gets a water block and uses three 8-pin connections.

Guess I'll have to wait and see.
Why would you have to buy a new PSU? Why can't you use an adapter? Especially if it's a modular PSU you can probably buy a new cable direct from your PSU manufacturer eventually.
Posted on Reply
#156
bubbleawsome
GhostRyder
Well, seems we were right to be skeptical of the pricing rumors before; while the prices are still high, they are not as big of a jump at the top as thought. Now I only have one major concern...

That 12-pin. I intend to water cool this card, but it seems I am going to be stuck one of two ways:
1: Buy the reference card, but have to purchase a new PSU due to the 12-pin.
2: Hope to god I pick a non-reference design that gets a water block and uses three 8-pin connections.

Guess I'll have to wait and see.
Reference cards are supposedly shipping with a 2x8pin->12pin adaptor.
Posted on Reply
#157
lexluthermiester
TheLostSwede
No poll option for Waiting for the reviews?
Looks like it was added and most are voting for it. I'm in with that vote. But in addition to reviews, price points are important. In the current economic conditions, most people are going to be less able to afford the prices NVIDIA was charging for the RTX 20xx line-up.

(Before anyone flames me for repeating what might have already been said, TL;DR. Yes, I know I'm late to the party again.)
Holy Hannah, the wattage on these cards! Granted, they are going to be premium performance cards, but unless buyers already have beefy PSUs, said buyers must also factor in the cost of a PSU in addition to the cost of the GPU. Of course prices haven't been stated, but we can safely presume they will be similar to the RTX 20xx series.

One also has to wonder if NVIDIA has a plan for the GTX line-up and what those cards might look like.
Posted on Reply
#158
Frick
Fishfaced Nincompoop
Flying Fish
This TGP rather than TDP is going to be annoying and cause confusion for people... I can see it in this thread already.

I mean, how do you compare TGP to the old TDP values? Is the 350 W TGP going to be similar to the 250 W TDP of the 2080 Ti, plus extra for board power? But then, can you see the rest of the components using another 100 W?
TGP?
Jinxed
Trying to pull two-year-old arguments? Hardly. Given that many AAA titles are ray-tracing enabled, that major game engines like UE now support ray tracing, and that some of the biggest games are going to be ray traced - Cyberpunk 2077 and Minecraft, just for example - nobody believes those old arguments anymore. And you seem to have a bit of a split personality - your beloved AMD is saying the new consoles and their RDNA2 GPUs will support ray tracing as well. So what are you really trying to say? Are you trying to prepare for the eventuality that AMD's ray-tracing performance sucks?
But it's a good question. If I turn RT off, does the card use less power? If not, why not, given that the reason for the power use is the RT hardware?
Posted on Reply
#159
calkapokole
Leaked specs of the RTX 3080 don't make sense. Knowing the GA100 and A100 chip configuration, it's easy to deduce that the fully enabled GA102 chip will have 6144 CUDA cores split between 96 SMs, which are further organized into 6 GPCs. If the RTX 3080 uses only 68 SMs (~71%), then a lot of silicon is wasted. I think there is space between the GA102 and GA104 chips for a GA103 chip which, fully enabled, will have 5120 CUDA cores (80 SMs, 5 GPCs). The RTX 3080 will probably be based on the GA103 chip if it indeed has 4352 CUDA cores.
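Assuming 64 FP32 CUDA cores per SM (the commenter's implicit premise, following GA100), the SM counts fall straight out of the core counts. A quick sanity check in Python, with the cores-per-SM figure flagged as the assumption:

```python
CORES_PER_SM = 64  # assumption, following GA100; consumer Ampere could differ

def sms_for(cuda_cores: int) -> float:
    # Number of SMs implied by a given CUDA core count.
    return cuda_cores / CORES_PER_SM

print(sms_for(6144))          # 96.0 - full GA102 per this comment
print(sms_for(4352))          # 68.0 - rumored RTX 3080
print(sms_for(5120))          # 80.0 - the hypothesized full GA103
print(round(4352 / 6144, 2))  # 0.71 - the ~71% enablement figure
```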
Posted on Reply
#160
BiggieShady
What's with the 7 nm process power consumption? 3080 vs. 2080 Ti: similar specs, +150 MHz on the GPU and faster GDDR, but maybe fewer RAM chips... and the result is +100 W on a 7 nm node. I wouldn't expect the power envelope to expand by a hundred watts even on the same node.
Posted on Reply
#161
saki630
Without benchmarks to show the performance difference between the 3080 and the 3090 at the same settings, we are going to be fighting over nothing here. It's obvious the 3080 is the best choice if it's priced where it should be. The 3090 is the 'Ti' variant that the 2080 Ti people will purchase and see performance gains from. Then, some time around March 2021, the real 3090 Ti variant releases and prices adjust accordingly.

I have a 1080 Ti, and I want a performance increase (3080+) without spending a kidney. This 1080 Ti was $550. Someone tell me what I can purchase for $500-700 in the next month that will give me a performance increase in my sexsimulator69?
Posted on Reply
#162
Hardware Geek
Jism
This is just an enterprise card designed for AI / DL / whatever workload being pushed into gaming. These cards normally fail the enterprise quality stamp. So having up to 350 W of TDP / TBP is not unknown. It's like Linus Torvalds said about Intel: stop putting stuff in chips that only makes them look good in really specific (AVX-512) workloads. These RT/Tensor cores proberly count up big for the extra power consumption.

Price is proberly in between $1000 and $2000. Nvidia is the new Apple.
Do you mean *probably*?
Posted on Reply
#164
BiggieShady
Ah, beloved Gainward aka Palit division for EU, good to see it still going strong.
Posted on Reply
#165
CandymanGR
3090 launch price: $1400
2080 Ti launch price (standard edition): $999

I laugh at some people who really believed NVIDIA when it said: "Ampere will be cheaper than what Turing was". (edited)
Even some tech sites mentioned that. It is so funny when people believe in hope rather than in reality. And the "70% faster than Volta" NVIDIA claims is a joke. Not even in RTX scenarios will the difference be that big. Maybe it will be that much in CUDA processing, and that's all.

P.S. That's a 40% increase in price. I bet the difference in performance (in gaming) will be less than this (maybe 25-30% in real-world scenarios). Mark this post for reference when reviews come out. Over and out.
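The 40% figure is easy to verify from the two launch prices quoted above:

```python
turing_price = 999    # 2080 Ti launch price (standard edition)
ampere_price = 1400   # rumored 3090 launch price

# Relative price increase from Turing to Ampere flagship
increase = (ampere_price - turing_price) / turing_price
print(f"{increase:.1%}")  # 40.1%
```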
Posted on Reply
#166
RandallFlagg
Mark Little
...
RTX 3090 5248 CUDA, 1695 MHz boost, 24 GB, 936 GB/s, 350 W, 7 nm process

RTX 2080 Ti 4352 CUDA, 1545 MHz boost, 11 GB, 616 GB/s, 250 W, 12 nm process
...

RTX 3080 4352 CUDA, 1710 MHz boost, 10 GB, 760 GB/s, 320 W, 7 nm process

RTX 2080 Ti 4352 CUDA, 1545 MHz boost, 11 GB, 616 GB/s, 250 W, 12 nm process

Almost no difference between these cards except on the RT and Tensor side. If the price is much lower than $1000 for the 3080 then you can get 2080 Ti performance on the 'cheap'.
Yes, it's not a bad upgrade but it is predictable. Basically everything in their lineup shifts one level, 3080=2080Ti + higher clock, 3070=2080+higher clock. If the pattern follows to the midrange, which I think it will, we'll see slightly higher than 2070 / 2070 Super performance from the 3060 / 3060 Ti.

The biggest impact to future PC games and the capabilities of PC games will be if they take the 1650/1660 series and include ray tracing and DLSS at 2060+ levels of performance. That will be a mainstream card and would become a new baseline for developers to target for games to be released in 2-3 years.

Still, I suspect we will see the same performance at the same price points for a while (~6 months after release), regardless of what the name is. Only the 3090 offers significantly more performance, and that's very much a niche product with more marketing value, as those types of cards typically garner 0.1% of market share.

Ironically the pricing situation may hinge a lot on potential competition from Intel and its new Xe discrete GPU.
Posted on Reply
#167
GhostRyder
Hotobu
Why would you have to buy a new PSU? Why can't you use an adapter? Especially if it's a modular PSU you can probably buy a new cable direct from your PSU manufacturer eventually.
The PSU is no longer supported, I guarantee it. It's a modular design, but it uses screw-in round connectors and I have had it for a while. It's a 1300 W Gold PSU.
bubbleawsome
Reference cards are supposedly shipping with a 2x8pin->12pin adaptor.
Oh, I missed that. Then I am no longer worried; I'll just get a reference 3090 and a water block and use the adaptor for a while.
Posted on Reply
#168
saikamaldoss
Can't wait to know more about RDNA2 so I can make an informed decision... NVIDIA or AMD.

My Vega 64 can't do 4K 30 fps in Project CARS 3. I get 26 fps, but it's still smooth thanks to FreeSync. Can't wait to upgrade.
Posted on Reply
#169
Prior
NVLink SLI is only available on the 3090. Why would you want to SLI a beast, NVIDIA?
Posted on Reply
#170
P4-630
Prior
NVLink SLI is only available on the 3090. Why would you want to SLI a beast, NVIDIA?
If you got unlimited cash to burn.
Posted on Reply
#171
ppn
2080 SUPER
Transistors: 13,600 million / Shading Units: 3072 / RT Cores: 48

3070
Transistors: 30,000 million / Shading Units: 3072 / RT Cores: 96

What, other than doubling the RT cores from 48 to 96, did doubling the transistor count achieve? OMG, this could have been a 6144 CUDA core count for the transistor budget it has.
Posted on Reply
#172
medi01
Hotobu
Raytracing should take visual fidelity a step further.
Had that been the case, people wouldn't need to ask Epic whether the Unreal PS5 demo was using DXR-like calls or not.

RT fails to deliver on its main promise: _easier_to_develop_ realistic reflections/shadows.

Short term, it could evaporate the way PhysX did.
Posted on Reply
#173
rtwjunkie
PC Gaming Enthusiast
GhostRyder
Buy the reference card, but have to purchase a new PSU due to the 12-pin.
From what I've seen, adapters will be included.
Posted on Reply
#174
RandallFlagg
medi01
Had that been the case, people wouldn't need to ask Epic whether Unreal PS5 demo was using DXR like calls or not.

RT fails to deliver on its main promise: _easier_to_develop_ realistic reflections/shadows.

Short term, it could evaporate the way PhysX did.
PhysX did not evaporate. It became so ubiquitous that people don't know it's there anymore. It's used by Unreal Engine 3+, Unity, and a host of others.
Posted on Reply
#175
RoutedScripter
I come here as someone who does not like spoilers. Just days away from the event, this seems like a total psycho obsession from some of the people who think they're doing something noble with leaks. At the very least the news media, if they are eager to profit off the leaks because of drama and traffic, should put up big spoiler warnings and some standards in this regard. I'm so sick of this. No, I do not know what the leak is; I only came here to say this, and I will be going on a tech-site blackout until I watch the proper reveal. Yes, I was hiding my eyes so as not to take a single peek at the content or any comments; I did not read any posts in this thread either.
Posted on Reply
Add your own comment