Friday, August 28th 2020

NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked

Just ahead of the September launch, specifications of NVIDIA's upcoming RTX Ampere lineup have been leaked by industry sources over at VideoCardz. According to the website, three GeForce SKUs are allegedly launching in September - the RTX 3090, RTX 3080, and RTX 3070. The new lineup features major improvements: 2nd-generation ray-tracing cores and 3rd-generation Tensor cores for AI and machine-learning workloads. On the connectivity and I/O front, the new cards use the PCIe 4.0 interface and support the latest display outputs, including HDMI 2.1 and DisplayPort 1.4a.

The GeForce RTX 3090 comes with 24 GB of GDDR6X memory on a 384-bit bus running at 19.5 Gbps, which works out to 936 GB/s of memory bandwidth. The card features the GA102-300 GPU with 5,248 CUDA cores running at 1695 MHz, and is rated for 350 W TGP (board power). While the Founders Edition cards will use NVIDIA's new 12-pin power connector, non-Founders Edition cards from board partners like ASUS, MSI, and Gigabyte will be powered by two 8-pin connectors. Next up are the specs for the GeForce RTX 3080, a GA102-200-based card with 4,352 CUDA cores running at 1710 MHz, paired with 10 GB of GDDR6X memory at 19 Gbps. The memory sits on a 320-bit bus, for 760 GB/s of bandwidth. The board is rated at 320 W and is designed to be powered by dual 8-pin connectors. Finally, there is the GeForce RTX 3070, built around the GA104-300 GPU with an as-yet-unknown number of CUDA cores. We only know that it uses the older non-X GDDR6 memory running at 16 Gbps on a 256-bit bus. The GPUs are supposedly manufactured on TSMC's 7 nm process, possibly the EUV variant.
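Those bandwidth figures follow directly from bus width and per-pin data rate; here is a quick illustrative sketch of the arithmetic (the RTX 3070 number is not part of the leak, it is simply derived the same way):

```python
def memory_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width / 8 bits per byte) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# Leaked figures from the article
print(memory_bandwidth_gb_s(384, 19.5))  # RTX 3090: 936.0 GB/s
print(memory_bandwidth_gb_s(320, 19.0))  # RTX 3080: 760.0 GB/s
print(memory_bandwidth_gb_s(256, 16.0))  # RTX 3070: 512.0 GB/s (derived, not in the leak)
```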
Source: VideoCardz

216 Comments on NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked

#26
ThrashZone
Hi,
Waiting for reviews and then some before even thinking about the 30 series, after the 20-series space invaders :)
But from the looks of it, hydro copper all the way; that ugly air cooler I'll never see in person.
Posted on Reply
#27
Flying Fish
This TGP rather than TDP is going to be annoying and cause confusion for people... I can see it in this thread already.

I mean, how do you compare TGP to the old TDP values? Is the 350 W TGP gonna be similar to the 250 W TDP of the 2080 Ti, plus extra for board power? But then can you see the rest of the components using another 100 W?
Posted on Reply
#28
Krzych
So it is basically confirmed that the 3080 will be at least 20% faster than the 2080 Ti: it has the same number of CUDA cores on a new architecture, so there is no way for it to deliver less considering the specs. The 3090 should be at least 50% faster. That should be the bare-minimum conservative expectation at this point.
Posted on Reply
#29
mouacyk
If true, this will be the first time since the GTX 580 that NVIDIA goes above 256-bit for the *80 again, though still well short of 384-bit, and with a price tag to choke on.
Posted on Reply
#30
BorisDG
Can't wait to grab my KFA2 HOF RTX 3090 baby. <3
Posted on Reply
#31
steen
Flying Fish: This TGP rather than TDP is going to be annoying and cause confusion for people... I can see it in this thread already.

I mean, how do you compare TGP to the old TDP values? Is the 350 W TGP gonna be similar to the 250 W TDP of the 2080 Ti, plus extra for board power? But then can you see the rest of the components using another 100 W?
Yep. Been doing it myself using TBP instead of TGP. Nv says they're similar. I wouldn't be using FrameView, though...
Posted on Reply
#32
Dux
If those specs are real, AMD could easily beat them TFLOPS for TFLOPS, considering the Xbox Series X has a 12 TFLOPS GPU and you can't go wild with TDP in a small console. I think NVIDIA will mostly be pushing for ray-tracing performance gains. Again, if these specs are real. By the math, that's 17.8 TFLOPS on the RTX 3090: 5,248 CUDA cores × 2 FLOPS per clock = 10,496, × 1695 MHz ≈ 17.8 TFLOPS.
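For reference, a minimal sketch of that back-of-the-envelope FP32 math, assuming the usual 2 FLOPS per CUDA core per clock and the leaked clocks (the RTX 3080 line uses the leaked 4,352 cores at 1710 MHz):

```python
def fp32_tflops(cuda_cores: int, clock_mhz: float, flops_per_core_per_clock: int = 2) -> float:
    """Peak FP32 throughput in TFLOPS: cores * FLOPS per core per clock * clock (Hz) / 1e12."""
    return cuda_cores * flops_per_core_per_clock * clock_mhz * 1e6 / 1e12

print(round(fp32_tflops(5248, 1695), 1))  # leaked RTX 3090: 17.8
print(round(fp32_tflops(4352, 1710), 1))  # leaked RTX 3080: 14.9
```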
Posted on Reply
#33
Jinxed
Mark Little: Edit: Maybe comparing to the RTX 3080 is more informative:

RTX 3080 4352 CUDA, 1710 MHz boost, 10 GB, 760 GB/s, 320 W, 7 nm process

RTX 2080 Ti 4352 CUDA, 1545 MHz boost, 11 GB, 616 GB/s, 250 W, 12 nm process

Almost no difference between these cards except on the RT and Tensor side. If the price is much lower than $1000 for the 3080 then you can get 2080 Ti performance on the 'cheap'.
This could be important. The same number of CUDA cores, yet quite a bit higher TDP rating even though it's on 7 nm EUV versus 12 nm, which means one thing and one thing only: the number of RT and Tensor cores will be MASSIVELY higher.
Posted on Reply
#34
chodaboy19
According to this rumor, the RTX 3080 will have variants, and some of those variants will cram in 20 GB of VRAM:
The card is reportedly going to feature up to 10 GB of memory that is also going to be GDDR6X but there are several vendors who will be offering the card with a massive 20 GB frame buffer but at higher prices. Since the memory is running at 19 Gbps across a 320-bit bus interface, we can expect a bandwidth of up to 760 GB/s.
wccftech.com/nvidia-geforce-rtx-3090-rtx-3080-rtx-3070-7nm-graphics-cards-specs-leak-out/
Posted on Reply
#35
ppn
CrAsHnBuRnXp: Good. Still two 8-pin connectors from board partners. Nothing has changed. Just NVIDIA being weird.
The 12-pin is hidden under the shroud; it's just there to allow a detachable shroud with integrated 8-pins instead of them being soldered on like the 1060/2060 FE.

If the dies are bigger than 429 mm², it can't possibly be EUV.
Posted on Reply
#36
BoboOOZ
DuxCro: If those specs are real, AMD could easily beat them TFLOPS for TFLOPS, considering the Xbox Series X has a 12 TFLOPS GPU and you can't go wild with TDP in a small console. I think NVIDIA will mostly be pushing for ray-tracing performance gains. Again, if these specs are real. By the math, that's 17.8 TFLOPS on the RTX 3090: 5,248 CUDA cores × 2 FLOPS per clock = 10,496, × 1695 MHz ≈ 17.8 TFLOPS.
Not to mention AMD has significantly higher clock speeds, and performance scales much better with frequency than with core count.

Problem is, we have no idea what AMD is preparing with RDNA2, outside of reasonable guesses based on those console APUs.
Posted on Reply
#37
RedelZaVedno
WTF is happening with the TDPs of the xx80 series? Smaller nodes, yet higher TDPs. We're approaching IR-panel wattage here. A 3080 alone can heat up a 6 m² room; add a 160 W CPU on top of that and you get a decent standalone room heater. I'm not gonna install AC just to use a PC :(

GTX 980... 165 W
GTX 1080... 180 W
RTX 2080 (S)... 215/250 W
RTX 3080... 320 W
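To put rough numbers on the room-heater point, here is a minimal sketch; the GPU and CPU wattages are the ones discussed in the thread, while the rest-of-system figure is only an assumed ballpark:

```python
# Nearly all of a PC's electrical draw ends up as heat in the room.
WATTS_TO_BTU_PER_HOUR = 3.412

components_w = {
    "RTX 3080 (leaked TGP)": 320,
    "CPU (figure from the thread)": 160,
    "rest of system (assumed ballpark)": 75,
}

total_w = sum(components_w.values())
print(f"Total draw: {total_w} W (~{total_w * WATTS_TO_BTU_PER_HOUR:.0f} BTU/h)")
# Total draw: 555 W (~1894 BTU/h) -- in small-space-heater territory
```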
Posted on Reply
#38
Jism
This is just an enterprise card designed for AI / DL / whatever workloads being pushed into gaming. These are normally the chips that fail the enterprise quality stamp, so having up to 350 W of TDP / TBP is not unheard of. It's like Linus Torvalds said about Intel: stop putting stuff in chips that only makes them look good in really specific (AVX-512) workloads. These RT/Tensor cores probably account for a big chunk of the extra power consumption.

Price is probably between $1,000 and $2,000. NVIDIA is the new Apple.
Posted on Reply
#39
Jinxed
RedelZaVedno: WTF is happening with the TDPs of the xx80 series? Smaller nodes, yet higher TDPs. We're approaching IR-panel wattage here. A 3080 alone can heat up a 6 m² room; add a 160 W CPU on top of that and you get a decent standalone room heater. I'm not gonna install AC just to use a PC :(

GTX 980... 165 W
GTX 1080... 180 W
RTX 2080 (S)... 215/250 W
RTX 3080... 320 W
Simple. There will be a LOT more RT cores on the GPU.
Posted on Reply
#40
medi01
BoboOOZ: Not to mention AMD has significantly higher clock speeds, and performance scales much better with frequency than with core count.
There is no word on whether 1.7 GHz is base or boost frequency.
Given that Sony can push its RDNA chip to 2.1 GHz, if 7 nm is true, NVIDIA's boost frequency must be higher than that.
Jinxed: RT cores
Why on earth would RT cores need to consume anything unless one of those dozen games that support them is running?
Posted on Reply
#41
RedelZaVedno
Jinxed: Simple. There will be a LOT more RT cores on the GPU.
I understand that, but it's still crazy. A gaming PC should not consume 500+ W of power. Most PC gamers don't live in Alaska or Greenland-like places.
Posted on Reply
#42
GeorgeMan
I'll most probably pass. The 1080 Ti should be enough for a couple more years, until we have some real next-gen games and until we see whether ray tracing really takes off or remains a gimmick. By then we should have affordable GPUs with more VRAM than the 1080 Ti's 11 GB.
Posted on Reply
#43
EarthDog
RedelZaVedno: I understand that, but it's still crazy. A gaming PC should not consume 500+ W of power. Most PC gamers don't live in Alaska or Greenland-like places.
And even then, room temp is still 22 °C. You act like they need to sit outside to be cooled. Remember the 295X2? A 500 W GPU...
GeorgeMan: until we see whether ray tracing really takes off
Wait... so consoles and AMD are going RT and you have a question about whether it takes off?
Posted on Reply
#44
Dux
medi01: There is no word on whether 1.7 GHz is base or boost frequency.
Given that Sony can push its RDNA chip to 2.1 GHz, if 7 nm is true, NVIDIA's boost frequency must be higher than that.

Why on earth would RT cores need to consume anything unless one of those dozen games that support them is running?
1.7 GHz is around the boost clock for the RTX 2000 series, is it not?
Edit: It says 1710 MHz boost for the RTX 3080.
Posted on Reply
#45
ZoneDymo
Honestly, to me this feels a bit like Intel 6700K-to-7700K type of stuff.

Just kinda meh all around... and really, 10 GB of VRAM for a 3080? I would have given it at least 16 or so.

Oh well, I guess just like with Intel, this is what you get with no competition in that area; remember, the RX 5700 (XT) is really more in RTX 2060S / RTX 2070S territory.
Posted on Reply
#46
GeorgeMan
EarthDog: And even then, room temp is still 22 °C. You act like they need to sit outside to be cooled. Remember the 295X2? A 500 W GPU...

Wait... so consoles and AMD are going RT and you have a question about whether it takes off?
I'm talking about NVIDIA's approach (RTX and dedicated hardware), not ray tracing as a technology. Consoles use AMD hardware and it'll work in a different way.
Posted on Reply
#47
BoboOOZ
medi01: There is no word on whether 1.7 GHz is base or boost frequency.
Given that Sony can push its RDNA chip to 2.1 GHz, if 7 nm is true, NVIDIA's boost frequency must be higher than that.
Why on earth would RT cores need to consume anything unless one of those dozen games that support them is running?
Oh, I imagine 1.7 GHz is base, not boost. But a 400 MHz boost gain is way over-optimistic; IMO it will be more like 200-250 MHz.
And 7 nm would be at least surprising at this point, but who knows.
Posted on Reply
#48
RedelZaVedno
EarthDog: And even then, room temp is still 22 °C. You act like they need to sit outside to be cooled. Remember the 295X2? A 500 W GPU...

Wait... so consoles and AMD are going RT and you have a question about whether it takes off?
What do you mean? My PC room warms up to 26 °C (and above when outside temps hit 35+ °C for a few days) during the summer months, and I live in a well-insulated house in a moderate climate. Add a 500+ W PC to the room and you easily get above 28 °C. I underclock my 1080 Ti during the summer to get it down to around 160 W while gaming, but I can't see how I could underclock a 320 W GPU to get similar results.
Posted on Reply
#49
Jinxed
medi01: Why on earth would RT cores need to consume anything unless one of those dozen games that support them is running?
Trying to pull two-year-old arguments? Hardly. Given that many AAA titles are ray-tracing enabled, that major game engines like UE now support ray tracing, and that some of the biggest games are going to be ray traced - Cyberpunk 2077 and Minecraft, for example - nobody believes those old arguments anymore. And you seem to have a bit of a split personality: your beloved AMD is saying the new consoles and their RDNA2 GPUs will support ray tracing as well. So what are you really trying to say? Are you preparing for the eventuality that AMD's ray-tracing performance sucks?
Posted on Reply
#50
BoboOOZ
DuxCro: 1.7 GHz is around the boost clock for the RTX 2000 series, is it not?
Edit: It says 1710 MHz boost for the RTX 3080.
Ouch, it does indeed say boost, but things really don't add up. AMD managed to boost their first 7 nm iteration to 1.9 GHz, so why would NVIDIA only manage 1.7? I would believe this only if it's on Samsung 8 nm and that's really not a good node...
Posted on Reply