
NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked

RTX 3080
10 GB of GDDR6X memory running on a 320-bit bus at 19 Gbps (memory bandwidth capacity of 760 GB/s)
4,352 CUDA cores running at 1710 MHz
320 W TGP (board power)
-------------------------------------------------------------
FP32 (float) performance = 14.88 TFLOPS ???

RTX 2080 Ti

11 GB of GDDR6 memory running on a 352-bit bus at 14 Gbps (memory bandwidth capacity of 616 GB/s)
4,352 CUDA cores running at 1545 MHz
250 W TGP (board power)
------------------------------------------------------------
FP32 (float) performance = 13.45 TFLOPS

This can't be right, only a 10% rasterization performance gain over the 2080 Ti? I REALLY hope Nvidia managed to get some IPC gain out of the new node/arch, or it's gonna be Turing deja vu all over again :(
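For what it's worth, the FP32 figures above fall straight out of the leaked core counts and boost clocks, assuming the usual 2 FLOPs per CUDA core per clock (FMA). A quick sketch of that arithmetic:

```python
# Rough FP32 throughput from the leaked specs: cores x 2 FLOPs/clock (FMA) x boost clock.
def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

rtx_3080 = fp32_tflops(4352, 1710)     # ~14.88 TFLOPS
rtx_2080_ti = fp32_tflops(4352, 1545)  # ~13.45 TFLOPS

print(f"RTX 3080:    {rtx_3080:.2f} TFLOPS")
print(f"RTX 2080 Ti: {rtx_2080_ti:.2f} TFLOPS")
print(f"Raw FP32 uplift: {(rtx_3080 / rtx_2080_ti - 1) * 100:.1f} %")  # ~10.7 %
```

So the leaked numbers are at least internally consistent; anything beyond that ~11% would have to come from architecture rather than raw shader throughput.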
 
Hi,
Waiting for reviews and then some before even thinking about the 30 series, after the 20 series space invaders :)
But from the looks of it, hydro copper all the way; that ugly air cooler I'll never see in person.
 
This TGP rather than TDP is going to be annoying and cause confusion for people... can see it in this thread already.

I mean, how do you compare TGP to the old TDP values? Is the 350 W TGP going to be similar to the 250 W TDP of the 2080 Ti, plus extra for board power? But then, can you see the rest of the components using another 100 W?
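One way to frame the comparison is to subtract a guess for the non-GPU parts of the board from the TGP. Every component wattage below is a made-up placeholder just to illustrate the bookkeeping; none of it comes from the leak:

```python
# Hypothetical bookkeeping: TGP covers the whole board, while the old TDP figure was closer
# to the GPU itself. All component wattages here are invented placeholders for illustration.
tgp_watts = 350  # the 350 W TGP figure being discussed

board_overhead_w = {
    "GDDR6X memory": 60,  # placeholder
    "VRM losses": 35,     # placeholder
    "fans / misc": 15,    # placeholder
}

gpu_only_w = tgp_watts - sum(board_overhead_w.values())
print(f"Implied GPU-only power: {gpu_only_w} W")  # 240 W with these guesses
```

With guesses in that ballpark, roughly 100 W of "everything else" on the board isn't unreasonable, which is exactly why TGP and the old TDP numbers are so easy to mix up.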
 
So it is basically confirmed that the 3080 will be at least 20% faster than the 2080 Ti: if it has the same number of CUDA cores but on a new architecture, there is no way for it to deliver less than that considering the spec. The 3090 should be at least 50% faster. That should be the bare-minimum, conservative expectation at this point.
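For reference, here is how the raw shader math compares with those expectations (the 3090 numbers are the 5248-core / 1695 MHz ones quoted further down the thread), showing how much would have to come from architecture on top of the raw spec:

```python
# Raw FP32 ratios vs the 2080 Ti, from the leaked core counts and boost clocks.
def fp32(cores: int, mhz: int) -> float:
    return cores * 2 * mhz * 1e6  # FLOPs per second

base = fp32(4352, 1545)  # RTX 2080 Ti
for name, cores, mhz in [("RTX 3080", 4352, 1710), ("RTX 3090", 5248, 1695)]:
    print(f"{name}: +{(fp32(cores, mhz) / base - 1) * 100:.0f} % raw FP32 over the 2080 Ti")
# RTX 3080: +11 % raw -> a 20 % real-world gain needs ~8 % more from architecture
# RTX 3090: +32 % raw -> a 50 % real-world gain needs ~13 % more from architecture
```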
 
If true, this will be the first time since the GTX 580 that NVIDIA is going >256-bit for the *80 again, though still well short of 384 bits, and with a price tag to choke on.
 
Can't wait to grab my KFA2 HOF RTX 3090 baby. <3
 
This TGP rather than TDP is going to be annoying and cause confusion for people... can see it in this thread already.

I mean, how do you compare TGP to the old TDP values? Is the 350 W TGP going to be similar to the 250 W TDP of the 2080 Ti, plus extra for board power? But then, can you see the rest of the components using another 100 W?
Yep. Been doing it myself using TBP instead of TGP. NV says they're similar. I wouldn't be using FrameView, though...
 
If those specs are real, AMD could easily beat them TFLOPS for TFLOPS, considering the Xbox Series X has a 12 TFLOPS GPU and you can't go wild with TDP in a small console. I think Nvidia will mostly be pushing for ray tracing performance gains. Again... if these specs are real, the math gives 17.8 TFLOPS for the RTX 3090: 5248 CUDA cores × 2 FLOPs per clock × 1695 MHz ≈ 17.8 TFLOPS.
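For the console comparison, the Series X figure comes from the same shaders × 2 × clock arithmetic; the console numbers below are Microsoft's published ones (52 CUs, i.e. 3328 shaders, at 1825 MHz), while the 3090 numbers are just the leak:

```python
# Paper FP32 throughput: shader count x 2 FLOPs/clock x clock.
def tflops(shaders: int, mhz: int) -> float:
    return shaders * 2 * mhz * 1e6 / 1e12

xbox_series_x = tflops(52 * 64, 1825)  # 52 CUs x 64 lanes -> ~12.15 TFLOPS
rtx_3090_leak = tflops(5248, 1695)     # leaked figures    -> ~17.8 TFLOPS

print(f"Xbox Series X:   {xbox_series_x:.2f} TFLOPS")
print(f"RTX 3090 (leak): {rtx_3090_leak:.2f} TFLOPS")
```

Of course, paper FLOPS across different architectures (RDNA2 vs. whatever Ampere turns out to be) don't translate directly into game performance.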
 
Edit: Maybe comparing to the RTX 3080 is more informative:

RTX 3080 4352 CUDA, 1710 MHz boost, 10 GB, 760 GB/s, 320 W, 7 nm process

RTX 2080 Ti 4352 CUDA, 1545 MHz boost, 11 GB, 616 GB/s, 250 W, 12 nm process

Almost no difference between these cards except on the RT and Tensor side. If the price is much lower than $1000 for the 3080 then you can get 2080 Ti performance on the 'cheap'.

This could be important. The same number of CUDA cores, yet a considerably higher TDP rating even though it's on 7 nm EUV versus 12 nm, means one thing and one thing only: the number of RT and Tensor cores will be MASSIVELY higher.
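A quick perf-per-watt check on the leaked numbers points the same way: on raw FP32 alone the 3080 would actually be less efficient than the 2080 Ti despite the node shrink, which only makes sense if a lot of the power budget is going somewhere other than the shaders. A sketch using only the figures listed above:

```python
# FP32 efficiency implied by the leaked specs (GFLOPS per watt of board power).
cards = {
    "RTX 3080 (leak)": (4352, 1710, 320),  # cores, boost MHz, board power W
    "RTX 2080 Ti":     (4352, 1545, 250),
}
for name, (cores, mhz, watts) in cards.items():
    gflops = cores * 2 * mhz / 1000  # cores x 2 FLOPs/clock x MHz -> GFLOPS
    print(f"{name}: {gflops / watts:.1f} GFLOPS/W")
# RTX 3080 (leak): ~46.5 GFLOPS/W
# RTX 2080 Ti:     ~53.8 GFLOPS/W
```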
 
According to this rumor, the RTX 3080 will have variants, and some of those variants will cram in 20 GB of VRAM:

The card is reportedly going to feature up to 10 GB of memory that is also going to be GDDR6X but there are several vendors who will be offering the card with a massive 20 GB frame buffer but at higher prices. Since the memory is running at 19 Gbps across a 320-bit bus interface, we can expect a bandwidth of up to 760 GB/s.
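The 760 GB/s figure is just the bus width times the per-pin data rate; the same arithmetic gives the 2080 Ti's 616 GB/s:

```python
# Memory bandwidth = bus width (bits) / 8 bits-per-byte * per-pin data rate (Gbps).
def bandwidth_gb_s(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(bandwidth_gb_s(320, 19))  # 760.0 GB/s (leaked RTX 3080, GDDR6X)
print(bandwidth_gb_s(352, 14))  # 616.0 GB/s (RTX 2080 Ti, GDDR6)
```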

 
Good. Still two 8-pin connectors from board partners. Nothing has changed. Just Nvidia being weird.

The 12-pin is hidden under the shroud; this is just for the purpose of making a detachable shroud with integrated 8-pins instead of them being soldered on like the 1060/2060 FE.

If the dies are bigger than 429 mm², it can't possibly be EUV.
 
If those specs are real, AMD could easily beat them TFLOPS for TFLOPS, considering the Xbox Series X has a 12 TFLOPS GPU and you can't go wild with TDP in a small console. I think Nvidia will mostly be pushing for ray tracing performance gains. Again... if these specs are real, the math gives 17.8 TFLOPS for the RTX 3090: 5248 CUDA cores × 2 FLOPs per clock × 1695 MHz ≈ 17.8 TFLOPS.
Not to mention AMD has significantly higher clock speeds, and performance scales much better with frequency than with core count.

Problem is, we have no idea what AMD is preparing with RDNA2, outside of reasonable guesses based on those console APUs.
 
WTF is happening with the TDPs of the xx80 series? Smaller nodes, yet higher TDPs. We're approaching IR-panel wattage here. A 3080 alone can heat up a 6 m² room; add a 160 W CPU on top of that and you get a decent standalone room heater. I'm not gonna install AC just to use a PC :(

GTX 980... 165 W
GTX 1080... 180 W
RTX 2080(S)... 215/250 W
RTX 3080... 320 W
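To put rough numbers on the room-heater point: the CPU figure is the 160 W mentioned above, and the "rest of system" figure is just a guess thrown in for illustration:

```python
# Back-of-the-envelope: essentially all of a PC's power draw ends up as heat in the room.
heat_sources_w = {
    "RTX 3080 (leaked TGP)": 320,
    "CPU (as quoted above)": 160,
    "rest of system (guess)": 70,  # placeholder for fans, drives, PSU losses, etc.
}
total_w = sum(heat_sources_w.values())
hours = 4  # a hypothetical evening gaming session
print(f"~{total_w} W of heat, ~{total_w * hours / 1000:.1f} kWh per {hours} h session")
# -> ~550 W, ~2.2 kWh: like running a small space heater on its low setting the whole time
```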
 
This is just an enterprise card designed for AI / DL / whatever workloads being pushed into gaming. These cards normally fail the enterprise quality stamp, so having up to 350 W of TDP / TBP is not unheard of. It's like Linus Torvalds said about Intel: stop putting stuff in chips that only makes them look good in really specific (AVX-512) workloads. These RT/Tensor cores probably account for a big chunk of the extra power consumption.

Price is probably between $1000 and $2000. Nvidia is the new Apple.
 
WTF is happening with the TDPs of the xx80 series? Smaller nodes, yet higher TDPs. We're approaching IR-panel wattage here. A 3080 alone can heat up a 6 m² room; add a 160 W CPU on top of that and you get a decent standalone room heater. I'm not gonna install AC just to use a PC :(

GTX 980... 165 W
GTX 1080... 180 W
RTX 2080(S)... 215/250 W
RTX 3080... 320 W
Simple. There will be a LOT more RT cores on the GPU.
 
Not to mention AMD has significantly higher clock speeds, and performance scales much better with frequency than with core count.
There is no word on whether 1.7 GHz is the base or boost frequency.
Given that Sony can push its RDNA chip to 2.1 GHz, if 7 nm is true, NV's boost frequency must be higher than that.

Why on earth would RT cores need to consume anything, unless one of those dozen games that support it is running?
 
I'll most probably pass. The 1080 Ti should be enough for a couple more years, until we have some real next-gen games and until we see if ray tracing really proceeds or it'll remain a gimmick. By then we should have affordable GPUs with more VRAM than the 1080 Ti's 11 GB.
 
I understand that, but it's still crazy. A gaming PC should not consume 500+ W of power. Most PC gamers don't live in Alaska- or Greenland-like places.
And even then, room temp is still 22 °C. You act like they need to sit outside to be cooled. Remember the 295X2? A 500 W GPU...

until we see if ray tracing really proceeds
Wait... so consoles and AMD are going RT, and you have a question about whether it proceeds?
 
There is no word on whether 1.7 GHz is the base or boost frequency.
Given that Sony can push its RDNA chip to 2.1 GHz, if 7 nm is true, NV's boost frequency must be higher than that.


Why on earth would RT cores need to consume anything, unless one of those dozen games that support it is running?
1.7 GHz is around the boost clock for the RTX 2000 series, is it not?
Edit: it says 1710 MHz boost for the RTX 3080.
 
Honestly, to me this feels a bit like Intel 6700K to 7700K type of stuff.

Just kinda meh all around... and really, 10 GB of RAM for a 3080? I would at least have given it 16 or so.

Oh well, I guess just like with Intel, this is what you get with no competition in that area; remember, the RX 5700 (XT) is really more around RTX 2060S / RTX 2070S territory.
 
And even then, room temp is still 22 °C. You act like they need to sit outside to be cooled. Remember the 295X2? A 500 W GPU...

Wait... so consoles and AMD are going RT, and you have a question about whether it proceeds?
I'm talking about NVIDIA's approach (RTX and dedicated hardware), not ray tracing as a technology. Consoles use AMD hardware, and it'll work in a different way.
 
There is no word on whether 1.7 GHz is the base or boost frequency.
Given that Sony can push its RDNA chip to 2.1 GHz, if 7 nm is true, NV's boost frequency must be higher than that.
Why on earth would RT cores need to consume anything, unless one of those dozen games that support it is running?

Oh, I imagine 1.7 GHz is base, not boost. But a 400 MHz boost gain is way over-optimistic, IMO; it will be more like 200-250 MHz.
And 7 nm would be at least surprising at this point, but who knows.
 
And even then, room temp is still 22 °C. You act like they need to sit outside to be cooled. Remember the 295X2? A 500 W GPU...

Wait... so consoles and AMD are going RT, and you have a question about whether it proceeds?
What do you mean? My PC room warms up to 26 °C (and above when outside temps hit 35+ °C for a few days) during the summer months, and I live in a well-insulated house in a moderate climate. Add a 500+ W PC into the room and you easily get above 28 °C. I underclock my 1080 Ti during the summer months to get it to consume around 160 W while gaming, but I can't see how I could underclock a 320 W GPU to get similar results.
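For the underclocking point, hitting the same ~160 W target implies a much more aggressive power limit on a 320 W card than on a 1080 Ti; a trivial sketch (whether the firmware even allows a limit that low is a separate question):

```python
# Power-limit percentage needed to reach a given wattage target from stock board power.
target_w = 160
for name, stock_w in [("GTX 1080 Ti", 250), ("RTX 3080 (leak)", 320)]:
    print(f"{name}: ~{target_w / stock_w * 100:.0f} % power limit to hit {target_w} W")
# GTX 1080 Ti:     ~64 %
# RTX 3080 (leak): ~50 %
```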
 
Why on earth would RT cores need to consume anything, unless one of those dozen games that support it is running?
Trying to pull out 2-year-old arguments? Hardly. Given that many AAA titles are ray-tracing enabled, that major game engines like UE now support ray tracing, and that some of the biggest games are going to be ray traced (Cyberpunk 2077 and Minecraft, just for example), nobody believes those old arguments anymore. And you seem to have a bit of a split personality: your beloved AMD is saying the new consoles and their RDNA2 GPUs will support ray tracing as well. So what are you really trying to say? Are you trying to prepare for the eventuality that AMD's ray tracing performance sucks?
 