Friday, August 28th 2020

NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked

Just ahead of the September launch, specifications of NVIDIA's upcoming RTX Ampere lineup have been leaked by industry sources over at VideoCardz. According to the website, three alleged GeForce SKUs are set to launch in September - the RTX 3090, RTX 3080, and RTX 3070. The new lineup features major improvements: 2nd-generation ray-tracing cores and 3rd-generation tensor cores designed for AI and ML workloads. When it comes to connectivity and I/O, the new cards use the PCIe 4.0 interface and support the latest display outputs such as HDMI 2.1 and DisplayPort 1.4a.

The GeForce RTX 3090 comes with 24 GB of GDDR6X memory running on a 384-bit bus at 19.5 Gbps, which works out to 936 GB/s of memory bandwidth. The card features the GA102-300 GPU with 5,248 CUDA cores running at 1695 MHz, and is rated for 350 W TGP (board power). While the Founders Edition cards will use NVIDIA's new 12-pin power connector, non-Founders Edition cards from board partners like ASUS, MSI, and Gigabyte will be powered by two 8-pin connectors. Next up is the GeForce RTX 3080, a GA102-200 based card with 4,352 CUDA cores running at 1710 MHz, paired with 10 GB of GDDR6X memory running at 19 Gbps. The memory is connected over a 320-bit bus that achieves 760 GB/s of bandwidth. The board is rated at 320 W and the card is designed to be powered by dual 8-pin connectors. And finally, there is the GeForce RTX 3070, which is built around the GA104-300 GPU with an as-yet-unknown number of CUDA cores. We only know that it uses the older non-X GDDR6 memory running at 16 Gbps on a 256-bit bus. The GPUs are supposedly manufactured on TSMC's 7 nm process, possibly the EUV variant.
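As a quick sanity check, the quoted bandwidth figures follow directly from bus width times per-pin data rate: (bits / 8) x Gbps = GB/s. A minimal Python sketch, purely illustrative; the 512 GB/s figure for the RTX 3070 is derived from the quoted bus width and memory speed rather than stated in the leak:

def mem_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    # Peak memory bandwidth in GB/s: (bus width in bits / 8 bytes) * per-pin data rate in Gbps
    return bus_width_bits / 8 * data_rate_gbps

print(mem_bandwidth_gb_s(384, 19.5))  # RTX 3090: 936.0 GB/s
print(mem_bandwidth_gb_s(320, 19.0))  # RTX 3080: 760.0 GB/s
print(mem_bandwidth_gb_s(256, 16.0))  # RTX 3070: 512.0 GB/s (derived, not from the leak)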
Source: VideoCardz

216 Comments on NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked

#51
Dux
BoboOOZ: Ouch, it does indeed say boost, but things really don't add up. AMD managed to boost their first 7nm iteration at 1.9GHz, why would Nvidia only manage 1.7? I would believe this only if it was on Samsung 8nm and if it's really not a good node...
Well, it says 1650 MHz boost for my RTX 2060 Super, but it still boosts to 1850 MHz out of the box, and to 2000 MHz+ with a slight OC.
#52
RedelZaVedno
BoboOOZ: Ouch, it does indeed say boost, but things really don't add up. AMD managed to boost their first 7nm iteration at 1.9GHz, why would Nvidia only manage 1.7? I would believe this only if it was on Samsung 8nm and if it's really not a good node...
It is on Samsung's 8nm (comparable to TSMC's 10nm); all leakers pointed in that direction (Samsung's "5nm" in the best-case scenario). And yes, it is a shitty node compared to TSMC's 7nm EUV, hence the shitty clock speeds.
DuxCro: Well, it says 1650 MHz boost for my RTX 2060 Super, but it still boosts to 1850 MHz out of the box, and to 2000 MHz+ with a slight OC.
A 5,248-shading-unit GPU is a different beast than a 1,920-SU GPU... more SUs, lower clock speeds.
#53
M2B
That 1.7 GHz boost is not an indicator of the actual boost clocks under gaming loads.
The 2080 Ti, for example, is rated for 1545 MHz, and you'll never see one under 1750 MHz in actual games unless something is broken with the cooling.
#54
Metroid
"HDMI 2.1 and DisplayPort 1.4a. "

Finally, now, all monitors that requires a huge bandwidth will come with hdmi 2.1.
#55
Dux
RedelZaVedno: It is on Samsung's 8nm (comparable to TSMC's 10nm); all leakers pointed in that direction (Samsung's "5nm" in the best-case scenario). And yes, it is a shitty node compared to TSMC's 7nm EUV, hence the shitty clock speeds.
If I remember correctly, Nvidia said that AMD reserved most of the 7nm capacity over at TSMC. But they went ahead and beat AMD to reserving 5nm for next year. So wait for the RTX 3000 Super series on TSMC 5nm.
#56
rtwjunkie
PC Gaming Enthusiast
Raendor: Only 10GB on the 3080? Seriously? And 8GB on the 3070 is the same we've had for x70 ever since Pascal. That's lame.
Do you really “need” more than 10GB VRAM?
#57
RedelZaVedno
DuxCro: If I remember correctly, Nvidia said that AMD reserved most of the 7nm capacity over at TSMC. But they went ahead and beat AMD to reserving 5nm for next year. So wait for the RTX 3000 Super series on TSMC 5nm.
Not a chance. Apple and AMD are TSMC's prime partners. Nvidia can get a piece of 5nm production, but not the production scale it needs for PC gaming GPUs. I can see 4xxx HPC parts and high-end Quadros being on TSMC's 5nm, but nothing else.
#58
EarthDog
GeorgeMan: I'm talking about Nvidia's approach (RTX and dedicated hardware), not ray tracing as a technology. Consoles use AMD hardware and it'll work in a different way.
The point, however, is that RT tech is here... it isn't a gimmick with everyone all in. Capeesh? :)
RedelZaVedno: What do you mean? My PC room's temperature warms up to 26C (and above when outside temps are hitting +35C for a few days) during the summer months, and I live in a well-insulated house in a moderate climate. Add a +500W PC to the room and you easily get above 28C. I underclock my 1080 Ti during the summer months to get it to consume around 160W during gaming, but I can't see how I could underclock a 320W GPU to get similar results.
Have a cookie...just saying 300W is nothing compared to some cards in the past. :)
#59
Vayra86
They're having a laugh. I might hard pass on this for another gen. Still left wondering how on earth this is all worth it just for a handful of pretty weak RT titles...
#60
BoboOOZ
RedelZaVedno: It is on Samsung's 8nm (comparable to TSMC's 10nm); all leakers pointed in that direction (Samsung's "5nm" in the best-case scenario). And yes, it is a shitty node compared to TSMC's 7nm EUV, hence the shitty clock speeds.
Well, it seems even the guys from VideoCardz aren't sure it's 7nm, so it does look quite surprising. However, RDNA1 was on plain 7nm, not the P or EUV variants. But maybe Nvidia's boost ratings are conservative, as others are pointing out; we'll have to see.
#61
Jinxed
Vayra86: They're having a laugh. I might hard pass on this for another gen. Still left wondering how on earth this is all worth it just for a handful of pretty weak RT titles...
As in Cyberpunk 2077? Yeah, right, that's a weak title. Minecraft? Yeah, also a weak title. Anything made on UE for the foreseeable future? Also weak and unimportant. Console ports made for DXR? Also unimportant .. ;) Riiiiight.
#62
Turmania
Seems a bit power hungry but we will know for sure with reviews.
#63
BoboOOZ
RedelZaVedno: Not a chance. Apple and AMD are TSMC's prime partners. Nvidia can get a piece of 5nm production, but not the production scale it needs for PC gaming GPUs. I can see 4xxx HPC parts and high-end Quadros being on TSMC's 5nm, but nothing else.
Indeed, and also TSMC seems to prefer splitting its capacity among its clients rather than allowing one client to book the whole capacity for a given node.
#65
Nyek
DisplayPort 1.4a, seriously?
#66
TheoneandonlyMrK
DuxCro: If I remember correctly, Nvidia said that AMD reserved most of the 7nm capacity over at TSMC. But they went ahead and beat AMD to reserving 5nm for next year. So wait for the RTX 3000 Super series on TSMC 5nm.
You realise AMD and TSMC announced a partnership on 5nm before that rumour came out.
As for TSMC 5nm Supers, that's dreamy IMHO.

Soooo, 300 watts max, some said yesterday; you can't exceed two 8-pins at 300, they said. Balls, I said, you wouldn't need heavier-gauge wire.

350 watts at base clocks; what's the max OC pull on that, 500? We'll see.

I own a Vega 64 ('course you can), and I'll take this opportunity to welcome Nvidia back into George Foreman grill territory; it's been lonely here for two years :p
#67
Maelwyse
Mark Little: So let's see

RTX 3090 5248 CUDA, 1695 MHz boost, 24 GB, 936 GB/s, 350 W, 7 nm process

RTX 2080 Ti 4352 CUDA, 1545 MHz boost, 11 GB, 616 GB/s, 250 W, 12 nm process

I guess all the potential fab process power savings were erased by the extra RAM, RAM speed, CUDA, RT and tensor cores.

Edit: Maybe comparing to the RTX 3080 is more informative:

RTX 3080 4352 CUDA, 1710 MHz boost, 10 GB, 760 GB/s, 320 W, 7 nm process

RTX 2080 Ti 4352 CUDA, 1545 MHz boost, 11 GB, 616 GB/s, 250 W, 12 nm process

Almost no difference between these cards except on the RT and Tensor side. If the price is much lower than $1000 for the 3080 then you can get 2080 Ti performance on the 'cheap'.
There are BIG differences between the 3080 and the 2080 Ti.
1. Clock speed: 1710 vs. 1545 MHz, more than a 10% higher clock. That right there explains MOST of the power difference, as well as the efficiency differences.
2. RTX core changes. You can call this IPC, if you want to, if my understanding is right. Supposedly the RTX cores are hugely more efficient.
3. Heat. I am only just now experiencing how much heat a video card pushes into the room. Let's put it this way: you can buy a ~$300 air-conditioning unit that exhausts to the outside. Those are rated for roughly 8,000-12,000 BTU of cooling per hour. In my experience, one can cool my bedroom from 80F to 70F in about 10 minutes; your mileage will vary. A simple conversion shows 320 watts is roughly 1,092 BTU per hour (see the quick conversion sketch below). If my uneducated, guesswork math is close to right, that's roughly 10 degrees per hour that needs to be cooled, or vented out of the room, on top of what your computer puts out without the graphics card. I know my room gets HOT if I run games for a few hours solid, and it's going to be roughly 30% more heat from the video card. And I'm still running a 10-series card.
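For reference, the conversion in point 3 above is just watts multiplied by about 3.412 (1 W of continuous dissipation is roughly 3.412 BTU/h). A minimal Python sketch, purely illustrative, using the board power figures from the leak:

WATTS_TO_BTU_PER_HOUR = 3.412  # 1 W dissipated continuously ~= 3.412 BTU/h

def heat_output_btu_per_hour(watts: float) -> float:
    # Heat a component dumps into the room, in BTU/h, given its sustained power draw
    return watts * WATTS_TO_BTU_PER_HOUR

print(round(heat_output_btu_per_hour(320), 1))  # RTX 3080 board power: ~1091.8 BTU/h
print(round(heat_output_btu_per_hour(350), 1))  # RTX 3090 board power: ~1194.2 BTU/h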
#69
EarthDog
Is this thread about NV GPUs soon to be released or AMD nodes?
#70
Jinxed
EarthDog: Is this thread about NV GPUs soon to be released or AMD nodes?
Exactly. From the attempts to downplay ray tracing to the AMD fans talking about future nodes, it doesn't seem like the Red team has much confidence in RDNA2 / Big Navi. I for one am really curious just how much Nvidia is going to push ray tracing forward. The rumours about a 4x increase could be true, given the TDP and classic CUDA core counts from this leak. Maybe even more than that? Can't wait to see.
#71
BoboOOZ
Well, a bit more on topic: I saw the poll, and apparently 5% of users are willing to wait 5 years or more for Intel to come up with competitive high-end desktop graphics :p ...
#72
AnarchoPrimitiv
Wow, these ARE power hungry. Now I'm really starting to believe the leaks that Nvidia was forced to do this because RDNA2 is that competitive, and that the second-biggest Navi will be the price-to-performance-to-power-usage star of the new-gen cards.

Now I'm really excited for RDNA2... but I did just get a 5700 XT last November, so in all honesty I might just check out the Xbox Series X instead... I've never been excited about consoles, but it's definitely different this time around.
#73
Dux
Idk, should I upgrade from my HD 4850 512MB GDDR3? Not much performance difference, it seems, minus the lack of ray tracing. :roll:
#74
ZoneDymo
rtwjunkie: Do you really “need” more than 10GB VRAM?
Bit of a hard question to answer; do you "need" anything in this space? Do you even "need" a dedicated GPU?

This is about high-end gaming, and more VRAM to work with is better: higher-resolution textures, shadow quality, and other stuff.
10GB on a new flagship 3080 is... just extremely lackluster.

Like I said, it's like Big N is following Big I and just selling us a nothingburger due to the lack of competition in these price brackets.
#75
BoboOOZ
DuxCro: Idk, should I upgrade from my HD 4850 512MB GDDR3? Not much performance difference, it seems, minus the lack of ray tracing. :roll:
Well if you're only playing Baldur's Gate II, ray tracing isn't supported anyway...