NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked

1.7GHz is around the boost clock for the RTX 2000 series, is it not?
Edit: It says 1710MHz boost for the RTX 3080.
Ouch, it does indeed say boost, but things really don't add up. AMD managed to boost their first 7nm iteration to 1.9GHz, so why would Nvidia only manage 1.7? I would believe this only if it were on Samsung 8nm and that really isn't a good node...
 
Ouch, it does indeed say boost, but things really don't add up. AMD managed to boost their first 7nm iteration to 1.9GHz, so why would Nvidia only manage 1.7? I would believe this only if it were on Samsung 8nm and that really isn't a good node...
Well, it says 1650MHz boost for my RTX 2060 Super, but it still boosts to 1850MHz out of the box, and 2000MHz+ with a slight OC.
 
Ouch, it does indeed say boost, but things really don't add up. AMD managed to boost their first 7nm iteration to 1.9GHz, so why would Nvidia only manage 1.7? I would believe this only if it were on Samsung 8nm and that really isn't a good node...
It is on Samsung's 8nm (comparable to TSMC's 10nm); all the leakers pointed in that direction (Samsung's "5nm" in the best-case scenario). And yes, it is a shitty node compared to TSMC's 7nm EUV, hence the shitty clock speeds.

Well, it says 1650MHz boost for my RTX 2060 Super, but it still boosts to 1850MHz out of the box, and 2000MHz+ with a slight OC.

A GPU with 5248 shading units is a different beast than one with ~2000... more SUs mean lower clock speeds.
 
That 1.7GHz boost is not an indicator of the actual gaming-load boost clocks.
The 2080 Ti, for example, is rated for 1545MHz, and you'll never see one under 1750MHz in actual games unless something is broken with the cooling.
 
"HDMI 2.1 and DisplayPort 1.4a. "

Finally. Now all monitors that require a huge amount of bandwidth will come with HDMI 2.1.
 
It is on Samsung's 8nm (comparable to TSMC's 10nm); all the leakers pointed in that direction (Samsung's "5nm" in the best-case scenario). And yes, it is a shitty node compared to TSMC's 7nm EUV, hence the shitty clock speeds.
If I remember correctly, Nvidia said that AMD reserved most of the 7nm capacity over at TSMC. But they went ahead and beat AMD to reserving 5nm for next year. So wait for an RTX 3000 Super series on TSMC 5nm.
 
Only 10GB on the 3080? Seriously? And 8GB on the 3070 is the same we've had for x70 cards ever since Pascal. That's lame.
Do you really “need” more than 10GB VRAM?
 
If I remember correctly, Nvidia said that AMD reserved most of the 7nm capacity over at TSMC. But they went ahead and beat AMD to reserving 5nm for next year. So wait for an RTX 3000 Super series on TSMC 5nm.
Not a chance. Apple and AMD are TSMC's prime partners. Nvidia can get a piece of 5nm production, but not the production scale it needs for PC gaming GPUs. I can see 4xxx HPC parts and high-end Quadros being on TSMC's 5nm, but nothing else.
 
I'm talking about Nvidia's approach (RTX and dedicated hardware), not ray tracing as a technology. Consoles use AMD hardware and it'll work in a different way.
The point, however, is that RT tech is here... with everyone all in, it isn't a gimmick. Capeesh? :)
What do you mean? My PC room warms up to 26C (and above when outside temps hit +35C for a few days) during the summer months, and I live in a well-insulated house in a moderate climate. Add a 500W PC into the room and you easily get above 28C. I underclock my 1080 Ti during the summer to get it to consume around 160W during gaming, but I can't see how I could underclock a 320W GPU to get similar results.
Have a cookie...just saying 300W is nothing compared to some cards in the past. :)
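For context, a rough sketch of the power-limit arithmetic behind that concern; the 320W and 160W figures are just the example numbers from the quoted post, not measurements:

```python
# Rough power-limit arithmetic using the example numbers quoted above.
stock_w = 320    # rumoured RTX 3080 board power from this leak
target_w = 160   # the summer power budget mentioned in the quoted post
limit_pct = target_w / stock_w * 100
print(f"Hitting {target_w} W on a {stock_w} W card means a ~{limit_pct:.0f}% power limit.")
```

A ~50% power limit is around (or below) the floor many boards expose in their tuning tools, which is presumably why getting 1080 Ti-style summer numbers out of a 320W card looks hard.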
 
They're having a laugh. I might hard-pass on this for another gen. Still left wondering how on earth this is all worth it just for a handful of pretty weak RT titles...
 
It is on Samsung's 8nm (comparable to TSMC's 10nm); all the leakers pointed in that direction (Samsung's "5nm" in the best-case scenario). And yes, it is a shitty node compared to TSMC's 7nm EUV, hence the shitty clock speeds.
Well, it seems even the guys from videocardz aren't sure it's 7nm, so it does look quite surprising. However, RDNA1 was on plain 7nm, not 7nm+ or EUV. But maybe Nvidia's boost ratings are conservative, as others are pointing out; we'll have to see.
 
They're having a laugh. I might hard-pass on this for another gen. Still left wondering how on earth this is all worth it just for a handful of pretty weak RT titles...
As in Cyberpunk 2077? Yeah, right, that's a weak title. Minecraft? Yeah, also a weak title. Anything made on UE for the foreseeable future? Also weak and unimportant. Console ports made for DXR? Also unimportant .. ;) Riiiiight.
 
Not a chance. Apple and AMD are TSMC's prime partners. Nvidia can get a piece of 5nm production, but not the production scale it needs for PC gaming GPUs. I can see 4xxx HPC parts and high-end Quadros being on TSMC's 5nm, but nothing else.
Indeed, and also TSMC seems to prefer splitting its capacity amongst its clients rather than allowing one client to book the whole capacity for a given node.
 
If I remember correctly, Nvidia said that AMD reserved most of the 7nm capacity over at TSMC. But they went ahead and beat AMD to reserving 5nm for next year. So wait for an RTX 3000 Super series on TSMC 5nm.
You realise AMD and TSMC announced a partnership on 5nm before that rumour came out.
As for TSMC 5nm Supers, that's dreamy IMHO.

Soooo, 300 watts max, some said yesterday; you can't exceed 300W with two 8-pins, they said; balls, I said, you won't need heavier-gauge wire.

350 watts at base clocks; what's the max OC pull on that, 500W? We'll see.

I own a Vega 64, 'course you can, and I'll take this opportunity to welcome Nvidia back into George Foreman grill territory; it's been lonely here for two years :p
 
So let's see

RTX 3090 5248 CUDA, 1695 MHz boost, 24 GB, 936 GB/s, 350 W, 7 nm process

RTX 2080 Ti 4352 CUDA, 1545 MHz boost, 11 GB, 616 GB/s, 250 W, 12 nm process

I guess all the potential fab process power savings were erased by the extra RAM, RAM speed, CUDA, RT and tensor cores.

Edit: Maybe comparing to the RTX 3080 is more informative:

RTX 3080 4352 CUDA, 1710 MHz boost, 10 GB, 760 GB/s, 320 W, 7 nm process

RTX 2080 Ti 4352 CUDA, 1545 MHz boost, 11 GB, 616 GB/s, 250 W, 12 nm process

Almost no difference between these cards except on the RT and Tensor side. If the price is much lower than $1000 for the 3080 then you can get 2080 Ti performance on the 'cheap'.
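For anyone wanting to sanity-check those deltas, here's a quick back-of-the-envelope comparison using only the numbers quoted in this thread (leaked figures, so treat them as assumptions, not confirmed specs):

```python
# Relative deltas of the leaked 30-series figures vs. the 2080 Ti, using only
# the numbers quoted above (boost clock, memory bandwidth, board power).
cards = {
    "RTX 3090":    {"boost_mhz": 1695, "bw_gbps": 936, "tdp_w": 350},
    "RTX 3080":    {"boost_mhz": 1710, "bw_gbps": 760, "tdp_w": 320},
    "RTX 2080 Ti": {"boost_mhz": 1545, "bw_gbps": 616, "tdp_w": 250},
}
base = cards["RTX 2080 Ti"]
for name, c in cards.items():
    print(f"{name}: boost {c['boost_mhz'] / base['boost_mhz'] - 1:+.1%}, "
          f"bandwidth {c['bw_gbps'] / base['bw_gbps'] - 1:+.1%}, "
          f"board power {c['tdp_w'] / base['tdp_w'] - 1:+.1%}")
```

On those numbers the 3080 is roughly +11% boost clock, +23% bandwidth and +28% board power over the 2080 Ti, with the same CUDA core count.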

There are BIG differences between the 3080 and the 2080 Ti.
1. Clock speed. 1710 vs 1545 MHz, a >10% higher clock rate. That right there explains most of the power difference, as well as the efficiency differences.
2. RT core changes. You can call this IPC if you want to, if my understanding is right. Supposedly the RT cores are hugely more efficient.
3. Heat. I am only just now experiencing how much heat a video card pushes into the room. Let's put it this way: you can buy a ~$300 air conditioning unit that exhausts to the outside. Those put out roughly 8,000-12,000 BTU of cooling per hour. In my experience, that can cool my bedroom from 80F to 70F in about 10 minutes; your mileage will vary. A simple conversion shows 320 watts is roughly 1,091.9 BTU per hour. If my uneducated, guesswork math is close to right, that's roughly 10 degrees per hour that needs to be cooled or vented out of the room, on top of what your computer puts out without the graphics card. I know my room gets HOT if I run games for a few hours solid, and that's roughly 30% more heat from the video card alone. And I'm still running a 10-series card.
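For what it's worth, the watt-to-BTU part of that checks out; a minimal sketch of the conversion (320 W is the leaked board power used above, and ~3.412 BTU/hr per watt is the standard conversion factor):

```python
# Convert a sustained electrical power draw into heat output in BTU per hour.
# 1 watt sustained = ~3.412 BTU/hr, so a GPU at full tilt is a small space heater.
WATT_TO_BTU_PER_HOUR = 3.41214

gpu_watts = 320  # leaked RTX 3080 board power used in the post above
print(f"{gpu_watts} W -> {gpu_watts * WATT_TO_BTU_PER_HOUR:.1f} BTU/hr")  # ~1091.9 BTU/hr
```

How quickly that actually warms a given room still depends on the room's volume, insulation and ventilation, so the degrees-per-hour estimate remains guesswork.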
 
Is this thread about NV GPUs soon to be released or AMD nodes?
Exactly. From the attempts to downplay ray tracing to the AMD fans talking about future nodes, it doesn't seem like the Red team has much confidence in RDNA2 / Big Navi. I for one am really curious just how much Nvidia is going to push ray tracing forward. The rumours about a 4x increase could be true, given the TDP and classic CUDA core counts from this leak. Maybe even more than that? Can't wait to see.
 
Well, a bit more on topic: I saw the poll, and apparently 5% of users are willing to wait 5 years or more for Intel to come up with competitive high-end desktop graphics :p ...
 
Wow, these ARE power hungry. Now I'm really starting to believe the leaks that Nvidia was forced to do this because RDNA2 is that competitive, and that the second-biggest Navi will be the price-to-performance-to-power star of the new-gen cards.

Now I'm really excited for RDNA2... but I did just get a 5700 XT last November, so in all honesty I might just check out the Xbox Series X instead. Never been excited about consoles, but it's definitely different this time around.
 
Idk, should I upgrade from my HD 4850 512MB GDDR3? Not much performance difference, it seems. Minus the lack of ray tracing. :roll:
 
Do you really “need” more than 10GB VRAM?

Bit of a hard question to answer; do you "need" anything in this space? Do you even "need" a dedicated GPU?

This is about high-end gaming, and more VRAM to work with is better: higher-resolution textures, better shadow quality and other stuff.
10GB on a new flagship 3080 is... just extremely lackluster.

Like I said, it's like Big N is following Big I and just selling us a nothingburger due to the lack of competition in these price brackets.
 