Tuesday, December 15th 2020

NVIDIA GeForce RTX 3080 Ti Graphics Card Launch Postponed to February

In the past, we have heard rumors about NVIDIA's upcoming GeForce RTX 3080 Ti graphics card. With a January release reportedly scheduled, we were just a few weeks away from it. The new graphics card is designed to fill the gap between the RTX 3080 and the higher-end RTX 3090 by offering the same GA102 die, the only difference being that the 3080 Ti uses the GA102-250 variant instead of the GA102-300 die found in the RTX 3090. It allegedly has the same CUDA core count of 10496 cores, the same 82 RT cores, 328 Tensor cores, 328 texture units, and 112 ROPs. However, the RTX 3080 Ti is supposed to bring the GDDR6X memory capacity down to 20 GB, instead of the 24 GB found on the RTX 3090.
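For context, peak memory bandwidth follows directly from bus width and per-pin data rate. Below is a minimal sketch of that arithmetic (our own illustration; the rumored card's 320-bit bus and 19 Gbps GDDR6X are assumptions carried over from the RTX 3080, not confirmed specifications):

# Peak bandwidth in GB/s = (bus width in bytes) x per-pin data rate in Gbps
def peak_bandwidth(bus_bits, gbps):
    return bus_bits / 8 * gbps

print(peak_bandwidth(320, 19.0))   # RTX 3080, 10 GB: 760.0 GB/s
print(peak_bandwidth(320, 19.0))   # rumored RTX 3080 Ti, 20 GB (assumed 320-bit, 19 Gbps): 760.0 GB/s
print(peak_bandwidth(384, 19.5))   # RTX 3090, 24 GB: 936.0 GB/s

Under those assumptions, the extra 10 GB would come from higher-density or double-sided memory placement rather than a wider bus, so bandwidth would match the RTX 3080 rather than the RTX 3090.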

However, all of that is going to have to wait a little bit longer. Thanks to information obtained by Igor Wallossek of Igor's Lab, we have data indicating that NVIDIA's upcoming high-end GeForce RTX 3080 Ti graphics card has been postponed to a February release. Previous rumors suggested that we would get the card in January with a price tag of $999. That, however, has changed, and NVIDIA has allegedly pushed the launch to February. It is not yet clear what the cause is, but we speculate that the company cannot meet the high demand that the new wave of GPUs is generating.
Sources: Igor's Lab, via VideoCardz

121 Comments on NVIDIA GeForce RTX 3080 Ti Graphics Card Launch Postponed to February

#101
Vayra86
Valantar: Didn't the 970 just have a single double density die, though, giving that last 0.5GB a 32-bit bus width (or was it two/64-bit)? This time around it would at least be several, making the performance difference much smaller (effectively 10GB with a 320-bit bus + for example 6GB with a 192-bit bus). Of course they'd need to include some sort of data priority system in the driver to minimize the effects of this difference.
It is possible. The GTX 660 for example had 0.5 GB on 64-bit and 1.5 GB on 192-bit, I believe; the 550 Ti also had a different split.


Source: www.anandtech.com/show/6159/the-geforce-gtx-660-ti-review/2

And there is no SLI issue now with these bandwidths, so don't rule it out. I don't know if they added support for it; it has been absent since Pascal.
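To put rough numbers on such a split (a back-of-the-envelope sketch; the GTX 660 figures use its stock 6 Gbps GDDR5, while the 19 Gbps GDDR6X for the hypothetical modern split is an assumption, not a known spec):

# Per-segment peak bandwidth in GB/s = (bus width in bytes) x per-pin data rate
def segment_bandwidth(bus_bits, gbps):
    return bus_bits / 8 * gbps

# GTX 660 (2 GB): first 1.5 GB interleaved across 192-bit, last 0.5 GB on 64-bit
print(segment_bandwidth(192, 6.0))   # 144.0 GB/s
print(segment_bandwidth(64, 6.0))    # 48.0 GB/s

# Hypothetical modern split: 10 GB on 320-bit plus 6 GB on 192-bit GDDR6X
print(segment_bandwidth(320, 19.0))  # 760.0 GB/s
print(segment_bandwidth(192, 19.0))  # 456.0 GB/s

The slow segment would still run at well over half the speed of the fast one in the modern case, versus a third of it on the GTX 660, which is why the penalty would be smaller.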
Posted on Reply
#102
lexluthermiester
MxPhenom 216: They don't though. Nvidia's cards are still beating out AMD's offerings at 1440p to 4K for right now.
With some current games, maybe. However, some games at highest settings already go above 10GB, and the 3080 is being held back by that limitation. This will only become more pronounced in a short time. This is not the first time GPU performance has been limited by a lack of VRAM.
MxPhenom 216: The amount of VRAM doesn't tell the whole story and never has.
No, but it tells part of the story and an important part.
Vayra86: It is possible. The GTX 660 for example had 0.5 GB on 64-bit and 1.5 GB on 192-bit, I believe
Only the 2GB version. The 1.5GB and 3GB versions were full bandwidth.
Posted on Reply
#103
MxPhenom 216
ASIC Engineer
lexluthermiester: With some current games, maybe. However, some games at highest settings already go above 10GB, and the 3080 is being held back by that limitation. This will only become more pronounced in a short time. This is not the first time GPU performance has been limited by a lack of VRAM.

No, but it tells part of the story and an important part.


Only the 2GB version. The 1.5GB and 3GB versions were full bandwidth.
What games?

Cyberpunk looks to be the only game released this year that could max out a 3080 frame buffer, and that's only with RT.

With how quickly we get new GPUs (yearly), it doesn't make a whole lot of sense to buy a card with future game requirements in mind when it's only speculation, because at that point you'd never buy a card. Buy what's best and fits your budget for today's games (or your workloads specifically if you need one for compute, etc.), and when it's not enough, a better card will be out that will trump what you have on every metric.

The VRAM narrative being pushed on forums lately is way exaggerated.
Posted on Reply
#104
Valantar
Vayra86: It is possible. The GTX 660 for example had 0.5 GB on 64-bit and 1.5 GB on 192-bit, I believe; the 550 Ti also had a different split.


Source: www.anandtech.com/show/6159/the-geforce-gtx-660-ti-review/2

And there is no SLI issue now with these bandwidths, so don't rule it out. I don't know if they added support for it; it has been absent since Pascal.
Interesting! Also, given the number of memory chips present and the overall bus width, a solution like this is arguably less problematic on a contemporary high-end SKU than on those older ones. It definitely wouldn't surprise me.
Posted on Reply
#105
lexluthermiester
MxPhenom 216: What games?

Cyberpunk looks to be the only game released this year that could max out a 3080 frame buffer, and that's only with RT.

With how quickly we get new GPUs (yearly), it doesn't make a whole lot of sense to buy a card with future game requirements in mind when it's only speculation, because at that point you'd never buy a card. Buy what's best and fits your budget for today's games (or your workloads specifically if you need one for compute, etc.), and when it's not enough, a better card will be out that will trump what you have on every metric.

The VRAM narrative being pushed on forums lately is way exaggerated.
Ok, think whatever you want. The rest of us will get on with reality...
Posted on Reply
#106
Vayra86
MxPhenom 216: What games?

Cyberpunk looks to be the only game released this year that could max out a 3080 frame buffer, and that's only with RT.

With how quickly we get new GPUs (yearly), it doesn't make a whole lot of sense to buy a card with future game requirements in mind when it's only speculation, because at that point you'd never buy a card. Buy what's best and fits your budget for today's games (or your workloads specifically if you need one for compute, etc.), and when it's not enough, a better card will be out that will trump what you have on every metric.

The VRAM narrative being pushed on forums lately is way exaggerated.
Cyberpunk is past gen development, not next gen ;)

It's a crystal ball thing; there's no conclusive evidence today either way. But if you look back, we see the mainstream follows consoles, and those have equal or more VRAM with a weaker GPU. Not less. It's easy to conclude the midrange will simply require 8 GB - it already does, as per the Cyberpunk and Godfall examples - and if you want higher resolutions, it will probably move towards or beyond 10.

You might buy a GPU for two years... I buy them to last three to five, and at 3440x1440, Cyberpunk is happy to use 7 GB on medium-high settings. That's 2020, not 2025 or even 2022. The 3080 is just badly balanced, and everyone is free to ignore that, but I'm not. I like to have good resale value on my cards going down the line, so that if and when I upgrade, I can just pull out 100-200 bucks, sell the old card, and move up. The TCO that way is incredibly low - talking 100 EUR or less per year of gaming - and I'm almost always up to date and using sub-top GPUs.
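A quick back-of-the-envelope check of that claim (a rough sketch with the illustrative figures above: 100-200 EUR net per upgrade, cards kept three to five years):

# Yearly cost if each upgrade costs 100-200 EUR out of pocket after selling
# the old card, and a card is kept for 3-5 years.
for net_cost in (100, 200):
    for years in (3, 5):
        print(f"{net_cost} EUR over {years} years = {net_cost / years:.0f} EUR/year")
# Worst case (200 EUR every 3 years) is about 67 EUR/year; the rest land lower.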
Posted on Reply
#107
lexluthermiester
Vayra86: Cyberpunk is past gen development, not next gen ;)
Good point!
Posted on Reply
#108
N3M3515
MxPhenom 216: Nvidia's cards are still beating out AMD's offerings at 1440p to 4K for right now.
I'm always amused by the word "beating", like it's something noticeable in a blind test, for example. One would think that beating means something like 25%+, but 5%? 8%? Gimme a break.
It's more like trading blows: in some games AMD "beats" Nvidia and in some games Nvidia "beats" AMD. (Hell, on some review sites the 6800 XT is like 2% ahead of the 3080 in the 23-game total.) In the rest it's equal performance. Of course there are some outliers where the 3070 "beats" the 6800 XT, and in some the 6800 XT "beats" the 3090.
Posted on Reply
#109
MxPhenom 216
ASIC Engineer
N3M3515: I'm always amused by the word "beating", like it's something noticeable in a blind test, for example. One would think that beating means something like 25%+, but 5%? 8%? Gimme a break.
It's more like trading blows: in some games AMD "beats" Nvidia and in some games Nvidia "beats" AMD. (Hell, on some review sites the 6800 XT is like 2% ahead of the 3080 in the 23-game total.) In the rest it's equal performance. Of course there are some outliers where the 3070 "beats" the 6800 XT, and in some the 6800 XT "beats" the 3090.
The point I'm trying to make is that there doesn't appear to be any evidence that the 10GB frame buffer on the 3080 is its downfall at those resolutions right now.
Posted on Reply
#110
lexluthermiester
MxPhenom 216: The point I'm trying to make is that there doesn't appear to be any evidence that the 10GB frame buffer on the 3080 is its downfall at those resolutions right now.
Right now, no. But it's right up at the limit. And that's just for games. For many other tasks GPUs are used for, 10GB is a limiting factor now, a serious one in some cases.
Posted on Reply
#111
Valantar
lexluthermiester: Right now, no. But it's right up at the limit. And that's just for games. For many other tasks GPUs are used for, 10GB is a limiting factor now, a serious one in some cases.
That's true, but there's also a question about whether one can really complain - Geforce is after all a very explicitly gaming-oriented product line, and while one can obviously argue the unfairness of gatekeeping pro/premium features and so on, you're ultimately buying a product that's meant to game on and do some general acceleration of home user oriented workloads. There's always a point where your specific needs force you to step past the line between amateur/home user equipment and professional equipment, and at that point you're likely using said equipment for things that really don't leave you much reason to complain about home user equipment being insufficient. Of course one can argue that a $700 GPU ought to handle pretty much anything you throw at it, which ... well, it does, really.
Posted on Reply
#112
lexluthermiester
Valantar: Geforce is after all a very explicitly gaming-oriented product line
And that's a fair statement. However, with current games already bumping up against the 8GB and 10GB barriers, future gaming advances will leave these cards coming up short.
Valantar: Of course one can argue that a $700 GPU ought to handle pretty much anything you throw at it, which ... well, it does, really.
My 2080 with 8GB is already limiting a few things that I could be doing much faster if the card had 16GB of VRAM. Regardless, the performance level of the 3080 is not as attractive with an artificial limit of 10GB. A 3080 Ti with 20GB is much better, but not for $1000. $850? OK, that's more attractive. An alternate version of the 3080 (or 3070) with 16GB for $700? Now that could be justified.

Right now, NVIDIA is just handing business to AMD, because RTRT is not as important to some people, while more VRAM is.
Posted on Reply
#113
Valantar
lexluthermiester: And that's a fair statement. However, with current games already bumping up against the 8GB and 10GB barriers, future gaming advances will leave these cards coming up short.

My 2080 with 8GB is already limiting a few things that I could be doing much faster if the card had 16GB of VRAM. Regardless, the performance level of the 3080 is not as attractive with an artificial limit of 10GB. A 3080 Ti with 20GB is much better, but not for $1000. $850? OK, that's more attractive. An alternate version of the 3080 (or 3070) with 16GB for $700? Now that could be justified.

Right now, NVIDIA is just handing business to AMD, because RTRT is not as important to some people, while more VRAM is.
$850 for a 20GB 3080 Ti barely covers the gross DRAM cost increase over a 10GB 3080 for Nvidia, though, let alone margins for them and their partners and any other cost increases (such as needing double-sided boards). I sincerely doubt that'll happen for a while.
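To put rough numbers on that (purely illustrative; the per-GB GDDR6X price and the board-cost delta below are assumptions, not known figures):

# Incremental BOM cost of adding 10 GB of GDDR6X versus the 10 GB card
def extra_bom_cost(extra_gb, price_per_gb, board_delta):
    return extra_gb * price_per_gb + board_delta

# Assumed $12/GB for GDDR6X and $20 extra for a double-sided board
print(extra_bom_cost(10, 12.0, 20.0))  # 140.0 USD, against a $150 MSRP increase

At those assumed prices, the $150 gap between $700 and $850 leaves almost nothing for extra margin anywhere down the chain.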
Posted on Reply
#114
lexluthermiester
Valantar: $850 for a 20GB 3080 Ti barely covers the gross DRAM cost increase over a 10GB 3080 for Nvidia, though
I doubt that. If that were true, $700 for the 3080 would be in the same situation, especially for AIBs using custom PCBs and cooling. $850 would allow for a very reasonable margin of profit for everyone involved.
Posted on Reply
#115
MxPhenom 216
ASIC Engineer
lexluthermiester: I doubt that. If that were true, $700 for the 3080 would be in the same situation, especially for AIBs using custom PCBs and cooling. $850 would allow for a very reasonable margin of profit for everyone involved.
Maybe for GDDR6. But GDDR6X is a lot more expensive, and for a Ti card Nvidia will not drop to GDDR6. Also, only Micron is making it right now.
Posted on Reply
#116
lexluthermiester
MxPhenom 216: Maybe for GDDR6. But GDDR6X is a lot more expensive, and for a Ti card Nvidia will not drop to GDDR6. Also, only Micron is making it right now.
That's a really good point. I wonder what the BOM data looks like? My guess is that it's not super expensive. Micron wants to sell memory, not sit on inventory, so they are likely going to be charging reasonable prices for bulk orders.
Posted on Reply
#117
MxPhenom 216
ASIC Engineer
lexluthermiester: That's a really good point. I wonder what the BOM data looks like? My guess is that it's not super expensive. Micron wants to sell memory, not sit on inventory, so they are likely going to be charging reasonable prices for bulk orders.
If they can fulfill those orders. From what I have heard, they can't. There's not nearly as much GDDR6X being made compared to GDDR6.
Posted on Reply
#118
N3M3515
I'm going to bookmark this and wait 1 year maybe 1 and a half :D
Posted on Reply
#119
Sir Alex Ice
Wait till people realize that Chinese New Year is in February and they launch with exactly 0 stock because there will be no production.
Posted on Reply