Tuesday, December 15th 2020

NVIDIA GeForce RTX 3080 Ti Graphics Card Launch Postponed to February

Over the past few weeks, we have heard rumors about NVIDIA's upcoming GeForce RTX 3080 Ti graphics card. With a January release reportedly scheduled, we were just a few weeks away from launch. The new graphics card is designed to fill the gap between the RTX 3080 and the higher-end RTX 3090 by offering the same GA102 die, the only difference being that the 3080 Ti uses the GA102-250 variant instead of the GA102-300 die found in the RTX 3090. It allegedly has the same CUDA core count of 10496 cores, the same 82 RT cores, 328 Tensor Cores, 328 Texture Units, and 112 ROPs. However, the RTX 3080 Ti is supposed to bring the GDDR6X memory capacity down to 20 GB, instead of the 24 GB found on the RTX 3090.

However, all of that is going to wait a little bit longer. Thanks to information obtained by Igor Wallossek of Igor's Lab, we have word that NVIDIA's upcoming high-end GeForce RTX 3080 Ti graphics card has been postponed to a February release. Previous rumors suggested that we would get the card in January at a price tag of $999. That, however, has changed, and NVIDIA has allegedly pushed the launch to February. It is not yet clear what the cause of the delay is; however, we speculate that the company cannot meet the high demand that the new wave of GPUs is generating.
Sources: Igor's Lab, via VideoCardz

121 Comments on NVIDIA GeForce RTX 3080 Ti Graphics Card Launch Postponed to February

#76
my_name_is_earl
Don't worry guys, all two cards per store will be in stock.
Posted on Reply
#77
Fluffmeister
xenocide
Is there really any reason for this GPU to exist?
To one-up the 6900 XT EOL Edition, probably.
Posted on Reply
#78
Gruffalo.Soldier
I'm the only one
Cool, it gives people not shitting money a chance to save enough.
Posted on Reply
#79
Valantar
tigger
Cool, it gives people not shitting money a chance to save enough.
Does it though? I would think a $1000 GPU is still well outside the reach of most people not shitting money. Most gamers use $200-300 GPUs, after all, so for them the difference between $1000 and $1500 likely doesn't matter much.
Posted on Reply
#80
Gruffalo.Soldier
I'm the only one
Valantar
Does it though? I would think a $1000 GPU is still well outside the reach of most people not shitting money. Most gamers use $200-300 GPUs, after all, so for them the difference between $1000 and $1500 likely doesn't matter much.
I would never buy a $1k+ GPU anyway. They are for true shitters. I will be looking at £500-600 max. There is too much elitism now in PCs; that is exactly why people will pay the inflated prices on eBay for new GPUs, because they need to have it now rather than wait. You can't brag online with an old GPU, can you?
Big deal, you have a 3090. Do you have to go on every forum and social media group on the net to post pics of your PC showing the card, and post pics of dumb meaningless benchies showing how much better your score is? IMO good luck to the scalpers; they are providing a way for rich shitheads to spend their money.
Posted on Reply
#81
Ominence
tigger
Big deal, you have a 3090. Do you have to go on every forum and social media group on the net to post pics of your PC showing the card, and post pics of dumb meaningless benchies showing how much better your score is? IMO good luck to the scalpers; they are providing a way for rich shitheads to spend their money.
I share your sentiment, mate. The social media age means that true progress is stunted and profits are undeservedly elevated, thanks to the globally networked herd hive mentality and its FOMO-compensating behaviours.
Posted on Reply
#82
300BaudBob
I have a dream of getting a low-bug version of MSFS next spring and loading it up with 8K textures to look good on my OLED TV. This card could be tempting for that... but so might an AMD card. The price is a bit uncomfortable, but if it would give me a near-steady 30 fps with full eye candy... well, that could be worth it.
The 20 GB would make me less worried when loading up a bunch of 8K textures.
Maybe more a fantasy than a dream ;)
Posted on Reply
#83
spnidel
xenocide
It's almost like VRAM allocation doesn't matter much, given we know Nvidia designs the memory system around the performance of the GPU, and we saw for a decade what slapping more VRAM on cards did (spoiler: literally nothing). There is not a single game out there that is currently bottlenecked by VRAM, and I'm not sure there ever has been.
massive cope
Posted on Reply
#84
neatfeatguy
saki630
Can they stop with the paper launches of crap we don't need? Get us some 3070s ASAP.
I almost got up and walked out of work yesterday when I saw my local MicroCenter showed they had a limited number of 3070 cards (only one model) from ASUS at the store, coupled with the fact they also showed 12+ 5600X CPUs in stock... damn stupid work and me needing to have a job to pay for things! Sadly, I decided it was better to stay employed instead of running off to pay for new hardware.

I really need to stop stalking microcenter.com for hardware right now. I'm still expecting things to settle down and become easily available more towards March... I can wait a few more months.
Posted on Reply
#85
EarthDog
tigger
There is too much elitism
Is it elitism... or is it people saving and spending their hard-earned cash and buying what they want because they can? I worked my ass off to get where I am now. If I happen to buy a $1500 GPU (I wouldn't pay the inflated prices, note), I don't want any whiny-ass people who can't or don't want to afford it, for whatever reason, running their yap. People are excited about being able to get a rare, pricey item and they post it. Who cares! :)

Some of us are beyond a 1440p/75 Hz monitor, where a 3080 or greater is needed for 165 Hz/FPS or 4K/60+ in the first place. ;)
Posted on Reply
#86
Vayra86
lexluthermiester
More irony. You're really good at that.

Sure it would, it would just be an alternate version with alternate specs. Common practice. Seriously, history much?
History says that an x80 Ti will absolutely have to be faster than the 3080 below it. Come on now...
Posted on Reply
#87
lexluthermiester
Vayra86
History says that an x80 Ti will absolutely have to be faster than the 3080 below it. Come on now...
Let's take the GTX 460 for example. There were five different variants of that card: a 768 MB model, three 1 GB models, and a 2 GB model, each with various shader and core counts. Would it not have made more sense to name the 768 MB version a GTX 450 Ti? Yes, it would have. But they didn't, and here we are. AMD has done similar; RX 400 & RX 500 series, anyone? I could go on like that. Logically you might be correct, but in practice NVidia will do as they please with their product line-up and name their products as they please, as they always have.

Now, as for the aforementioned rumor: if NVidia brings out a 16 GB version of the 3080 with slightly reduced VRAM speed, I'm totally OK with that tradeoff. And I couldn't care less if they called it a 3080 SE or just a 3080. The specs are what people should care about, not the name.
Posted on Reply
#88
Valantar
lexluthermiester
Let's take the GTX 460 for example. There were five different variants of that card: a 768 MB model, three 1 GB models, and a 2 GB model, each with various shader and core counts. Would it not have made more sense to name the 768 MB version a GTX 450 Ti? Yes, it would have. But they didn't, and here we are. AMD has done similar; RX 400 & RX 500 series, anyone? I could go on like that. Logically you might be correct, but in practice NVidia will do as they please with their product line-up and name their products as they please, as they always have.

Now, as for the aforementioned rumor: if NVidia brings out a 16 GB version of the 3080 with slightly reduced VRAM speed, I'm totally OK with that tradeoff. And I couldn't care less if they called it a 3080 SE or just a 3080. The specs are what people should care about, not the name.
I mean, Nvidia clearly has the resources to test whether a cut-down memory bus (possibly combined with higher clocked memory) would disadvantage a potential 3080 Ti. It wouldn't even be particularly difficult for them to test, just write a custom BIOS for some test board and run it. If it works well, I have little doubt they'd make that, though I struggle to see a wealth of scenarios where 16GB of VRAM would deliver a performance boost significant enough to outweigh the far more common bandwidth limitations. I don't see it as likely, but I don't have access to Nvidia's engineering resources either.
Posted on Reply
#89
lexluthermiester
Valantar
I mean, Nvidia clearly has the resources to test whether a cut-down memory bus (possibly combined with higher clocked memory) would disadvantage a potential 3080 Ti
Let's be fair, the 20 GB version will be the 3080 Ti. A 16 GB version would likely just be a 3070 Ti, or stick with the 3080 name under a different identifier.
Valantar
It wouldn't even be particularly difficult for them to test, just write a custom BIOS for some test board and run it. If it works well, I have little doubt they'd make that
True
Valantar
though I struggle to see a wealth of scenarios where 16GB of VRAM would deliver a performance boost significant enough to outweigh the far more common bandwidth limitations.
Absolute performance isn't always the deciding factor of a product's focus. The extra 6 GB of VRAM would be very handy for non-gaming tasks that many people do, myself included, which greatly benefit from lots of VRAM, while coming at a price that is not going to be as hard on the pocketbook as a 3080 Ti or 3090.
Posted on Reply
#90
Vayra86
lexluthermiester
Let's take the GTX 460 for example. There were five different variants of that card: a 768 MB model, three 1 GB models, and a 2 GB model, each with various shader and core counts. Would it not have made more sense to name the 768 MB version a GTX 450 Ti? Yes, it would have. But they didn't, and here we are. AMD has done similar; RX 400 & RX 500 series, anyone? I could go on like that. Logically you might be correct, but in practice NVidia will do as they please with their product line-up and name their products as they please, as they always have.

Now, as for the aforementioned rumor: if NVidia brings out a 16 GB version of the 3080 with slightly reduced VRAM speed, I'm totally OK with that tradeoff. And I couldn't care less if they called it a 3080 SE or just a 3080. The specs are what people should care about, not the name.
If they would, sure, but it's a fantasy and will not happen. If anything, we might see a refresh with higher GDDR6X speeds.
Posted on Reply
#91
Ominence
300BaudBob
Maybe more a fantasy than a dream ;)
Life is dreams realised. Go for it, mate :clap:
Posted on Reply
#92
lexluthermiester
Vayra86
If they would, sure, but it's a fantasy and will not happen.
I don't agree. They need a 16 GB model to compete with AMD. Even if there are only limited use-case scenarios for that amount of VRAM ATM, they'll look inferior if they don't match up. And let's be fair, the extra VRAM is going to be useful for future gaming/compute possibilities. There is definitely a market for such a card.
Vayra86
If anything we might see a refresh with higher GDDR6X speeds.
They might do that too!
Posted on Reply
#93
Valantar
lexluthermiester
I don't agree. They need a 16 GB model to compete with AMD. Even if there are only limited use-case scenarios for that amount of VRAM ATM, they'll look inferior if they don't match up. And let's be fair, the extra VRAM is going to be useful for future gaming/compute possibilities. There is definitely a market for such a card.

They might do that too!
I don't know - I'm under the impression that GPU product segmentation and naming, at least as directed at Western markets, has homogenized and become more systematic in recent years (at least partially alongside hardware segmentation becoming clearer and better defined as GPU core components have multiplied and been grouped together). It's been a long, long time since you could buy anything remotely high-end where the same naming tier covered vastly different hardware configurations. The 1060 is the closest, and that was a midrange series. The low end is still a free-for-all, though.
Posted on Reply
#94
lexluthermiester
Valantar
It's been a long, long time since you could buy anything remotely high end where the same naming tier covered vastly different hardware configurations. The 1060 is the closest, and that was a midrange series. The low end is still a free-for-all though.
Good points. Still, anything is possible. I just don't see NVidia letting that market gap stay unfilled.
Posted on Reply
#95
Valantar
lexluthermiester
Good points. Still, anything is possible. I just don't see NVidia letting that market gap stay unfilled.
No, that's true. They do seem to have painted themselves into a bit of a corner, but there are several ways out of that, of course. They did a mixed memory density config on the 970, and the XSX has that too, so I guess they could go that route for a 14/16 GB 3080 Ti, though of course those last few GB would be quite slow.
Posted on Reply
#96
lexluthermiester
Valantar
They did a mixed memory density config on the 970....though of course those last few GB would be quite slow.
Oh hell NO! Bad idea! Let's not have that crap going on again. 2 GB chips ×8 (or 1 GB ×16 with a dual-sided PCB) on a 256-bit bus is perfectly acceptable.
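The chip-count arithmetic in that layout can be sketched quickly. A rough illustration only, assuming the usual 32-bit GDDR channel per chip, with clamshell (dual-sided) mode pairing two chips on one channel:

```python
def memory_config(chips: int, gb_per_chip: int, clamshell: bool = False):
    """Return (capacity in GB, bus width in bits) for a GDDR layout.

    Assumes each chip sits on a 32-bit channel; in clamshell (dual-sided)
    mode two chips share one channel, so doubling the chip count doubles
    capacity but leaves the bus width unchanged.
    """
    bus_bits = chips * 32 // (2 if clamshell else 1)
    return chips * gb_per_chip, bus_bits

# Both layouts from the comment land on 16 GB over a 256-bit bus:
print(memory_config(8, 2))                   # (16, 256)
print(memory_config(16, 1, clamshell=True))  # (16, 256)
```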
Posted on Reply
#97
Vayra86
lexluthermiester
Oh hell NO! Bad idea! Let's not have that crap going on again. 2 GB chips ×8 (or 1 GB ×16 with a dual-sided PCB) on a 256-bit bus is perfectly acceptable.
If Jensen is that arrogant after getting burned with the 970.... wow
Posted on Reply
#98
lexluthermiester
Vayra86
If Jensen is that arrogant after getting burned with the 970.... wow
That was a lesson they likely learned well.
Posted on Reply
#99
Valantar
Didn't the 970 just have a single double-density die, though, giving that last 0.5 GB a 32-bit bus width (or was it two, for 64-bit)? This time around there would at least be several, making the performance difference much smaller (effectively 10 GB on a 320-bit bus plus, for example, 6 GB on a 192-bit bus). Of course, they'd need to include some sort of data priority system in the driver to minimize the effects of this difference.
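For a sense of scale, the split-bus idea above can be put into numbers. This is purely illustrative; the 19 Gbps per-pin rate is an assumed GDDR6X speed, matching what the RTX 3080 ships with:

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) * per-pin rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# Hypothetical split config from the comment: 10 GB on a 320-bit bus
# plus 6 GB on a 192-bit bus, both at an assumed 19 Gbps GDDR6X rate.
print(peak_bandwidth_gbs(320, 19.0))  # 760.0 GB/s for the fast segment
print(peak_bandwidth_gbs(192, 19.0))  # 456.0 GB/s for the slow segment
```

The slow segment still retains well over half the fast segment's bandwidth, which is why the penalty would be far milder than the 970's 32-bit tail.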
Posted on Reply
#100
MxPhenom 216
ASIC Engineer
lexluthermiester
I don't agree. They need a 16GB model to compete with AMD. Even if there are only limited use-case-scenario's for that amount of VRAM ATM, they'll look inferior if they don't match up. And let's be fair, the extra VRAM is going to be useful for future gaming/compute possibilities. There is more than a market for such a card.

They might do that too!
They don't, though. Nvidia's cards are still beating out AMD's offerings at 1440p to 4K right now. The amount of VRAM doesn't tell the whole story, and never has.
Posted on Reply