Monday, February 14th 2022

NVIDIA Provides a Statement on MIA RTX 3090 Ti GPUs

NVIDIA's RTX 3090 Ti graphics card could very well be a Spartan from 343 Industries' Halo, in that it too is missing in action. Originally announced at CES 2022 for a January 27th release, the new halo product for the RTX 30-series family even had some of its specifications revealed in a livestream. However, the due date came and went more than half a month ago, and NVIDIA had still said nothing about the why and the how of it - or about when gamers hoping to snag the best NVIDIA graphics card of this generation should ready their F5 keys (and bank accounts). Until now: in a statement to The Verge, NVIDIA spokesperson Jen Andersson said that "We don't currently have more info to share on the RTX 3090 Ti, but we'll be in touch when we do." Disappointed? So are we.

While the reasons for the RTX 3090 Ti's delayed launch still aren't clear - and with NVIDIA's response, we're left wondering if they ever will be - there were warning signs that not all was well with the card's launch. The consensus seems to be that NVIDIA found some last-minute production issues with the RTX 3090 Ti, which prompted an emergency delay of the card's launch. The purported problems range from issues with the card's PCB and BIOS to its 21 Gbps GDDR6X memory modules - but it's unclear which of these (or which combination) truly prompted the delay.
Source: The Verge

31 Comments on NVIDIA Provides a Statement on MIA RTX 3090 Ti GPUs

#26
Vayra86
Nater: I still say it's because no Radeon 6950 XT showed up. There's no reason for it in this market, especially if they're having issues producing one, which it seems they are.
This has never stopped Nvidia before
ir_cow: They probably realized releasing another video card that costs 3K is stupid, especially when the 40 series and AMD's cards are due out later this year.
This has never stopped Nvidia before

The only thing stopping Nvidia or its cards... is HEAT, boys. Heat.
They built a boost algorithm that works better than the hardware it's used on. Every time we see Space Invaders or EVGA producing yet another shite cooler, it's a heat problem.

Or, put differently for this current gen: Ampere is shit on Samsung 8nm, as it was initially scaled for TSMC.
Nobody in their right mind willingly chooses to up the TDP at the top end of the stack by nearly a third in one gen - and that's just counting the non-Ti. Every lower configuration can 'make it' fine, albeit also with inflated TDPs. But the top end... that's new territory. We didn't need Raja after all to get a solid handle on >300 W cards in the consumer space, did we...

It's a trend with all newer components now: to get more performance, we get more heat, and the margin for error is thin, but hardware is stopped from frying itself by smart algorithms.
#27
Raiden85
Speaking as a Strix 3090 OC owner: the Ti variant of the 3090 is absolutely stupid. Just look at the spec difference - it's so tiny that I doubt you'll see more than a few frames of difference, if any.

Look at the spec difference between the 3080 and 3090: it's pretty massive on paper, but in reality it's what, 10 to 15 FPS faster in the best cases? For a 3090 Ti you would be paying way more than for a 3090, and you'd get a card with a much larger power draw - so most likely hotter - for what, 0 to 5 FPS, if you're very lucky.

It would be a DOA product at launch, ridiculed by reviewers; it would be the worst Ti card ever released, and NVIDIA knows it.
#28
Dr. Dro
Vayra86: They built a boost algorithm that works better than the hardware it's used on. Every time we see Space Invaders or EVGA producing yet another shite cooler, it's a heat problem.
Sorry, call me old school, but give me a stable static clock over whatever rubbish this GPU Boost system is, any day, every day... I actually set a custom curve having my card target a ~1100 MHz base so it flatlines at the maximum possible overboost bin all the time (which turns out to be the 1755 MHz target I want it to run at). GPU Boost is especially awful on cards with a lower power limit; you will pretty much never see it behave correctly - this thing loves to throttle. Trust me when I tell you it is a better experience to have stable, flatlined clocks at a very modest vcore and absolutely no thermal or power throttling whatsoever than to just let this boost algorithm rubbish run wild.
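
(If you want to replicate the flat-clock half of this without a GUI, the clock lock itself can be scripted. Here's a minimal Python sketch using the pynvml NVML bindings; it assumes the nvidia-ml-py package, a single GPU at index 0, and admin rights. The voltage-curve part of the tweak is not exposed by public NVML and still needs a tool like MSI Afterburner.)

```python
# Minimal sketch: pin a flat graphics clock via NVML (pynvml bindings).
# Assumes the nvidia-ml-py package, one GPU at index 0, and admin/root rights.
# The voltage/frequency curve offset itself is NOT exposed by public NVML;
# that part still has to be done in a tool like MSI Afterburner.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

# min == max pins the graphics clock to a flat line, e.g. 1755 MHz
pynvml.nvmlDeviceSetGpuLockedClocks(gpu, 1755, 1755)

# ... game / benchmark runs here with flatlined clocks ...

pynvml.nvmlDeviceResetGpuLockedClocks(gpu)  # return to stock boost behaviour
pynvml.nvmlShutdown()
```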
#29
AusWolf
Dr. Dro: Sorry, call me old school, but give me a stable static clock over whatever rubbish this GPU Boost system is, any day, every day... I actually set a custom curve having my card target a ~1100 MHz base so it flatlines at the maximum possible overboost bin all the time (which turns out to be the 1755 MHz target I want it to run at). GPU Boost is especially awful on cards with a lower power limit; you will pretty much never see it behave correctly - this thing loves to throttle. Trust me when I tell you it is a better experience to have stable, flatlined clocks at a very modest vcore and absolutely no thermal or power throttling whatsoever than to just let this boost algorithm rubbish run wild.
I sort of agree, but sort of don't. I prefer seeing flat clocks myself, though I have to admit that no two scenes in any game are the same, so the card may need less or more power to render them. Targeting a TDP level instead of clocks makes sense for cooling, and is better for your PSU as well.
#30
nguyen
AusWolf: I sort of agree, but sort of don't. I prefer seeing flat clocks myself, though I have to admit that no two scenes in any game are the same, so the card may need less or more power to render them. Targeting a TDP level instead of clocks makes sense for cooling, and is better for your PSU as well.
I use 3 undervolt profiles:
1725 MHz / 750 mV (~280 W)
1830 MHz / 800 mV (~310 W)
1920 MHz / 850 mV (~350 W)
I just toggle between them in-game to find the optimal FPS/efficiency balance; 90% of the time I use 1830 MHz / 800 mV, though.
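
(To make the toggle concrete, here's a hedged Python sketch of scripting such profiles. The millivolt targets themselves are set in the voltage/frequency curve editor of tools like MSI Afterburner - public NVML doesn't expose them - so this sketch approximates each profile with a clock cap plus a power limit. The profile names, GPU index 0, and admin rights are all assumptions.)

```python
# Sketch: toggle between performance profiles from the command line.
# Assumes the nvidia-ml-py package (pynvml), GPU index 0, and admin rights.
# NVML cannot set the millivolt targets above - those live in the
# voltage/frequency curve editor of tools like MSI Afterburner - so this
# approximates each profile with a max-clock cap plus a power limit.
import sys
import pynvml

PROFILES = {              # name: (max graphics clock in MHz, power limit in W)
    "quiet":    (1725, 280),
    "balanced": (1830, 310),
    "max":      (1920, 350),
}

def apply(name: str) -> None:
    mhz, watts = PROFILES[name]
    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    # Cap rather than pin the clock: 210 MHz floor keeps idle downclocking
    pynvml.nvmlDeviceSetGpuLockedClocks(gpu, 210, mhz)
    pynvml.nvmlDeviceSetPowerManagementLimit(gpu, watts * 1000)  # NVML wants mW
    pynvml.nvmlShutdown()

if __name__ == "__main__":
    apply(sys.argv[1] if len(sys.argv) > 1 else "balanced")
```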
#31
Vayra86
Dr. Dro: Sorry, call me old school, but give me a stable static clock over whatever rubbish this GPU Boost system is, any day, every day... I actually set a custom curve having my card target a ~1100 MHz base so it flatlines at the maximum possible overboost bin all the time (which turns out to be the 1755 MHz target I want it to run at). GPU Boost is especially awful on cards with a lower power limit; you will pretty much never see it behave correctly - this thing loves to throttle. Trust me when I tell you it is a better experience to have stable, flatlined clocks at a very modest vcore and absolutely no thermal or power throttling whatsoever than to just let this boost algorithm rubbish run wild.
Meh. As much as I understand the reasoning, I do disagree. A well-built boost algorithm is just extra performance, and usually where it counts.

It requires a different stance, I think, towards overclocking or undervolting.
You're no longer setting the exact situation you want; you're setting the limitations you want. Much like @nguyen here above: three sets of limitations for voltage and boost will ensure you get the maximum clock within each limitation. The net result is probably an equal amount of control compared to the old situation, but still a fluctuation of clocks, where the fluctuation is likely to be overclock potential you'd never have had with a flat clock line.

But... it all depends on how refined and well-designed the boost algorithm is, and what its stock limits are.
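
(A toy illustration of that "set the limitations, not the clocks" idea - emphatically not NVIDIA's actual GPU Boost logic, just a sketch with made-up numbers: from a voltage/frequency curve, pick the highest bin whose estimated power fits under the user-set power limit, unless the thermal limit trips first.)

```python
# Toy model of boost-as-constrained-maximization. NOT NVIDIA's real algorithm;
# the curve, the thermal limit, and the power formula are all illustrative.
VF_CURVE = [  # (clock in MHz, voltage in V), ascending
    (1695, 0.750), (1770, 0.800), (1845, 0.850), (1920, 0.900), (1995, 0.975),
]

def boost_bin(power_limit_w: float, temp_c: float, temp_limit_c: float = 83.0):
    """Return the highest (clock, voltage) bin allowed by the user's limits."""
    if temp_c >= temp_limit_c:          # thermal throttle: drop to the floor
        return VF_CURVE[0]
    best = VF_CURVE[0]
    for clock, volts in VF_CURVE:
        est_power = 0.2 * clock * volts ** 2   # crude P ~ f*V^2 scaling
        if est_power <= power_limit_w:
            best = (clock, volts)       # curve is ascending, keep the highest fit
    return best

print(boost_bin(power_limit_w=280, temp_c=70))  # modest limit -> (1845, 0.85)
print(boost_bin(power_limit_w=350, temp_c=70))  # raised limit -> (1920, 0.9)
```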