
NVIDIA Cancels GeForce RTX 4080 12GB, To Relaunch it With a Different Name

Anyone with access to DRAMexchange (I don't have it) can see the GDDR6 spot price differences and get an indication (just an indication!).
The spot price for 8Gbit GDDR6 is even lower than GDDR5, and there is a chance the 16Gbit GDDR6 is only around 1.5x the 8Gbit price (and Nvidia probably buys for less than even the spot session lows...).
So, for example, the actual difference between eight 8Gbit GDDR6 ICs (256-bit bus case, 8GB total) and six 16Gbit GDDR6 ICs (192-bit bus case, 12GB total) could be as little as $5, depending on the 16Gbit GDDR6 IC price.
There is a reason the ARC A770 8GB is only $20 apart from the 16GB version, for example, and this could be it (8 x $5 for 8Gbit ICs vs. 8 x $7.50 for 16Gbit ICs). (I don't buy Intel; I just want to push the 16GB version.)
Anyone with access, enlighten us!
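To make the arithmetic concrete, here is a minimal sketch; the per-IC prices are placeholder assumptions for illustration, not actual DRAMexchange quotes:

```python
# Back-of-the-envelope VRAM cost comparison.
# Prices are placeholder assumptions, NOT real DRAMexchange data.
price_8gbit = 5.00    # USD per 8Gbit GDDR6 IC (assumed)
price_16gbit = 7.50   # USD per 16Gbit GDDR6 IC (assumed ~1.5x the 8Gbit price)

# Each GDDR6 IC has a 32-bit interface:
# 256-bit bus -> 8 ICs, 192-bit bus -> 6 ICs.
cost_8gb_256bit = 8 * price_8gbit     # 8 x 8Gbit  = 8GB total
cost_12gb_192bit = 6 * price_16gbit   # 6 x 16Gbit = 12GB total

print(f"8GB/256-bit:  ${cost_8gb_256bit:.2f}")                    # $40.00
print(f"12GB/192-bit: ${cost_12gb_192bit:.2f}")                   # $45.00
print(f"Difference:   ${cost_12gb_192bit - cost_8gb_256bit:.2f}") # $5.00
```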


 
Names are PR. Consumers need to inform themselves about the specs. I will not defend people who buy their GPUs based on naming or nicely colored packaging.
The delay is an estimation. Fill in whatever timespan you like.
Estimation means you have no source. Unless AMD managed to make a properly scaling multi-GPU chiplet card, they are unlikely to have anything competitive with the 4090, so launching around the 4080 would be fine. I would expect the 7900XT to beat the 4080 easily in rasterized games; raytracing performance is unlikely to be as good as Nvidia's.
 
Names have meaning. For a long time, Nvidia's top-tier GPU was the x80. It changed to the x80 Ti when AMD surprised them with Hawaii, and it remained so until Ampere. I would also like to know your source regarding a delay in RDNA3.

Not exactly. We've had the GTX 295, 590 and 690. Then x90 took a long nap until Ampere.
 
More like, they got caught in their own stupidity and are now acting desperately to fix things:

- Pretending there is a scalper/miner shortage on 4090
- "Loyalty program" for buying FE 4090, only for current nvidia owners
- Killing the LHR from 30 series
- And now, cancelling the dumb 4080 12GB.

More "unusual moves" to be expected as Jensen continues to wake up from his hubris enduced coma.
Ah, there are plot twists like that. I have to admit I haven't even been following the news about these new ones much, as their pricing makes them so uninteresting.

But I wouldn't call this a cancellation; rather, it's a renaming to the model it should've been from the beginning.
 
Well, it's more than a renaming, because there is no way they will just call it the 4070 but keep the same price. That would reach a new level of rejection.
 
Wrong.

Per NV's own discovery (it was in their slides), they shockingly found that customers stick with a series (e.g. 970 => 1070 => 2070) rather than sticking with a price bracket.

The 4080 losing its "80" would be a major hit.
That's a matter of perspective; it doesn't make a difference to me or to anyone who is informed beforehand. If you want to talk about sales strategies and uninformed consumers, you are right. Whether nVidia takes a major hit depends more on AMD than on the AD104 naming.
 
Unless AMD managed to make a properly scaling multi-GPU chiplet card, they are unlikely to have anything competitive with the 4090...

Why? You seem to think the RTX 4090 is out of reach; why exactly?

The RX 6950 XT's gap to the RTX 4090 is only 53%.

The performance jump from the previous generation's top dog, the RX 5700 XT, to the RX 6900 XT was 101% in a single move.

AMD can do it.
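For what it's worth, the same numbers as plain arithmetic (both figures taken from the charts referenced above):

```python
# Figures from the TPU relative-performance charts cited above.
gap_to_4090 = 1.53      # RTX 4090 is ~53% faster than the RX 6950 XT
last_gen_jump = 2.01    # RX 5700 XT -> RX 6900 XT was a ~+101% jump

# The uplift AMD needs this round is smaller than the one it just delivered.
print(f"Needed uplift: {gap_to_4090:.2f}x; delivered last gen: {last_gen_jump:.2f}x")
```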
 
Well, it's more than a renaming, because there is no way they will just call it the 4070 but keep the same price. That would reach a new level of rejection.
Just wondering what will happen to the cards that have already been made; will manufacturers BIOS-flash those with a 4070-named BIOS?
 
Unless AMD managed to make a properly scaling multi-GPU chiplet card, they are unlikely to have anything competitive with the 4090... I would expect the 7900XT to beat the 4080 easily in rasterized games; raytracing performance is unlikely to be as good as Nvidia's.

AMD knows for sure how the 7900XT stacks up against the 4090 in pure rasterization. The 7900XT launch is placed closer to the 4080 launch because it will be more comparable in performance to that card. It also buys AMD some time to improve their driver software. The hardware is already finished, probably sitting on pallets in some warehouse's finished-goods section.

Radeon RT cores will be weaker than GeForce RT cores, and there's no indication that AMD will dethrone NVIDIA any time soon in machine learning either.

And one key battleground is the developer environment. NVIDIA stands very tall here.
 
Why? You seem to think the RTX 4090 is out of reach; why exactly?
...
AMD can do it.
I think they can do it, but all indications are that they are using chiplets for the larger GPUs. Thus they will take a power hit compared to a monolithic GPU, as on-die interconnects will now be inter-chip.

AMD knows for sure how the 7900XT stacks up against the 4090 in pure rasterization. ... And one key battleground is the developer environment. NVIDIA stands very tall here.
The 4080 isn't much faster than a 3090 Ti; again, Nvidia's own benchmarks show a 10 to 25% improvement over the 3090 Ti. AMD has claimed a performance-per-watt increase of over 50%. A 7900XT that is 50% faster than a 6950 XT would be comfortably 35% faster than a 3090 Ti. On the other hand, your points about RT cores, machine learning, and developer relations are all valid.
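A rough sketch of that chain of estimates; the 3090 Ti vs. 6950 XT gap is my own placeholder assumption (~10% at 4K), while the other figures are the claims above:

```python
# All inputs are vendor claims or assumptions, not measurements.
r_3090ti_vs_6950xt = 1.10   # ASSUMED: 3090 Ti roughly 10% ahead of the 6950 XT at 4K
r_7900xt_vs_6950xt = 1.50   # AMD's ">50% perf/W" claim read as a raw performance gain

r_7900xt_vs_3090ti = r_7900xt_vs_6950xt / r_3090ti_vs_6950xt
print(f"7900XT vs 3090 Ti: +{(r_7900xt_vs_3090ti - 1) * 100:.0f}%")  # ~+36%
# Compare with Nvidia's own 4080 figures: only +10% to +25% over the 3090 Ti.
```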
 
Just wondering what will happen to the cards that have already been made; will manufacturers BIOS-flash those with a 4070-named BIOS?

Yes. They are probably sitting in some warehouse in bulk packaging anyway, waiting for final firmware. Even if a few sample units were sent out, they will all likely get reflashed with new code that identifies them as whatever model number NVIDIA decides.

There are product stickers on the PCB with the wrong SKU, and some AIB partners might have put the model number on the cooler.

All of the packaging will have to be scrapped, of course.

Remember that Apple's iOS software is RTM'ed shortly before the iPhone launch, maybe 10-14 days out, to give the manufacturer time to flash units for channel distribution and brick-and-mortar stores.
 
AMD needs to make their drivers more stable. I tried an RX 580 and had driver issues; it was BSODing even when just browsing. Moved on to an RTX 2070 and never looked back since.

I don't know if they've improved drastically over time; tbh, I am seriously considering RDNA3 too, in addition to the 4080, for my next upgrade... but if picking the GPU with the most stable drivers is gullible, then I am gullible. There's nothing instinctive about picking Nvidia... I've had two AMD GPUs, an ATI 4870 and an RX 580, and both were driver hell; at some point you get tired of DDU, troubleshooting, etc.
There have been no significant tech-reviewer reports of AMD driver failures. The only reason anyone thinks AMD drivers are bad is anonymous internet posts like this one. Nothing you said can be verified, but it will still make someone casually reading these forums think twice, and it continues to perpetuate this myth.
 
Okay, maybe it doesn't, but it's still within the same performance bracket. Of course, the $900 price is insane if it doesn't beat the 3090... :)
Have people gone mad? How is $900 justifiable for a 4070?? The freaking 3070 beat the 2080 Ti, and for less than half the price!! That would translate to less than half of the 3090's $1500, i.e. $650-700, and even that is pushing the price.
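Spelling that price logic out with last generation's MSRPs (quoted from memory, so treat them as approximate):

```python
# Turing/Ampere-era MSRPs, from memory -- approximate.
rtx_2080ti_msrp = 999   # USD
rtx_3070_msrp = 499     # USD: beat the 2080 Ti at roughly half the price
rtx_3090_msrp = 1500    # USD

# Applying the same "half the old flagship" logic to a would-be 4070:
implied_x70_price = rtx_3090_msrp / 2
print(f"Implied x70 ceiling: ${implied_x70_price:.0f}")  # $750, vs. the $900 asked
```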
 
Estimation means you have no source.
Every forecast is an estimate. My argument was that nVidia can cash in until AMD's new GPUs are physically available.
Unless AMD managed to make a properly scaling multi-GPU chiplet card, they are unlikely to have anything competitive with the 4090...
I agree. Market price depends on demand and on competitive AMD products. As MSRP loses its meaning, we may have to wait for the street price once RX 7000 is available. Soon we will see if that pressures nVidia to drop the AD104 and AD103 in price. I hope so.
 
Of course they'll cash in, both before and after the availability of AMD's new GPUs. The uninformed masses will continue to buy Nvidia even when AMD is better. That is why the 3060 has almost the same street price as the far superior 6700 XT in Canada.
 
The 7900XT launch is placed closer to the 4080 launch because it will be more comparable in performance to that card.

AMD also wants to get into the business of next-gen cards being sold as yet another (more expensive) higher tier.

What year is it, seriously...

all indications are that they are using chiplets for the larger GPUs.
Chiplets are how the underdog trounced Intel.

I doubt Frau Su would do that if it meant losing the flagship competition outright.
 
Pre-binned Navi21 XTXH could hit 3GHz under the right conditions.
Supposedly Navi31 under the same conditions can hit close to 4GHz; let's say 3.9GHz, which is a +30% speed improvement.
The most power-efficient Navi21 GPU in most cases/resolutions was the RX 6800.
That had a 2105MHz boost; add 30% to that (already generous, since 15% is the official TSMC figure for the node difference) and maybe we get a 2735MHz boost for the most efficient Navi31-based GPU (the one they claim has +50% performance/W?).
With such a low frequency, and if we are talking about a 300W to 335W Navi31-based model, the performance potential could be uneventful (relative to what Nvidia can achieve).
Add the pessimistic scenario that the +50% performance/W claim was made using the upcoming FSR 3.0, where RDNA3 may have an advantage, and the conclusions are even more pessimistic regarding performance potential.
Anyway, the above is probably bullshit; I don't believe it, I just examined the possibilities...
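That clock math, written out; every input is a rumor or my own assumption:

```python
# All inputs are rumors/assumptions, not confirmed specs.
navi21_xtxh_max_ghz = 3.0      # binned Navi21 XTXH under the right conditions
navi31_rumored_max_ghz = 3.9   # rumored Navi31 ceiling under the same conditions

scaling = navi31_rumored_max_ghz / navi21_xtxh_max_ghz   # 1.30, i.e. +30%

rx6800_boost_mhz = 2105        # most power-efficient Navi21 SKU (RX 6800)
navi31_efficient_boost_mhz = rx6800_boost_mhz * scaling
print(f"Estimated efficient Navi31 boost: {navi31_efficient_boost_mhz:.0f} MHz")  # ~2737
```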
 
Chiplets are how the underdog trounced Intel.
No one doubts AMD's engineering chops; connecting a multi-die GPU would require a massive off-chip interconnect, but it can be done. If they could pull it off, such a multi-die GPU, with proper scaling, would easily surpass the 4090 in rasterization, but we will see in November.
 
AMD also wants to get into the business of next-gen cards being sold as yet another (more expensive) higher tier.

Both NVIDIA and AMD have excess inventory of previous-generation cards in the channel, as well as unused GPU chips.

As far as I can tell, NVIDIA doesn't have any problem right now selling 4090 cards. The best-binned GPU chips will end up in data centers anyhow.

It's really the low-to-mid-range graphics cards (Ampere and RDNA2) that are the major source of headaches for both companies and have contributed to these weird marketing conundrums.

What year is it, seriously...

A very tricky one for AMD, Intel, NVIDIA, and others.

Chiplets are how the underdog trounced Intel.

Well, the hardware for the 7900XT is already done. It's not like AMD can make it a multi-chiplet GPU in reaction to the 4090. These sorts of architectural decisions need to be made years in advance.

It's important to point out that Intel gave AMD a chance to catch up by failing to transition to a smaller process node in a timely manner. It's not just the chiplet design that helped; TSMC gets a lot of the credit for the success of Zen 2, Zen 3, and now Zen 4.

Both AMD and NVIDIA are using TSMC's foundries for this new generation of GPUs. NVIDIA probably gave AMD a little help by using Samsung's foundries for the Ampere generation.
 
I bet they changed their mind because of a combination of things: the community realizing the RTX 4080 12GB cards are just old RTX 30 cards, and them not wanting people to find out; and next, people finding out that DLSS actually works fine on RTX 30 and RTX 20 cards.
I wish Nvidia treated their own customers properly with new features. Imagine buying a GTX 1080 and all the older cards getting cool new features, but then they release the RTX 20 series and can't even release anything new for the GTX 1080, leaving it to rely on AMD's FSR instead.
 
Great, now what to do with my $900.... :banghead:
Maybe stop feeding the troll with your $?

Maybe this was a PR stunt. Now they can say they listen to their customers' concerns and do something about it!
Preemptive damage control for the perceived company image?
Don't worry, they will announce the new 4070 Ti 12-gig version tomorrow.
Or, better yet? Super Ti MX-Q.
They should have simply "re-launched" the 4080 16GB as the 4085; all problems solved.
Long gone are the days of "GF Fermi," when one could unlock more shader cores with a BIOS mod.

I am 3 pages in so far and calling it a night.
Is it just me "sensing a pattern" with the last couple of graphics card launches from the "big green camp"?
I mean, the first time around (the RTX 30 series) could be seen as a one-off, but happening all over again shifts it toward another point of view: the "green camp" has turned into a troll (a price-scalping troll).

Not that I could have forked over the buckaroos for a 3000 series card at launch (-90/-80/-70); it would have been down to the last cent back then. Now it's a similar situation all over again, except this time I have the savings, after not having to burn through them post-surgery. But nope: nVidia's RTX 4090 is out of stock, or way north of $2K. I am at peace with skipping this generation of theirs.
 
All of the packaging will have to be scrapped, of course.
Though relabeling packages isn't too uncommon. I remember the RX 480 4GB, for example; they used the same boxes as the 8GB versions, just with a 4GB sticker over it.
 