
NVIDIA Expects Upcoming Blackwell GPU Generation to be Capacity-Constrained

You mean things like DLSS? :D
There is a whole bunch of smart people working on all that, regardless of how well the transistors shrink.
That's not really what I meant; I was thinking of more efficient designs on existing nodes. But I suppose software solutions like DLSS fit into that.
 
$NVDA needs to keep rising, so what better way than to create an artificial supply shortage (reminiscent of the memory shortages?), but who is gonna prove it for an anti-trust suit?
 
$NVDA needs to keep rising, so what better way than to create an artificial supply shortage (reminiscent of the memory shortages?), but who is gonna prove it for an anti-trust suit?
Isn't it all something they can make up anyway? If they project they can sell 50 million units and can only make 30 million, there's a constraint. The magic is that, to some degree, they can project whatever they choose, and thus create this headline.

"we won't be able to make enough of these, they'll fly off shelves", because of course they've said some version of that, and it's probably true.
 
Jensen - "What narrative are we using to justify our BS prices for newer GPU's?"

Nvidia Sales goon - "Supply constraint?"

Jensen - "Perfect!"
This is the same crap they (Nvidia and AMD both) pulled when this most recent generation of cards came out.

And without any trace of RTX™ and DLSS™, who cares?
Easy now, Jensen. ;)

This is because it will be built on the 3 nm process.

Meanwhile, AMD's new RX 8700 XT (top RDNA 1.4 part) will stay on the older 4 nm process.
Prediction is double the raster performance, and triple the ray-tracing performance.

RTX 5090 will be 60% faster than RTX 4090, and launching between Q4 this year and Q2 next year depending on how badly AMD stumbles.
What on earth is RDNA 1.4?

RDNA3 was a learning process for them with regard to MCM packaging and the wins/losses that come from that approach and RDNA4 is supposed to essentially be a bug-fixed and much more optimized version of RDNA3.

RDNA5 is supposed to be six to nine months behind RDNA4 which seems to indicate that RDNA4 is more of a half-generation GPU series, more like a "super" release but targeted toward the midrange or lower cards.


Please let me vape some out of that crystal ball you've got there. I love me a dose of wishful thinking.
It isn't wishful thinking as much as it's a Twitter/Discord circle-jerk by people that went to the YouTube school of Engineering.

You mean things like DLSS? :D
There is a whole bunch of smart people working on all that, regardless of how well the transistors shrink.
DLSS, FSR, and XeSS's days are numbered.

Microsoft and the Khronos Group are sick of vendors doing proprietary shit and have some smart people working on vendor-independent upscaling.

Edit: before any of you start pounding out an impassioned response to tell me that DLSS is hardware-accelerated, whatever DirectX and Vulkan upscaling standards come out of this will allow for optional hardware acceleration either on the GPU or CPU. Nvidia/AMD/Intel can write their drivers so the upscaling tech executes using the same hardware features that they're currently using. That's how it went down with multitexturing, shaders, deferred rendering, and so on.
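Purely as an illustration of how "one standard API, vendor-specific acceleration" could work (every name below is hypothetical, not from any announced DirectX or Vulkan spec), the dispatch pattern would look something like this:

```python
# Hypothetical sketch only: none of these names come from a real spec.
# It illustrates how a standard can define the interface while drivers
# supply the backend, the same pattern used for shaders before it.
from abc import ABC, abstractmethod

class UpscalerBackend(ABC):
    """The contract a standard would define; drivers implement it."""
    @abstractmethod
    def upscale(self, frame, motion_vectors, target_resolution):
        ...

class GenericComputeBackend(UpscalerBackend):
    """Fallback path running on plain GPU compute or even the CPU."""
    def upscale(self, frame, motion_vectors, target_resolution):
        return frame  # placeholder for a spatial/bilinear upscale

class VendorTensorBackend(UpscalerBackend):
    """A vendor driver could route this through the same tensor/matrix
    units that DLSS and XeSS already use today."""
    def upscale(self, frame, motion_vectors, target_resolution):
        return frame  # placeholder for an ML upscale on dedicated units

def create_upscaler(advertised: list) -> UpscalerBackend:
    # The runtime picks the best backend the installed driver advertises,
    # falling back to generic compute when nothing better is exposed.
    return advertised[0] if advertised else GenericComputeBackend()
```

That's the same shape the multitexturing and shader transitions took: the API stayed vendor-neutral while each driver mapped it onto its own hardware.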
 
That's not really what I meant; I was thinking of more efficient designs on existing nodes. But I suppose software solutions like DLSS fit into that.
It is theoretically possible, after all, we have seen what NVidia managed to do with their backs against the wall and no potential die shrink - that was Maxwell. Significant performance and efficiency increase on the same node as Kepler. But there is a caveat to that - it coincided with a radical change in rendering techniques, what with the move to deferred rendering and new consoles. Maxwell was designed with it in mind, Kepler wasn’t and so it quickly fell behind. There is no shift like that on the horizon now. Well, unless we count RT, and that DOES need more raw transistors for RT cores.
 
It is theoretically possible, after all, we have seen what NVidia managed to do with their backs against the wall and no potential die shrink - that was Maxwell. Significant performance and efficiency increase on the same node as Kepler. But there is a caveat to that - it coincided with a radical change in rendering techniques, what with the move to deferred rendering and new consoles. Maxwell was designed with it in mind, Kepler wasn’t and so it quickly fell behind. There is no shift like that on the horizon now. Well, unless we count RT, and that DOES need more raw transistors for RT cores.
All I'm saying is future performance increases are going to come less and less from node shrinks. Whatever that manifests as, I can't say.
 
All I'm saying is future performance increases are going to come less and less from node shrinks. Whatever that manifests as, I can't say.
Chiplets, most likely. Large monolithic GPUs are already a losing proposition; the AD102 apparently has terrible yields, so swapping to some form of MCM would make sense for everyone involved.
I also wouldn’t rule out shrinks just yet. Sure, we may be approaching the limits of silicon as a material, but other options exist. Graphene and synthetic diamonds have shown some potential here.
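For a sense of why huge dies hurt, here's a back-of-the-envelope sketch using the classic first-order Poisson yield model. The ~609 mm² AD102 area is its published figure; the defect density is an assumed illustrative value, not a real TSMC number:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """First-order yield model: Y = exp(-D * A)."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defects_per_cm2 * area_cm2)

D = 0.1  # assumed defect density (defects/cm^2), illustrative only

mono = poisson_yield(609, D)         # one AD102-sized monolithic die
chiplet = poisson_yield(609 / 4, D)  # one of four ~152 mm^2 chiplets

print(f"Monolithic ~609 mm^2 die yield: {mono:.1%}")    # ~54%
print(f"Per-chiplet (~152 mm^2) yield:  {chiplet:.1%}")  # ~86%
```

The point isn't that four chiplets magically yield more good silicon overall; it's that a defect scraps one small 152 mm² die instead of an entire 609 mm² one, so far more of each wafer is salvageable.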
 
Nvidia is just such a behemoth in the GPU market that they don't even care... their product is just flat-out better than the competition. You want the best? Pay for it.
Telling NVIDIA to care about gamers is pointless, my friends; they are not a social services company.

It hurts, yes; gamers and crypto have been feeding the monster... now we have this.

We just need to have some solid alternatives from AMD and Intel.
 
All I'm saying is future performance increases are going to come less and less from node shrinks. Whatever that manifests as, I can't say.
All of the architectural tricks that increase performance require more transistors which means node shrinks are the most important part of the performance ladder and that's unlikely to change. We may move to new materials at some point, but architectural techniques rely on node shrinks as well.
 
Jensen - "What narrative are we using to justify our BS prices for newer GPU's?"

Nvidia Sales goon - "Supply constraint?"

Jensen - "Perfect!"
DEADASS this ^

This is the same crap they (Nvidia and AMD both) pulled when this most recent generation of cards came out.


Easy now, Jensen. ;)


What on earth is RDNA 1.4?

RDNA3 was a learning process for them with regard to MCM packaging and the wins/losses that come from that approach and RDNA4 is supposed to essentially be a bug-fixed and much more optimized version of RDNA3.

RDNA5 is supposed to be six to nine months behind RDNA4 which seems to indicate that RDNA4 is more of a half-generation GPU series, more like a "super" release but targeted toward the midrange or lower cards.



It isn't wishful thinking as much as it's a Twitter/Discord circle-jerk by people that went to the YouTube school of Engineering.


DLSS, FSR, and XeSS's days are numbered.

Microsoft and the Khronos Group are sick of vendors doing proprietary shit and have some smart people working on vendor-independent upscaling.

Edit: before any of you start pounding out an impassioned response to tell me that DLSS is hardware-accelerated, whatever DirectX and Vulkan upscaling standards come out of this will allow for optional hardware acceleration either on the GPU or CPU. Nvidia/AMD/Intel can write their drivers so the upscaling tech executes using the same hardware features that they're currently using. That's how it went down with multitexturing, shaders, deferred rendering, and so on.
This person gets it!
 
This was just obvious.
This is because it will be built on the 3 nm process.

Meanwhile, AMD's new RX 8700 XT (top RDNA 1.4 part) will stay on the older 4 nm process.
Prediction is double the raster performance, and triple the ray-tracing performance.

RTX 5090 will be 60% faster than RTX 4090, and launching between Q4 this year and Q2 next year depending on how badly AMD stumbles.
And be around $3K, while keeping the old gen cards at inflated prices as well.
Lisa Su - "perfect, well just copy that pricing model too"
Why bother? If they colluded with Nvidia, there's no pressure from regulators. Consumers will have to buy at whatever price these come.
 
It's a bit like insanity if you ask me. Why are we pushing so hard for smaller and smaller manufacturing processes, with crap yields, that come from a singular place? This current path only results in expensive, hot-running, loud, watt-guzzling equipment, of which there are too few units to satisfy demand (allegedly). Maybe if we gave the process engineers a break, we'd all be better off for it.

Because people want performance gains. And it's not just corporate clients. PC gamers = I want performance gains and more games; don't care if companies lose money or go broke; I want it and I want it now (stomps footsies, toddler goes back to their room). Corporate clients = I want gains, tell me the cost.

The obvious fix for this is to get off PC gaming. But as that won't happen, it's cloud gaming or $30k for a GPU. So pick one of the three.
 
Prepare for the most overpriced cards in history.
What's sad is, suckers will still buy no matter the price, therefore keeping nGreedia happy and the stocks up.
Let's hope Intel and AMD will bring some competition, but it's hardly believable...

2020, RTX 3080 - $700 - €700
2022, RTX 4080 - $1200 - €900
2024, RTX 5080 - $2040 - €1100

2026, RTX 6080 - $3468 - €1300
2028, RTX 7080 - $5896 - €1500
2030, RTX 8080 - $10022 - €1700
2032, RTX 9080 - $17038 - €1900
2034, RTX 10080 - $28965 - €2100
Fixed.
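For what it's worth, that extrapolation is internally consistent: it applies a fixed 1.7x USD multiplier per generation (close to the real 3080-to-4080 jump) and a flat €200 bump on the euro side. A quick sketch that reproduces the table, give or take a dollar of rounding:

```python
usd, eur = 1200, 900  # RTX 4080 launch pricing (2022)
growth = 2040 / 1200  # the 1.7x per-generation multiplier implied above

for year, model in [(2024, "RTX 5080"), (2026, "RTX 6080"),
                    (2028, "RTX 7080"), (2030, "RTX 8080"),
                    (2032, "RTX 9080"), (2034, "RTX 10080")]:
    usd *= growth
    eur += 200
    print(f"{year}, {model} - ${usd:.0f} - €{eur}")
```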
 
Nvidia and board partners will only sell new GPUs to PCMR at auctions, not in retail.
 
Or just build an actual graphics card and not an AI card. When the RTX cards came out, they weren't real graphics cards; they were AI cards: more AI for more upscaling and DLSS. The GTX line was the last of the pure graphics cards, the pure hardware cards. No software tricks, no AI scalers, nothing: load up the game and play.
 
Well, if you deliberately don't make enough... nGreedia just being nGreedia.
 
Why don't they take a more flexible approach and make a video card with two different chips: one GPU for pure graphics acceleration, and one for AI, RTRT, upscaling, and everything related?
Remember when video cards had two chips, one for 2D graphics and one for 3D?
 
It's a bit like insanity if you ask me. Why are we pushing so hard for smaller and smaller manufacturing processes, with crap yields, that come from a singular place? This current path only results in expensive, hot-running, loud, watt-guzzling equipment, of which there are too few units to satisfy demand (allegedly). Maybe if we gave the process engineers a break, we'd all be better off for it.

Because it's the only way such advanced processors are feasible. Semiconductors aren't magic.

Jensen - "What narrative are we using to justify our BS prices for newer GPU's?"

Nvidia Sales goon - "Supply constraint?"

Jensen - "Perfect!"

Of course we've heard nothing of the sort regarding the unjustifiable, insane consumer-grade Ryzen Threadripper prices, have we? Just gotta dog on Nvidia for some quick acceptance points.

Well, if you deliberately don't make enough... nGreedia just being nGreedia.

Sure, because they can just "make" them; it's not like they don't rely on TSMC, on yields being good, or anything. Apple's also totally not getting the lion's share of N3 wafers; iPhones are made of just pixie and fairy dust or something. Even Intel will use this node to make their next-gen Core processors. That's how insane the demand for this node has become.

But of course, you just "make" them for like, "really cheap" and then "charge thousands", because it's greed and not because they spent multiple billions on R&D and have actual constraints involving third parties, technology, and, at this scale, even physics itself. Money just solves (absolves) everything!
 
Because it's the only way such advanced processors are feasible. Semiconductors aren't magic.



Of course we've heard nothing of the sort regarding the unjustifiable, insane consumer-grade Ryzen Threadripper prices, have we? Just gotta dog on Nvidia for some quick acceptance points.



Sure, because they can just "make" them; it's not like they don't rely on TSMC, on yields being good, or anything. Apple's also totally not getting the lion's share of N3 wafers; iPhones are made of just pixie and fairy dust or something. Even Intel will use this node to make their next-gen Core processors. That's how insane the demand for this node has become.

But of course, you just "make" them for like, "really cheap" and then "charge thousands", because it's greed and not because they spent multiple billions on R&D and have actual constraints involving third parties, technology, and, at this scale, even physics itself. Money just solves (absolves) everything!
"Just gotta dog on Nvidia for some quick acceptance points." WTF you on about? I just came here to make a funny yet truthful joke & you're chucking a hissy fit over it thinking I want likes & love, you reek of bitterness & if you wanna go complain about AMD's threadripper prices, be the 1st one & go make a thread about it and whinge there.
 
It's not uncommon for new high end parts to be in short supply at launch. It's been like that for decades.

How long the supply will be an issue after launch remains to be seen though.
 
Man, the SEC/FTC needs to bitch-slap Nvidia upside the head for playing games like this.
Making statements like this is stock manipulation 101 and super illegal.
 
Man, the SEC/FTC needs to bitch-slap Nvidia upside the head for playing games like this.
Making statements like this is stock manipulation 101 and super illegal.

Actually, they're obligated to make statements like this. It's in shareholders' direct best interests to be made legally aware that the company may potentially face supply chain and production difficulties, all of which have a direct impact on a potential earnings forecast and on the income itself. Regulatory agencies would need to be involved if Nvidia willingly misled investors about their actual situation.

The TSMC N3 node is in extreme demand, and the processors Nvidia has are easily amongst the most advanced using this node, which means that yields are not exactly perfect. Everything they said is true.
 
Man, the SEC/FTC needs to bitch-slap Nvidia upside the head for playing games like this.
Making statements like this is stock manipulation 101 and super illegal.
On the other hand, they're compelled by law to state what the CFO stated to the investors. What she said is based on their projections and calculations, which in turn are based on realistic and probable predictions of supply, demand and other factors, headwinds, tailwinds, hurricanes, whatever. You think they can't prove that to the agencies?
 
RDNA3 was a learning process for them with regard to MCM packaging and the wins/losses that come from that approach and RDNA4 is supposed to essentially be a bug-fixed and much more optimized version of RDNA3.

RDNA5 is supposed to be six to nine months behind RDNA4 which seems to indicate that RDNA4 is more of a half-generation GPU series, more like a "super" release but targeted toward the midrange or lower cards.
RDNA5 is very late 2025 at the earliest and is more than 12 months behind RDNA4, which ships by Q4 2024.
RDNA4 may only be mid-range, but the 8700 XT-class GPU is said to be faster than the 7900 XT yet under $500.
 