
Nvidia Reportedly Readies 800W RTX 4090 Ti With 18,176 Cores, 48GB GDDR6X

  • Thread starter: Deleted member 6693
Well, they do make sense if you take overclocking into account: one is 2.4 GHz, the other 2.9 GHz, and to get there a ~20% overclock costs roughly 70% more power (see 450 W × 1.2 × 1.2 × 1.2), plus 2x the VRAM.
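A quick back-of-the-envelope version of that math (a minimal Python sketch using the thread's rumored numbers as placeholders; the cubic rule is just the usual overclocking rule of thumb, not anything confirmed):

Code:
def scaled_power(base_power_w, base_clock_ghz, target_clock_ghz):
    # Rough rule of thumb: pushing clocks also means pushing voltage,
    # so power tends to grow roughly with the cube of the frequency ratio.
    ratio = target_clock_ghz / base_clock_ghz
    return base_power_w * ratio ** 3

print(scaled_power(450, 2.4, 2.9))  # ~794 W -- in the ballpark of the 800 W rumor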
 
Well, if this has 48 gigs of VRAM, at least the lower models won't be under-spec'd on it this time round, I think.

Will cost 50p an hour to use in the UK though.
 
I'm betting this will draw 500-600 W max and just has the power connectors to support 800 W for LN2.
 
All of these RTX 4000 "leaks" are so obviously nonsensical that it continues to amaze me how many people fall for them.
Exactly. How do roughly 2,000 shaders and 24 GB of RAM account for 350 W?!

The supposed 4080 > 4090 jump adds 8 GB of RAM and 6,000 shaders for 30 W.



It's obviously nonsense, and just a reason to post a headline with number confetti to get all the nitwits on the hype train.
 
Sure.
2x 400 W, 2x faster than a 400 W card, 2x the price of a 400 W card. :laugh:
By that logic, lower-tier cards could go exactly like the enthusiast cards. Why not?
A 2x 150 W card, 2x faster than a 150 W card, at 2x the price of the 150 W card.
It's well known that power and performance don't scale linearly. Which means if the 4090 is actually 2x 400 W for 2x 3090 performance, power limiting it to 400 W means it will still completely poop on the 3090 by 50% or more.
 
It's well known that power and performance don't scale linearly. Which means if the 4090 is actually 2x 400 W for 2x 3090 performance, power limiting it to 400 W means it will still completely poop on the 3090 by 50% or more.
I'm sure, and sincerely hope, someone will call you on your claim and check whether that is the case, because I'd bet one product is not equal to the other, which makes claims like yours more wishful thinking than a rule or fact.
 
The 4090 Ti will never have 48 GB of VRAM; leakers exaggerate VRAM for attention.

The 4080 WILL NOT have 16 GB of VRAM. I promise.

Kopite7kimi said the 3080 Ti would have 20 GB of VRAM.
He'd better turn that ice bucket over on his own head.

 
The 4090 Ti will never have 48 GB of VRAM; leakers exaggerate VRAM for attention.

The 4080 WILL NOT have 16 GB of VRAM. I promise.

Kopite7kimi said the 3080 Ti would have 20 GB of VRAM.
He'd better turn that ice bucket over on his own head.


It would actually make more sense for a 4090 Ti to have an absurd amount of VRAM than not. The only way to justify buying it is for someone who needs a lot of VRAM or has some other very, very specific needs.
 
It would actually make more sense for a 4090 Ti to have an absurd amount of VRAM than not. The only way to justify buying it is for someone who needs a lot of VRAM or has some other very, very specific needs.
If the 4080 comes with 16 GB, the 4080 Ti will have enough VRAM for 8K.
That's a bad marketing move; they'd better reserve 8K for the 4090 only.

The only Quadro card with 48 GB is the A6000, at $4,949.
 
I'm sure, and sincerely hope, someone will call you on your claim and check whether that is the case, because I'd bet one product is not equal to the other, which makes claims like yours more wishful thinking than a rule or fact.
Don't you worry, I'll check it myself, but frankly there's no need. It's common sense: power and performance don't scale linearly. It's been known for the last 30 years. If you clock something 10% higher it will require 20% more power; if you clock it 20% higher it will require 50% more power.

The 3090 on its own is proof enough. From 300 W at 1860 MHz to 550 W at 2100 MHz, it needs 85% more power for 13% more performance. That's why I currently run it at 250-300 watts @ 1860 MHz.

That's where all the silly claims about Alder Lake not being efficient come from. Clock Zen 3 to 4.9 GHz and let's see what happens. You'd need LN2 canisters for that to happen, though.
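For what it's worth, here's a minimal perf-per-watt sketch using those 3090 figures; the 1.13 performance factor is the claimed 13%, not a measurement:

Code:
# Perf-per-watt comparison with the 3090 numbers quoted above.
# "Performance" is in arbitrary units; only the ratios matter.
def perf_per_watt(perf, power_w):
    return perf / power_w

stock_like = perf_per_watt(1.00, 300)  # ~1860 MHz @ ~300 W
maxed_out = perf_per_watt(1.13, 550)   # ~2100 MHz @ ~550 W
print(f"efficiency loss: {1 - maxed_out / stock_like:.0%}")  # roughly 38% worse perf/W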
 
If the 4080 comes with 16 GB, the 4080 Ti will have enough VRAM for 8K.
That's a bad marketing move; they'd better reserve 8K for the 4090 only.

The only Quadro card with 48 GB is the A6000, at $4,949.
If the xx90 Ti stays in the "this is a Titan but we don't want to call it a Titan 'cause then gamers with tons of cash and impulse control deficiencies won't buy it" category, absurd amounts of VRAM are reasonable. It's not like the 3090 Ti ever makes use of its 24GB outside of productivity applications (or games modded so stupidly and poorly they really aren't worth considering), so unreasonable is par for the course for this product segment. 2x whatever the 4090 has seems "reasonable" to expect in this scenario.


Exactly. How do roughly 2,000 shaders and 24 GB of RAM account for 350 W?!

The supposed 4080 > 4090 jump adds 8 GB of RAM and 6,000 shaders for 30 W.
This isn't too unreasonable IMO: we've seen similar things before with the higher end GPU being a cut-down, lower clocked wide GPU while the lower end GPU is a higher clocked narrower GPU. RX 6700 vs 6800 isn't too far off. Given that the rumors don't mention clock speeds at all, it's entirely possible that the 4090 is (relatively) wide-and-slow to keep power """in check""" (as if 400W-ish could ever reasonably be called that), while the Ti is a "fuck it, let it burn" SKU with clocks pushed far past any kind of efficiency range.

Of course, there are major questions regarding whether 800W can actually be dissipated from a GPU-sized die quickly enough for it to not overheat without sub-ambient liquid cooling or similar. Heatpipes are definitely out the window, and even a huge vapor chamber with a massive fin stack will struggle unless you're hammering it with extreme airflow.
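As a rough sanity check (assumed temperature targets, not measurements): holding an 800 W die at ~90 °C with ~30 °C air inside the case leaves the cooler a total die-to-air thermal resistance budget of only about 0.075 °C/W, which is vapor-chamber-plus-serious-airflow territory.

Code:
# Back-of-the-envelope cooler requirement for an 800 W GPU.
# The 90 C die target and 30 C case air are assumptions, not specs.
power_w = 800
t_die_c = 90
t_air_c = 30
max_thermal_resistance = (t_die_c - t_air_c) / power_w  # C/W, die to air
print(f"{max_thermal_resistance:.3f} C/W")  # 0.075 C/W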
 
If the xx90 Ti stays in the "this is a Titan but we don't want to call it a Titan 'cause then gamers with tons of cash and impulse control deficiencies won't buy it" category, absurd amounts of VRAM are reasonable. It's not like the 3090 Ti ever makes use of its 24GB outside of productivity applications (or games modded so stupidly and poorly they really aren't worth considering), so unreasonable is par for the course for this product segment. 2x whatever the 4090 has seems "reasonable" to expect in this scenario.



This isn't too unreasonable IMO: we've seen similar things before with the higher end GPU being a cut-down, lower clocked wide GPU while the lower end GPU is a higher clocked narrower GPU. RX 6700 vs 6800 isn't too far off. Given that the rumors don't mention clock speeds at all, it's entirely possible that the 4090 is (relatively) wide-and-slow to keep power """in check""" (as if 400W-ish could ever reasonably be called that), while the Ti is a "fuck it, let it burn" SKU with clocks pushed far past any kind of efficiency range.

Of course, there are major questions regarding whether 800W can actually be dissipated from a GPU-sized die quickly enough for it to not overheat without sub-ambient liquid cooling or similar. Heatpipes are definitely out the window, and even a huge vapor chamber with a massive fin stack will struggle unless you're hammering it with extreme airflow.
I think the hardest part would be removing 800 W from the case, not so much from the GPU itself. Of course, if AIBs cheap out on the designs it's going to be a disaster, but I have hopes they won't on such a high-end product.
 
Clock Zen 3 to 4.9 GHz and let's see what happens. You'd need LN2 canisters for that to happen, though.
That is not true, unless you haven't given enough detail about that 4.9 GHz clock. A 5900X or 5950X can clock above 5 GHz with an air cooler, so much for your LN2 claim for Ryzen; obviously extreme OC requires it, and that goes for both Intel and AMD, no doubt about that.
You can't compare two totally different CPUs on frequency and how high they boost. You can only compare them on performance, power consumption, or performance per watt in general, not on whichever specific scenario satisfies your claim.
I'm saying:
You can't claim that a CPU is efficient at idle and, because it is efficient at idle, conclude that it is efficient in general; that would be silly, right? And yet we get "efficient in gaming, therefore efficient in general", which is just as silly as idle-state efficiency.
Now, if you want to go with that notion, you have to take an idle efficiency metric, a gaming efficiency metric and a full-load efficiency metric, and maybe then you will get some sort of score describing efficiency by consolidating all of those into one.
Or you can do it this way: idle efficiency (a certain level of utilization), then gaming efficiency (a certain level of utilization), then full load (100% utilization), all over a given time, then take an average expressed in units that make sense.
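Something like this, just to show the shape of it (all numbers below are made-up placeholders, not measurements):

Code:
# Hypothetical consolidated efficiency score: weight each state's contribution
# by how much of the time the machine spends in it. Placeholder numbers only.
states = [
    # (name,      time share, performance, power in W)
    ("idle",      0.50,         0.0,        10),
    ("gaming",    0.35,       100.0,       120),
    ("full load", 0.15,       180.0,       240),
]

avg_power = sum(share * power for _, share, _, power in states)
avg_perf = sum(share * perf for _, share, perf, _ in states)
print(f"average perf/W over the duty cycle: {avg_perf / avg_power:.2f}")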
Don't you worry, I'll check it myself, but frankly there's no need. It's common sense: power and performance don't scale linearly. It's been known for the last 30 years. If you clock something 10% higher it will require 20% more power; if you clock it 20% higher it will require 50% more power.
Well, I'm sorry to tell you this, but you don't strike me as an honest, unbiased and transparent person: you always talk about numbers (your claim), yet you have never presented any graphs or metrics, aka DATA, or any evidence. But keep trying. There is a balance between power and performance. Balance, or equilibrium, between the two is most desirable, and we all know NV and AMD have been shifting it in the pursuit of performance, which in fact bumps up prices for all products, not just the top end, contrary to people who claim that's not a problem. I think it is a problem that some people are simply too arrogant or blind to see.
 
It's not like the 3090 Ti ever makes use of its 24GB outside of productivity applications (or games modded so stupidly and poorly they really aren't worth considering), so unreasonable is par for the course for this product segment.
It does at 8K.

Here's an 8K TV at $1,999: LG QNED MiniLED 99 Series 2021 65 inch Class 8K Smart TV w/ AI ThinQ® (64.5'' Diag) (65QNED99UPA) | LG USA




I think the hardest part would be removing 800 W from the case, not so much from the GPU itself. Of course, if AIBs cheap out on the designs it's going to be a disaster, but I have hopes they won't on such a high-end product.
They probably already overclocked it through the roof to reach 800 W.
 
That is not true, unless you haven't given enough detail about that 4.9 GHz clock. A 5900X or 5950X can clock above 5 GHz with an air cooler, so much for your LN2 claim for Ryzen; obviously extreme OC requires it, and that goes for both Intel and AMD, no doubt about that.
You can't compare two totally different CPUs on frequency and how high they boost. You can only compare them on performance, power consumption, or performance per watt in general, not on whichever specific scenario satisfies your claim.
I'm saying:
You can't claim that a CPU is efficient at idle and, because it is efficient at idle, conclude that it is efficient in general; that would be silly, right? And yet we get "efficient in gaming, therefore efficient in general", which is just as silly as idle-state efficiency.
Now, if you want to go with that notion, you have to take an idle efficiency metric, a gaming efficiency metric and a full-load efficiency metric, and maybe then you will get some sort of score describing efficiency by consolidating all of those into one.
Or you can do it this way: idle efficiency (a certain level of utilization), then gaming efficiency (a certain level of utilization), then full load (100% utilization), all over a given time, then take an average expressed in units that make sense.

Well, I'm sorry to tell you this, but you don't strike me as an honest, unbiased and transparent person: you always talk about numbers (your claim), yet you have never presented any graphs or metrics, aka DATA, or any evidence. But keep trying. There is a balance between power and performance. Balance, or equilibrium, between the two is most desirable, and we all know NV and AMD have been shifting it in the pursuit of performance, which in fact bumps up prices for all products, not just the top end, contrary to people who claim that's not a problem. I think it is a problem that some people are simply too arrogant or blind to see.
That's for one core.
 
That is not true, unless you haven't given enough detail about that 4.9 GHz clock. A 5900X or 5950X can clock above 5 GHz with an air cooler, so much for your LN2 claim for Ryzen; obviously extreme OC requires it, and that goes for both Intel and AMD, no doubt about that.
You can't compare two totally different CPUs on frequency and how high they boost. You can only compare them on performance, power consumption, or performance per watt in general, not on whichever specific scenario satisfies your claim.
I'm saying:
You can't claim that a CPU is efficient at idle and, because it is efficient at idle, conclude that it is efficient in general; that would be silly, right? And yet we get "efficient in gaming, therefore efficient in general", which is just as silly as idle-state efficiency.
Now, if you want to go with that notion, you have to take an idle efficiency metric, a gaming efficiency metric and a full-load efficiency metric, and maybe then you will get some sort of score describing efficiency by consolidating all of those into one.
Or you can do it this way: idle efficiency (a certain level of utilization), then gaming efficiency (a certain level of utilization), then full load (100% utilization), all over a given time, then take an average expressed in units that make sense.
A 5950X can run Cinebench R23 at 4.9 GHz all-core? At what wattage? I'm not comparing them; I'm just saying that power and performance don't scale linearly at all. That applies to CPUs and GPUs alike. Any CPU or GPU at 100 watts will be more efficient than the identical CPU or GPU at 200 watts. That is common sense; arguing with that is just silly.

My point is, any efficiency comparison that isn't normalized for something (either power draw or performance) is fundamentally flawed. It doesn't take a genius to figure out that a 4090 Ti at 800 W is going to be less efficient than a 4090 Ti at 400 W. And the same applies to CPUs: comparing a 240 W power-limit CPU to a 125 W power-limit CPU is just dumb.
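To be concrete about what I mean by normalized (invented numbers, purely illustrative): either compare scores at the same power limit, or compare the power needed to hit the same score.

Code:
# Two hypothetical CPUs measured at several power limits (W -> benchmark score).
# Comparing at their different stock limits mixes architecture with factory tuning;
# comparing at the same limit isolates the architecture.
cpu_a = {125: 20000, 180: 24000, 240: 26000}
cpu_b = {125: 21000, 142: 22000}

limit = 125  # a power limit both chips can run at
print("A:", cpu_a[limit] / limit, "points per watt")
print("B:", cpu_b[limit] / limit, "points per watt")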
 
Despite a node shrink, power consumption is going up; I was expecting it to go down. :D
 
You definitely don't need 48 GB of VRAM for gaming, but maybe it's intended as a card for work-related projects.
 
... does that video show it utilizing its 24GB? You seem to be making the classic mistake of confusing allocated VRAM with actually necessary, actively utilized VRAM. And even accounting for that, the highest I could see was FS at 20200 MB, and most other high-ish games were in the 16-17GB range. Which, knowing how extremely variable and dependent on drivers, VRAM headroom, opportunistic asset streaming and other factors VRAM allocation is, alongside the difference between allocated and actually used assets in VRAM, tells me that no, this GPU does not make use of its 24GB of VRAM in games. Not even close. For a convincing argument otherwise you'd need a very similar SKU with a lower VRAM amount (16GB? 20GB?) and a clear and demonstrable performance difference. Otherwise, most of those VRAM numbers can be put down to "the GPU has tons of free space, so the game aggressively streams in assets ahead of time, with the knowledge that most of them will never be used".
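For what it's worth, the figure most overlays report (and what nvidia-smi exposes, as in the sketch below) is exactly that allocated number, which is why a screenshot alone can't settle the "does it actually use 24 GB" question:

Code:
# Prints the per-GPU memory counters nvidia-smi exposes. Note this is memory
# allocated/reserved on the device, not assets actively touched each frame --
# the distinction described above.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=memory.used,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # e.g. "20200 MiB, 24576 MiB"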
 
That's for one core.
It was not specified whether it had to be all cores, nor the workload; hence my concern: not enough information.
My point is, any efficiency comparison that isn't normalized for something (either power draw or performance) is fundamentally flawed. It doesn't take a genius to figure out that a 4090 Ti at 800 W is going to be less efficient than a 4090 Ti at 400 W. And the same applies to CPUs: comparing a 240 W power-limit CPU to a 125 W power-limit CPU is just dumb.
Disagree. It is the producer that sets the power limit and specification of a product and advertises it as such. You can't say the 12900KS is efficient if you cut its power down to 100 watts, since it is not advertised as a 100-watt product and its price point does not reflect a product with the performance and power capabilities of a 100-watt one. You need to take more factors into perspective, not just whatever suits your opinion.
A 5950X can run Cinebench R23 at 4.9 GHz all-core? At what wattage? I'm not comparing them; I'm just saying that power and performance don't scale linearly at all. That applies to CPUs and GPUs alike. Any CPU or GPU at 100 watts will be more efficient than the identical CPU or GPU at 200 watts. That is common sense; arguing with that is just silly.
You did not say all cores at 4.9, but even if you had, you mentioned LN2 to achieve it, which is wrong, and now you are asking about wattage while claiming you do not want to compare Intel's product to AMD's. That is exactly what you want to do; that is why the questions you ask steer into that comparison. It never did scale linearly, but that is an obvious thing. Sometimes your comparisons and your use of "don't scale linearly" are taken out of context.
 
... does that video show it utilizing its 24GB? You seem to be making the classic mistake of confusing allocated VRAM with actually necessary, actively utilized VRAM. And even accounting for that, the highest I could see was FS at 20200 MB, and most other high-ish games were in the 16-17GB range. Which, knowing how extremely variable and dependent on drivers, VRAM headroom, opportunistic asset streaming and other factors VRAM allocation is, alongside the difference between allocated and actually used assets in VRAM, tells me that no, this GPU does not make use of its 24GB of VRAM in games. Not even close. For a convincing argument otherwise you'd need a very similar SKU with a lower VRAM amount (16GB? 20GB?) and a clear and demonstrable performance difference. Otherwise, most of those VRAM numbers can be put down to "the GPU has tons of free space, so the game aggressively streams in assets ahead of time, with the knowledge that most of them will never be used".




(screenshot attachment)
 
My point is, any efficiency comparison that isn't normalized for something (either power draw or performance) is fundamentally flawed. It doesn't take a genius to figure out that a 4090 Ti at 800 W is going to be less efficient than a 4090 Ti at 400 W. And the same applies to CPUs: comparing a 240 W power-limit CPU to a 125 W power-limit CPU is just dumb.
The normalization is inherent: the comparison is done at the manufacturer-defined default operating specifications of the product. That is what defines the efficiency of the product. This is of course different from, say, architectural efficiency, which is a huge range and highly variable across clock speeds - but that's only really relevant when comparing across architectures, not when speaking of the efficiency of a specific SKU. This is the same as arguing that the Vega 64 and 56 were actually quite efficient - and sure, they were if you manually downclocked them by a few hundred MHz. But that doesn't matter, as that's not how consumers bought them, nor how they were used by the vast majority of users. How the product comes set up from the factory, out of the box, without expert tweaking is the only normalization that matters when testing a product.
 
The normalization is inherent: the comparison is done at the manufacturer-defined default operating specifications of the product. That is what defines the efficiency of the product. This is of course different from, say, architectural efficiency, which is a huge range and highly variable across clock speeds - but that's only really relevant when comparing across architectures, not when speaking of the efficiency of a specific SKU. This is the same as arguing that the Vega 64 and 56 were actually quite efficient - and sure, they were if you manually downclocked them by a few hundred MHz. But that doesn't matter, as that's not how consumers bought them, nor how they were used by the vast majority of users. How the product comes set up from the factory, out of the box, without expert tweaking is the only normalization that matters when testing a product.
I fundamentally disagree when it comes to products like the K and KS series, which are meant to be tinkered with. If you don't tune the crap out of them you are spending money for nothing; you can just go for the non-K versions. It's the same with RAM, for example: you have to activate XMP at the very least.

What matters to me, at least, is architectural efficiency, not whatever settings Intel or AMD decided to ship out of the box. If we go by that logic, Intel or AMD could sell a CPU with a 30 W power limit and hurray, suddenly they'd have the most efficient CPU on planet earth. Do they, though? Nope; the determination has to be done with normalised values.

It was not specified whether it had to be all cores, nor the workload; hence my concern: not enough information.

Disagree. It is the producer that sets the power limit and specification of a product and advertises it as such. You can't say the 12900KS is efficient if you cut its power down to 100 watts, since it is not advertised as a 100-watt product and its price point does not reflect a product with the performance and power capabilities of a 100-watt one. You need to take more factors into perspective, not just whatever suits your opinion.

You did not say all cores at 4.9, but even if you had, you mentioned LN2 to achieve it, which is wrong, and now you are asking about wattage while claiming you do not want to compare Intel's product to AMD's. That is exactly what you want to do; that is why the questions you ask steer into that comparison. It never did scale linearly, but that is an obvious thing. Sometimes your comparisons and your use of "don't scale linearly" are taken out of context.
It's obvious I'm talking about all cores.

Why would I care what power limit Intel, AMD or Nvidia puts on their products? That's the most useless criterion to me. It literally takes less time to set a power limit than to activate XMP; if you are doing one, you should be doing the other.

Fact is, the only people that run a CPU at 240 W in all-core workloads are those that only care about performance, and those that want to complain that it's not efficient.

And yes, I'm pretty confident that a 12900KS is the most efficient CPU in most workloads. I would gladly buy one if I didn't have the 12900K, just because it would most likely outperform my 12900K at the 180 watts I'm running.
 