
Nvidia Reportedly Readies 800W RTX 4090 Ti With 18,176 Cores, 48GB GDDR6X

  • Thread starter: Deleted member 6693
It's obvious I'm talking about all cores.

Why would I care what power limit Intel, AMD, or Nvidia puts on their products? That's the most useless criterion to me. It literally takes less time to set a power limit than to activate XMP. If you are doing one, then you should be doing the other.

Fact is, the only people who run a CPU at 240 W in all-core workloads are those who only care about performance, and those who want to complain that it's not efficient.

And yes, I'm pretty confident that the 12900KS is the most efficient CPU in most workloads. I would gladly buy one if I didn't have the 12900K, just because it would most likely outperform my 12900K at the 180 watts I'm running.
What you say literally makes no sense and contradicts logic. The 12900KS is advertised at 241 W max boost, and as such it should be evaluated in that range. Saying that it is efficient in the 100 W range is, simply put, stupid.
I bought a Hummer H2: 6 l displacement, 330 PS (323 hp) at 5,400 RPM, advertised at 14 l/100 km on the manufacturer's site. That is inefficient by any metric and by comparison to other cars. Now you come along and say: I disagree, it is very efficient when you drive at 25 mph using 110 PS at 1,500 RPM. That is exactly what you are saying about the 12900KS.
You should care about the limit, because that criterion corresponds to the max power, the performance bracket (number of cores, for instance), and the price that a product will have.
 
If your Hummer H2 is actually more efficient at 25 mph than every other car, and you do want to drive at 25 mph, then congratulations, you got yourself a deal.

Why would you care whether or not it's efficient at 500 mph when you are only going to drive 25? This makes absolutely no sense.

I don't care what AMD, Intel, or Nvidia advertise their products at. That's marketing fluff and I don't give a damn. The 12900KS can be advertised at 5 billion watts and I still don't give a damn; if at 180 W it's more efficient than my 12900K, then that's what matters to me.

To stay on topic, the 4090 Ti, even if advertised at 800 W, will absolutely annihilate my 3090 at the same wattage, so why would I possibly care about what it does at 800 W, since I'm not going to use it at that wattage?

I mean, let me ask it this way. Say I wanted the most efficient card at 400 W (which basically means the fastest), and let's say the 4090 Ti is exactly that; would you say don't buy it because at stock it's inefficient? Who cares...
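Since the argument keeps returning to how trivial power limiting is, here is a minimal sketch of capping an NVIDIA card's board power limit through NVML (pynvml), roughly the same knob Afterburner exposes as its power-limit slider. It assumes a single NVIDIA GPU and admin/root rights; the 400 W target is just the figure from the post above, not a recommendation.

```python
# Sketch: cap an NVIDIA GPU's board power limit via NVML (pip install nvidia-ml-py).
# Assumes one NVIDIA GPU and admin/root privileges; 400 W is only an illustrative target.
import pynvml

TARGET_W = 400

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    # Driver-enforced bounds for the limit, reported in milliwatts.
    min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
    target_mw = max(min_mw, min(TARGET_W * 1000, max_mw))

    pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)  # requires elevation
    print(f"Power limit set to {pynvml.nvmlDeviceGetPowerManagementLimit(handle) / 1000:.0f} W "
          f"(allowed range {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W)")
finally:
    pynvml.nvmlShutdown()
```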
 
Why hold back at 800 W, Nvidia? Just go for the kilowatt.
RTX Kilowatt has a ring to it.
 
Dude, you are talking about your preference here. If you want a card that draws 800 W for whatever reason but it is not on the market, you get a 400 W one and OC it to the point where it draws 800 W or close to it. I literally don't care about your preference, but you can't say the 12900KS is efficient at 100 W or 35 W (yeah, there were people who brought 35 W into the picture) for a product that is advertised differently. The Hummer thing was to show you the flawed logic, but your arrogance and ignorance are beyond belief.
On topic: wishful thinking is not fact, and it will require testing to say how efficient or inefficient the 4090 Ti is within ITS ADVERTISED POWER CONSUMPTION RATING.
 
Is that how you test coolers or fans as well? You realize that if you don't test normalized for noise, your testing is absolutely useless, right? It's the same with CPUs or GPUs: if you don't run normalized, then whatever you are testing for is pointless.

There is no wishful thinking; it's painfully obvious that the 4090 Ti, or any card for that matter (which is the point), is going to be really inefficient at 800 watts. Nobody who's going to buy it, though, will care about its efficiency at 800 W. They will either buy it for the performance, or for its efficiency at a SANE wattage.

Architectural efficiency is what matters, because you as a user can't change that. Power limits, though, you can change with a single button. And the only way to test architectural efficiency is normalized for performance or wattage.

The Phanteks T30, by the way, is one of the loudest 120 mm fans out there. It's advertised as a 3,000 RPM fan. Going by your logic, no one should buy it because it's so inefficient in terms of noise to performance. Yet if you limit it to X dBA, it outperforms EVERYTHING at the same noise level, making it by far the most efficient fan. Yet by your logic, it's very noisy and inefficient... Okay, buddy.
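For readers unfamiliar with noise-normalized testing, a small sketch of the idea follows: interpolate each fan's airflow at a common noise target and compare there, rather than at stock maximum RPM. The curves and the 30 dBA target are invented for illustration and are not real T30 measurements.

```python
# Sketch: compare fans at a matched noise level instead of at their stock (max-RPM) settings.
# The measurement points below are invented for illustration; real reviews would supply them.

def airflow_at_noise(points, target_dba):
    """Linearly interpolate airflow (CFM) at target_dba from (dBA, CFM) points."""
    pts = sorted(points)
    if target_dba <= pts[0][0]:
        return pts[0][1]
    if target_dba >= pts[-1][0]:
        return pts[-1][1]
    for (d0, c0), (d1, c1) in zip(pts, pts[1:]):
        if d0 <= target_dba <= d1:
            t = (target_dba - d0) / (d1 - d0)
            return c0 + t * (c1 - c0)

# Hypothetical (noise dBA, airflow CFM) curves for two fans.
fans = {
    "Fan A (3000 RPM max)": [(22, 30), (30, 55), (42, 95)],
    "Fan B (1800 RPM max)": [(20, 25), (30, 48), (36, 70)],
}

TARGET_DBA = 30
for name, curve in fans.items():
    print(f"{name}: {airflow_at_noise(curve, TARGET_DBA):.0f} CFM at {TARGET_DBA} dBA")
```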
 
If the xx90 Ti stays in the "this is a Titan but we don't want to call it a Titan 'cause then gamers with tons of cash and impulse control deficiencies won't buy it" category, absurd amounts of VRAM are reasonable. It's not like the 3090 Ti ever makes use of its 24GB outside of productivity applications (or games modded so stupidly and poorly they really aren't worth considering), so unreasonable is par for the course for this product segment. 2x whatever the 4090 has seems "reasonable" to expect in this scenario.



This isn't too unreasonable IMO: we've seen similar things before with the higher end GPU being a cut-down, lower clocked wide GPU while the lower end GPU is a higher clocked narrower GPU. RX 6700 vs 6800 isn't too far off. Given that the rumors don't mention clock speeds at all, it's entirely possible that the 4090 is (relatively) wide-and-slow to keep power """in check""" (as if 400W-ish could ever reasonably be called that), while the Ti is a "fuck it, let it burn" SKU with clocks pushed far past any kind of efficiency range.

Of course, there are major questions regarding whether 800W can actually be dissipated from a GPU-sized die quickly enough for it to not overheat without sub-ambient liquid cooling or similar. Heatpipes are definitely out the window, and even a huge vapor chamber with a massive fin stack will struggle unless you're hammering it with extreme airflow.
Euh... what.

The situation with this info is that the higher-end GPU is wider, and if the lower version had higher clocks it would not stick to 420-450 W. Plus, they even specify how wide it is, and there is no such situation. We are looking at bullshit, plain and simple. If they bump clocks by a good 250 W worth, they will have completely lost the plot. You have mentioned the impracticalities yourself.

There is speculation and then there is this lol
 
On topic: with these being the "Halo" product, will they be able to run in SLI?

[Image: Kernkraftwerk Grafenrheinfeld nuclear power plant]
 
I fundamentally disagree when it comes to products like the K and KS series, which are meant to be tinkered with. If you don't tune the crap out of them, you are spending money for nothing; you can go for the non-K versions. It's the same with RAM, for example: you have to activate XMP at the very least.

What matters, to me at least, is architectural efficiency, not what settings Intel or AMD decided to ship out of the box. If we go by that logic, Intel or AMD could sell a CPU with a 30 W power limit and hurray, suddenly they'd have the most efficient CPU on planet Earth. Do they, though? Nope; the determination has to be done with normalized values.
Cool. Now go form a consortium of reviewers to put into place a standard for testing this. 'Cause without that, the result would be an arbitrary and entirely useless collection of reviews basing their findings on different test points and methodologies, providing borderline garbage data.

The only sane way of going about this is exactly what reviewers currently do: test at stock + do some simple OC testing. Some UC/UV testing, or some form of efficiency sweep (the same workload(s) across a range of clockspeeds with performance and power logging) would be great bonuses, but this quickly turns so labor intensive as to be impossible for essentially any current publication. Heck, just imagine the work required to run something simple like Cinebench across an architecturally relevant span of clock speeds, with voltages monitored to ensure that the motherboard doesn't crap the bed. Say, 500MHz intervals + whatever the peak attainable clock is, from 2GHz and upwards. That's 7-8 test runs for each chip, or at least a full workday - assuming the workload is quick to finish and you're not running them multiple times to eliminate outliers. And now you have the problem of only running a single workload, which makes whatever measurement you're running far less useful, as it's inherently not representative. Change the workload to something broader, like SPEC, and you're probably looking at a week of work for that one suite of tests.

Also: the vast majority of K-SKU CPUs are never meaningfully tweaked. They have that ability, and it's a marketing and selling point for them, but the vast majority of buyers buy them because they're the fastest, coolest, highest-end SKU, and nothing else. Heck, given that most people don't even enable XMP, how on earth are you expecting them to tune their CPUs? Remember, we hardware enthusiasts represent a tiny fraction of the gaming-PC-buying public.

Also, you're... well, just wrong about "spending money for nothing" if you're not tuning: you're paying for the highest clocks and the highest stock performance. That's the main part of the price, to the degree that there's a price difference between a non-K and a K SKU to begin with.

I agree entirely that architectural efficiency is important, and very interesting. I just disagree fundamentally that any review save a supplementary one should focus on this, because the vast, overwhelming amount of use will always be at stock settings, and thus that is where testing matters. Those of us with the time, resources and knowledge to fine-tune also have the ability to figure out the nuances of architectural efficiency curves - through forums like these, among other things.
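To put rough numbers on the test-matrix problem described above (500 MHz steps from 2 GHz plus the peak attainable clock), here is a back-of-the-envelope sketch; the per-run time, repeat count, and peak clocks are all assumptions for illustration, not anyone's real test plan.

```python
# Sketch: rough cost of an efficiency sweep like the one described above
# (500 MHz steps from 2.0 GHz up to each chip's peak clock, one workload per step).
# All durations, repeat counts, and peak clocks are illustrative assumptions.

def sweep_points(peak_ghz, start_ghz=2.0, step_ghz=0.5):
    """Clock points to test: fixed steps plus the peak attainable clock."""
    points, f = [], start_ghz
    while f < peak_ghz:
        points.append(round(f, 1))
        f += step_ghz
    points.append(round(peak_ghz, 1))
    return points

chips = {"Chip A": 5.2, "Chip B": 4.9, "Chip C": 5.5}   # hypothetical peak clocks (GHz)
MINUTES_PER_RUN = 45   # setup + run + power logging per clock point (assumed)
REPEATS = 2            # repeat runs to smooth out outliers (assumed)

total_runs = 0
for name, peak in chips.items():
    pts = sweep_points(peak)
    total_runs += len(pts) * REPEATS
    print(f"{name}: {len(pts)} clock points -> {len(pts) * REPEATS} runs")

print(f"Total: {total_runs} runs, ~{total_runs * MINUTES_PER_RUN / 60:.0f} hours for one workload")
```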

I was responding to your example of the 4090 reportedly just being 30W above the 4080, which could be explained by the wide-and-slow vs. smaller-and-faster comparison of RX 6800 vs. 6700 XT - where one is much faster, yet barely consumes more power due to being much wider. You seem to be misunderstanding what I'm saying - these 4090 Ti rumors don't indicate it being meaningfully wider than the 4090, so there's no way it could be a wide-and-slow design in comparison. 11% more shaders could be enough for that, but ... nah. Not in an extreme situation like this. It's a minor difference overall. The RX 6800 has literally 50% more shaders than the 6700 XT after all. That's how it manages much better performance at nearly the same power.

So, to clarify, from your own examples, a possible explanation would be:
RTX 4080: "small" (in this comparison) and high clocking
RTX 4090: larger, clocks not crazy high, barely more power than the 4080
RTX 4090 Ti: larger still than the 4090, clocked to the gills, bordering on catching fire at 2x 4090 power.

I'm obviously not saying this is true, but it would be a technically reasonable way for things to shake out in terms of power consumption and relative positioning.
 
I don't know why you bring reviews into this. I didn't. I never suggested anything about how reviewers should do their work. Reviewers do the tests that will bring them the most viewers, which makes absolute sense. I'm a consumer, not a reviewer, so I don't really care about stock out-of-the-box settings. Those stock settings that reviewers test with are great as a baseline, for me at least. For example, if I see the 4090 Ti performing at 2x a 3090 with 2x the power usage, I can extrapolate that at the same wattage the 4090 Ti will be way more efficient than the 3090. So you don't need reviewers to do all the work; you need users to be able to extrapolate based on the data. And although I agree with you that most users don't tune anything, I'm not so certain that applies to people buying i9s and 4090s.

Now, when it comes to reviewers, if they actually want to test efficiency, then yes, they absolutely should normalize, or else their tests are just useless. The same applies to, for example, cooler and fan reviews. There is absolutely no point in comparing CPU coolers or fans with their default fan curves. Cooler A produces 50 dB of noise and keeps the CPU at 80 °C, while cooler B produces 40 dB and keeps the CPU at 85 °C. Which one is better? Nobody can know unless you normalize for something.

I don't see why CPUs and GPUs should be an exception.
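A sketch of the extrapolation being described, under the toy assumption that performance scales sublinearly with board power near the top of the curve; the exponent and every number below are invented for illustration, not measurements.

```python
# Sketch of the extrapolation argument: if a new card delivers 2x performance at 2x power,
# what might it look like when power-limited to the old card's wattage?
# Assumes a toy scaling law perf ~ power**ALPHA with ALPHA < 1 (performance falls off much
# more slowly than power near the top of the curve). ALPHA and all numbers are invented.

ALPHA = 0.4  # hand-wavy diminishing-returns exponent, purely illustrative

old_card = {"perf": 100, "power_w": 350}          # e.g. a 3090-class card at stock
new_card = {"perf": 200, "power_w": 700}          # rumored card: 2x perf at 2x power

def perf_at_power(card, target_w):
    """Project performance when power-limited, using the toy scaling law."""
    return card["perf"] * (target_w / card["power_w"]) ** ALPHA

target = old_card["power_w"]
projected = perf_at_power(new_card, target)
print(f"Old card at {target} W: {old_card['perf']} (perf/W = {old_card['perf'] / target:.2f})")
print(f"New card capped to {target} W: ~{projected:.0f} (perf/W = {projected / target:.2f})")
```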
 
Gotcha, now I'm back on track :) And yes, agreed on the assumptions.

"Technically reasonable", however, is doubtful. Economically it certainly isn't, or they have some magical version of silicon that will happily take another 300 W on the chin and stay under 80 °C. It sure as hell won't exist. Why is this true? Because the entire stack below a 4090 Ti would then suffer greatly in terms of profitability; after all, if there IS headroom on the silicon to push even a fraction of that extra wattage, say 100 W, through it while staying within limits, the entire stack could easily go to a smaller die for the same performance. It's not like the rest of the stack is low in TDP as it is, with 450 W just below the top as well; we have yet to see how that will be kept frosty and how it even clocks. But if there is such a wide range to be had throughout the stack (we're in fantasy land here), we should be seeing superb overclocks, and that would kill Nvidia's margin in another direction: say you can clock something a full tier above where it sits, that will cannibalize tiers throughout. It's impossible any way you twist it, unless they deploy massive artificial limitations and different boards. And that, again, is absolutely not profitable, because more distinct products means GPU spaghetti at every AIB and vendor; it will be an utter mess for all stakeholders and consumers, plus costly to distribute, produce, or even service.

I'm on board with rumors saying 450 W on the top end is likely. But 800 W in that same stack simply has no place or logic to it.

Another careful conclusion I'm drawing right here and now is that Nvidia hasn't got a lot of new power under the hood architecturally: just incremental bumps again, much like Ampere, further optimization of the game/render path between RT and raster, and more tricks to deploy for FPS boosts by capitalizing on (hidden) quality reductions. This has been the name of the game lately in both camps, too. I'm sure there is some completely agnostic DLSS/scaling engine in the works... That would explain the requirement to bump power so dramatically and still call it new. Nvidia's move into RT so far is heading for an uncertain future if this is what they need to feed it. And to be fair, I think AMD is watching from the sidelines while they work on and improve their own version of it; it is becoming ever clearer why they are careful about deploying it in earnest.
 
If you weren't speaking about reviewers, I have absolutely no idea who you were referring to when you were saying that efficiency comparisons need to be normalized. Who is doing these efficiency comparisons at any kind of meaningful scale, if not reviewers? If you're talking about enthusiasts, well, then everything is essentially random, as access to hardware, testing tools and technical knowledge is so variable as to render the tests and their outcomes incredibly haphazard. If you're talking about the collective efforts of enthusiasts testing and tuning their hardware, then ... well, again, normalization doesn't have much of a purpose, as the differences in hardware configurations and surrounding variables are so great as to render that normalization moot. Heck, the best the vast majority of us can do is software-based power draw measurements, which are notoriously unreliable, so ... yeah. Bigger problems there. And there are extremely few enthusiasts with the time, energy, skills and tools to do any kind of representative efficiency analysis of even a single component in their specific system.

That (normalizing cooler or fan tests) is relatively easy for simple things, i.e. things with few variables. A cooler has a few: ambient temperature; surrounding airflow/case; fan speed; mount quality/fitment/base plate flatness; input power. (And to some degree thermal density in the CPU being cooled.) Many of those can be controlled for with high precision quite simply - yet most cooler reviews don't even bother to test at multiple input power levels, which shows just how challenging this stuff can be. Now add the complexity of full-range efficiency testing of a complex multi-purpose component, where you'd need many different tests (FurMark behaves extremely differently from Unigine Heaven, and both are wildly different from any reasonable modern game, etc.), while keeping as many other variables as possible under control. It's still doable, but the number of variables and the number of test passes needed (and the time spent on said tests) make the workload insurmountable even for professional reviewers outside of the odd specific article.

Other than that, there is a standard for testing: out of the box, as the product works for users (including all BIOSes if relevant), in whatever test setup that publication uses as standard.

Anything else, and you're just talking about some random enthusiast's tinkering with their specific silicon on their specific setup, which can at best represent a single data point which may or may not be representative of anything at all.

Yeah, I agree that 800W sounds entirely bonkers - I was just highlighting that (assuming they're able to cool it and want to pay for whatever insane cooling solution that would require) it could technically fit within the rumored lineup when taking into account how configuration differences play out across different silicon. Do I think we'll see an 800W SKU? I have no idea, but I really hope not. I sincerely doubt we'll see an FE significantly above 450W, as even that is a massive challenge with air cooling, as you say.

Oh, and one additional variable to your cost calculation there: cooler cost. That's a significant brake on the "could we charge more for this chip if we pushed 100W more through it?" line of thinking, as cooler costs scale rapidly with power requirements above a certain level, and are especially sensitive now that material prices are so high. Of course the card would also need a beefier VRM, higher quality PCB, etc., so this kind of stuff is only really possible on the high end where these things are already taken into account - otherwise you might be bringing your $300 GPU that you want to charge $400 for into the cost range of a $500 GPU just through the extra requirements for that additional power.
 
On the first point, I think what @fevgatos is getting at is that undervolting is seeing more and more popularity, even in non-enthusiast communities. It's the new overclock, really. People want efficiency for all sorts of reasons, and having a CPU 'neutered' to fit your use case certainly is a comforting idea. After all, you have more performance under the hood if you want it, and you can run cool and still have great performance by locking the TDP well under its cap.

For GPUs, similar things apply. Vega was the first one; in fact, it was highly recommended to undervolt that product, and ever since, we have seen reviews mention it for numerous GPUs.

The simple fact is, if products get clocked way above their comfort zone to please shareholders, why can't consumers act on that by making efficient settings a common, 'normal' thing to do? It's one way to exercise consumer power, and another is when reviewers take that into account. That signals that we don't really care about marketing or the way products are pushed. To be honest, I'm all for normalized CPU testing for efficiency purposes. Yes, it's very neat to know what 100-125 W worth of power can produce on every CPU, because that is realistically what most air towers and climates are capable of cooling. That won't change either; air has been air for decades, and the only path up is to go bigger, which kills use cases on the way up. Similarly, show us what 200 W can do under water! It's much more valuable than 'testing stock'. Stock then becomes more interesting for comparison purposes, and it really already is. The trickery with power limits and TDP bumps won't stop, and every release stresses how important it is for users to intervene. Even on boards: auto OCs, tweaked clocks, excessive power delivery settings, etc. Those happened at stock.

Heck, a consumer efficiency push is even a way to 'save the planet' a little bit. I'm game.

_______
Second, yes, cooler cost, but what the market shows us is that vendors will already happily reuse and massively oversize a block of aluminium and heatpipes to cater to several product tiers, even across generations. What's new here is that you'd also want several types of boards, next to which any slab of aluminium pales in comparison. It's those board components that are under high pressure globally.
 
I agree with this, but then they came along saying they weren't talking about reviewers, which ... again, who is doing this testing? And, once again: while I would love to see this testing, it is so labor intensive (especially for something like a GPU, with 15+ game test suites often used) that expecting any reviewers to actually do such testing is just way outside the realm of what is realistic. We do see articles like this pop up from time to time - whether it's memory speed scaling, efficiency scaling, or other complex tests of a single product - but they are incredibly labor intensive and far too complex for a general audience, meaning that there's very little return on that time expenditure for the reviewers - who have to eat, after all, and thus have to focus on things that are actually read/watched. We are seeing increasing automation of testing, which diminishes the labor involved, but the savings from that take time to manifest, and the skills required to program it are relatively rare. And, of course, we see enthusiasts posting week- or month-long journeys of various testing of their specific setups, but again, any thought of normalizing that type of testing is ... well, ludicrous. They're not professionals, and they're doing this for their own benefit, i.e. they're going to be making choices based on their own preferences, tools, skills and desires. Attempting to normalize that (outside of very general recommendations) is just wildly unrealistic.
 

You only need one test to compare and get an idea of the performance relative to the rest of the tested suite at the product's stock performance. That is, if you normalize on wattage. You just run the same thing at the same wattage every time and put it on a relative scale against the average performance. I mean, sure, you won't have 'the entire picture of everything', but it's more than enough to make an educated guess about what the rest will look like.

And if you normalize on wattage below the maximum spec (which is obviously what you'd do), the other variables also fall off quickly: cooler required, impact of test bed performance, etc.
 
You are still focusing on reviewing stuff. I'm not. That is not the point at all. My point is, when you see a review where, for example, product X gets 100 performance at 50 W and product Z gets 80 performance at 10 W, you can't then say "don't buy X because it's less efficient". That's what I've been trying to get at all along. On topic: yes, if a 4090 Ti is released at 800 W, it's going to be completely inefficient at stock settings; that's obvious. That doesn't mean someone who cares about efficiency shouldn't buy it, because again, as I've repeated, power limited to 400 W it will absolutely annihilate my 3090.

This is a forum of enthusiasts, and everyone keeps commenting on that 800 W like it even matters. Who cares, and more importantly why, how many watts it draws at stock? It could be 5 billion watts for all I care; it's still irrelevant. At 400 W it will probably scorch anything that exists on the market today. And it literally takes one click to power limit a graphics card through Afterburner. Literally one click.

Right now I'm running a 12900K and a 3090 playing FC6 at 4K ultra. Both of them combined are pulling around or under 300 watts. COMBINED (25 W for the CPU and 250 to 300 W for the GPU).
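For anyone who wants to sanity-check numbers like these, a minimal sketch of logging GPU board power over a session via NVML follows. It only covers the GPU (CPU package power needs a separate tool such as HWiNFO, or RAPL on Linux), and the one-second polling interval is an arbitrary choice.

```python
# Sketch: log GPU board power draw while gaming (pip install nvidia-ml-py).
# Covers only the GPU; CPU package power needs a separate tool. Polling interval is arbitrary.
import time
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    samples = []
    for _ in range(60):                                   # roughly 60 s of samples
        mw = pynvml.nvmlDeviceGetPowerUsage(handle)       # board power in milliwatts
        samples.append(mw / 1000)
        time.sleep(1)
    print(f"GPU power: avg {sum(samples) / len(samples):.0f} W, peak {max(samples):.0f} W")
finally:
    pynvml.nvmlShutdown()
```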
 
The jump to reviewing products is relevant though because how else will we know what different setups can achieve?
 
Well yeah, you can't exactly know without someone actually testing the product, but at least you should realize that, obviously, a CPU at a 240 W power limit will seem very inefficient compared to a CPU with a 125 W power limit, even though in reality it might not be. We already know that a CPU, no matter which one and from which brand, hits a wall when it starts clocking above 4 GHz. And the closer it gets to 5 GHz, the more bonkers the power consumption gets. So basically anything above 150 W for CPUs and around 300 W for GPUs becomes rapidly inefficient, because they are way outside the efficiency curve. That's why I'm very confident the 4090 Ti will slap my 3090 a new one in terms of efficiency, even before any tests are done on the bloody thing.
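The "wall above 4 GHz" point can be illustrated with the usual first-order dynamic power model, P ≈ C·f·V², where the voltage required rises with frequency. The constant and the V/f points below are invented purely to show the shape of the curve, not any real chip's table.

```python
# Sketch: why power climbs much faster than clocks near the top of the V/f curve.
# Toy model: dynamic power P ~ C_EFF * f * V^2, with required voltage rising with frequency.
# The constant and all V/f points are invented for illustration.

C_EFF = 12.0  # arbitrary effective-capacitance-style constant, chosen to land in a CPU-like W range

vf_curve = [   # (frequency in GHz, assumed required core voltage in V)
    (3.0, 0.85),
    (3.5, 0.90),
    (4.0, 0.98),
    (4.5, 1.12),
    (5.0, 1.30),
    (5.2, 1.40),
]

prev = None
for f, v in vf_curve:
    power = C_EFF * f * v ** 2                      # P ~ C * f * V^2
    note = ""
    if prev is not None:
        pf, pp = prev
        note = f"  (+{(power / pp - 1) * 100:.0f}% power for +{f - pf:.1f} GHz)"
    print(f"{f:.1f} GHz @ {v:.2f} V -> ~{power:.0f} W{note}")
    prev = (f, power)
```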
 
I have a solution, and it's called Radeon :D
 
The 4090 Ti will never have 48 GB of VRAM. Leakers exaggerate VRAM for attention.

The 4080 WILL NOT have 16 GB of VRAM. I promise.

Kopite7kimi said the 3080 Ti would have 20 GB of VRAM.
He'd better turn that ice bucket around onto his own head:


Kopite7kimi and OneRaichu are the two on Twitter I follow, and they usually get most things right. Not 100%, but who does?

Sometimes it feels like even official vendors were almost ready to announce something like a 20 GB version of the RTX 3080, and everything was done nearly as far as packaging, but then Nvidia pulled the plug and dropped the RTX 3080 Ti 12 GB a little later.

 
When it's 2x faster than a 400 W card... why not?
It's a crazy enthusiast product and not a thing for the average gamer.
Because there's no way your standard ATX case will be able to vent that much heat. Speaking from experience here, with an already absurd 450 W TDP 3090 Ti.

800W RTX 4090 Ti :laugh:


The people who bought a 3090 Ti will be sure to buy these, as it is the only upgrade for their 3090 Ti; can't show off on forums with anything less, can you?
No, I won't.
 
I'd take it only if it has 48 GB. I could live with 800 W; it would still be more efficient than a 3090/Ti NVLink setup, since rendering a scene on shared memory lets a 40 GB scene be rendered on both 3090s, but only at the speed of a single card. So if you currently want to render large scenes (and yes, there are a lot of use cases where a 3090 is still a bottleneck; it can't even run certain benchmarks due to the VRAM limit), you would either need two 3090s ($3,000+ and 800 W) and get the performance of a single one, or buy a pro card for $6,000-$15,000.
Having even more VRAM in the 'regular' RTX lineup would be another huge step forward for certain professional or even hobbyist use cases. Just not for 420-blaze-it ULTRA Gaming™ setups. So a 4090 Ti in the $3,000-$4,000 range would be reasonable from my perspective.
 
How on earth do you cool that kind of heat though? Guessing water?
 
I have been heavy into water cooling for a while now, so naturally I would go with water. However, I'm still a bit doubtful about the leaked power consumption numbers. Or we might see the first Founders Edition with water cooling.

EDIT: I mean, just think about 800 W. Current 4-slot air-cooled cards struggle with 450 W even outside of a case. Will they go 8 slots deep, or 80 cm long, sticking out the front of the case??
 
Guys, what PSU should I buy if I plan on a 4090, a 12900K maybe upgraded to a 13900K, 5 M.2 drives, 3-4 SATA SSDs, 3-4 HDDs, dual pumps, and 15 Lian Li fans?
I'm sure 850 W will be borderline and 1000 W is on the edge, no? 1200 W and above?
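A rough budgeting sketch for a build like this one; every per-component wattage is a ballpark assumption (the GPU figure uses the rumored 450 W class, not a confirmed spec), so treat the output as a starting point rather than a recommendation.

```python
# Sketch: back-of-the-envelope PSU sizing. All per-component wattages are rough assumptions.
build = {
    "GPU (4090-class, rumored ~450 W)": 450,
    "CPU (12900K/13900K, heavy load)": 250,
    "Motherboard + RAM": 60,
    "5x M.2 SSD": 25,
    "4x SATA SSD": 10,
    "4x HDD": 30,
    "2x pump": 40,
    "15x 120 mm fan": 45,
}

total = sum(build.values())
transient_headroom = 1.4   # assumed margin for GPU/CPU power spikes and PSU efficiency sweet spot

for part, watts in build.items():
    print(f"{part:38s} {watts:4d} W")
print(f"{'Estimated sustained draw':38s} {total:4d} W")
print(f"Suggested PSU rating: ~{int(round(total * transient_headroom, -1))} W")
```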
 