I fundamentally disagree when it comes to products like the K and KS series, which are meant to be tinkered with. If you don't tune the crap out of them, you're spending money for nothing; you might as well go for the non-K versions. It's the same with RAM, for example: you should at the very least enable XMP.
What matters, to me at least, is architectural efficiency, not whatever settings Intel or AMD decided to ship out of the box. If we go by that logic, Intel or AMD could sell a CPU with a 30 W power limit and, hurray, suddenly they'd have the most efficient CPU on planet Earth. Would they, though? Nope. The determination has to be made with normalised values.
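To make the normalisation point concrete, here's a trivial sketch - every number in it is made up purely to illustrate the arithmetic, not taken from any real chip. The point is that a part which looks less efficient at its stock limit can come out ahead once both chips are held to the same power:

```python
# Hypothetical sketch: comparing efficiency at a normalised power limit
# rather than at each vendor's stock limit. All numbers are invented
# purely to illustrate the arithmetic.

def perf_per_watt(score: float, watts: float) -> float:
    """Efficiency as benchmark points per watt."""
    return score / watts

# Stock settings: chip A ships with a 253 W limit, chip B with 125 W.
chip_a_stock = perf_per_watt(score=38000, watts=253)   # ~150 pts/W
chip_b_stock = perf_per_watt(score=24000, watts=125)   # ~192 pts/W

# Normalised: chip A capped to the same 125 W limit and re-run.
chip_a_capped = perf_per_watt(score=27000, watts=125)  # ~216 pts/W

# At stock limits, chip B looks more efficient; at iso-power, chip A wins.
print(f"A stock: {chip_a_stock:.0f} pts/W, B stock: {chip_b_stock:.0f} pts/W")
print(f"A @ 125 W: {chip_a_capped:.0f} pts/W")
```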
Cool. Now go form a consortium of reviewers to put in place a standard for testing this. 'Cause without that, the result would be an arbitrary and entirely useless collection of reviews basing their findings on different test points and methodologies, producing borderline garbage data.
The only sane way of going about this is exactly what reviewers currently do: test at stock, plus some simple OC testing. Some UC/UV testing, or some form of efficiency sweep (the same workload(s) across a range of clock speeds, with performance and power logging - roughly like the sketch below), would be a great bonus, but this quickly becomes so labor-intensive as to be impossible for essentially any current publication.

Heck, just imagine the work required to run something simple like Cinebench across an architecturally relevant span of clock speeds, with voltages monitored to ensure that the motherboard doesn't crap the bed. Say, 500 MHz intervals plus whatever the peak attainable clock is, from 2 GHz and upwards. That's 7-8 test runs for each chip, or at least a full workday - assuming the workload is quick to finish and you're not running it multiple times to eliminate outliers.

And now you have the problem of only running a single workload, which makes whatever measurement you're taking far less useful, as it's inherently unrepresentative. Change the workload to something broader, like SPEC, and you're probably looking at a week of work for that one suite of tests.
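To give a sense of what that sweep would look like in practice, here's a rough sketch of automating it. All three helper functions are hypothetical stand-ins for platform-specific tooling (clock locking, the benchmark runner, power logging), with dummy models filled in just so the loop actually runs - none of this maps to any specific real tool:

```python
# Sketch of an automated efficiency sweep, matching the Cinebench example
# above: fixed clock steps from a 2 GHz floor up to the peak attainable
# clock, with score and power logged at each step.

def set_cpu_clock_mhz(mhz: int) -> None:
    """Placeholder: lock an all-core clock via BIOS/OS-specific tooling."""
    pass

def run_benchmark(mhz: int) -> float:
    """Placeholder: run the workload, return its score (dummy linear model)."""
    return mhz * 10.0

def read_avg_package_power_w(mhz: int) -> float:
    """Placeholder: average package power over the run (dummy superlinear model)."""
    return 20.0 + 8.0 * (mhz / 1000.0) ** 2

def efficiency_sweep(peak_mhz: int, floor_mhz: int = 2000, step_mhz: int = 500):
    """Sweep floor -> peak in fixed steps, always including the peak clock."""
    clocks = list(range(floor_mhz, peak_mhz, step_mhz)) + [peak_mhz]
    results = []
    for mhz in clocks:
        set_cpu_clock_mhz(mhz)
        score = run_benchmark(mhz)
        watts = read_avg_package_power_w(mhz)
        results.append((mhz, score, watts, score / watts))
    return results

# A chip peaking at 5.2 GHz: 2.0-5.0 GHz in 500 MHz steps plus 5200 MHz,
# i.e. 8 full runs of the workload for a single efficiency curve.
for mhz, score, watts, ppw in efficiency_sweep(peak_mhz=5200):
    print(f"{mhz} MHz: {score:.0f} pts @ {watts:.0f} W = {ppw:.1f} pts/W")
```

Even with the dummy models, the output shows the familiar shape: points per watt fall steadily as clocks climb, which is exactly the curve you'd want per chip - and exactly why it's a workday per chip, per workload.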
Also: the vast majority of K-SKU CPUs are never meaningfully tweaked. They have that ability, and it's a marketing and selling point for them, but the vast majority of buyers buy them because they're the fastest, coolest, highest-end SKUs, and nothing else. Heck, given that most people don't even enable XMP, how on earth are you expecting them to tune their CPUs? Remember, we hardware enthusiasts represent a tiny fraction of the gaming-PC-buying public.
Also, you're ... well, just wrong about "spending money for nothing" if you're not tuning - you're paying for the highest clocks and the highest stock performance. That's the main part of the price, to the degree that there's a price difference between a non-K and K SKU to begin with.
I agree entirely that architectural efficiency is important, and very interesting. I just disagree fundamentally that any review save a supplementary one should focus on it, because the vast, overwhelming majority of use will always be at stock settings, and thus that is where testing matters. Those of us with the time, resources, and knowledge to fine-tune also have the ability to figure out the nuances of architectural efficiency curves - through forums like these, among other things.
Uh... what.
The situation with this info is that the higher-end GPU is wider, and if the lower version had higher clocks, it would not stick to 420-450 W. Plus, they even specify how wide it is, and there is no such situation here. We are looking at bullshit, plain and simple. If they bump clocks for a good 250 W's worth, they will have completely lost the plot. You have mentioned the impracticalities yourself.
There is speculation and then there is this lol
I was responding to your example of the 4090 reportedly being just 30 W above the 4080, which could be explained by the wide-and-slow vs. smaller-and-faster comparison of the RX 6800 vs. 6700 XT - where one is much faster, yet barely consumes more power, due to being much wider. You seem to be misunderstanding what I'm saying - these 4090 Ti rumors don't indicate it being meaningfully wider than the 4090, so there's no way it could be a wide-and-slow design in comparison. 11% more shaders could be enough for that, but ... nah. Not in an extreme situation like this. It's a minor difference overall. The RX 6800 has literally 50% more shaders than the 6700 XT, after all. That's how it manages much better performance at nearly the same power.
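To put rough numbers on the wide-and-slow intuition: dynamic power scales roughly with unit count × clock × voltage². The shader counts below are the real 6700 XT and 6800 figures; the clocks and voltages are assumed purely for illustration - I'm not claiming these are the actual operating points of either card:

```python
# Rough first-order model of GPU dynamic power: P ~ units * f * V^2.
# Shader counts are the real 6700 XT / 6800 figures; the clocks and
# voltages are assumed for illustration, not measured values.

def relative_power(units: int, clock_mhz: float, volts: float) -> float:
    return units * clock_mhz * volts ** 2

narrow_fast = relative_power(units=2560, clock_mhz=2500, volts=1.10)  # 6700 XT-ish
wide_slow   = relative_power(units=3840, clock_mhz=2000, volts=0.95)  # 6800-ish

# 50% more shaders, but ~20% lower clocks at a lower point on the
# voltage/frequency curve: roughly the same power, far more throughput.
print(f"wide/narrow power ratio: {wide_slow / narrow_fast:.2f}")  # ~0.90
```

The voltage-squared term is what makes width so cheap: dropping clocks a little buys a big voltage reduction, which pays for a lot of extra shaders. An 11% width difference gives you almost none of that headroom.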
So, to clarify, from your own examples, a possible explanation would be:
RTX 4080: "small" (in this comparison) and high clocking
RTX 4090: larger, clocks not crazy high, barely more power than the 4080
RTX 4090 Ti: larger still than the 4090, clocked to the gills, bordering on catching fire at 2x 4090 power.
I'm obviously not saying this is true, but it would be a technically reasonable way for things to shake out in terms of power consumption and relative positioning.