
Intel Core i9-14900KS

Come on W1zzard!!! AMD putting 3D V-Cache on BOTH CCDs of a Ryzen 9 X3D part would NOT HELP GAMING PERFORMANCE or make them better parts! It would actually make it WORSE due to the cross-CCD latency penalty (out through the I/O die and back), which almost always negates any & all advantages of running a game on more than 8 cores.

It absolutely will; the cross-CCD latency penalty affects the current hybrid system, and this is why AMD literally had to write a custom scheduler driver. Having dual X3D CCDs would correct this scheduling problem by the very nature of the processor. Ryzen currently lacks the hardware thread-scheduling capability of Alder Lake and Raptor Lake.

The true reason we don't get dual-X3D Ryzens is that AMD wants to protect their high-end server business. Processors with that much cache fetch thousands upon thousands of dollars in the EPYC segment.
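Not AMD's actual driver logic, just a minimal Linux-only sketch of the idea behind keeping a game on one CCD; the PID and the assumption that logical CPUs 0-15 belong to the V-Cache CCD are hypothetical:

```python
import os

# Hypothetical mapping: logical CPUs 0-15 = the 8 cores (plus SMT siblings) of the V-Cache CCD.
VCACHE_CCD_CPUS = set(range(16))

def pin_to_vcache_ccd(pid: int) -> None:
    """Restrict a process to one CCD so the scheduler can't bounce its
    threads across the Infinity Fabric / I/O-die hop."""
    os.sched_setaffinity(pid, VCACHE_CCD_CPUS)  # Linux-only API

if __name__ == "__main__":
    game_pid = 12345  # placeholder PID of the game process
    pin_to_vcache_ccd(game_pid)
    print("Allowed CPUs:", sorted(os.sched_getaffinity(game_pid)))
```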
 
You'd still have the latency issue even with cache on both chiplets. It would help sure, but what they've done now is probably the best they can get with their basic chiplet approach.

AMD Ryzen 9 7950X Core to Core Latency Final.jpg
 
Are you saying that AMD was lying when they said that there was no improvement in performance with both CCDs having V-Cache?
 
Does anyone still make 3/8" (1/2") water cooling parts?
 
You'd still have the latency issue even with cache on both chiplets. It would help sure, but what they've done now is probably the best they can get with their basic chiplet approach.

View attachment 339162

Yes, but in general this has always been the case with dual-CCD Ryzens. The cache-size mismatch only complicates things further, particularly since the OS technically has no clue about the internal topology without the custom scheduling driver. It's completely transparent otherwise.

Are you saying that AMD was lying when they said that there was no improvement in performance with both CCDs having V-Cache?

It wouldn't be the first time. I'm sure you watched GN's documentary on AMD's lab; they actually went as far as completing a "5950X3D" ES and fully completed the unreleased Zen 3 Threadripper, yet simply opted not to bring either to market. It's obvious that this was a decision made taking into account both market conditions and the need to avoid cannibalizing their own product lines. Why make a 16-core processor with 192 MB of L3 cache and sell it for $700 if you can ask $6,000 for it in the server segment? Likewise, why sell a Threadripper at $3,000 when they can market the exact same processor as an EPYC for $10,000+? AMD's a business; their primary objective is to make money, after all.
 
Makes sense for specific work-related tasks that it excels at, if you don't mind the steep power draw, but otherwise it really doesn't make sense to recommend to anyone given the drawbacks in both value and efficiency. It is definitely better at a handful of tasks, though. From a business standpoint it has its use cases, though it also still has to compete with the likes of Threadripper, EPYC and other workstation- and server-market hardware that might be better or worse in some of those scenarios.

I think for consumers it's hard to recommend, but maybe for upstart indie developers in certain fields.
 
As someone who has a 7800X3D, yeah. It usually sits around 35-55 W in gaming. Amazing CPU! This just makes Intel look bad, at least as far as gaming goes. This year alone we will get a 30% increase in our power bill, starting next month; it was already approved and announced. Funny thing is, I hear Korea will also get a similar one... lol. It's insane. I don't even want to mention how bad the situation is in the UK. People would literally rather freeze in the winter than pay these outrageous prices, people with decent jobs too. This energy issue is hitting, or will hit, everyone. You really have to start thinking about this stuff now.

No more power-hungry tech for me (and many others)
 
I'll say it again: it's the first time in at least the last 10 years that I've even heard of building a render farm out of desktop parts. Most, if not all, render work has moved to AWS and similar services because owning a render farm is pointless for, again, nearly all use cases.
You must be looking at this from the perspective of individuals or very small teams. I'm looking at it from the perspective of companies with several hundred employees. There's some downtime on the render farm, but it's not sitting there idle very often. I can wait upwards of a week for maintenance windows on individual nodes, and while I don't have the stats on hand, I'd wager the monthly utilisation of the farm as a whole is 40-75%, depending on how busy the vis department is at any given time of year.

I don't think it's unusual for AEC firms to have a vis department with several animators, as we occasionally poach staff from other AEC companies and have had staff poached by other AEC companies too. That wouldn't happen if we were doing something abnormal, and it lends support to the argument that most AEC companies will have a vis department with multiple full-time staff who will be animating/rendering full-time.

Points to note:
  • Rendering animations is an application with near-perfect scaling, because the job can be split into frames and divided between render boxes.
  • 6 render nodes of 7950X is 96 cores averaging about 4.8 GHz all-core when limited to Eco Mode.
  • 1 render node of TR 7995WX is 96 cores averaging about 3 GHz, because its power budget per core is far lower than even 6x 7950X in Eco Mode.
  • 4 render nodes of 7950X roughly match the performance of one TR 7995WX, at about $4,000.
  • $4,000 gets you ~800 hours of AWS 96-core time, excluding other necessary AWS org-wide and storage costs that I'm totally ignoring for this, but which are actually quite high.
  • 800 hours of rendering can be used by an individual animator in 2-6 months; we had two animators trialling AWS for a month and they used ~350 h of 3rd-gen 64-core each.
  • Projects typically last 3-6 days with 1 or 2 animators on them, but can scale to 5-6 people for a month for our largest ones.
If you disagree with any of those points, then please let me know where I've gone wrong. The best-case scenario every time I've mathed this out for project costing is that renting compute for a single project is often more expensive than purchasing additional hardware to keep in-house. That's literally the justification I use to expand the render farm, and we've only outsourced or rented when there's very little time, as we can comfortably go from approval > order > next-day delivery > build > deploy software image in around 3-4 days, and the render farm is already sizeable enough that we can usually just pause less critical stuff briefly to get something urgent rendered for a final submission deadline.
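For what it's worth, here's the break-even arithmetic as a rough sketch; it only uses the round numbers from the list above, with real AWS pricing tiers and our storage/org-wide costs deliberately left out:

```python
# Rough break-even sketch using only the round numbers from the list above
# (illustrative; real AWS pricing tiers and storage/org-wide costs are ignored).
NODE_BUNDLE_COST = 4_000       # USD: ~4x 7950X render nodes, roughly one 7995WX of throughput
CLOUD_96C_RATE = 4_000 / 800   # USD per 96-core hour (~$5/h, from the "$4000 = ~800 h" figure)

def cheaper_to_buy(render_hours: float) -> bool:
    """True once renting that many 96-core hours costs more than buying the nodes outright."""
    return render_hours * CLOUD_96C_RATE > NODE_BUNDLE_COST

# One animator-month from our trial, the ~$4k break-even block, a year of one animator.
for hours in (350, 800, 12 * 350):
    print(f"{hours} h of 96-core rendering -> {'buy' if cheaper_to_buy(hours) else 'rent'}")
```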

I'm way off-topic now, but I think the original point was that rendering isn't something you need a lot of locally unless you're animating video rather than still images, and then it's easily offloaded to whatever distributed/cloud service you have available to you; whether that's AWS or in-house is kind of irrelevant WRT the 14900KS.
 
What I'm saying is my very original point: you don't need a TR machine for CPU rendering, since you can offload any rendering production peak to AWS.

You keep going back to your company's unique use case because you don't want to acknowledge any other scenario, which is nearly all of the market. I bet that R34 porn alone is making more money than the whole AEC industry combined. Also, fire the fuck out of those animators using an average of 10 hours of 64-core rendering a day; they are going to run your company into the ground. My god...
 
This is specifically for the temp tests on air, so I can get you actual numbers that you can put into perspective


Definitely not on my 14900K sample, but you can get close of course. 14900K with 6.2 is ultra-rare
I guess I'm lucky then, because my 14900K can do 6.2 on 2 cores and 5.9 on the other 6.
 

Attachments

  • 20240229_154342.jpg
This has now surpassed even the FX-9590, which was the disaster of its time. What kind of power consumption is 500 W? Even an NH-D15 can't cool it. o_O
 
At 500 W I'm worried it could produce burns on the socket and CPU. I'll be happy to run at 1/4 the power, e.g. 125 W. Nothing wrong with unlimited power.
 
Nice swansong for monolithic.

View attachment 338997


That was referring to the tuned results.

https://www.techpowerup.com/review/...ke-tested-at-power-limits-down-to-35-w/8.html TPU's testing shows a tuned 14900K is indeed the most efficient. Note this is a simple tune, power limit only. More involved tuning, together with a per-core overclock and specified voltages, will offer good frequencies and in many cases better-than-stock performance, while also improving efficiency. View attachment 338998
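Not TPU's methodology, just a minimal sketch of what a "power limit only" tune looks like on Linux through the intel_rapl powercap interface (run as root; the 125 W figure is only an example, and on a desktop board you would normally set PL1/PL2 in the BIOS or with XTU instead):

```python
# Minimal "power limit only" tune on Linux via the intel_rapl powercap interface.
# Run as root; values are written in microwatts. Illustrative only.
RAPL = "/sys/class/powercap/intel-rapl:0"

def set_power_limits(pl1_watts: float, pl2_watts: float) -> None:
    # constraint_0 = long-term limit (PL1), constraint_1 = short-term limit (PL2)
    with open(f"{RAPL}/constraint_0_power_limit_uw", "w") as f:
        f.write(str(int(pl1_watts * 1_000_000)))
    with open(f"{RAPL}/constraint_1_power_limit_uw", "w") as f:
        f.write(str(int(pl2_watts * 1_000_000)))

set_power_limits(125, 125)  # e.g. cap the package to a flat 125 W
```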
Do you use those cherries in pie, or are they still too green and bitter?



Well damn, when the 7950X is 35 W limited it kicks Intel's ass again...

Try again?
 
Slower in multi, faster in single? Wow, not that surprising.

I'm interested to see the total system power draw when both CPUs are tested at "35 W". The 7950X sits at 106 W at idle compared to 67 W for the 13900K, which isn't exactly a 14900K but close enough.

I wonder if that "35 W CPU" limit works out to the same total system draw on both platforms, since it seems the AMD chipset does draw quite a bit. Total system power with a one-core load is 123 W on Intel and 135 W on AMD, which is a strange result. 375 W vs 319 W system power draw for a full MT load is also a lot closer than I think people expect. What I'm saying is, comparing 35 W to 35 W for the CPU isn't really "35 W" if the chipset on one platform is pulling some number of watts more than the other, because that won't show up in the "CPU" power figures, but it's an inherent part of using the CPU.

idlenumbers.png


The 7950X goes up in power by 29 W when one core is fully loaded compared to idle.

The 13900K goes up by 56 W. So it's clear a lot of the power draw on the Zen platform isn't related to CPU load; it's there whether the CPU is loaded or not, since single-threaded power consumption as wireview-tested by TPU has the 7950X using more (41 W vs 33 W).

power-singlethread.png
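To make that concrete, here's the back-of-envelope arithmetic as a quick sketch, using only the system-power figures quoted above:

```python
# Back-of-envelope check of the platform-overhead argument, using the
# whole-system power figures quoted above (TPU's numbers, not mine).
idle_w     = {"7950X": 106, "13900K": 67}   # system power at idle
one_core_w = {"7950X": 135, "13900K": 123}  # system power with one core loaded

for cpu in idle_w:
    delta = one_core_w[cpu] - idle_w[cpu]
    print(f"{cpu}: +{delta} W going from idle to a one-core load")
# 7950X: +29 W, 13900K: +56 W -> most of the AMD system's extra draw is
# platform overhead that is there whether the CPU is loaded or not.
```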
 
Just speculation, but I feel like Intel might've pushed Raptor Lake a bit beyond its limits this time around, 500 W and all...
 
Another Emergency Edition, like every KS model before it. I know these are binned chips, as the 14900KS has broken some extreme OC world records, but I see practically no sense in these for consumers.
 
If price and power consumption weren't issues, people would look at it more objectively and fairly.
 
Hi guys,

I'm just curious how you guys managed to cool 500 watts. I have quite a serious cooling solution here, and there's a hard wall for me at 330 watts.
 
Fill out your system specs in your account profile so people can help you.

TPU used a standard AIO with no motherboard, CPU or other relevant hardware modifications to cool the 500 W OC test.

For the stock tests a standard air cooler was used.

Screenshot_20240317-091004_Opera.png
 

I see, thank you for your quick answer.
I will update with all my specs and then have some experts guide me further.

Thank you.
 
A 420 mm AIO is not standard. Those Noctua coolers are both over $100, and the NH-D15 is one of the best air coolers you can buy. What AIO is summarily better than the Arctic Freezer? If we used one of those cheap units from Thermaltake, like the A30, would it work?
 
The definition of standard/off-the-shelf would disagree with you. The 420 mm Arctic is quite cheap too, at $92. Both Noctua coolers used have been on the market for a while and cost ~$80-$109 depending on what you go for, but shop around or wait for a deal and you can find them cheaper.

I'm not going to theorize about which AIOs would be good, because personally I don't consider them to be good options at any price; my view is that either air or custom liquid cooling are the options that make sense. Regardless, for the target audience of this CPU, ~$100 on cooling isn't a problem.

1710673895148.png


Fun direct die block der8auer is prototyping (not implying this is necessary, but a cool option if you're into tuning).

 
Hi guys,

I'm just curious how you guys managed to cool 500 watts. I have quite a serious cooling solution here, and there's a hard wall for me at 330 watts.
You're not going to find it easy to do quietly without a custom water loop.

Even good AIOs are going to struggle beyond a certain point, because their pumps are relatively small and likely optimised for noise and cost at up to around 250 W, since that's the most that 99% of their target customers will ever need.

You'd probably need a D5 at full speed to get over 330 W, and that would need to be coupled with a high-flow block and some very fat radiators, I think. My D5 is loud at full tilt, but it's also old, and I'm not running half-inch tubing or anything particularly high-flow.

Actually, watching that der8auer video, the 360 mm AIO on the 14900KS is already struggling to match a custom-loop 14900K (253 W) after just 5 Cinebench R23 runs, demonstrating (for that 360 AIO, at least) that unless you have a capable custom loop, there's no point buying a KS. Hell, even the regular 14900K might be too much for AIOs, since they're already struggling with the "stock" 253 W PL2 burst before you even try to overclock!
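For a rough feel of why 500 W is such a wall, here's a back-of-envelope thermal-resistance budget as a sketch; the 100 °C throttle point and 25 °C intake temperature are assumptions, and it lumps die, IHS and cooler into one number:

```python
# Back-of-envelope: the total die-to-air thermal resistance you can afford at a
# given package power before hitting an assumed throttle temperature.
T_THROTTLE = 100.0  # degC, assumed throttle point
T_AMBIENT = 25.0    # degC, assumed intake air temperature

def allowed_resistance(watts: float) -> float:
    """Maximum total thermal resistance (degC/W) that keeps the die under T_THROTTLE."""
    return (T_THROTTLE - T_AMBIENT) / watts

for w in (253, 330, 500):
    print(f"{w} W -> at most {allowed_resistance(w):.2f} degC/W end to end")
# ~0.30 degC/W at 253 W shrinks to ~0.15 degC/W at 500 W, i.e. the headroom per
# watt roughly halves, which is why coolers that are fine at stock hit a wall here.
```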
 


So you want to see what expensive chips are capable of when coupled with expensive RAM and a lot of connectivity (what's the PCIe difference between AMD and Intel again?*), then make a whiny statement about "muh 35 W", and when it's shown that AMD still kicks ass at the specific tasks these CPUs are meant to do, your knee-jerk reaction is to cry about whole-platform power at 35 W, while ignoring total stock platform power?


I wish I could see the world how you do, really, just to know what it looks like behind those eyes.

It's the equivalent of comparing two supercars by limiting them to idle and watching fuel consumption due to tire resistance as the metric of who wins... if it works for you, great, I'm glad you have that passion, but I want you to know it looks strange to most.

*



On connectivity AMD wins, but I must confess, it costs them the "race" by 8 whole watts if the CPU is limited to 35 W and you are looking at whole-system power draw, according to your math based on two separate reviews of a 13900 and a lot of assumptions.


Did I get any of this wrong?
 