
Intel Core i9-9900K

So, we're back to Pentium vs Athlon? Intel's hot, power-hungry chips with lots of gigglehurtz are slightly faster than AMD's cheaper, more efficient offerings.

Not exactly. Yes, Intel's chips are hot and power hungry, but back then their CPUs were also slower; today they are hot and hungry but faster.
 
Not exactly. Yes, Intel's chips are hot and power hungry, but back then their CPUs were also slower; today they are hot and hungry but faster.

Also, back then Intel's chips cost the same amount of money to make as AMD's... Now they cost substantially more.

AMD is making chips 80% as powerful as Intel's, and they use almost half the energy... and they cost 30%+ less to produce. Intel is in worse shape than before overall.

P.S. Oh, and that is just talking about desktop. On server, AMD is wiping the floor with Intel in top "halo" performance, price/perf, efficiency, AND security. It's a bloodbath.
 
Also, back then Intel's chips cost the same amount of money to make as AMD's... Now they cost substantially more.

AMD is making chips 80% as powerful as Intel's, and they use almost half the energy... and they cost 30%+ less to produce. Intel is in worse shape than before overall.

P.S. Oh, and that is just talking about desktop. On server, AMD is wiping the floor with Intel in top "halo" performance, price/perf, efficiency, AND security. It's a bloodbath.
That's the best part of this: Intel's only possible answer is to fight back by making cheaper and better products. Gone are the days of selling the same quad-core over and over again.
 
At least Intel sold an 8-core, 16-thread monster for the mainstream market. Highly unlikely that new owners of this part will need an upgrade for the next 3-5 years, considering this beast clocks at 5 GHz comfortably on air and will probably sustain that with water cooling.
 
At least Intel sold an 8-core, 16-thread monster for the mainstream market. Highly unlikely that new owners of this part will need an upgrade for the next 3-5 years, considering this beast clocks at 5 GHz comfortably on air and will probably sustain that with water cooling.
Nope, to keep it cool you will need water cooling; an air cooler works only when the ambient temps are really low & the case airflow is good.

The 9900K is good till 4.7~4.8 GHz all-core on air; the 9700K, though, does seem to be much better, good till 5 GHz on all cores. HT really limits the 9900K's max core clocks & OC, not to mention the temps & power consumption run away after 4.7 GHz :ohwell:
 
Very underwhelming, especially for gaming, but then it's been that way for a while now when it comes to CPUs. Even Bloomfield with a 980 Ti or 1070 is plenty for 1080p/1440p gaming at very high or ultra settings. The one comfort I take in this is that I can still get by on 4C/8T for a very, very, very long time.
 
At least Intel sold an 8-core, 16-thread monster for the mainstream market. Highly unlikely that new owners of this part will need an upgrade for the next 3-5 years, considering this beast clocks at 5 GHz comfortably on air and will probably sustain that with water cooling.

Well, it's technically 'mainstream', but at £600 it's priced in the ultra high-end range, like they used to price their HEDT CPUs (Haswell-E).
 
At least Intel sold an 8-core, 16-thread monster for the mainstream market. Highly unlikely that new owners of this part will need an upgrade for the next 3-5 years, considering this beast clocks at 5 GHz comfortably on air and will probably sustain that with water cooling.

It's the FX 9590 all over again. Back when I used that chip, AMD stated that the TDP was 220 W. Turns out that was a lie, with power skyrocketing to 225 W in Prime95 Small FFT, and I had trouble keeping the Crosshair V's VRM temps at bay, though a mere Corsair H50 was more than sufficient to handle the CPU. Long story short, my system constantly showed BSODs after a year of usage, although neither the CPU nor the motherboard had any physical damage.
Now let's flip the story... what if someone makes a 95 W CPU whose full-load draw is unknown and which needs hefty 360 mm or 420 mm cooling; do you believe this chip would last longer than a year?
 
It's not Intel's fault that today's games do not fully utilize an eight-core CPU.

Same for the 2700X.

There is nothing Intel can do about gaming performance at this point, but the 9900K will show its true benefits for gaming over the years.
It actually kind of is Intel's fault. They were the ones that continued to pressure devs to focus on quad-cores for gaming.
To both of you, let's put this one to rest once and for all.
Multithreading for games doesn't work that way at all. We will not get games that fully utilize 6+ cores for rendering. The direction in game development is less CPU overhead and more of the heavy lifting on the GPU.

Most people misunderstand the features of Direct3D 12. While it is technically possible to have multiple CPU threads build a single queue, the added synchronization and overhead in the driver would be enormous. For this reason, we're not going to see more than 1 thread per workload that can be parallelized, which means separate rendering passes, particle simulation, etc. So games having 6+ threads for rendering is unlikely, and even for games having 2-3, all the main rendering will be done by the main rendering thread.
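To make that split concrete (recording in parallel, submitting on a single queue), here's a minimal C++ sketch. CommandList and Queue are toy stand-ins I made up for illustration, not the real Direct3D 12 interfaces:

```cpp
// Toy model of the pattern described above: several threads each record
// their *own* command list in parallel, but one thread submits them all
// to a single queue. Not real D3D12, just the shape of it.
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

struct CommandList {                 // stand-in for ID3D12GraphicsCommandList
    std::vector<std::string> cmds;
    void record(std::string c) { cmds.push_back(std::move(c)); }
};

struct Queue {                       // stand-in for ID3D12CommandQueue
    void execute(const std::vector<CommandList>& lists) {
        for (const auto& l : lists)
            for (const auto& c : l.cmds) std::printf("GPU <- %s\n", c.c_str());
    }
};

int main() {
    const char* passes[] = {"shadow pass", "gbuffer pass", "particle sim"};
    std::vector<CommandList> lists(3);
    std::vector<std::thread> workers;

    // Recording is parallel: each thread touches only its own list,
    // so no synchronization is needed here.
    for (int i = 0; i < 3; ++i)
        workers.emplace_back([&, i] { lists[i].record(passes[i]); });
    for (auto& t : workers) t.join();

    // Submission is serial: one thread, one queue -- which is why piling
    // more threads onto a *single* queue buys so little.
    Queue{}.execute(lists);
}
```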

Intel or AMD is not to blame here, not the developers either, just forum posters and tech journalists driving up expectations without any technical expertise.
 
Great review. Even though this is a bad buy, the fanboys will love it anyway :D
 
Also, back then Intel's chips cost the same amount of money to make as AMD's... Now they cost substantially more.

AMD is making chips 80% as powerful as Intel's, and they use almost half the energy... and they cost 30%+ less to produce. Intel is in worse shape than before overall.

P.S. Oh, and that is just talking about desktop. On server, AMD is wiping the floor with Intel in top "halo" performance, price/perf, efficiency, AND security. It's a bloodbath.
I doubt that's the case; Intel owns their fabs while AMD uses TSMC & GF along with Sammy. If you're talking about operational costs, or AMD's MCM approach, then that's a separate issue.
I don't believe, though, that Intel chips cost more to produce; in fact it might well be the exact opposite.
 
@R0H1T yep. Still, reaching 5 GHz on all cores is a feat, considering the R7 2700X is still struggling to reach such clocks.
@Shatun_Bear to be precise, it's a high-end mainstream SKU. Dunno whether that description fits it or not xD
@1d10t With a 240 mm rad in push-pull config, fans set to a mild profile, & a really good thermal paste, I think the load temps for the i9 part would hover around the mid-70s °C, depending heavily on ambient room temps.
 
Wow

Exactly what I expected from the i9
 
To both of you, let's put this one to rest once and for all.
Multithreading for games doesn't work that way at all. We will not get games that fully utilize 6+ cores for rendering. The direction in game development is less CPU overhead and more of the heavy lifting on the GPU.

Most people misunderstand the features of Direct3D 12. While it is technically possible to have multiple CPU threads build a single queue, the added synchronization and overhead in the driver would be enormous. For this reason, we're not going to see more than 1 thread per workload that can be parallelized, which means separate rendering passes, particle simulation, etc. So games having 6+ threads for rendering is unlikely, and even for games having 2-3, all the main rendering will be done by the main rendering thread.

Intel or AMD is not to blame here, not the developers either, just forum posters and tech journalists driving up expectations without any technical expertise.
I'm using a 16c/32t CPU with HT off (16c/16t) playing COD BLOPS 4. According to Task Manager, I'm using all cores incredibly evenly.

I assume this is not rendering on more than 2-3 cores? What are we seeing? What are they all doing?

I agree with what you say, just asking what is going on in that title.
 
Well, it seems I will stick with my trusty 2500K for another year :D. Next spring, when Zen 2 appears, maybe I will buy it; by then it will be 8 years of the 2500K ;). Who knows, maybe I will extend it to 10 years if there isn't a 2x performance gain, since for now I can do everything with it. These 3-5% generational performance gains are getting really boring.
 
I'm using a 16c/32t CPU with HT off (16c/16t) playing COD BLOPS 4. According to Task Manager, I'm using all cores incredibly evenly.

I assume this is not rendering on more than 2-3 cores? What are we seeing? What are they all doing?

I agree with what you say, just asking what is going on in that title.
Those are actually very good questions.

Firstly, it's important to understand that utilization in Windows Task Manager is not actual CPU load, but rather how much time threads have been allocated in the scheduling interval. Games usually have multiple threads waiting for events or queues; these usually run in a loop constantly checking for work, but to the OS they will seem to have 100% core utilization. There are several reasons to code this way: firstly, to reduce latency and increase precision; secondly, Windows is not a real-time OS, so the best way to ensure a thread gets priority is to make sure it never sleeps; thirdly, any thread waiting for I/O (HDD, SSD, etc.) will usually show 100% utilization while waiting. It's important to understand that the "100% utilization" of these threads is not a sign of a CPU bottleneck.
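As a minimal sketch of such a spinning loop (the event queue here is invented for illustration, not from any particular engine): Task Manager would report this thread at ~100% even though it does almost no real work.

```cpp
#include <atomic>
#include <chrono>
#include <mutex>
#include <queue>
#include <thread>

std::queue<int> events;      // made-up stand-in for an engine's event queue
std::mutex m;
std::atomic<bool> running{true};

// Spins forever checking for work instead of blocking. The scheduler sees
// a thread that never sleeps, so Task Manager reports ~100% on its core
// even when the queue is almost always empty. A "polite" loop would block
// on a condition variable instead, at the cost of wakeup latency.
void event_loop() {
    while (running.load()) {
        std::lock_guard<std::mutex> lock(m);
        while (!events.empty())
            events.pop();    // "handle" the event (real work omitted)
    }
}

int main() {
    std::thread t(event_loop);
    for (int i = 0; i < 10; ++i) {       // the actual workload is trivial
        {
            std::lock_guard<std::mutex> lock(m);
            events.push(i);
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    running = false;
    t.join();
}
```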

Secondly, game engines do a lot of things that are strictly not rendering and don't impact rendering performance unless they "disturb" the rendering thread(s).
This is a rough illustration I made in 5 min: (I apologize for my poor drawing)
game_engine.png
Some of these tasks may be executed by the same thread, or some advanced game engines scale this dynamically. Even if a game uses 8 threads on one machine and 5 on a different one, that doesn't mean it will have an impact on performance. Don't forget the driver itself can have up to ~four threads on top of this.

Most decent games these days have at least a dedicated rendering thread; many also have dedicated ones for the game loop and event loop. These usually show 100% utilization, even though the true load of the event loop is usually ~1%. Modern games may spawn a number of "worker threads" for asset loading; this doesn't mean you should have a dedicated core for each, since these are usually just waiting on I/O. I could go on, but you should get the point.
There are exceptions to this, like "cheaply made" games such as Euro Truck Simulator 2, which does rendering, the game loop, the event loop and asset loading in the same thread, which of course gives terrible stutter during gameplay.

So you might think it's advantageous to have as many threads as possible? Well, it depends. Adding more threads that need to be synchronized will cause latency, so a thread should only be given a workload it can do independently and then sync back up, or even better, feed an async queue. At 60 FPS we're talking about a frame window of 16.67 ms, and in compute time that's not a lot if most of it is spent on synchronization.
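For what it's worth, here's a hedged sketch of that last point: the game thread hands finished frame data to a render thread through a queue instead of blocking on it. The Frame type and queue are made up for illustration:

```cpp
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

struct Frame { int id; };   // invented placeholder for per-frame data

std::queue<Frame> frame_queue;
std::mutex m;
std::condition_variable cv;
bool done = false;

// Consumes frames as they arrive; never makes the producer wait for it.
void render_thread() {
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !frame_queue.empty() || done; });
        if (frame_queue.empty()) return;     // done and nothing left
        Frame f = frame_queue.front();
        frame_queue.pop();
        lock.unlock();                       // render outside the lock
        std::printf("render frame %d\n", f.id);
    }
}

int main() {
    std::thread renderer(render_thread);
    for (int i = 0; i < 3; ++i) {            // game loop: ~16.67 ms budget
        {                                    // per frame at 60 FPS
            std::lock_guard<std::mutex> lock(m);
            frame_queue.push({i});           // hand off, don't block
        }
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> lock(m); done = true; }
    cv.notify_one();
    renderer.join();
}
```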
 
This processor wattage limit thing in the BIOS... haven't Intel boards always had something equivalent? For example, my IVB motherboard has a setting called "Core Current Limit" which I understood to be the maximum amperes that would be allowed to be drawn by the CPU; multiply that by the vCore and you get the maximum wattage the CPU may draw. Or is this something different?
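For example, with made-up numbers (a 150 A current limit at 1.30 V vCore, not values from this review), that reading would give:

```latex
P_{\max} = I_{\text{limit}} \times V_{\text{core}} = 150\,\mathrm{A} \times 1.30\,\mathrm{V} \approx 195\,\mathrm{W}
```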
 
a. Intel is flooding the market with patched and IMMATURE power-hungry CPUs to f*** this planet.
b. When AMD is about to bring a 10-watt! 7 nm APU with an iGPU equal to an nVidia GTX 1060 mobile!
Then bye bye nVidia and Intel!
 
Those are actually very good questions.

Firstly, it's important to understand that utilization in Windows Task Manager is not actual CPU load, but rather how much time threads have been allocated in the scheduling interval. Games usually have multiple threads waiting for events or queues; these usually run in a loop constantly checking for work, but to the OS they will seem to have 100% core utilization. There are several reasons to code this way: firstly, to reduce latency and increase precision; secondly, Windows is not a real-time OS, so the best way to ensure a thread gets priority is to make sure it never sleeps; thirdly, any thread waiting for I/O (HDD, SSD, etc.) will usually show 100% utilization while waiting. It's important to understand that the "100% utilization" of these threads is not a sign of a CPU bottleneck.

Secondly, game engines do a lot of things that are strictly not rendering and don't impact rendering performance unless they "disturb" the rendering thread(s).
This is a rough illustration I made in 5 min: (I apologize for my poor drawing)
game_engine.png
Some of these tasks may be executed by the same thread, or some advanced game engines scale this dynamically. Even if a game uses 8 threads on one machine and 5 on a different one, that doesn't mean it will have an impact on performance. Don't forget the driver itself can have up to ~four threads on top of this.

Most decent games these days have at least a dedicated rendering thread; many also have dedicated ones for the game loop and event loop. These usually show 100% utilization, even though the true load of the event loop is usually ~1%. Modern games may spawn a number of "worker threads" for asset loading; this doesn't mean you should have a dedicated core for each, since these are usually just waiting on I/O. I could go on, but you should get the point.
There are exceptions to this, like "cheaply made" games such as Euro Truck Simulator 2, which does rendering, the game loop, the event loop and asset loading in the same thread, which of course gives terrible stutter during gameplay.

So you might think it's advantageous to have as many threads as possible? Well, it depends. Adding more threads that need to be synchronized will cause latency, so a thread should only be given a workload it can do independently and then sync back up, or even better, feed an async queue. At 60 FPS we're talking about a frame window of 16.67 ms, and in compute time that's not a lot if most of it is spent on synchronization.
Thanks!


This is the first time I've seen this CPU even tickled, and I was surprised to see that much activity across all cores from a game.

Typically we see exactly what you are saying... 1/2/3 threads pegged and the others tickled, with maybe 1/2 at 50%. It just floored me to see usage that high... first title that has done so.
 
Are you drunk?
The 9900K draws over 200W under load, even without overclocking.

I'm just contesting the tests; I didn't run them myself. I think they're not stressing the CPU enough:

power-stress.png


power-gaming.png
 
That's the best part of this: Intel's only possible answer is to fight back by making cheaper and better products. Gone are the days of selling the same quad-core over and over again.

I am not sure what Intel can even do, though. The 9900K is the best we will see till 2021! 10 nm will not be fit for high-end gaming till 2020, and even then it's likely inferior to 7 nm and certainly to 7 nm+.

Furthermore, their 14 nm capacity problems are a result of Intel's 6- and 8-core chips taking up twice the space on wafers to produce, and of mobile/server buyers only wanting Intel's best yields (no one wants Intel's 25 W mobile i3s, lol). They can't lower prices on their good products because they can't even make enough of them. Intel is going to be forced to have a ton of $400-$600 good chips that are overpriced, and then a mountain of <$100 quad-cores they try to sell almost at cost just to get rid of them.

In hindsight I wonder if Intel should have done things differently. Instead of making 6- and 8-core chips ASAP, maybe they should have focused on a 5.2 GHz quad-core with sTIM. I mean, my 4.5 GHz 6700K games as well as the 9900K most of the time. Nobody has a use for a $600 8-core.
 
Temps.png


Adding more money for the cooler, best gaming processor ever... nice...
 
I am not sure what Intel can even do, though. The 9900K is the best we will see till 2021! 10 nm will not be fit for high-end gaming till 2020, and even then it's likely inferior to 7 nm and certainly to 7 nm+.

Furthermore, their 14 nm capacity problems are a result of Intel's 6- and 8-core chips taking up twice the space on wafers to produce, and of mobile/server buyers only wanting Intel's best yields (no one wants Intel's 25 W mobile i3s, lol). They can't lower prices on their good products because they can't even make enough of them. Intel is going to be forced to have a ton of $400-$600 good chips that are overpriced, and then a mountain of <$100 quad-cores they try to sell almost at cost just to get rid of them.

In hindsight I wonder if Intel should have done things differently. Instead of making 6- and 8-core chips ASAP, maybe they should have focused on a 5.2 GHz quad-core with sTIM. I mean, my 4.5 GHz 6700K games as well as the 9900K most of the time. Nobody has a use for a $600 8-core.
They will have to go back to the drawing board. The monolithic design can't continue without a smaller production process.
 
They will have to go back to the drawing board. The monolithic design can't continue without a smaller production process.

Which is my point when you say "Intel will have to fight back." Intel has nothing to fight back with... until 2022, when they have a brand-new arch on 7 nm.
 
Which is my point when you say "Intel will have to fight back." Intel has nothing to fight back with... until 2022, when they have a brand-new arch on 7 nm.
Well, their fault for being lazy bastards all these years.
 