
Intel Core i7-10700K Features 5.30 GHz Turbo Boost

Intel has no competition from AMD in PC Gaming you say? :roll::clap::roll:
I recommend you stop drinking, OK?
 
Two completely blind fanboys screaming at each other. What a fantastic thing the internet has brought to us.
You call a constructive conversation fanboy screaming? Interesting...
I learned something from ToxicTaz as I'm sure he learned something from me. Forums are here for discussion. :D
 
:toast:
You call a constructive conversation fanboy screaming? Interesting...
I learned something from ToxicTaz as I'm sure he learned something from me. Forums are here for discussion. :D


Hear, hear, well said!
 
You call a constructive conversation fanboy screaming? Interesting...
I learned something from ToxicTaz as I'm sure he learned something from me. Forums are here for discussion. :D
I dunno about that. You both seemed absolutely determined not to actually absorb each other's points.

For example, you both tried to claim that your respective manufacturer's TDP was somehow less than the other's. You stated: "Intel's rated TDPs are always calculated at the base clock, excluding any boost clocks. AMD rates its TDPs more with industry standards."

Taz proceeded to then state "Intel has Rocket Lake to deal with Zen 3 and yes Rocket Lake is 125w TDP what's your point? TDP means nothing....just like your Tread Rippers now pushing almost 300w TDP going off your statement."

The problem being that:

1 - There are no industry standards on what TDP means. Gamers Nexus has a great in-depth article here: https://www.gamersnexus.net/guides/...lained-deep-dive-cooler-manufacturer-opinions

2 - Taz's statement first dismisses the importance of TDP entirely, and then in the very same statement he uses TDP as a means by which to attack your point.

You're both at best half right and there's certainly no indication either of you is listening to what the other has to say.
 
There is an industry-standard idea of TDP: the maximum amount of heat a component generates. As the name says, its purpose is thermal design; the component's cooling should be able to handle a specified maximum value.
 
There is an industry-standard idea of TDP: the maximum amount of heat a component generates. As the name says, its purpose is thermal design; the component's cooling should be able to handle a specified maximum value.
Care to link to this supposed standard?
Because TDP is not as simple as you think it is.
 
What makes it not simple? There is a piece of something - usually metal because that has good thermal conductivity and ability to withstand high temperatures - and that piece radiates heat. The rate of heat transfer is measured or represented in watts (W, a joule of heat in one second). TDP is the maximum designed amount of heat to be radiated from it. This is quite useful for figuring out its need for cooling. Now, this has little to do specifically with CPUs or semiconductors at this point.

In the case of a chip, radiated heat for all practical purposes (technically, roughly) equals its power consumption.

When it comes to CPUs, marketing and brand politics come into play and disrupt the normal spec-creation process. Both AMD and Intel currently have convoluted and complex TDP definitions on purpose (or should I say for marketing purposes), involving thermally significant periods and useful work. Surprisingly, at the same time GPUs adhere to and limit themselves to TDP quite precisely.
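The watt bookkeeping above (a watt being a joule of heat per second) can be sketched in a few lines; the wattage and time figures are illustrative, not from any spec sheet:

```python
# A watt is a joule per second, so a constant heat rate integrates
# directly into energy the cooler has to move. Figures are illustrative.

def heat_energy_joules(power_watts: float, seconds: float) -> float:
    """Energy radiated by a component at a constant heat rate."""
    return power_watts * seconds

# A chip radiating 125 W for one minute dumps 7500 J into its cooler.
print(heat_energy_joules(125.0, 60))  # 7500.0
```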
 
What makes it not simple? There is a piece of something - usually metal because that has good thermal conductivity and ability to withstand high temperatures - and that piece radiates heat. The rate of heat transfer is measured or represented in watts (W, a joule of heat in one second). TDP is the maximum designed amount of heat to be radiated from it. This is quite useful for figuring out its need for cooling. Now, this has little to do specifically with CPUs or semiconductors at this point.

In the case of a chip, radiated heat for all practical purposes (technically, roughly) equals its power consumption.

When it comes to CPUs, marketing and brand politics come into play and disrupt the normal spec-creation process. Both AMD and Intel currently have convoluted and complex TDP definitions on purpose (or should I say for marketing purposes), involving thermally significant periods and useful work. Surprisingly, at the same time GPUs adhere to and limit themselves to TDP quite precisely.
So basically you don't care to link to that standard. Got it.
 
So basically you don't care to link to that standard. Got it.
This is a cop-out and you know it.
Did you read what I wrote?
 
Industry standard does not always mean a document. It also means a kind of established common sense in an industry.
Again, what would you say was wrong in my post?
 
Industry standard does not always mean a document. It also means a kind of established common sense in an industry.
:roll::roll::roll:
Not only is an industry standard always a document, it's also a vetted one.
Basically, you're wishing your common sense was an industry standard.
Again, what would you say was wrong in my post?
It's incomplete, it paints a truncated picture.
 
What kind of truncated picture? Could you please elaborate?

Not only is an industry standard always a document, it's also a vetted one.
Basically, you're wishing your common sense was an industry standard.
OK, we seem to have a linguistic argument here. Sorry, English is not my first language. I could have sworn industry standard is used for other meanings than strictly vetted documents.
 
What kind of truncated picture? Could you please elaborate?
I can. But I will just go get some sleep instead.

What kind of truncated picture? Could you please elaborate?
Ok, here goes: you're disregarding change and thinking of a stationary system.

In practice, if you design a system to be able to sustain 100W TDP, the heatsink, if cool enough, will be able to handle, say, 200W for a limited amount of time. It will get hot, but as long as it can still absorb heat, it will do its job. This is what you see with Intel systems when you conclude their TDP definition is wrong. But the thing is, Intel can't know how far the CPU will go, because you may have bought a 150W-capable heatsink for your system. Or even a 200W heatsink. There's a limit in the silicon, too; that's where Intel sets the cutoff, but until you reach it, the CPU is allowed to go crazy. But how crazy will vary from system to system; Intel cannot guarantee that.
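The transient argument above (a cool heatsink can absorb a power burst well beyond its sustained rating) can be sketched with a toy lumped-capacitance model. The thermal resistance, heat capacity and power figures below are invented for illustration, not real hardware data:

```python
# Toy lumped-capacitance model: a heatsink with thermal resistance R (K/W)
# and heat capacity C (J/K) absorbs a power burst while its temperature is
# still climbing toward steady state. All values are invented.

def temp_rise(power_w, seconds, r_k_per_w=0.5, c_j_per_k=400.0, dt=0.1):
    """Euler-integrated temperature rise over ambient (kelvin)."""
    delta = 0.0
    for _ in range(int(seconds / dt)):
        # heat flowing in, minus heat shed to ambient, integrated over dt
        delta += (power_w - delta / r_k_per_w) * dt / c_j_per_k
    return delta

# A sustained 100 W load settles toward a 50 K rise (100 W x 0.5 K/W) ...
print(round(temp_rise(100, 5000), 1))  # 50.0
# ... while a short 200 W burst stays far below its 100 K steady state:
print(round(temp_rise(200, 10), 1))    # 4.9
```

This is why a brief over-TDP excursion is survivable: the cooler's heat capacity buys time before the temperature spirals.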
OK, we seem to have a linguistic argument here. Sorry, English is not my first language. I could have sworn industry standard is used for other meanings than strictly vetted documents.
It's not my first language either and you're not entirely wrong. There are unratified standards, they are called "de-facto standards" (the others are "de-jure standards"). But when you're talking about de-facto standards, you need to refer to them as such. Also, de-facto standards are seldom binding (e.g. JPEG is the de-facto standard for images on the web, yet there's plenty of other formats used for the same purpose).
 
Ok, here goes: you're disregarding change and thinking of a stationary system.

In practice, if you design a system to be able to sustain 100W TDP, the heatsink, if cool enough, will be able to handle, say, 200W for a limited amount of time. It will get hot, but as long as it can still absorb heat, it will do its job. This is what you see with Intel systems when you conclude their TDP definition is wrong. But the thing is, Intel can't know how far the CPU will go, because you may have bought a 150W-capable heatsink for your system. Or even a 200W heatsink. There's a limit in the silicon, too; that's where Intel sets the cutoff, but until you reach it, the CPU is allowed to go crazy. But how crazy will vary from system to system; Intel cannot guarantee that.
The dynamic nature of the load should not be that much of a problem: spec for the maximum.

The thing with CPUs, as well as other modern semiconductors, is that TDP is almost never about silicon capability - everything is power-limited anyway. Assuming (and using) potential additional cooling capacity by default is not something I would say is OK. While that is the official reasoning, it does not sound very sincere. It is the "thermally significant period" and "useful work" thing right there. With a heavy load (which even gaming can provide on lower-end and midrange CPUs these days), the boost received is very minor, if present at all. On the other hand, it helps a lot with benchmarks.

Edit: that last part assumes the over-TDP period is temporary and uses an oversized cooler's ability to absorb heat, which is what the theory says. This does seem to be the case for Intel non-K CPUs today, but not for Intel K CPUs or Ryzen 3000s; the last two will happily boost beyond TDP.

Heatsink TDP has proven to be even more bullshit than CPU TDP, even though it should be simpler, since the dynamic, variable part of a heatsink is generally limited to temperature and fan speed.
 
The dynamic nature of the load should not be that much of a problem: spec for the maximum.

The thing with CPUs, as well as other modern semiconductors, is that TDP is almost never about silicon capability - everything is power-limited anyway. Assuming (and using) potential additional cooling capacity by default is not something I would say is OK. While that is the official reasoning, it does not sound very sincere. It is the "thermally significant period" and "useful work" thing right there. With a heavy load (which even gaming can provide on lower-end and midrange CPUs these days), the boost received is very minor, if present at all. On the other hand, it helps a lot with benchmarks.

Edit: that last part assumes the over-TDP period is temporary and uses an oversized cooler's ability to absorb heat, which is what the theory says. This does seem to be the case for Intel non-K CPUs today, but not for Intel K CPUs or Ryzen 3000s; the last two will happily boost beyond TDP.

Heatsink TDP has proven to be even more bullshit than CPU TDP, even though it should be simpler, since the dynamic, variable part of a heatsink is generally limited to temperature and fan speed.
Let's make this simple: Intel tells you what the CPU would pull under constant load and under (some) bursty conditions. They could hard-cap power there, but instead they let systems with beefier cooling draw more power, as long as they don't get too hot. This last part is not quantifiable, so they cannot put a number on it. If Intel did what you suggest, as they did for years, there would be some untapped potential left in your CPU.
 
- Intel tells you what the CPU would pull under constant load and silently gives you another number for bursty conditions (+25% for 8 seconds). In a silent agreement with motherboard manufacturers, limits are not always imposed on K-series CPUs (or are set to something inane like 210W for the 9900K), letting them pull more than they are supposed to, for longer than they should.
- AMD does seem to have a proper power limit in place (at 35% more than the stated TDP value for Ryzen 3000). Additional headroom is available as PBO.

Yes, both state some reliance on cooler capabilities, but in practice an insufficient cooler just triggers thermal throttling, which calms the CPU down. AMD does have a temperature-based dynamic boost mechanism, but it does not make much of a difference in practice.
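The PL1/PL2 scheme described above (TDP, plus roughly 25% more for up to 8 seconds) can be sketched as a toy limiter. Real firmware tracks an exponentially weighted moving average of package power, so this step function is a deliberate simplification:

```python
# Toy sketch of the PL1/PL2 scheme: PL2 (~125% of TDP) is allowed for up to
# tau seconds of sustained load, after which the limit falls back to PL1
# (= TDP). Real firmware uses an exponentially weighted moving average;
# this step-function version is a deliberate simplification.

def power_limit(tdp_w: float, seconds_under_load: float, tau_s: float = 8.0) -> float:
    pl1 = tdp_w
    pl2 = tdp_w * 1.25
    return pl2 if seconds_under_load <= tau_s else pl1

print(power_limit(125, 2))   # 156.25  (inside the burst window)
print(power_limit(125, 30))  # 125     (sustained load falls back to PL1)
```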
 
- Intel tells you what the CPU would pull under constant load and silently gives you another number for bursty conditions (+25% for 8 seconds). In a silent agreement with motherboard manufacturers, limits are not always imposed on K-series CPUs (or are set to something inane like 210W for the 9900K), letting them pull more than they are supposed to, for longer than they should.
- AMD does seem to have a proper power limit in place (at 35% more than the stated TDP value for Ryzen 3000). Additional headroom is available as PBO.

Yes, both state some reliance on cooler capabilities, but in practice an insufficient cooler just triggers thermal throttling, which calms the CPU down. AMD does have a temperature-based dynamic boost mechanism, but it does not make much of a difference in practice.
Yeah, well, when the architectures are so different, power draw will act differently. What can you do?
 
You say get a better GPU and RAM?? Are you on drugs?
Well, you are GPU-limited in most games. You bought a $600 CPU and a $700 RTX 2080, for $1300 total.
You would have had better gaming performance from a $300 CPU and a $1000 RTX 2080Ti for the same $1300.

It really is that simple:

Whether it's a 9700K or a 3800X doesn't really matter. A 2080Ti is so much faster than a vanilla 2080 that there's GPU performance you denied yourself by spending that money on an overpriced CPU instead.
 
- Intel tells you what the CPU would pull under constant load and silently gives you another number for bursty conditions (+25% for 8 seconds). In a silent agreement with motherboard manufacturers, limits are not always imposed on K-series CPUs (or are set to something inane like 210W for the 9900K), letting them pull more than they are supposed to, for longer than they should.
- AMD does seem to have a proper power limit in place (at 35% more than the stated TDP value for Ryzen 3000). Additional headroom is available as PBO.

Yes, both state some reliance on cooler capabilities, but in practice an insufficient cooler just triggers thermal throttling, which calms the CPU down. AMD does have a temperature-based dynamic boost mechanism, but it does not make much of a difference in practice.

A stock 9900KS is less than 200W.

Any 200W air or liquid cooler works fine.

The 10700K uses less power than my 9900KS, with similar performance, just above the 3800X.

Well, you are GPU-limited in most games. You bought a $600 CPU and a $700 RTX 2080.
You would have had better gaming performance from a $300 CPU and a $1000 RTX 2080Ti.

It really is that simple. Whether it's a 9700K or a 3800X doesn't really matter. You don't have a 2080Ti, so there's performance left on the table: money you chose to plough into your CPU that would have served you better on the GPU instead.

Did you read what I said? "RTX 2080 NVLink" (for people who don't know, that's "2080 SLI"), and yes, all my SLI profiles are working. SLI takes a lot of work to get working... But performance-wise, 2080 SLI is faster than a 2080Ti.

3840x1600 is what I'm driving (LG UltraGear 38GL950G-B) if you're concerned.

Black Friday 2018, my RTX 2080 NVLink setup was $200 less than one RTX 2080Ti.

Playing COD Modern Warfare at the moment, maxed out, at 120+ fps with ray tracing off... with ray tracing on, it drops to around 85 fps.
 
- Intel tells you what the CPU would pull under constant load and silently gives you another number for bursty conditions (+25% for 8 seconds). In a silent agreement with motherboard manufacturers, limits are not always imposed on K-series CPUs (or are set to something inane like 210W for the 9900K), letting them pull more than they are supposed to, for longer than they should.
- AMD does seem to have a proper power limit in place (at 35% more than the stated TDP value for Ryzen 3000). Additional headroom is available as PBO.

Yes, both state some reliance on cooler capabilities, but in practice an insufficient cooler just triggers thermal throttling, which calms the CPU down. AMD does have a temperature-based dynamic boost mechanism, but it does not make much of a difference in practice.
Not how TDP works.
Let's pretend you have a heatsink large enough to keep a heat-emitting object at 50C in a 20C room, and let's say that you're dealing with 100W in those circumstances.

The exact same heatsink, with absolutely no changes, will also be able to keep a more substantial heat-emitting object at a steady temperature somewhere upwards of 50C (since temperature scales are arbitrary, it won't simply be twice as many degrees) in a 20C room, and it will be dissipating, say, 200W of heat the entire time it does so.

So is this imaginary heatsink a 100W TDP heatsink or a 200W TDP heatsink? It can dissipate either figure as long as you're prepared to accept a higher delta over ambient, but claiming either one is the rated TDP of the cooler is meaningless unless you also specify a temperature delta over ambient.

No manufacturer provides this info.
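The delta-over-ambient point fits in one formula: dissipation equals the temperature delta divided by the heatsink's thermal resistance. The 0.3 K/W figure below is an assumed, illustrative value, not any real cooler's spec:

```python
# A heatsink's steady-state dissipation is set by its thermal resistance
# (K/W, an assumed figure here) and the temperature delta over ambient you
# accept. The same part is "100 W" or "200 W" depending on which delta the
# vendor silently assumed.

def dissipation_watts(delta_over_ambient_k: float, r_theta_k_per_w: float) -> float:
    return delta_over_ambient_k / r_theta_k_per_w

r = 0.3  # K/W, illustrative
print(dissipation_watts(30, r))  # 100.0 at a 30 K delta (e.g. 50 C in a 20 C room)
print(dissipation_watts(60, r))  # 200.0 for the very same heatsink at a 60 K delta
```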
 
Did you read what I said? "RTX 2080 NVLink" (for people who don't know, that's "2080 SLI"), and yes, all my SLI profiles are working. SLI takes a lot of work to get working... But performance-wise, 2080 SLI is faster than a 2080Ti.
Ewww! SLI...
I'll quote TechPowerup, since you're using these forums:

"So the burning question: should you spend $1,680 on RTX 2080 SLI? Absolutely not. Averaging all our tests, RTX 2080 SLI is within single-digit percentage performance of the RTX 2080 Ti"

Since that article was written 18 months ago, even fewer games have decent SLI support; Nvidia have dropped SLI for nearly all of their cards, so most developers aren't bothering to optimise for it.

I mean, if you're happy - great, but I certainly wouldn't go around preaching its merits.
 
Yeah, well, when the architectures are so different, power draw will act differently. What can you do?
Either set the TDP to where the power limit is, or limit the power to where the TDP is set. Different architectures do not really play a large part in this.
A stock 9900KS is less than 200W.
At stock, yes it is. It tends to have a measured power consumption of 160-170W.
Anandtech noted in their 9900K review that PL2 for it is set to 210W, and the same has been found by other reviewers with different motherboards. Note that this is PL2, the power limit for the temporary boost. Based on Intel's own documentation, this should last 8 seconds at most and be 125% of TDP. In theory.
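A quick sanity check of those figures, taking the 9900K's published 95W rated TDP and the 210W PL2 reviewers measured:

```python
# Intel's documented scheme would put the 9900K's short-term limit at 125%
# of its 95 W rated TDP, well below the 210 W reviewers actually measured
# as PL2 on retail boards.

tdp_w = 95            # 9900K rated TDP (published spec)
documented_pl2 = tdp_w * 1.25
measured_pl2 = 210    # figure reported by Anandtech and others

print(documented_pl2)                 # 118.75
print(measured_pl2 - documented_pl2)  # 91.25 W of extra headroom
```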
 
Not how TDP works.
Let's pretend you have a heatsink large enough to keep a heat-emitting object at 50C in a 20C room, and let's say that you're dealing with 100W in those circumstances.

The exact same heatsink, with absolutely no changes, will also be able to keep a more substantial heat-emitting object at a steady temperature somewhere upwards of 50C (since temperature scales are arbitrary, it won't simply be twice as many degrees) in a 20C room, and it will be dissipating, say, 200W of heat the entire time it does so.

So is this imaginary heatsink a 100W TDP heatsink or a 200W TDP heatsink? It can dissipate either figure as long as you're prepared to accept a higher delta over ambient, but claiming either one is the rated TDP of the cooler is meaningless unless you also specify a temperature delta over ambient.

No manufacturer provides this info.

Not really. If a heatsink is designed to handle 100W, that's how much it will dissipate continuously. If you feed it 200W constantly, it will start heating up faster than it can shed heat, and temps will spiral up from there. Luckily, CPUs are designed to step back from dissipating 200W when they detect this.
So no, the CPU will not get just slightly warmer if it goes beyond what the heatsink is designed to handle.
Either set the TDP to where the power limit is, or limit the power to where the TDP is set. Different architectures do not really play a large part in this.
Man, have I been talking to myself all this time? The power limit is dictated by the CPU, the heatsink and the airflow in your case. How would you put a number on that as the manufacturer of the CPU?
 