That's with a custom loop? Oh god.
A custom loop, yes, but with a single 240mm rad for both CPU and GPU, and a quasi-AIO DDC pump/block combo on the CPU that isn't particularly good thermally. The loop is also configured for silence rather than thermals, with fans ramping slowly and based on water temperature rather than component temperatures.
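If you're curious what "based on water temperature" means in practice, here's a rough sketch of that kind of fan curve - the breakpoints are made-up illustrative numbers, not my actual settings:

```python
# Hypothetical water-temperature-based fan curve: fans respond to coolant
# temperature, not CPU/GPU temperature, so short load spikes don't cause
# audible ramping. All breakpoints are illustrative, made-up values.

def fan_duty(water_temp_c: float) -> int:
    """Map coolant temperature (°C) to fan duty cycle (%)."""
    curve = [(28.0, 30), (32.0, 40), (36.0, 60), (40.0, 100)]
    if water_temp_c <= curve[0][0]:
        return curve[0][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if water_temp_c <= t1:
            # Linear interpolation between breakpoints.
            return round(d0 + (d1 - d0) * (water_temp_c - t0) / (t1 - t0))
    return curve[-1][1]

for t in (27, 30, 34, 38, 42):
    print(f"{t}°C water -> {fan_duty(t)}% duty")
```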
Well, I'm not really impressed by the thermals of Ryzen chips. You could cool FX chips at 5GHz and keep them under 62°C with just a big air cooler. Stock 95W FX chips could be passively cooled with the same air cooler with the fans removed. And now you need a big water cooler just to keep Ryzen working at stock clocks. That's a fail to me. The last time AMD needed a water cooler was with the FX-9590, and that was just a 120mm AIO.
Apparently you didn't read what I wrote whatsoever. Oh well.
Keeping a CPU at 90°C or 100°C isn't acceptable. The fact that it can survive such temperatures only means that reaching them occasionally won't have any lasting effect. I remember an Intel thermal engineer posting that their 14nm chips could survive 1.4 volts at up to 80°C long-term, but violate that voltage or cooling envelope and electromigration will get bad.
Sorry, but that's nonsense. Silicon is perfectly fine running at 90-100°C for extended periods of time. As I've said before here, look at laptops: most laptops idle in the 60s-70s and hit tJmax under any kind of load, as they prioritize keeping quiet and accept that running hot doesn't do any harm. It can do harm if you also ramp voltages high while loading the CPU heavily, but advanced self-regulating CPUs like Ryzen don't allow that combination unless you explicitly disable protections and override the regulating mechanisms. Heck, Buildzoid once tried to intentionally degrade his 3700X, and after something like 60 continuous hours at >110°C (thermal limits bypassed) and 1.45V under 100% load he lost ... 25MHz of clock stability. So under any kind of regular workload, degradation is never, ever happening, as that combination of thermals, voltage and load over time is utterly absurd for real-world use. Sure, his sample might be unusually resistant to electromigration, but even accounting for that there's no reason to worry at all.
Never. Intel's PL1 is how they define TDP. For the first time, they finally got their shit together in this one aspect.
PL1 is absolutely not how Intel defines TDP. PL1 is derived from TDP; TDP is defined as a thermal output class towards which CPUs are tuned in terms of base clock and other characteristics. Power draw is only tangentially related to TDP.
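To make the PL1/PL2 relationship concrete, here's a simplified toy model of how the limits interact - the EWMA-style averaging is a rough approximation of the documented behaviour, and the wattage/tau numbers are illustrative, not any specific SKU's spec:

```python
# Simplified toy model of Intel's PL1/PL2/tau interaction: the package may
# draw up to PL2 while a moving average of power stays under PL1; tau is
# the averaging time constant. The EWMA here is an approximation of the
# documented behaviour, and all numbers are illustrative.
import math

PL1, PL2, TAU = 65.0, 224.0, 28.0   # watts, watts, seconds
DT = 1.0                            # simulation step, seconds

alpha = 1 - math.exp(-DT / TAU)     # per-step EWMA weight
avg = 10.0                          # assume a near-idle starting point

for t in range(61):
    draw = PL2 if avg < PL1 else PL1    # boost until the average hits PL1
    avg += alpha * (draw - avg)         # update the running average
    if t % 10 == 0:
        print(f"t={t:2d}s  draw={draw:5.1f} W  avg={avg:5.1f} W")
```

Run it and you see the classic pattern: a burst at PL2 for a handful of seconds, then the draw falling back to hover around PL1.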
Well, that's obvious, but what matters now is what they'll do with Alder Lake.
It's not going to change. The 65W TDP tier is utterly dominant in the OEM space, which outsells DIY by at least an order of magnitude, so 65W TDPs for midrange and lower-end chips aren't changing. If you want more for DIY, they have a K SKU to sell you that covers that desire - for a price, of course. You, and we DIYers overall, are not first in line for things being adjusted to our desires, and never will be.
First, I highly doubt that T chips are actually better bins of non-T chips, and BIOSes often allow you to set your own PL values anyway.
They are supposed to be better binned - whether they are in real life is always a gamble, as there's a lot of overlap between different bins, and some are interchangeable depending on the application.
The DIY market was just fine without TDP shenanigans. Even chips with a single clock speed were perfectly acceptable and didn't have problems. I'm not a fan of turbo and other power tweaking; one static clock with downclocking for power savings seems to be the best design so far.
Again: it seems like you haven't read the rest of this thread at all. I'll just point you to this post. Though especially this part:
you're approaching this from the wrong angle, which either stems from a fundamental misunderstanding or from wanting something that doesn't exist. The issue: TDP is not a consumer-facing specification denoting power draw. It never has been. Historically it has been roughly equivalent to this, but this is more coincidental than intentional. TDP is a specification for SIs and cooler OEMs to design their cooling solutions and system designs around. If TDP was meant to denote power draw directly, it would for example be a guide for motherboard makers in designing their VRM setups - but it's not, and there are specific specifications (dealing with the relevant metrics, volts and amps) for that. You can disagree with how TDPs are used in marketing with regards to this - I definitely do! - but you can't just transfer it into being something it isn't.
Saying "DIY market was just fine without TDP shenanigans" is such an absurd reversal of reality that it makes it utterly impossible to actually discuss the issues at hand. TDPs have
never been directly related to power draw, nor has it ever been intended for the DIY market beyond a product class delineation.
As for abandoning boost: well, if you'd be happy with ~2.5GHz CPUs, then by all means. Because that's what we'd get if there wasn't boost - we'd get base clock at sustained TDP-like power draws. The 65W TDP tier isn't going anywhere, again, as OEMs buy millions of those CPUs, and changing it would be extremely expensive for them.
I know full well that it's not exactly a throttle in legal terms, but realistically you lose performance, because your cooler can't keep up. You sacrifice performance to not damage the chip.
Yes. But that's not throttling. That's part of tuning a DIY system. Nobody has ever promised 100% boost clock 24/7 under 100% all-core load, or even 1-core load. You really need to be more nuanced in your approach to this.
Obviously at below the maximum manufacturer-specified temperature, at maximum clock speed, and at whatever my ears tell me is an acceptable noise level, which tends to be up to 1200 RPM most of the time, preferably no more than 1000 RPM. Power draw depends on the chip and is generally not a concern unless it's very high. Your partner's TR system would have failed this test spectacularly.
"At below maximum manufacturer specified temperature" ... okay ... so, anything below 100°C-ish? Because above you seemed to say 80°C was unacceptable. Yet that's quite a bit below maximum. Also, 1200rpm ... of which model of fan, how many fans, which case, which cooler? And
obviously the TR system would have failed,
it had a clogged AIO cooler. My point was: you're making generalizing claims without defining even close to a sufficient amount of variables. Your criteria still make it sound like my cooling setup is well within your wants, yet you're saying above that it's unacceptable, so ... there's something more there, clearly.
Prime95 is a perfectly realistic workload; some people calculate primes for weeks. And let's not get into the FurMark shit again. I will be very clear: if a card can't handle some type of workload, then it's either badly tuned or has an inadequate cooling solution. I don't care that it kills some badly engineered cards, as no properly made card should die in FurMark. Also, judging by the power figures, running FurMark is not much different from mining or running MilkyWay@Home. My RX 580 can handle FurMark just fine with vBIOS mods - it now can't reach the 80s and barely breaks into the 70s in FurMark. The RX 560 I have in another machine fails to reach the 70s.
Prime95 not "realistic". Yes, some people calculate primes for weeks. Some people calculate the changes in molecular or cell structures of complex organisms when subjected to various chemicals. That doesn't make either a relevant end-user workload. If you're doing workstation things, get a workstation, or accept that consumer-grade products aren't designed for that and you need to overbuild to match. As for FurMark, whether a GPU can "handle" it is irrelevant. It is a workload explicitly created for maximum heat output, which is
dangerous to run. It doesn't matter what thermals your GPU reads (heck, the very fact that you're saying "it can handle it with BIOS mods!" says enough by itself!), the issue is that it creates extreme hotspots away from the thermal sensors on your GPU. Most GPUs - all of them pre RDNA - have their thermal sensors along the edge of the die. Under normal loads there's easily a 10-20°C difference in thermals between the edge and centre of the die under full load. Furmark exaggerates that - so if your edge thermal sensor is reading 70-80, the hotspot temperature might be 110 or higher. If your hardware doesnt die that's good for you, but please stop subjecting it to unnecessary and unrealistic workloads just for "stress testing".
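To put rough numbers on that (the gradients below are the ballpark figures from this post - assumptions, not measurements):

```python
# Rough numbers for the edge-sensor vs hotspot point above. The gradients
# are the ballpark figures quoted in the post (assumed, not measured).
EDGE_READING = 75                 # °C shown by monitoring software
NORMAL_GRADIENT = (10, 20)        # °C edge-to-centre delta, normal full load
FURMARK_GRADIENT = (30, 40)      # °C assumed delta under a power virus

for label, (lo, hi) in (("normal load", NORMAL_GRADIENT),
                        ("FurMark", FURMARK_GRADIENT)):
    print(f"{label}: sensor {EDGE_READING}°C -> "
          f"hotspot ~{EDGE_READING + lo}-{EDGE_READING + hi}°C")
```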
And watts are amps*volts, therefore VRMs care about watts. And no, those Athlons didn't run at 1.5 volts - the Athlon X4 870K and Athlon X4 845 are both limited to 1.5V or 1.485V, and no Athlon came out with more than that. Also, most of that voltage is needed for turbo to work, so if you disable turbo, you can get massive voltage reductions.
Jesus christ, man, come on. No. VRMs care about watts only as expressed in amps. That was the entire point of what I said. And while it's true I cited the voltages of the highest-running Athlons, those are still much higher than what current CPUs run. (Current-gen Ryzens report very high core voltages in software, but from what AMD's engineering team has said, those voltages are read before being stepped down to what the core actually demands, so the chip isn't actually running at 1.4V or higher during boost despite what software might say.)
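To put the watts-vs-amps point in numbers (generic round figures, not any particular CPU's spec):

```python
# For a fixed power draw, VRM current rises as core voltage falls
# (I = P / V). Generic round figures for illustration only.
POWER_W = 125.0

for vcore in (1.45, 1.20, 1.00):
    amps = POWER_W / vcore
    print(f"{POWER_W:.0f} W at {vcore:.2f} V -> {amps:5.1f} A through the VRM")
```

Same wattage, but the VRM has to deliver roughly 45% more current at 1.0V than at 1.45V - that's why it's the amps that size the VRM, not the watts.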
And yes, of course you get voltage reductions if you disable boost. That's ... rather obvious, no? Go below stock behaviour, and you'll get lower voltages and power draws. Not quite surprising.
Nah, it's new stock. I have loads of chips for FM2+ boards. Athlon 760K is just one of them. I bought it for unique reasons:
www.overclock.net
No. Old stock = old, unsold products that have been sitting on shelves for a long time. That CPU was launched in October 2012, and while production of course ran for several years after that, it definitely wasn't recently manufactured when you bought it. And even if it was, it was still ancient tech at that point. Which is fine, but please don't try to say that it wasn't old.
Previously that computer had an 870K, which was made in 2015, and the Athlon 845 was made in 2016. Both are nowhere near being 10 years old. Several motherboards had extended manufacturing runs for some reason, so you could buy them even in 2018 and probably 2019. The Athlon 845 is a unicorn chip: it's somewhat rare, as it was released at the end of the FM2+ platform's lifespan, and it has a Carrizo core, the last architectural improvement on the AM3+ and FM2+ platforms. The Athlon 870K is also a late-production model, but it's a better-binned 860K. Its availability was poor and it mostly sold after FM2+ became obsolete. There were a bunch of other rare CPUs released in 2016 for the FM2+ platform, like the A6-7470K or the A10-7890K.
Not that those CPUs aren't interesting, but they're still old tech. The A8-7600 I just retired from my NAS was just as old. Sure, AMD iterated on its "heavy machinery" construction cores for quite a few years, and even launched Carrizo very close to Ryzen, but the actual generation-on-generation changes were pretty tiny. And that a five or six-year-old CPU is less old than a 10-year-old CPU is ... not that interesting?
If I had to write a review, I'd try to do it both ways - like the guys here at TPU do. When reviewing, you need to consider that not everyone who reads your review will want the same out of their system.
Absolutely. Though that's a lot of work - more than most reviewers probably have time (or get paid) for. IMO, reviewers ought to have at least two test systems per generation, one high end and one midrange, and compare the two at spec and stock settings. That would be near ideal.
Exactly. Throttling means dropping below base clock, which (coming back to the original topic) only that one ASRock motherboard does in HU's latter video. All the rest are within spec, however vague that spec is.
Yeah, that's pretty atrocious. This is why this discussion is getting so muddled though - people mix up annoyance at Intel for being vague AF and not enforcing their specs with OEMs partially making use of that to effectively OC their parts, and partly just making cheap shit and selling it as if it was good enough. Both sides need addressing, and need addressing specifically for what they're messing up. But that's tricky.
I saw a 4750G on ebay a couple weeks ago for about £450. As an OEM CPU, it comes with no box and no warranty. I got the Asus B560M TUF motherboard and the i7-11700 for the same price brand new. We'll see what happens when the 5000G/GE series come out for DIY. I might buy one just to test it, and sell the Core i7 if it's any good.
Whoa! I paid €225 for my 4650G. I don't care much about the warranty - I've never had a CPU fail, and stories of that are rare enough that I can't imagine needing it.
Oh no, I'm definitely not gonna run a 224 W PL2.
I intend to do as much tweaking as necessary to make it work in my thin SFF case. I want to find the perfect balance.
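For what it's worth, if the build ends up on Linux, one way to experiment with PL1/PL2 without BIOS round-trips is the intel_rapl powercap sysfs interface. Treat the below as a sketch: the exact paths and constraint ordering can vary by kernel and platform, so verify the *_name files on your machine first.

```python
# Sketch: adjusting PL1/PL2 on Linux via the intel_rapl powercap sysfs
# interface (needs root). Paths and constraint ordering can vary, so
# check the constraint_*_name files before trusting the indices.
from pathlib import Path

PKG = Path("/sys/class/powercap/intel-rapl:0")   # package-0 RAPL zone

def set_limit(constraint: int, watts: float) -> None:
    name = (PKG / f"constraint_{constraint}_name").read_text().strip()
    (PKG / f"constraint_{constraint}_power_limit_uw").write_text(
        str(int(watts * 1_000_000)))
    print(f"{name}: set to {watts} W")

set_limit(0, 65)    # constraint_0 is typically long_term (PL1)
set_limit(1, 120)   # constraint_1 is typically short_term (PL2)
```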
Sounds interesting! Let me know if you make a build log?
I'm not quite sure that's the case. My Ryzen 3 3100 basically runs at 3.85-3.9 GHz all the time, independent of workload, as it never maxes out its power limit. Hungrier chips with more cores could do the same with cTDP. If you want full power, set cTDP to the highest, and enjoy maximum clock speed all the time. You want low thermals? Just turn your cTDP down to have your clocks and voltages decrease too. You don't even need different SKUs with different TDP ratings for this.
Well, the 3100 is a "low end of its TDP tier" SKU, i.e. it's likely overspecced in terms of TDP. They could probably make its base and boost clocks match if they wanted to, but they leave some room between them for leeway to utilize garbage-tier bins of silicon if needed. (You often see the same on older i5s and i3s.) Each tier has to cover a range of products, after all. But without modern boost systems, we'd either need SKU-specific TDPs or we'd get a much smaller range of chips to choose from, as power draw would limit differentiation.
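As a very rough illustration of why a lower cTDP maps to lower sustained clocks: dynamic power scales roughly with C·V²·f, and since voltage has to rise roughly with frequency, sustained power goes roughly as f³. A toy model with made-up constants (real boost algorithms are far more involved):

```python
# Toy model: dynamic power ~ C*V^2*f, and with voltage rising roughly
# linearly with frequency, sustained power goes roughly as f^3. The
# reference point is made up; real boost algorithms are far more involved.
BASE_F_GHZ, BASE_P_W = 3.6, 65.0

def sustained_clock(ctdp_watts: float) -> float:
    """Solve P/P0 = (f/f0)**3 for f."""
    return BASE_F_GHZ * (ctdp_watts / BASE_P_W) ** (1 / 3)

for ctdp in (35, 45, 65, 88):
    print(f"cTDP {ctdp:3d} W -> ~{sustained_clock(ctdp):.2f} GHz sustained")
```

The cube-root relationship is why halving the power budget costs far less than half the clock speed - and why one die can credibly span several TDP tiers.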
The general rule of stability testing is to find out whether the system is stable at the maximum imaginable load. It doesn't matter whether that load is realistic, because one day you might need a similar load to work perfectly. And once stability testing is done and thermals are in check, it's still advisable to increase the voltage a bit to leave some room for unexpected voltage fluctuations or plain aging of the chip.
That's a commonly held enthusiast belief, but it's a rather irrational one. Power viruses and unrealistic heat loads can be beneficial if you're really pushing things and still want 24/7 stability, but for anything else they're rather useless, potentially misleading, and possibly harmful to your components. What is the value of keeping CPU temps under a given level while running Prime95 if the CPU is never going to see a workload anything like it? Etc.
Ryzen just can't match FX in terms of price. I still remember 130 Euros for 6 cores and 180-200 Euros for 8 cores; Ryzen never had value close to FX, and they don't really overclock unless you count the lame turbo.
Value is relative. You clearly value overclocking for its own sake. Which is of course fine if that's what you like to spend your time doing! But your conception of value handily overlooks the fact that FX (and Bulldozer derivatives in general) performed rather terribly. They were fun from a technical and OC perspective, and they were cheap, but they were routinely outperformed by affordable i5s (and even i3s towards the end) with half the cores or less. Ryzen gen 1 and 2 delivered massive value in terms of performance/$, but as you said, they never really OC'd at all. I prefer the latter, you prefer the former - to each their own, but your desire is by far the more niche and less generally relevant one.