
The misconception of CPU multitasking

Processor: Ryzen 5 5600X
Motherboard: Asus B550-F Gaming WiFi
Cooling: Be Quiet! Dark Rock Pro 4
Memory: 64GB G.Skill Ripjaws V 3600 CL18
Video Card(s): Gigabyte RX 6600 Eagle (de-shrouded)
Storage: Samsung 970 Evo 1TB M.2
Display(s): 2 x Asus 1080p 60Hz IPS
Case: Antec P101 Silent
Audio Device(s): Ifi Zen DAC V2
Power Supply: Be Quiet! Straight Power 11 650W Platinum
Mouse: JSCO JNL-101k
Keyboard: Akko 3108 V2 (Akko Pink linears)
With the CPU loaded up at 100% on all cores doing a render task in the background, you can still play games without dropping any frames. The rendering task receives less processing time and will take longer but the primary application is completely unaffected.

Maybe this is the result of Windows task scheduling improvements over the years?

I feel like there's still a general misconception that the performance of all active applications will drop off a cliff when the processor is at 100% usage on all cores. And that you need a high core count processor to be immune to this problem. At least with modern versions of Windows (even on >15 year old hardware) that's not how the processing time allocation actually works.

It's news to me, at least.
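
For anyone who wants to try this, here's a minimal sketch (assuming Windows, Python 3.7+, and Blender purely as a stand-in for any renderer) of one way to push a render to the back of the line explicitly - the Below Normal priority class tells the scheduler to keep serving the foreground app first:

```python
# Minimal sketch (assumed: Windows, Python 3.7+, Blender as a stand-in renderer).
# Launch the render at Below Normal priority so the scheduler keeps handing
# the foreground app its time slices first.
import subprocess
import sys

if sys.platform == "win32":
    # Standard Windows process-creation flag exposed by the subprocess module.
    flags = subprocess.BELOW_NORMAL_PRIORITY_CLASS
else:
    flags = 0  # on Linux/macOS you'd reach for os.nice() or "nice -n 10" instead

proc = subprocess.Popen(
    ["blender", "-b", "scene.blend", "-a"],  # hypothetical render job
    creationflags=flags,
)
print(f"Render running as PID {proc.pid} at reduced priority")
```

Task Manager's "Set priority" context menu does the same thing after the fact.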
 
Why would you run a game when you're benchmarking all cores, though? Defeats the purpose of benchmarking. I can see this being a problem with mining, but that's primarily GPU.

But yeah, Windows actually handles dedicating cores much better than it did in the old days.
 
And to think AMD already has 200+ core CPUs in the works, coming out in a few years.
 
Those are for datacenters and such, though. Insane core count CPUs have always been a thing there. You will neither need nor make use of it in your everyday PC, even if you could have one.
 
With the CPU loaded up at 100% on all cores doing a render task in the background, you can still play games without dropping any frames.
Performance does tank?

I've tested it with various programs, tried them at lower priority and such... and they slow down. A lot.
 
Performance does tank?

I've tested it with various programs, tried them at lower priority and such... and they slow down. A lot.

What combination of programs are you running? Definitely some of them will take a hit but the foreground task is unaffected.
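
If it helps, here's a rough, self-contained repro (nothing from this thread, just a hypothetical busy-loop workload): it times a fixed "foreground" task once on an idle machine and once with every core saturated by background spinners, so you can see for yourself how much, or how little, the foreground run stretches.

```python
# Hypothetical test: saturate all cores with busy-work, then time a fixed
# "foreground" workload and compare against an idle baseline.
import multiprocessing as mp
import time

def burn(stop_flag):
    # Spin until told to stop, keeping one core pegged at 100%.
    while not stop_flag.is_set():
        pass

def foreground_work(iterations=5_000_000):
    # Stand-in for the "primary application" - a fixed amount of CPU work.
    total = 0
    for i in range(iterations):
        total += i * i
    return total

def timed_run():
    start = time.perf_counter()
    foreground_work()
    return time.perf_counter() - start

if __name__ == "__main__":
    baseline = timed_run()

    stop = mp.Event()
    burners = [mp.Process(target=burn, args=(stop,)) for _ in range(mp.cpu_count())]
    for p in burners:
        p.start()
    loaded = timed_run()

    stop.set()
    for p in burners:
        p.join()

    print(f"idle:   {baseline:.3f} s")
    print(f"loaded: {loaded:.3f} s  ({loaded / baseline:.2f}x)")
```

Repeating the loaded run with the burners started at Below Normal priority is the interesting comparison.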
 
I feel like there's still a general misconception that the performance of all active applications will drop off a cliff when the processor is at 100% usage on all cores.
Not sure who would think that, or why. Does a race car engine stop running when it is maxed out? No. It just doesn't go any faster.

Running at 100% is not necessarily a bad thing. Same goes for RAM, btw. If your system never utilizes all its resources, then you bought more than you need.

Buying a little extra headroom is fine. It leaves room for possible unknown future demands, and it may mean cooler (and thus quieter) operation. But a lot of extra headroom that is never utilized is just a waste of resources and money.

Why would you run a game when you're benchmarking all cores, though? Defeats the purpose of benchmarking.
^^^This^^^

Going back to the race car analogy - you would not hook up your Airstream to that car if you are running a race.
Maybe this is the result of Windows task scheduling improvements over the years?
But yeah, Windows actually handles dedicating cores much better than it did in the old days.
Contrary to what many believe, and worse, what many want others to believe, Microsoft has not been sitting on their thumbs for the last 20-30 years. So yes, Windows knows how to manage CPU and memory resources very well these days. That is precisely why, for most users, it is best to just leave the defaults alone! This is especially true for anyone who is not a genuine expert in Windows resource management.
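
To actually see those defaults, here's a quick sketch with the third-party psutil package (assumed here; on Windows its nice() call reports the priority class rather than a Unix nice value) that dumps what the OS has already assigned to every running process:

```python
# Sketch using the third-party psutil package (assumption: Windows, where
# Process.nice() returns the priority class). Prints the priority Windows
# has already assigned to each running process.
import psutil

for proc in psutil.process_iter(["pid", "name"]):
    try:
        prio = proc.nice()  # priority class on Windows
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        continue
    print(f"{proc.info['pid']:>6}  {str(prio):<35}  {proc.info['name']}")
```

Most of it typically comes back at Normal, with Windows quietly bumping a handful of its own processes up or down - which is rather the point.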
 
What combination of programs are you running? Definitely some of them will take a hit but the foreground task is unaffected.
The very fact that turbo mode exists just throws your entire idea out the window

Your view on how this works will only work on systems with a static all core overclock, and that is not a lot of systems
 
The very fact that turbo mode exists just throws your entire idea out the window
I agree - mostly. But "Turbo" mode (or should I say "non-turbo" mode?) also exists as a means of reducing energy consumption and reducing heat - especially with portable devices.

Some may see "Turbo" mode as a "reverse psychology" marketing "gimmick". That is, it could have gone the other way around, where Turbo mode would be "Normal" and non-turbo would be "Economy" or "Battery Saving" mode. So instead of marketing the product as having the capability of providing an extra boost when you need it, it is marketed as an energy-thrifty product.

But in terms of marketing a "fun" machine, "Turbo" sounds better than "Economy".

Regardless, I fail to see the point the OP is making.
The rendering task receives less processing time and will take longer but the primary application is completely unaffected.
Which is exactly how it should work, and again, pretty sure this is how most people (who give a sh!t about it) see it. You can look at foreground and background processes in terms of cooking. A background process is one you, as the cook, decide to put on "the back burner" because you need to concentrate your attention on what's happening up front. The back burner stuff is still cooking, just at a simmer and not a full burn.
Your view on how this works will only work on systems with a static all core overclock, and that is not a lot of systems
It would also suggest the goal is to have tasks and processing demands equally distributed among all the cores. That's not how it works either.
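
If you really did want to decide where a background job lands, that's a separate knob (affinity), not priority. Here's a hypothetical sketch with the third-party psutil package that pins an already-running process - the name below is made up, not something from this thread - to the back half of the logical cores:

```python
# Hypothetical sketch using the third-party psutil package (Windows/Linux):
# restrict a running background job to the second half of the logical cores.
import psutil

TARGET_NAME = "cinebench.exe"  # made-up example name

all_cores = list(range(psutil.cpu_count(logical=True)))
back_half = all_cores[len(all_cores) // 2:]

for proc in psutil.process_iter(["pid", "name"]):
    if (proc.info["name"] or "").lower() == TARGET_NAME:
        try:
            proc.cpu_affinity(back_half)  # limit this process to those cores
            print(f"PID {proc.info['pid']} now limited to cores {back_half}")
        except psutil.AccessDenied:
            print(f"PID {proc.info['pid']}: access denied")
```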
 
I agree - mostly. But "Turbo" mode (or should I say "non-turbo" mode?) also exists as a means of reducing energy consumption and reducing heat - especially with portable devices.
Yeah, there's a 700 MHz difference with my CPU between all-core and boost, it's not small - and it's not a portable part, either. (4.35 GHz -> 5.05 GHz, with PBO on)
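
Just to put numbers on that (a quick sanity check, assuming a purely CPU-bound, single-threaded task that scales linearly with clock speed):

```python
# Quick arithmetic on the clock figures above, assuming linear scaling
# with clock speed for a single-threaded, CPU-bound task.
boost_ghz = 5.05      # lightly-threaded boost clock (PBO on)
all_core_ghz = 4.35   # sustained clock with every core loaded

delta_mhz = (boost_ghz - all_core_ghz) * 1000
loss_pct = (1 - all_core_ghz / boost_ghz) * 100

print(f"clock delta: {delta_mhz:.0f} MHz")                    # ~700 MHz
print(f"worst-case single-thread slowdown: {loss_pct:.0f}%")  # ~14%
```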

You can fire up the CPU-Z stress test or benchmark and see most systems lag out trying to do two things at once, no matter how powerful they are
Wanna see a supercomputer stutter? Eject a DVD
 
Hi,
Depends on the hardware

You'd take a hit on any hardware now; it's just a matter of how much of a hit and what you'd call significant, which is a circular argument that will go on forever :laugh:
 
Depends on the hardware
It depends on many factors, with hardware being just one. And even the hardware itself can be divided into areas of responsibility that depend on the specific tasks/programs running.

For example, different CPUs have different capabilities and manage resources differently. Some programs are more CPU intensive, others more GPU intensive. So, for example, if a program is GPU intensive, the CPU can quickly hand off more tasks to the GPU - and it takes very little CPU horsepower to hand off tasks.

Some programs are disk intensive - which depends, in large part, on how much RAM is installed, as well as how big and how fast the drive's integrated buffer is - not to mention the type of drive itself.

And then of course, the OS matters too.

circular argument that will go on forever
Yeah, unless and until this comparison/testing environment is spelled out with specific hardware, specific OS configuration and specific applications in specific configurations, with specific testing criteria, we will indeed just be spinning our wheels, going round and round in that circular argument forever and ever, getting nowhere.
 