
Squabbling cores

Joined: Mar 21, 2021
Messages: 5,557 (3.61/day)
Location: Colorado, U.S.A.
System Name: CyberPowerPC ET8070
Processor: Intel Core i5-10400F
Motherboard: Gigabyte B460M DS3H AC-Y1
Memory: 2 x Crucial Ballistix 8GB DDR4-3000
Video Card(s): MSI Nvidia GeForce GTX 1660 Super
Storage: Boot: Intel OPTANE SSD P1600X Series 118GB M.2 PCIE
Display(s): Dell P2416D (2560 x 1440)
Power Supply: EVGA 500W1 (modified to have two bridge rectifiers)
Software: Windows 11 Home
How intelligent is the OS?

Here is the scenario I have in mind: 4 independent processes, each taking 4 GB, in a machine that only has 8 GB of RAM.
  • If all 4 cores take up the task, one now has paging as the 4 cores squabble over using 16 GB in an 8 GB machine.
  • If the OS limits the processes to just 2 cores, then the squabbling and paging will be avoided.
Is the OS intelligent enough to impose the second, more efficient scenario?
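(For concreteness, here is a minimal sketch of the scenario: four worker processes that each allocate and repeatedly sweep a large buffer, so total demand exceeds physical RAM and the OS has to page. Plain Python, standard library only; the sizes are purely illustrative, so scale ALLOC_BYTES well down before trying it on a real machine.)

```python
# Hypothetical sketch of the "squabbling" scenario: N workers, each allocating
# a large buffer and touching every page. Sizes are illustrative only.
import multiprocessing as mp

ALLOC_BYTES = 4 * 1024**3   # ~4 GB per worker (illustrative -- scale down!)
N_WORKERS = 4
PAGE = 4096                 # touch one byte per page to force residency

def worker(idx: int) -> None:
    buf = bytearray(ALLOC_BYTES)          # commit the memory
    for _ in range(3):                    # sweep it a few times
        for i in range(0, len(buf), PAGE):
            buf[i] = (buf[i] + 1) % 256   # forces each page into RAM
    print(f"worker {idx} done")

if __name__ == "__main__":
    procs = [mp.Process(target=worker, args=(i,)) for i in range(N_WORKERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```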
 
No, but the most likely scenario is 4 cores with 2 GB of real RAM each, and paging on all of them.
 
Just set paging to 4096 MB or get more RAM.
 
Or just let Windows manage the paging. It does know how. Then you don't put an artificial limit on it. It remains dynamic. And that is important because, except, maybe, for a Point of Sale (single-purpose/one-task) system, we are constantly changing the demands we put on our systems. If a larger PF is needed to prevent "squabbling", assuming free disk space is available, it can be dynamically allocated.

And do note that while the system may only have 8GB of physical RAM installed, the OS knows how to use the PF to create virtual memory greater than 8GB.
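(If you want to see those numbers on your own machine, a quick check with the psutil package - an assumption here, install it with pip - shows physical RAM alongside the page-file-backed portion. Field semantics vary a little by platform, so treat the figures as approximate.)

```python
# Quick look at physical RAM vs. the swap/page file the OS can add on top.
# Assumes psutil is installed; numbers are approximate and platform-dependent.
import psutil

GB = 1024**3
vm = psutil.virtual_memory()
sw = psutil.swap_memory()

print(f"physical RAM : {vm.total / GB:5.1f} GB ({vm.percent}% used)")
print(f"page file    : {sw.total / GB:5.1f} GB ({sw.percent}% used)")
print(f"addressable  : {(vm.total + sw.total) / GB:5.1f} GB total backing store")
```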

My advice - bump the installed RAM to 16GB and use SSDs.
 
And never remove the page file no matter your RAM amount. Even if you don't "need" one per se, even if you have 2 TB of RAM... the kernel does like having a page file.

It baffles me that people still think Windows doesn't know how to manage RAM. It does. Despite all the hate it gets, it is incredibly intelligent with its RAM management, utilization and reallocation. Just let it do its thing.
 
Is it so intelligent?

Let's say I have 4 workers and I set all 4 on the same task, but then notice they are getting in each other's way. I respond by withdrawing 2 and leaving just 2 to perform the task, which now happens much faster. Is the OS this intelligent? That was my question.

It goes beyond memory management: the OS must withdraw 2 cores, which at first seems illogical.
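(For reference, this is what manually "withdrawing" cores looks like - the same thing Task Manager's "Set affinity" does. A minimal sketch with the psutil package, which supports this on Windows and Linux; the PID is hypothetical.)

```python
# Hedged sketch: pin a process to two logical CPUs, i.e. manually withdraw
# the other cores from it. The PID used here is hypothetical.
import psutil

pid = 1234                           # hypothetical PID of the busy process
p = psutil.Process(pid)

print("before:", p.cpu_affinity())   # e.g. [0, 1, 2, 3]
p.cpu_affinity([0, 1])               # restrict scheduling to CPUs 0 and 1
print("after: ", p.cpu_affinity())
```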
 
Hi,
Machine likely just froze soon after.
 
You crashed my paradox!
 
Hi,
Overload the PC, then want to do something else like visit TPU or simply move the mouse, and boom: BSOD.
 
Is it so intelligent?

Let's say I have 4 workers and I set all 4 on the same task, but then notice they are getting in each other's way. I respond by withdrawing 2 and leaving just 2 to perform the task, which now happens much faster. Is the OS this intelligent? That was my question.

I see two problems with that scenario.

First, you are assuming that one task is the only task needed to be done at that point in time. That is nothing close to reality with a computer today. The OS is also running other programs - including security software, monitoring networks, checking firewall ports, handing off graphics, and tons more.

Second, what were those other workers doing during that time? Nothing?

Then let's say your 2 workers finish that task in 1 hour. Then what? You assign another task to all 4. But with your scenario, YOU, as the manager, must stop what you are doing to do another analysis of their work. That takes up your time and resources. You discover 3 workers can do it better. So you then have to reconfigure your resources and restart the task.

They finish that in 30 minutes. Then what? You assign another task and then have to do another analysis. Then you determine it works best when the entire team of 4 work together. So then you have to reconfigure your resources again.

Then in 2 hours, another task. Or maybe three more tasks. Then what happens tomorrow?

This is exactly why the engineers at Microsoft have, by default, made the PF dynamic - so it can, if necessary, adjust its size as needed, much more efficiently than you can.

Are you that intelligent?

Are you truly an expert at memory and resource management? Because there are teams of real experts - teams of PhDs, computer scientists, and professional programmers (with super computers at their disposal to run scenarios) at Microsoft, with decades of experience and exabytes of empirical data who have done the math - over and over again.
 
Are you that intelligent?

Are you truly an expert at memory and resource management?

I am after learning, not being told that they know better than I do; this I already know.
 
I am asking if the OS does worker management.
Hi,
Yes, the OS of course manages the workload as best it can until it runs out of resources; then it's anyone's guess what happens next.
Slow as a snail until it completes, in the best case.
BSOD in the worst case.
 
And that is where the paradox kicks in; does the OS realize that one needs to withdraw workers in this scenario?
 
I am asking if the OS does worker management.
Yes! Constantly. It is called resource management. The OS is constantly managing memory, priorities, and tasks. It is constantly moving tasks forward and into the background - depending on the priority and what the user is asking for.

For example, if you are idle, Windows may move defragging to the foreground - or check for updates. Then if you start doing something, it pushes those into the background again.
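(To make that foreground/background idea concrete, this is roughly the kind of priority juggling the scheduler does for you automatically. A hedged psutil sketch that drops a hypothetical background task to a lower priority; the constants differ between Windows and Unix.)

```python
# Sketch: lower the priority of a background job so interactive work stays
# responsive. The PID is hypothetical; constants differ by platform.
import sys
import psutil

pid = 1234                       # hypothetical PID of a background task
p = psutil.Process(pid)

if sys.platform == "win32":
    p.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)   # Windows priority class
else:
    p.nice(10)                                   # Unix niceness (higher = nicer)

print("new priority:", p.nice())
```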

Let's not forget that the programs themselves need to support multiple cores. Not all do.

What has this got to do with:
"Are you that intelligent? Are you truly an expert at memory and resource management?"

I am not asking what I would do; I am asking what, say, Windows does.

If I was an expert I would know.
:( I was not speaking to you literally.

For sure, I've got degrees, certs, and decades of experience and I sure would not pretend or try to fool myself into thinking I'm smarter than the folks in Redmond when it comes to resource management.

It is important that we don't confuse the developers with (or even associate them with) some of the downright stupid and horrendous policies and decisions made by the marketing weenies and execs at Microsoft. The developers at Microsoft know what they are doing. And as long as the greedy marketing weenies keep their grubby fingers out of the pie, Windows does fine. For example, the Windows 8 UI was a HUGE "marketing" blunder. Another example: the original version of Edge was not in there because the developers wanted it.
 
Totally unacceptable! I insist on a Green Screen of Death.
I'd accept any color if it simply tossed up an error message that actually made sense and was usable! But of course, often that is putting the cart before the horse.

Pretty hard to determine why a halt occurred, create an event log entry, and post the reason to the screen after the system has already halted. :(
 
Totally unacceptable! I insist on a Green Screen of Death.
Hi,
Sorry, but it did go from blue to black; that's the best MS can do now, although I'm not so sure it's politically correct to use black as a bad thing to get :laugh:
 
Is it so intelligent?

Let's say I have 4 workers and I set all 4 on the same task, but then notice they are getting in each other's way. I respond by withdrawing 2 and leaving just 2 to perform the task, which now happens much faster. Is the OS this intelligent? That was my question.

It goes beyond memory management: the OS must withdraw 2 cores, which at first seems illogical.
The paging file rubber-bands and only goes up to 4096 MB anyway; best to set it manually. That way you never run out of paging space.
 
The paging file rubber-bands and only goes up to 4096 MB anyway
That's not true. That is commonly dynamically set as the maximum, but it can grow to within 1 GB of free space on the volume if required for crash dump settings. (Source)

And once again - it is not a set-and-forget thing - unless you set it way high, and that can result in wasted disk resources and potential performance problems should you run low on disk space.
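(One practical middle ground: before overriding the defaults, simply watch how much of the page file is actually being used over time. A minimal sketch assuming the psutil package is installed; on Windows the swap figures are derived from the system commit charge, so treat them as approximate.)

```python
# Sample page file usage a few times before deciding anything needs changing.
import time
import psutil

GB = 1024**3
for _ in range(5):                       # take a handful of samples
    sw = psutil.swap_memory()
    print(f"page file: {sw.used / GB:4.1f} / {sw.total / GB:4.1f} GB "
          f"({sw.percent}% used)")
    time.sleep(2)
```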

Just about every time someone claims it is better to set it manually (or that one is not needed if you have lots of RAM), I try to find a source that corroborates that claim. And never, not once, have I. That is, I have never seen one expert who recommends manually setting a size (or disabling it). There are several who say that if you are going to anyway, do it the right way to determine the correct setting - don't just pull some arbitrary number based on something read 20 years ago with XP.

Got a link to any expert source that says it is better to manually set it?
 
The OS just manages resources. If I write an application that spins up 4 threads to do anything, it will schedule and run those threads like everything else in the OS. Now, if the developer who wrote the application chose a bad way to scale out the task, that's not a problem that you or the OS can solve. If you go into Task Manager and make my 4-thread task use only two cores/threads, it will still run all 4 threads; it will just schedule them differently. Also, more threads don't necessarily mean a linear increase in memory usage. That really depends on the task at hand, and if they're all working on the same dataset, they're probably sharing that memory.
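(A small illustration of that last point, assuming Python and its standard threading module: four threads reading the same dataset all reference one object rather than four copies. CPython's GIL means they won't run in parallel, but the memory-sharing point stands.)

```python
# Four threads working on the *same* dataset do not quadruple memory use --
# they all reference one object, because threads share the process's memory.
import threading

data = list(range(10_000_000))     # one large dataset, allocated once

def worker(lo: int, hi: int, out: list, slot: int) -> None:
    out[slot] = sum(data[lo:hi])   # each thread reads its own slice

results = [0] * 4
chunk = len(data) // 4
threads = [
    threading.Thread(target=worker, args=(i * chunk, (i + 1) * chunk, results, i))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(results) == sum(data))   # True: same data, no extra copies
```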

With that said, the proposed situation sounds weird and obtuse. I'd like a more concrete example.
 
I understand OP's question as academic, not about an issue he's trying to solve on his PC. Which, by the way, has 16 GB of RAM.

And I am quite sure that the answer would be: no, the OS is not that intelligent, whether you leave it to manage the memory (physical, page file, and virtual) automatically or waste your time by tweaking the PF size.

To make the option #2 ("the OS limits the process to just 2 cores") possible, the OS would have to determine for certain that the processes are truly independent. And that's basically impossible. The processes can communicate in various ways: shared memory, shared files, pipes, network protocols (like HTTP), locks, mutexes. (Sure there are more, but I'm the wrong kind of engineer and can't list the next six.) While the OS enables this communication, it can't know if a process is waiting for another process to produce some data or send a signal. The scheduler could pause some processes in order to reduce swap file use, but this could make some other process wait forever for that signal.
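(As a toy example of such a hidden dependency, here are two processes that look independent to the scheduler but are coupled through a pipe; pausing the producer would leave the consumer blocked indefinitely. A Python sketch using only the standard library.)

```python
# Two "independent-looking" processes coupled by a pipe: the consumer blocks
# until the producer sends a message, and the OS cannot know this in advance.
import multiprocessing as mp

def producer(conn) -> None:
    conn.send("result")            # the consumer is secretly waiting for this
    conn.close()

def consumer(conn) -> None:
    print("got:", conn.recv())     # blocks until the producer sends

if __name__ == "__main__":
    parent, child = mp.Pipe()
    a = mp.Process(target=producer, args=(parent,))
    b = mp.Process(target=consumer, args=(child,))
    b.start(); a.start()
    a.join(); b.join()
```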

Your example would be an example of poorly written code, just like a program that reads a lot of data from random locations in memory - something that dynamic RAM handles extremely badly.

Also, as @Aquinus said, two cores can still run all four processes, requiring all 16 GB. Even a single core can do that. The potential solution to the problem would be to pause two processes, not limit their execution to fewer cores, and it would only work if they are indeed independent.
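(If someone really wanted the "withdraw two workers" behaviour by hand, suspending whole processes is the closer analogue. A hedged psutil sketch with hypothetical PIDs; it is only safe if you know nothing else is waiting on those processes.)

```python
# Pause two of the workers, let the others finish a pass, then resume them.
# PIDs are hypothetical; suspend() is SIGSTOP on Unix, NtSuspendProcess on Windows.
import time
import psutil

pids_to_pause = [1234, 1235]       # hypothetical PIDs of two of the workers

procs = [psutil.Process(pid) for pid in pids_to_pause]
for p in procs:
    p.suspend()                    # take these workers off the floor
time.sleep(60)                     # let the remaining workers make progress
for p in procs:
    p.resume()                     # bring them back
```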
 
Your question is predicated on the assumption that your second scenario is going to be more efficient.

Have you proven that it is?

Until you do that, your question is founded on a faulty premise, and as such there's no point in wasting time attempting to answer it.
 
Fair enough; I withdraw my question.
 
I think it is really important to, not just understand, but to believe that Microsoft really does want our systems to run optimally. It just makes no sense, from a business aspect, or any other aspect, for Microsoft to make a default setting that is detrimental to performance (except, maybe, if necessary to enhance security). This means they would not make the PF dynamic and with the settings they have if those were not the best settings for the vast majority of users. And contrary to what some seem to think, their systems are not so unique they don't fit into that majority.

And why does Microsoft want our systems to run optimally? Because it is in their best interest! Otherwise, all the MS bashers and haters in the IT media, on these forums, and in the blogosphere would relentlessly bash MS, just as they do now at every opportunity, whether justified or not.

The developers at Microsoft are not stupid. Those on that side of the house at least, should be given some credit.

Fair enough; I withdraw my question.
Nah! I think it is a fair question. It brings up a good debate and is worthwhile, as long as folks with long-set notions have an open mind and are willing to accept that what may have been true long ago may no longer be - if it ever was!

I think it says a lot when "tweaker" programs that promised to make our computers run "better than new" by changing all those default settings have never succeeded in doing that.
 
Just playing with ideas; I intentionally phrased it initially as

"How intelligent is the OS?"

knowing this was ridiculous, but I didn't want it to look like I was picking on any particular OS.

It was not meant as a criticism, but rather as a means to learn more about how intelligent OSs have become.

I'd expect that excessive paging would more than halve the speed of a process; my son recently went wild on Minecraft mods and I could see the hard drive go nuts, and things could take up to 10 minutes before Minecraft would even respond (he had 12 GB of RAM, since upgraded to 16 GB).
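(For anyone wanting to quantify that kind of thrashing, a rough approach is to sample swap usage and a process's page-fault count over time. A sketch assuming psutil, a hypothetical PID, and the Windows-only num_page_faults field, which is absent on other platforms.)

```python
# Watch system swap use and one process's page-fault count while it thrashes.
# The PID is hypothetical; num_page_faults exists only on Windows builds of psutil.
import time
import psutil

pid = 1234                                   # hypothetical PID, e.g. the game
p = psutil.Process(pid)

for _ in range(10):
    sw = psutil.swap_memory()
    faults = getattr(p.memory_info(), "num_page_faults", None)
    print(f"swap used: {sw.percent:5.1f}%   page faults: {faults}")
    time.sleep(5)
```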

Trusting Microsoft is all well and good, but I am after knowing whether an OS can do what some human managers could not.
 