
Squabbling cores

Status
Not open for further replies.
If the page file is fixed, what happens when it is full? Does the OS kill the offender? As before, I am intentionally a bit ambiguous about which OS.

In the 1970s, a programming language called "C" was invented.

In "C", when you need more memory, you call a function called "malloc(size)" (memory-alloc), and ask the system for "size" more memory. For example, malloc(4) gets you 4 more bytes. malloc(4096) gets you 4kB (1kB == 1024 bytes). Etc. etc.

In the 1980s+, techniques were developed that let the OS lie to programs about the location of memory. It's called the "virtual memory" system. The gist is that every program has its own, unique "virtual memory" that it cannot escape from. This allows the system to isolate programs from each other, increasing security. The OS can also detect when different programs have "the same memory" (such as DLLs), and in these "shared memory" cases the OS keeps only one physical copy of the memory, while the programs all think they have their own copy.

Now consider the case of an "all zero" page of memory (ie: the program hasn't touched the memory yet). Instead of physically backing that memory yet, the OS maps every "all zero" page to... the same memory location. Both Linux and Windows do this.

In this case, an "all zero" page of memory takes up... zero bytes. It's fully virtual. When the program reads from it, the OS just returns zeros.

--------------

In Linux, malloc (almost) always succeeds. Linux made the explicit decision that "malloc" doesn't actually allocate physical memory; it just pretends to (a policy called overcommit). Really, it's just a block of "all zero" virtual memory, because it hasn't been touched yet.

The minute a program "touches" (writes to) the memory, Linux detects the write through a page fault and points that piece of virtual memory at actual physical memory.

So you see, the problem is one of history. The 1970s model was "malloc() gets us more memory", but a set of lies / changes made it more convenient for malloc() to just... not do that anymore. malloc() gets more virtual memory: non-existent memory that the OS then fills in later at its leisure.

Virtual memory is also how the OS represents talking to GPUs (Shared Virtual Memory). When you write to some sections of "virtual memory", the write doesn't necessarily go to DDR4 RAM physically anymore; it might go out over Ethernet (!!!), to the GPU, or even to a file on the system.

---------

All "paging" is, is having a pool of bytes on your drive that interacts with the virtual-system. If the OS decides that some bytes are "rarely used", it punts it out of DDR4 RAM and writes it to disk (or SSD), saving the high-speed DDR4 for programs that are actively using RAM. This feature also was invented in the 1980s when Virtual Memory became popular.
 
'saved back to disk as a normal file' sounds a bit like extended paging to me
:confused: No, it's not. It just means the open file is closed, and now, if the data is needed again, the system will need to locate the file on the disk and read it in from its normal location instead of grabbing the data out of the PF.

what is a bit-bucket?
Bit bucket.

An old-timey term for never-never land, the ethersphere.
 
And there we have it, data is lost.
I guess I am not explaining myself well. First, and again, it has nothing to do with the PF being fixed or System Managed.

Second, when I said bit-bucket, I did NOT mean it is "lost" - as in unintentionally misplaced and can never be recovered again. I mean the OS discards it - doesn't need it any more, throws it away.
 
If the page file is fixed, what happens when it is full?
That rarely happens, but when it does, Windows deletes older, less-used data from the PF. If that data needs to be accessed again, it's loaded from the source files instead of the pagefile. Now if you leave the pagefile in its default location this means that there will be no performance penalty as the pagefile and source files are on the same drive. And this is another reason why letting Windows manage the pagefile carries zero benefit and can carry a disk space penalty. If the files have to be loaded from disk, it doesn't matter whether they're in the pagefile or not. But letting Windows increase the size of the pagefile will needlessly eat up disk space.
what is a bit-bucket?
Deletion. But remember, the pagefile is temp storage, a scratchpad for the OS.
And there we have it, data is lost.
No, it's just loaded from the source files again, see above.
 
Now if you leave the pagefile in its default location this means that there will be no performance penalty as the pagefile and source files are on the same drive. And this is another reason why letting Windows manage the pagefile carries zero benefit and can carry a disk space penalty.
:(

Everything else you said was right. In the above, the first sentence is inaccurate, the second does not make sense.

First, in this scenario it makes no sense to claim "letting Windows manage the pagefile carries zero benefit", because your same-drive scenario couldn't care less whether the page file is fixed or System Managed. So that is just obfuscating the issue and does nothing to answer Andy's question.

And second, operating systems have known how to access two drives simultaneously for decades. Even with IDE drives with two drives on the same cable, the Master and Slave could be accessed simultaneously.

So in your scenario, if the source file is on one drive and the PF is on another drive, Windows can access both at the same time - and that produces a performance gain, not penalty - especially with spinners.

If they are on the same drive, it very well can introduce a penalty, because the system has to read from the source location and then stuff the data back into the PF - that is, it has to wait for the first task to complete before it can even start the next.

****

I thought this thread was about CPU cores, not page files.
 
I thought this thread was about CPU cores, not page files.

It was about demands beyond the available RAM. Cores were a scenario constructed to have an obvious response (withdraw cores), since the cores were squabbling over RAM. I picked 4 active tasks so that paging out two of the cores would not be the best response.

That not everyone agrees is the cornerstone of progress.
 
That not everyone agrees is the cornerstone of progress.
And seeing everyone's perspective is useful in understanding many schools of thought.

So one last time, and I'll keep it simple so everyone can understand:
Letting Windows manage the pagefile does and WILL carry long-term system performance penalties.
On mechanical hard drives it will lead to fragmentation that Windows itself cannot and will not resolve, which will slow down performance over time.
On SSDs, this can and will lead to increased NAND cell wear.
On both it will lead to over-active disk space usage, even on systems with a large amount of RAM.

Manually setting a pagefile to a fixed size has the following benefits:
On HDDs, with the pagefile set to one static size, it will be created and stay in one location and will NOT be fragmented all over the drive, which will prevent drive performance degradation.
On SSDs, with the pagefile set to one static size, there will NOT be over-active NAND cell use.
On both, with the pagefile set to one static size, it will NOT be the cause of over-active disk space use.

Additionally, setting a fixed/static pagefile size AND moving the pagefile to a secondary drive will add the benefit of allowing the system to access the pagefile AND other files on the system drive at the same time without interference, thus allowing for better overall system performance.

To summarize, manually managing the pagefile carries with it MANY benefits and few (if any, depending on the system hardware configuration, e.g. low amounts of RAM) penalties.
 