
Squabbling cores

What's the worst thing that can happen if you have a PF, and the worst thing if you don't?
Sorry I missed this earlier.

The worst thing that can happen if you leave the PF in default config, long term, is that your system will be slower when it needs to access the pagefile and other files on the same drive at the same time. There will also be more wear if the drive is an SSD. On an HDD the pagefile can fragment all over the drive, which will severely hamper performance over time, even with Windows running its default defrag utility.

The worst thing that can happen if you disable the PF is that your programs might run out of RAM and either close or crash. This will only happen if you have less than 8GB; with 8GB or more, the chances of running out of RAM in normal computing tasks are minimal, and with 24GB of system RAM or more it effectively will not happen. There is also the possibility that some programs will not run without a PF present. While that is fairly rare, it does happen; Adobe Premiere is infamous for this. This is why a lot of people manually configure the PF, set it to a static size and move it to a secondary drive/partition.

Whether you use an SSD or an HDD as a boot drive, manually managing the PF carries benefits that cannot and should not be ignored.
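For anyone who wants to check how their own machine is currently set up before changing anything, here is a minimal sketch (my own illustration, not something from the post above) that reads the PagingFiles value Windows keeps under its Memory Management registry key; an entry like "C:\pagefile.sys 0 0" generally means a system-managed size, while two explicit numbers are the initial and maximum sizes in MB:

```python
# Minimal sketch: show how the pagefile is currently configured on Windows
# by reading the Memory Management registry key (read-only, safe to run).
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")

# The value is REG_MULTI_SZ, returned as a list of strings, one per pagefile,
# e.g. ["C:\\pagefile.sys 4096 4096"] for a fixed 4GB pagefile on C:.
for entry in paging_files:
    print(entry or "(empty entry - no pagefile configured)")
```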

As for the point made in the above blustering by Mr Duck, memory dumps are only made when the Windows kernel crashes, and those dumps are dropped into *.dmp files, not the pagefile. Just because Microsoft says something DOES NOT mean it's gospel truth. After all, they're the ones trying to force everyone to use Secure Boot & TPM, Windows Defender and Edge, and they keep pushing their marketing BS without giving users much of a choice, if any.

Going with everything the almighty Microsoft says and recommends is a fool's gambit.
 
This is why a lot of people manually configure the PF, set it to a static size and move it to a secondary drive/partition.
This is why I allocate a fixed max amount to paging, or move it to a separate drive (only on HDDs); it keeps it from rubber-banding or from the drive being totally occupied by files. I set the Recycle Bin to 1GB and I turn on System Restore.
 
Unix/Linux/BSD based OSes still use the swapfile as a "scratchpad" to write temp data for referencing various functions.
Only if you run out of system memory. *nix machines will run perfectly fine without swap space so long as there is enough system memory. If the machine runs out of memory and swap space (if any exists), then the OOM killer will just start killing off processes to free up memory. It's really pretty simple in that regard.
Interesting how, with a massive 64GB installed, you still have 3.8MB swap usage - even if just a drop in the ocean.
This is an area where I'm not as experienced, but it's probably just the memory dedicated to defining the number of pages on disk and information about them, even if they're unused. Some pages are probably stubbed out for performance reasons, so an already allocated and empty page can be used immediately when a swap to disk does come in, as opposed to having to allocate the space for the page at the moment of the swap. Swapping is already super expensive in the grand scheme of things. It wouldn't surprise me at all if some OSes do things to try to speed up that process.
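For anyone who wants to check the same thing on their own box, here is a minimal sketch using the third-party psutil package (an assumption on my part, nothing anyone in the thread used) that reports swap/pagefile totals and, on Linux, the swap-in/swap-out counters:

```python
# Minimal sketch: report swap/pagefile usage with psutil (pip install psutil).
import psutil

swap = psutil.swap_memory()
mib = 1024 ** 2

print(f"total : {swap.total / mib:.1f} MiB")
print(f"used  : {swap.used / mib:.1f} MiB")   # e.g. the few MB mentioned above
print(f"free  : {swap.free / mib:.1f} MiB")
print(f"in use: {swap.percent}%")

# On Linux, sin/sout are the bytes swapped in/out since boot (0 on Windows).
print(f"swapped in : {swap.sin / mib:.1f} MiB")
print(f"swapped out: {swap.sout / mib:.1f} MiB")
```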
 
then the OOM killer will just start killing off processes to free up memory. It's really pretty simple in that regard.

How can one kill a process that is not finished?
 
Yeah, sadly, we see too often that the infrequently accessed page count, the various commit rates and other factors will affect two machines (even identical machines) differently. Yet those and other factors are not taken into consideration when arbitrary numbers are thrown out as though one size fits all. :( Or when links are posted with claims they support a position when in fact, they contradict it.

It is also sad when folks insist they are so experienced they don't need to verify their facts and so spew out misinformation. :(

For example, dump information IS "dropped" into the Page File where it is then used to create the dump files - which one could easily verify with just a few minutes with Google.

The true information is, as mentioned before,
Disable the PF and no dumps, unless the admin sets up a dedicated dump file - not a "normal user" task.
And the supporting source is Microsoft's "Memory dump file options" article, where the need for page files for the 4 types of dumps is further explained:
Complete Memory dump
A complete memory dump records all the contents of system memory when your computer stops unexpectedly. A complete memory dump may contain data from processes that were running when the memory dump was collected.

If you select the Complete memory dump option, you must have a paging file on the boot volume that is sufficient to hold all the physical RAM plus 1 megabyte (MB).

Kernel memory dump
A kernel memory dump records only the kernel memory. It speeds up the process of recording information in a log when your computer stops unexpectedly. You must have a pagefile large enough to accommodate your kernel memory.

Small memory dump

A small memory dump records the smallest set of useful information that may help identify why your computer stopped unexpectedly. This option requires a paging file.

Then there is the default setting, Automatic Memory Dump, which, of course, uses a Windows-managed page file.
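For anyone curious which of those dump types their own system is actually set to, here is a minimal sketch (my own illustration) that reads the CrashDumpEnabled value from the CrashControl registry key; the value-to-name mapping reflects the commonly documented meanings:

```python
# Minimal sketch: read the configured crash-dump type from the Windows registry.
import winreg

DUMP_TYPES = {
    0: "None",
    1: "Complete memory dump",
    2: "Kernel memory dump",
    3: "Small memory dump (minidump)",
    7: "Automatic memory dump (default)",
}

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                    r"SYSTEM\CurrentControlSet\Control\CrashControl") as key:
    value, _ = winreg.QueryValueEx(key, "CrashDumpEnabled")

print(f"CrashDumpEnabled = {value}: {DUMP_TYPES.get(value, 'unknown')}")
```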

Now of course, if you are smarter than all the PhDs, computer scientists and developers at Microsoft, you will claim they don't know what they are talking about, that all of the above is wrong, and you will just expect everyone to automatically believe you without any supporting evidence just because you said it. :(

My advice - do your own research.
 
How can one kill a process that is not finished?
The Kernel doesn't care. It will just flat out terminate the process and reclaim the memory held by it. It is the OS' job to manage these things, which includes the ability to terminate a process mid-execution.

I can kill a process simply by running `sudo kill -9 <pid>` with -9 being a non-catchable, non-ignorable kill.
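To make the catchable/non-catchable distinction concrete, here is a minimal sketch (POSIX only; the child process is hypothetical, purely for illustration) showing that a process can trap SIGTERM but cannot do anything about SIGKILL:

```python
# Minimal sketch: SIGTERM can be trapped by the target; SIGKILL cannot.
import os
import signal
import subprocess
import sys
import time

# Hypothetical child that installs a no-op SIGTERM handler and then sleeps.
child_src = (
    "import signal, time; "
    "signal.signal(signal.SIGTERM, lambda s, f: print('caught SIGTERM, ignoring')); "
    "time.sleep(60)"
)
child = subprocess.Popen([sys.executable, "-c", child_src])
time.sleep(1)

os.kill(child.pid, signal.SIGTERM)   # handled by the child; it keeps running
time.sleep(1)
print("still alive after SIGTERM?", child.poll() is None)

os.kill(child.pid, signal.SIGKILL)   # cannot be caught or ignored
child.wait()
print("exit status after SIGKILL:", child.returncode)  # -9 on POSIX
```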
Yeah, you don't want to get into OOM situations (out-of-memory) on Linux. Linux just starts deleting things at random in RAM so that things can fit.
Yeah, particularly because what the kernel terminates isn't always predictable. It's not so much deleting random things in memory but rather just terminating processes at random. That random process could be any service running on the machine, which includes your window server. So you might suddenly find yourself without a GUI if the wrong process is killed.
 
Linux just deletes it in the middle of its execution.

Yeah, you don't want to get into OOM situations (out-of-memory) on Linux. Linux just starts deleting things at random in RAM so that things can fit.

You are scaring the kids... you are scaring me!
 
Yeah, you don't want to get into OOM situations (out-of-memory) on Linux. Linux just starts deleting things at random in RAM so that things can fit.
Could be wrong, but I don't think it does that anymore. Torvalds fixed that in the kernel years ago, IIRC. The Linux kernel now crashes gracefully when it hits an OOM condition.

As for the comments by Mr Duck, dumps are NOT useful to the end user or even the prosumer. The only reason to have them is if they will be debugged or sent to Microsoft to be inspected. And full memory dumps very rarely take place. Why, you ask?
[Image: WindowsMemoryDumpDefaultSetting.jpg]

This is the default Windows setting for memory dumps in all editions of Windows available to the general public and even the LTSB/LTSC versions of Windows 10 Enterprise. This has been the default setting since Windows Vista SP1. A pagefile does NOT need to be present for a small dump. A small dump is generated by the kernel directly to the minidump folder.

EDIT: For the record, I have seen minidumps generated during BSOD crashes on systems where a pagefile was not present. What Microsoft states and what actually happens aren't always the same thing. As such, documentation from Microsoft isn't always the final word, or even the best word.

Last sentence removed per recommendation.
 
What was that about misinformation? Mr Duck, you're not arguing merit, you're arguing your pride, like you always do. That's on you.
No, I am arguing the facts, after I verified them (a habit I am proud to admit) - something you have claimed your vast experience negates your need to do.

And I provide links to supporting evidence - something you never do.

Case in point,
A pagefile does NOT need to be present for a small dump.
Where's your supporting evidence?

I showed where Microsoft clearly states,

Small memory dump

A small memory dump records the smallest set of useful information that may help identify why your computer stopped unexpectedly. This option requires a paging file.

So if you want to keep saying I am wrong GO ARGUE WITH MICROSOFT!!!!
 

Make your position clear about pagefiles and leave it there. Keep up this hissy fit contest attacking each other and everyone gets free points. You may be "right" and they may be "wrong", but not everything revolves around you winning internet arguments.
 
Could be wrong, but I don't think it does that anymore. Torvalds fixed that in kernel years ago, IIRC. The Linux kernel now crashes gracefully when it hits an OOM condition.
I actually did oversimplify what Linux does in OOM conditions, so maybe I'll spend a little time talking about what it does.

Linux out of the box will assign an "OOM score" to each process based on the amount of system memory it's using, with a user-tweakable setting (oom_score_adj) to apply an offset to that score. An adjustment of +1000 is like putting a target on its back for the OOM killer, and an adjustment of -1000 will cause the OOM killer to never kill that process... ever. So out of the box, the process using the most memory is the most likely to get killed. I say most likely because, as I understand it, it doesn't always just pick the one with the highest score. There are also various bits that have to do with NUMA where available resources are described based on something like a "core group" (I don't remember the exact name), which is probably just another term for a NUMA node.
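To see those numbers on a running Linux box, here is a minimal sketch (my own illustration, assuming only the standard /proc interface) that lists the ten processes the OOM killer currently favours most:

```python
# Minimal sketch: list the processes the OOM killer would favour, by reading
# the standard /proc/<pid>/oom_score files (Linux only, read-only).
import os

def read_first_line(path):
    try:
        with open(path) as f:
            return f.readline().strip()
    except OSError:
        return None  # process exited, or permission denied

scores = []
for pid in filter(str.isdigit, os.listdir("/proc")):
    score = read_first_line(f"/proc/{pid}/oom_score")
    adj = read_first_line(f"/proc/{pid}/oom_score_adj")
    comm = read_first_line(f"/proc/{pid}/comm")
    if score is not None:
        scores.append((int(score), adj, pid, comm))

# Highest oom_score = most likely to be killed under memory pressure.
for score, adj, pid, comm in sorted(scores, key=lambda t: t[0], reverse=True)[:10]:
    print(f"pid {pid:>7} {comm or '?':<20} oom_score={score:<5} oom_score_adj={adj}")
```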

However, with that said, you can also configure the Linux kernel to either panic when an OOM event occurs or to do nothing. The do-nothing option really should never be used because it'll either really screw stuff up, just hang, or cause a panic anyway... non-deterministically, because it depends on the state of the machine when the OOM event occurs.
 
Yes.

See, you go with even less than I generally do. I usually alternate between 3GB and 4GB depending on the amount of system RAM installed. >8GB? 3GB. <8GB? 4GB.

I think I'm going to experiment with 2GB just to see the effect, if any.
Oh, that's on my little mini PC thing, I've not even bothered messing about with it. Thrown 16GB of RAM in there and forgot it... The all-important thing, I find, is an SSD and having enough RAM to not really need a page file...

I know it generally takes up your hard drive space as well, so the larger the damn thing is, the more space it eats from the drive... Not worth the hassle, and definitely not worth the hassle on a spinny drive... SSDs are really the way forward, even if it's a 120GB model... It's like night and day with one in comparison to the HDD.....
 
I know it generally takes up your hard drive space as well, so the larger the damn thing is, the more space it eats from the drive...
Windows may be good or bad at managing the size of the PF, which includes shrinking it, of course. I don't know. Anybody have first-hand experience with the file just expanding and not shrinking? Or never being defragmented?
 
Windows may be good or bad at managing the size of the PF, which includes shrinking it, of course. I don't know. Anybody have first-hand experience with the file just expanding and not shrinking? Or never being defragmented?
As I said above, I literally set it on the bigger-memory machines and leave it, don't even think about it :) This has been the most thought it's had, like, ever! :D

As long as you have the RAM in the system for the job it needs, there's no need for a big page file, I don't believe. With my rigs with 32GB or even 64GB, it's set low, as it's only going to slow things down when it has to hit the drive to access the file. If it's got decent storage, even an older SSD with slower speeds is still miles better than the spinny drives, and that's going to do you just fine :) Most importantly, just make sure it has the RAM in the system to do the job it needs to do.

Set it and forget it, move on :)
 

Might be an ignorant perspective, but I honestly don't even know how big my PF is or even if it's still there in any non-negligible capacity. Most of it usually disappears after I tweak a new install when I disable hibernate/fast startup.

Granted, I don't max out the RAM I have, but I don't think I've ever crashed from running out of RAM in recent years on 10. Faced with some pretty serious examples of memory leaks, Windows never threw in the towel.
 
Anybody have a first hand experience with the file just expanding and not shrinking?
System managed page files have been around since at least XP - maybe before, not enough coffee yet.

There was a bug in some XP and early Vista systems where the page file size would get "stuck" either too big or too small if the disk ran critically low on free disk space after the size was set. There was a hotfix/patch released for it, but the proper permanent solution was to free up disk space. It has always been required, and a user responsibility, to ensure there is plenty of free disk space (at least on the boot drive - and on the drive where the PF is located, if not the boot drive) for the OS to operate optimally. This free space is needed so the OS and applications can store temporary files that have been opened, temporary internet files, temporary copies during copying and moving operations, defragging operations, and of course, the PF.

And of course, contrary to what some want everyone else to believe :( - the folks at Microsoft have learned a thing or two about memory management since XP and early Vista days - and they have implemented what they have learned into today's Windows. Consequently, if there have been cases where the PF just kept expanding and never shrank back (assuming a smaller size became appropriate), those would be rare exceptions, and likely an indication of a different problem that needed to be resolved first.

Or never being defragmented?
It is because there has always been a need for a decent chunk of free disk space that I never understood this worry about the page file being fragmented. This is especially true with recent (since W7) versions of Windows where hard disks are regularly defragged by the system - unless the user dinked with the defaults and disabled defragging :(. And also, in recent years, hard drives have become HUGE, so running out of disk space has become less of a problem with hard drives.

With the acceptance of SSDs, and in particular with small SSDs, running out of free space may be a problem but of course, a fragmented SSD (and thus fragmented PF) is not a problem due to the way data is saved and read on SSDs compared to HDs. And there is still the need to keep a nice chunk of free disk space on SSDs - not just for the OS, but for SSD "housekeeping" chores too - like TRIM and wear leveling.

Point being, if you are worried about fragmentation of your page file on your hard drive - the solution is simple. Clean out the clutter with Windows Disk Cleanup, CCleaner or the like and make sure you have lots of free disk space available. I recommend at least 20GB, preferably 30GB or more of free disk space. If still low, uninstall any programs you installed but don't use. Purge your Downloads folder and, if necessary, consider moving your Documents and Downloads folders to a different drive. Defrag and reboot. If you still don't have enough free disk space, buy more space - preferably an SSD.
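If you just want a quick look at how much free space you actually have before deciding whether any of that clean-up is needed, here is a minimal sketch using Python's standard shutil module (the 20GB threshold simply mirrors the recommendation above, it is not anything official):

```python
# Minimal sketch: check free space on the drive that holds the pagefile.
import shutil

drive = "C:\\"  # assumption: the pagefile lives on C:; use "/" on Linux
usage = shutil.disk_usage(drive)
gib = 1024 ** 3

print(f"total: {usage.total / gib:.1f} GiB")
print(f"used : {usage.used / gib:.1f} GiB")
print(f"free : {usage.free / gib:.1f} GiB")

if usage.free < 20 * gib:
    print("Less than 20 GiB free - consider cleaning up or adding storage.")
```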

Oh, if you are still using a hard drive for your boot/PF drive, and you are using a modern version of Windows, another reason to just let Windows manage the PF (or at least pre-set a big PF) is because Superfetch/Sysmain and other "prefetch" operations use it to help those systems boot much faster and to load commonly used applications faster. These features are disabled, however, when Windows detects SSDs are being used - since prefetch is not needed with the much faster SSDs.
 
Might be an ignorant perspective, but I honestly don't even know how big my PF is or even if it's still there in any non-negligible capacity.
Some honestly just ignore it altogether, but I know it eats through your drive if it's left alone. I knowingly change it because I don't want half my drive missing :laugh:

I think it does fine whatever it's set to, as long as the system is right for the needs of the job it's doing. Unless it's underpowered/not enough RAM and such - these are the things that will hurt performance and cause issues.

But I think that's it really, don't think we need another 4 pages on page files lol. God help the hibernation file if that comes next....
 
Hi,
Looking at Win-10 settings on X299 with 32GB memory left on auto-manage, it's only at 4951 MB recommended
No sense in messing with it
[screenshot of the Windows 10 virtual memory settings]


I believe Win-7 and older gave the auto page file a bad rep, seeing the page file matched the memory total, and that's whack :cool:
 
Anybody have a first hand experience with the file just expanding and not shrinking? Or never being defragmented?
Yup. That is one of the reasons people started setting a fixed size and forgetting about it: 1, because it would not grow to consume a serious portion of empty space on the drive, and 2, because at a fixed size the pagefile will not fragment the drive to hell and back. This is less of an issue with SSDs but it's still a wear and tear thing.

No matter how you look at it, letting Windows manage the pagefile is going to cause avoidable problems long term. It's best to force a fixed size and forget about it. Then it will never be a problem. Even better is to set a fixed size AND move the pagefile to a separate drive where access times will be less of an issue as well.
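If you do go the fixed-size route, it helps to know how much commit your system actually uses. Here is a minimal sketch (my own illustration, calling the Win32 GlobalMemoryStatusEx API through ctypes) that prints physical RAM alongside the commit limit and remaining commit, which gives a rough feel for how large a static pagefile needs to be:

```python
# Minimal sketch: query physical memory and commit (RAM + pagefile) figures
# on Windows via GlobalMemoryStatusEx, to help size a static pagefile.
import ctypes
from ctypes import wintypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", wintypes.DWORD),
        ("dwMemoryLoad", wintypes.DWORD),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),  # commit limit: RAM + pagefile
        ("ullAvailPageFile", ctypes.c_ulonglong),  # commit still available
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

gib = 1024 ** 3
print(f"physical RAM      : {status.ullTotalPhys / gib:.1f} GiB")
print(f"commit limit      : {status.ullTotalPageFile / gib:.1f} GiB")
print(f"commit still free : {status.ullAvailPageFile / gib:.1f} GiB")
print(f"memory load       : {status.dwMemoryLoad}%")
```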

God help the Hibernation file if that comes next....
Not to start anything, but that is something else that gets configured upon install completion. Hibernation is STILL too glitchy to trust, and with systems that have large amounts of RAM it can be a huge file. I always disable it completely.
 
No matter how you look at it, letting Windows manage the pagefile is going to cause avoidable problems long term.
Nah. 20 years ago with XP, that was a problem. Hard disks were small back then and defragging was not automatic. It just is not the problem today. If it was, there would be 100s of millions of users with such problems.

W10/W11 is not XP. We really need to give the new versions a chance instead of assuming nothing has changed in 20 years.

And PFs and SSDs are made for each other.
Should the pagefile be placed on SSDs?

Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well.

In looking at telemetry data from thousands of traces and focusing on pagefile reads and writes, we find that

  • Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1,
  • Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB, and 88% less than 16 KB.
  • Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size.
In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD.
Source: Support and Q&A for Solid-State Drives - scroll down to "Frequently Asked Questions: Should the pagefile be placed on SSDs?" While the article is getting old and was written for Windows 7, it applies to Windows 10/11 too, and even more so today, since the wear problems of early-generation SSDs are no longer an issue and each new generation of SSD just keeps getting better.

I do agree, however, that moving the PF to a secondary drive is a good idea, if it is a faster drive than the boot drive and assuming there is enough space.

As far as hibernation goes, it was designed for laptops and it is not available or enabled on all PCs by default. Just checking this machine, I have 32GB of RAM installed and the hiberfil.sys file is 12.3GB. A big file for sure, but hardly an issue when there is lots of free disk space available. I see no reason to disable it (if it even is enabled) unless it is giving you problems - which are pretty rare these days.

Most users like the faster boot times hibernation provides - even with an SSD - because it saves the state the computer was in. It can slow down shutdown times, however.

That said, on a desktop PC, hybrid mode makes more sense.
 
As usual, I'm here to learn, not stir the pot.

If the page file is fixed, what happens when it is full? Does the OS kill the offender? As before, I am intentionally a bit ambiguous about which OS.
 
If the page file is fixed, what happens when it is full?
Fixed has nothing to do with it. If the PF gets full, the OS will dump lower-priority data and keep the highest-priority (next likely to be needed) data.

The data that gets pushed out is either saved back to disk as a normal file, or just dumped into the bit-bucket and will have to be reloaded from a regular file, if needed again.
 
'Saved back to disk as a normal file' sounds a bit like extended paging to me; 'or just dumped into the bit-bucket' - what is a bit-bucket?

It is worth saying again: I am trying to learn. Computers are a bit of a novelty to Ogres.
 