
Pagefile "anomalies"?

:roll: Yes, I am always right - except when I'm wrong. And that happens. But I tend to practice what I preach in this regard and do my homework before posting so it doesn't happen often.

So please note I specifically said, it "should not get larger than... ." And I was citing that Microsoft reference I linked to. It was not a claim I personally was making.

Still, to your example, I see nothing wrong with a 13GB PF on a 1TB SSD - except it does suggest, as your quote so notes, that you have been having some error issues and Windows is preparing for crash dumps. I suggest you keep an eye on your Event Viewer.

I also note if you manually set your PF using the old (and totally obsolete) rule of thumb, then according to your system specs, your PF would be 24GB in size.

I keep getting the impression some feel page files are evil. They're not. They are good things.

:confused: Why would you expect (or like) the Page File on Windows to be sized the same as the swap file on GNU/Linux? They are totally different operating systems and surely you were running totally different programs. Frankly, I would be surprised if they were the same size.

No errors since this PC was turned on. Well, yes, Event Viewer has the odd thingy here and there, but this pagefile usage is entirely normal to me. The more-than-4GB happens all the time... And no, it's set automatically, I know better. The last time I fixed the size was in Windows XP :D

I'm also not implying 'it's bad' at all. Just saying 4GB is by no means current anymore. Sounds like something out of the Windows 7 age.
 
I'm curious if Windows 10 memory management is making use of in-memory compression for the two of you? @Vayra86 @Ryzen_7

For me, I haven't run into any oddball issues with Windows 10 with the initial size for the pagefile set to 16 MB and the maximum size set to 8 GB.
[screenshot attached]
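If anyone wants to check this without digging through Task Manager, here's a rough sketch (assuming Windows 10 or later with PowerShell available; Get-MMAgent is the cmdlet that reports the memory manager's settings) that asks whether memory compression is currently enabled:

import subprocess

# Ask PowerShell's Get-MMAgent whether the memory manager has compression enabled.
out = subprocess.run(
    ["powershell", "-NoProfile", "-Command", "(Get-MMAgent).MemoryCompression"],
    capture_output=True, text=True, check=True,
)
print("Memory compression enabled:", out.stdout.strip())  # prints "True" or "False"

This only reads the current state; Enable-MMAgent / Disable-MMAgent are the cmdlets that change it.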
 
I'm curious if Windows 10 memory management is making use of in-memory compression for the two of you? @Vayra86 @Ryzen_7

For me, I haven't run into any oddball issues with Windows 10 with the initial size for the pagefile set to 16 MB and the maximum size set to 8 GB.

Here you go. Been idling mostly since startup. All is well in the world...

[screenshot attached]


Now I'm in-game. Is that pagefile monitoring broken, or how do you explain this?

The plot thickens...

[screenshot attached]
 
What does the compressed pages data in memory look like for you?
[screenshot attached]
 
For me, I haven't run into any oddball issues with Windows 10 with the initial size for the pagefile set to 16 MB and the maximum size set to 8 GB.
So "haven't ran into any oddball issues" is the rationale you use to justify changing settings? :confused::confused::confused:

What were the "oddball issues" you experienced before you changed the default settings? What problems do you encounter with the default PF settings that make you feel Microsoft doesn't know their a$$es from a hole in the ground?

"Because it didn't break when I changed it" is not a valid reason to change anything.

That just makes no sense to me.

If you were having problems and switching to a manual setting fixed those problems, then that makes sense.
If switching to a manual setting made a noticeable improvement in performance, then that would make sense too.

But if switching to a manual setting didn't fix anything or made no noticeable difference in performance, then what makes sense is switching it back!

What methodology did you use to analyze your virtual memory requirements in order to determine the ideal settings for your computer and your computing habits? Surely you didn't just arbitrarily pick 16MB and 8GB out of the air? Why is the nearly 3GB recommended by the system not near enough for you?

And by the way, one of the primary reasons Microsoft decided to make that a "dynamic" feature (so it will expand and contract as needed) is that the demands are dynamic. This means it is NOT a set-and-forget setting.

This means that for every major change to the OS, every new program or major upgrade to your programs, or any other major change you make to your computer, to use a manual setting correctly you need to re-analyze your virtual memory requirements and, if necessary, manually change your settings. This might mean doing this multiple times a week, or even more often. That's what a system-managed PF does for you automatically. Are you doing that too? If not, why not?
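For anyone who actually wants to do that analysis, here's a rough sketch of one approach (assuming Windows and Python; GlobalMemoryStatusEx is the Win32 call behind these numbers): log the commit charge against the commit limit over a normal stretch of use and note the peak.

import ctypes
import ctypes.wintypes as wt
import time

# Mirrors the Win32 MEMORYSTATUSEX structure used by GlobalMemoryStatusEx().
class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", wt.DWORD),
        ("dwMemoryLoad", wt.DWORD),
        ("ullTotalPhys", ctypes.c_uint64),
        ("ullAvailPhys", ctypes.c_uint64),
        ("ullTotalPageFile", ctypes.c_uint64),   # commit limit: RAM + page file
        ("ullAvailPageFile", ctypes.c_uint64),   # commit still available
        ("ullTotalVirtual", ctypes.c_uint64),
        ("ullAvailVirtual", ctypes.c_uint64),
        ("ullAvailExtendedVirtual", ctypes.c_uint64),
    ]

def snapshot():
    stat = MEMORYSTATUSEX()
    stat.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(stat))
    return stat

GiB = 1024 ** 3
peak = 0
for _ in range(60):                              # one sample per minute for an hour
    s = snapshot()
    committed = s.ullTotalPageFile - s.ullAvailPageFile
    peak = max(peak, committed)
    print(f"commit charge: {committed / GiB:.1f} GiB of {s.ullTotalPageFile / GiB:.1f} GiB limit")
    time.sleep(60)

print(f"peak commit over the run: {peak / GiB:.1f} GiB")

If the peak commit stays comfortably inside physical RAM plus whatever minimum you set, a manual size is probably getting away with it; if it ever approaches the commit limit, it isn't.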
 
What were the "oddball issues" you experienced before you changed the default settings?
I've never had anything happen, going all the way back to Windows 8, related to changing the default settings for the page file. How about this: I'll switch it back to system managed, even though it's not going to make any difference.
 
I asked, what were the "oddball issues" you experienced before you changed the default settings?
 
Exactly. So why change it?

If it wasn't broke, why fix it? If there is a reason to change from the defaults, then it makes sense to do so. But so far, there's been no reason.

And I ask again, what methodology did you use to analyze your virtual memory requirements in order to determine the ideal settings for your computer and your computing habits? Were 16MB and 8GB just arbitrary numbers? Why is the nearly 3GB recommended by the system not near enough for you?
 
I recently turned my page file back on. The system runs fine. Let windows manage, and I carry on about my business.
However, it didn't like that my OS drive was nearly full (I've since freed up some space) and put most of the page file on my HDD.
Meh, I can't tell the difference. It seems to work as it should when Windows configures it for itself.
Thanks for all the information regarding the pagefile, Bill_Bright. I've learnt a lot.
 
Just because it isn't broken doesn't mean I'm not going to play with it. But yeah, I use system managed or set it manually to 16 GB. I used to tinker with it more in previous OS versions, but lately games just need so damn much that it's best to leave it on auto.

Does anyone use a drive other than c for their pagefile?
 
Just because it isn't broken doesn't mean I'm not going to play with it
I have no issue with experimenting. I do that all the time. But if my testing doesn't show changes I made brought any improvement, I change it back.

My issue lies in two areas that really make no sense. The first involves changing it just because people used to do it with earlier versions of Windows, in particular XP and earlier. Modern versions of Windows are not XP.

The second involves changing it, or rather leaving it changed, because they didn't notice any difference when they disabled the PF or set it manually. Did they do an analysis of their virtual memory requirements before and after? Do they even know how? What about a month later? What about after installing a service pack or other major upgrade? It is not a set-and-forget setting. Would they make such changes to their car's emissions control computer? To their HVAC system? To any other high-tech device, and then leave them because they noticed no difference? Why would their Windows computer be any different?

Do they really believe they have more expertise with virtual memory management than the teams of PhDs, computer scientists and professional developers at Microsoft who have decades of experience and exabytes of accumulated data to draw from? I mean I've got decades of experience with swap files and virtual memory management going back to DOS days. I've got multiple IT degrees and certs with Windows and computer hardware and no way do I think I am smarter than the developers at MS. I think I'm smarter than some of the marketing weenies and even some of the execs based on some of the misguided marketing and business decisions they've made. But smarter than the developers? No way.
Does anyone use a drive other than c for their pagefile?
Lots of people do. I do on a couple of my systems here that have small SSDs for boot drives. So I moved the PFs to larger secondary SSDs. I would never move the PF to a hard drive unless free disk space on a tiny SSD boot drive was critically low. And that would be a temporary move until I put in a larger SSD boot drive.
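As a side note, which drives host a PF (and whether each one is system-managed or manually sized) is recorded in the registry, so you can check it without clicking through the dialogs. A small sketch, assuming Windows and that the value layout hasn't changed:

import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")

# Each entry looks like "D:\pagefile.sys 0 0"; the two numbers are the
# initial/maximum sizes in MB, with "0 0" meaning system-managed on that drive.
for entry in paging_files:
    print(entry)

Actually moving the PF still goes through the usual Virtual Memory dialog, and it needs a reboot to take effect.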
 
Where I grew up, Home Improvement was always offering great advice on overkilling an upgrade: play with it till it explodes, then ease up a bit. My reasoning behind setting a manual size is that I am reserving the space for use, whereas Windows just writes any place it wants when it increases in size.

I was wondering about running it on another drive because I have the option to do so, but my C drive is a hefty 2TB, so I'm not going to run out of space anytime soon. If you run your C drive full to the brim, your pagefile might not have the space it requires, and when that happens Windows gives errors.

I haven't borked a system in a long time (I didn't invent water cooling, but I did it on an Athlon way back in the day and things didn't go well), but I'm the kind of user who pokes my computer in the eye with a stick.
 
My reasoning behind setting a manual size is that I am reserving the space for use, whereas Windows just writes any place it wants when it increases in size.
Huh? That's not how locations on drives are selected. The OS does not tell the drive where to put files. When you set the size manually you sure did not tell the drive where to put the file. The controller does that. And for that matter, if the PF is on an SSD (where it should be if you have an SSD) TRIM and wear leveling will move the PF about anyway. So sorry, but your reasoning makes no sense.
I was wondering about running it on another drive because I have the option to do so, but my C drive is a hefty 2TB, so I'm not going to run out of space anytime soon. If you run your C drive full to the brim, your pagefile might not have the space it requires, and when that happens Windows gives errors.
That's a totally different scenario. But even so, if you run your C drive full to the brim, the best solution is to clean the clutter off your C drive, uninstall unused programs, move space-hogging programs and files to your secondary drives, and/or buy a bigger C drive.
 
:confused: Why would you expect (or like) the Page File on Windows to be sized the same as the swap file on GNU/Linux? They are totally different operating systems and surely you were running totally different programs. Frankly, I would be surprised if they were the same size.

Because I could use smaller SSD and have more available space for programs.

Frankly, I would be surprised if they were the same size.

Aside from servers and hibernation (and even for hibernation, if you fill up all of your RAM, you can usually get away with a smaller reservation on disk), I don't think that on a desktop computer with, let's say, 64 GB of RAM I would need 128 GB of pagefile or swap on GNU/Linux. That would be half of my 256 GB NVMe SSD (not to mention that if I could expand the RAM to 128 GB, my whole drive could be used as pagefile), and going by my experience with only 16 GB of RAM, I presume my SSD would easily be filled up by a big pagefile.
 
Because I could use smaller SSD and have more available space for programs.
Okay. That makes sense as far as you wanting or "liking" it that way. That's fine. What I was really questioning was you "expecting" it to act a certain way based on how GNU/Linux acted.
 
The OS does not tell the drive where to put files. When you set the size manually you sure did not tell the drive where to put the file. The controller does that.

On mechanical drives, files are written in random order for the most part, because of the way a mechanical drive works.

But there are differences in the performance of various file systems. NTFS is different from ext4, for example. Some file systems handle large files better, like XFS; some are known to handle small files better, like ReiserFS.

What I was really questioning was you "expecting" it to act a certain way based on how GNU/Linux acted.

To some extent and I know it sounds stupid because I am aware both of those systems are designed differently. Even some GNU/Linux distros don't follow traditional Unix principles, like GoboLinux.

I'm curious if Windows 10 memory management is making use of in-memory compression for the two of you? @Vayra86 @Ryzen_7

For me, I haven't run into any oddball issues with Windows 10 with the initial size for the pagefile set to 16 MB and the maximum size set to 8 GB.

I don't know.

[screenshot attached]


For me, I haven't run into any oddball issues with Windows 10 with the initial size for the pagefile set to 16 MB and the maximum size set to 8 GB.

It depends what you do and what programs you have running.

I noticed the weird behaviour in Windows with a fixed custom size of 8192 MB that I mentioned in previous posts.
 
On mechanical drives, files are written in random order for the most part, because of the way a mechanical drive works.
It may seem random to you and me, but there are protocols and algorithms used. Much depends on how fragmented the drive is and how much free space there is.
 
It may seem random to you and me, but there are protocols and algorithms used. Much depends on how fragmented the drive is and how much free space there is.
The only "known" on where a Kernel puts stuff on mechanical drives is in the outer parts (larger radius = higher linear speed = lower access time).

Regarding swap and Windows:
The Windows kernel is a patchwork: old and not really up to post-2016 architectures (NUMA is a big issue, for example).
I have yet to see Win10 run into "thrashing" scenarios (quickly swapping memory pages from and to storage), but I have seen Win10 load zombies (processes that do not belong to any program anymore) into RAM. So basically this nice "continue where you left off" fills 32GB of precious memory with crap! A trip to regedit fixed that, so if you see high memory use without any program open, set HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management, "ClearPageFileAtShutdown" = 1.
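For reference, the same tweak can be scripted. A minimal sketch, assuming an elevated Python prompt on Windows (the value name below is the one Microsoft documents; it takes effect after a reboot):

import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
# Requires administrator rights; tells Windows to wipe pagefile.sys at shutdown.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "ClearPageFileAtShutdown", 0, winreg.REG_DWORD, 1)

Be aware it can make shutdown noticeably slower, since the whole pagefile gets overwritten on the way down.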

I like to tinker with things: I reinstall Manjaro every time it bricks itself, and I've lost bash to terrible accidents. So why not tinker with Windows? Worst case, you're going to reinstall it.

Edit: If Windows spreads its bits and pieces over all drives introduced to the system, enable SATA hotplug/hotswap in BIOS. Windows will then reconsider and keep itself to C:
 
Huh? That's not how locations on drives are selected. The OS does not tell the drive where to put files. When you set the size manually you sure did not tell the drive where to put the file. The controller does that. And for that matter, if the PF is on an SSD (where it should be if you have an SSD) TRIM and wear leveling will move the PF about anyway. So sorry, but your reasoning makes no sense.

You may be right, but when you specify the file size, pagefile.sys stays the same size, and when a file has constraints, its space is reserved on the drive. Like when you get the metadata for a file before it is downloaded: the computer gets a blueprint of the file it is receiving, including its size, and reserves the space for the file to be written. Does the pagefile not work the same way? Once a size has been specified, you're reserving a space for that file to be written.
 
You may be right, but when you specify the file size, pagefile.sys stays the same size, and when a file has constraints, its space is reserved on the drive.
Okay. But this, in no way, suggests setting a manual size is better.
Does the pagefile not work the same way? Once a size has been specified, you're reserving a space for that file to be written.
Yes. But it works that way regardless of whether it is manually set or system set. So again, this does not suggest a manually set size is better. But it does show how setting too big a size could waste space, or how setting too small a size could mean too small a PF.

Again, remember: manually setting a PF size is NOT a "set and forget" setting. The demands are "dynamic", or constantly changing. If it were "set and forget", it would have been much easier for Microsoft to simply code Windows to set a fixed size (or fixed range) once during installation and leave it forever. But they wisely chose to make system-managed PFs dynamic too. :)
The only "known" on where a Kernel puts stuff on mechanical drives is in the outer parts (larger radius = higher linear speed = lower access time).
Umm, nope. Not how it works. When the drive is formatted, the locations for the file allocation tables and partition tables are established in those outer tracks by the file system doing the formatting, not the "kernel". The kernel is part of the OS, remember. But you can take a hard drive formatted by Windows and use it with Linux. FAT32 is supported by Windows, most Linux distributions, OpenBSD and Mac OS.

All storage locations, whether available or used, are "known", and the file tables keep track of that information.

When the computer needs to store data on the drive, it doesn't just throw that data at the drive to have it saved in "random" locations. No! When new data is to be stored, the file table "map" is accessed to see which sectors are free. The controller is then instructed to move the R/W head to that specific (not random, not unknown) location and in an orderly pattern, write that data. Then the file table maps are updated to show which locations are now unavailable.

"Random" would suggest it would be like throwing a dart at a dartboard after being spun around 3 times while blindfolded and then stuffing the data wherever the dart landed. No. The map is analyzed and then the storage location is selected based on those "known" available locations.

Now reading the various segments of the stored data may be done in a somewhat "random" order, then all the bits reassembled into the correct order in memory. This allows the read/write head to more quickly gather up the fragments based on the proximity of their locations on the disk, rather than sequentially (for example, the next word in a sentence). This means the various "fragments" may be read into memory in no "apparent" order (which is where "random access" comes from). But "accessing" the already saved file segments is not the same thing as "writing" new ones to disk.

So again, while it may seem file segments are written randomly, they are not. And they are not stored in previously "unknown" locations, either. The file allocation tables are actively and constantly keeping track of and mapping each and every available and used location.
 
Okay. But this, in no way, suggests setting a manual size is better.
Yes. But it works that way regardless of whether it is manually set or system set. So again, this does not suggest a manually set size is better.
I agree that manual isn't better; in fact it's probably worse, because it potentially limits the pagefile to one spot where auto can pick and choose where to put it. That was what I wanted to confirm. I killed 2 SSDs in 3 years, and I think part of the blame is on me for setting a manual pagefile in Windows 7.
 
I killed 2 SSDs in 3 years, and I think part of the blame is on me for setting a manual pagefile in Windows 7.
Ummm, I don't see how setting a manual PF on an SSD could cause actual damage to an SSD. I suspect something else killed them - perhaps some power anomaly or some really bad luck.
 
Ummm, I don't see how setting a manual PF on an SSD could cause actual damage to an SSD. I suspect something else killed them - perhaps some power anomaly or some really bad luck.
All SSDs have a block write life, which I suspect I exceeded, but these were consumer-level SSDs and my usage could be considered heavy. The one was an OCZ Arc, which I guess was a crap SSD anyway, but I had a similar failure with the second drive and it was a different brand. I respect your opinion as you seem to know what you're talking about.
 
All SSDs have a block write life, which I suspect I exceeded
Not hardly.
and my usage could be considered heavy.
Also not hardly - not unless you ran a very busy file server that was constantly written to day in and day out. Reads don't count towards wear. Only writes.

And while it is true SSDs are limited in the number of writes they can support, that number is so high it is highly unlikely those limits would ever be reached on a consumer computer before the computer itself was long retired due to obsolescence. Perhaps, maybe, if this were years ago with first-generation SSDs, but not with later generations. I note more and more data centers are using SSDs as caches for their most commonly accessed data.

Remember, with SSDs, PF locations are not fixed; even if you set a fixed size, TRIM and wear leveling will still move those locations around to evenly distribute the wear. And besides, as noted way back in post #11, SSDs and PFs are ideal for each other.
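To put rough numbers on it (these are hypothetical figures for illustration, not specs from anyone's drive in this thread): take a mid-range consumer SSD rated around 150 TBW and a genuinely heavy 30 GB of host writes per day.

# Hypothetical endurance math; adjust the two inputs for your own drive and usage.
tbw_rating_tb = 150        # assumed terabytes-written endurance rating
daily_writes_gb = 30       # assumed heavy daily write volume

days = tbw_rating_tb * 1000 / daily_writes_gb
print(f"{days:.0f} days, or about {days / 365:.0f} years, to reach the rating")
# roughly 5000 days, i.e. 13-14 years

Even doubling the daily writes for a pagefile-heavy workload still puts the rated life in the 6-7 year range, which is longer than most consumer machines stay in service.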
 