
Pagefile "anomalies"?

That's the beauty of letting Windows manage it.

Okay, so after reading all your writings/advice it became perfectly clear to me that the optimal setting for the PF is to let Windows handle it. But in which way?
A. check "Automatically manage paging file size for all drives"
OR
B. uncheck "Automatically manage paging file size for all drives" and set "System managed size" for C and "No paging file" for D
OR
C. Whatever
 
A and never think about it again. Automatic and forget where the setting is ;)
 
What setting?
Umm, he answered using your own designation for the first option, "A".

Just check the box for "Automatically manage paging file size for all drives" then never worry about it again.
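For anyone who'd rather confirm the checkbox state from a script than dig through the dialog, here's a rough Python sketch (untested here; it assumes Windows with PowerShell available and leans on the documented Win32_ComputerSystem and Win32_PageFileUsage WMI classes):

Code:
# Sketch: report whether Windows is auto-managing the page file, plus current usage.
# Assumes Windows with PowerShell on the PATH; no admin rights needed for reads.
import json
import subprocess

def ps_json(command):
    """Run a PowerShell command and return its output parsed as JSON."""
    out = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command + " | ConvertTo-Json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

# Win32_ComputerSystem.AutomaticManagedPagefile mirrors the
# "Automatically manage paging file size for all drives" checkbox.
cs = ps_json("Get-CimInstance Win32_ComputerSystem | "
             "Select-Object AutomaticManagedPagefile")
print("Automatically managed:", cs["AutomaticManagedPagefile"])

# Win32_PageFileUsage lists the page file(s) actually in use (sizes are in MB).
usage = ps_json("Get-CimInstance Win32_PageFileUsage | "
                "Select-Object Name, AllocatedBaseSize, CurrentUsage, PeakUsage")
for pf in (usage if isinstance(usage, list) else [usage]):
    print(f"{pf['Name']}: allocated {pf['AllocatedBaseSize']} MB, "
          f"in use {pf['CurrentUsage']} MB, peak {pf['PeakUsage']} MB")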
 
Umm, he answered using your own designation for the first option, "A".

It was meant to be a joke:
INSTG8R : "Automatic and forget where the setting is"
Me: "What setting?"
 
Ah! Sorry. :oops: My bad! I clearly should have picked up on that. Obviously I am not fully caffeinated yet today so I'm not good at reading facial expressions and body language, or hearing tone of voice via the forums! ;)
 
Personally, I've never understood the rationale behind 'put all data in RAM so you can access it more often, it improves performance'.

Like, wut? Using more resources for the same task is good?
 
I think I'll set it on the system partition only and let Windows manage it then :)

(I always set it on the system partition only; on other partitions or drives there is nothing for MS or the pagefile to do, those are my stuff :) )

put all data in RAM so you can access it more often, it improves performance
Usually lower access time and higher bandwidth than any storage drive.
 
Funny thing is I have 16GB of RAM and a 16GB pagefile (on a separate SSD), but I'm yet to see Windows make any meaningful use of it, even when gaming. Meh, it doesn't really worry me, I just think it's a little weird is all.

Also, what is the difference between Pagefile.sys and Swapfile.sys (other than size, that is)?
 
Disabling your PF can also wreak havoc on multiple aspects of your system. Some games in particular (though they escape my memory, as I refuse to mess with the PF these days; there's no reason to!) would crash on launch without a PF. You can't just get rid of something that is there for the system to use.

If memory serves me right, swapfile.sys is there to store programs (in their current state) and resume them as needed.
 
Also, what is the difference between Pagefile.sys and Swapfile.sys (other than size, that is)?
Swapfile.sys pertains to the newer Windows Universal Apps.

It first showed up in Windows 8.
The Windows Club said:
The Swapfile.sys in Windows 8 is a special type of pagefile used internally by the system to make certain types of paging operations more efficient. It is used to Suspend or Resume Metro or Modern Windows 8 apps.
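Both files normally sit hidden in the root of the system drive, so you can eyeball their sizes side by side. A small Python sketch (assumption: os.scandir can read the sizes from the directory listing itself, so the locked system files never need to be opened):

Code:
# Sketch: list the paging-related files hiding in the root of the system drive.
# pagefile.sys = the classic page file; swapfile.sys = the Windows 8+ file used to
# suspend/resume Modern/UWP apps; hiberfil.sys shows up too if hibernation is on.
import os

system_drive = os.environ.get("SystemDrive", "C:") + "\\"

with os.scandir(system_drive) as entries:
    for entry in entries:
        if entry.name.lower() in ("pagefile.sys", "swapfile.sys", "hiberfil.sys"):
            size_mb = entry.stat().st_size / (1024 * 1024)
            print(f"{entry.name}: {size_mb:,.0f} MB")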
 
You can turn off pagefile completely and your computer will still work fine if you have enough RAM (I did this in Windows 7 for a good while). Games may not run though; GTA V especially needs a big pagefile to run. If you use an SSD to run Windows, turning off your pagefile can extend the life of your drive, but it may affect the performance of some programs.
 
Personally never understood the rationale behind 'put all data in RAM so you can access it more often, it improves performance'.
That's because that is not rational or true.

You can turn off pagefile completely and your computer will still work fine
But why? That just makes no sense. If it made sense (or improved performance) to turn it off, don't you think Microsoft would code Windows to disable it by default? If the computer ran better with the PF disabled when gobs of RAM was installed, don't you think MS would code Windows to disable it?

Microsoft wants Windows to perform optimally. Why? Because if it didn't, they know all the MS haters and bashers would constantly and relentlessly trash them over it! So Windows is coded to enable the PF by default, even when gobs and gobs of RAM is installed, or when the boot drive is an SSD.

If you use an SSD to run Windows, turning off your pagefile can extend the life of your drive
That's nonsense too. Many years ago, with first-generation SSDs, that might have been true. But with modern SSDs, even on a busy computer it would take so many years to reach the write-endurance limit that all the other hardware would be obsolete and superseded many times over long before then.

Please go back and see my post #11. Read my last paragraph and follow the link that explains why SSDs and Page Files are ideally suited for each other.
 
Swapfile.sys pertains to the newer Windows Universal Apps.

It first showed up in Windows 8.


Ah, I thought as much. So now they're double dipping; not good enough to have just a pagefile anymore. Lucky for me it's easily turned off in the registry.
 
On a system with a small amount of RAM, say 1, 2 or 4GB, what would the ideal pagefile sizes be? Or leave it at Windows managed?
 
But why? That just makes no sense. If it made sense (or improved performance) to turn it off, don't you think Microsoft would code Windows to disable it by default? If the computer ran better with the PF disabled when gobs of RAM was installed, don't you think MS would code Windows to disable it?

Microsoft wants Windows to perform optimally. Why? Because if it didn't, they know all the MS haters and bashers would constantly and relentlessly trash them over it! So Windows is coded to enable the PF by default, even when gobs and gobs of RAM is installed, or when the boot drive is an SSD.
Think of it like a temp folder: you don't really need it for most programs to run, but you may run into issues if you disable it.

Turning it off doesn't improve performance, but it doesn't negatively impact performance either.

On a system with a small amount of RAM, say 1, 2 or 4GB, what would the ideal pagefile sizes be? Or leave it at Windows managed?
Windows managed is usually best. I think you're supposed to set a 24GB max for 16GB of RAM. I use 8192MB for the initial size and 16384MB for the max and haven't run into any issues; I have 16GB of RAM. I don't think it matters how much RAM you have vs the pagefile size.
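For completeness, if someone really does want to pin a fixed initial/maximum size like that, the change comes down to two WMI writes. This is only a sketch and I haven't tested it; it assumes the documented AutomaticManagedPagefile, InitialSize and MaximumSize properties, needs an elevated prompt, and a reboot before the new sizes take effect. The general advice in this thread still stands: system-managed is the safer default.

Code:
# Sketch: pin the page file to a fixed initial/maximum size (values in MB).
# Run from an elevated prompt; reboot afterwards for the change to apply.
import subprocess

INITIAL_MB = 8192    # example values from the post above
MAXIMUM_MB = 16384

def ps(command):
    """Run a PowerShell command and raise if it fails."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# 1) Untick "Automatically manage paging file size for all drives".
ps("Get-CimInstance Win32_ComputerSystem | "
   "Set-CimInstance -Property @{AutomaticManagedPagefile=$false}")

# 2) Give the existing page file entry explicit sizes. Caveat: right after turning
#    automatic management off there may be no Win32_PageFileSetting instance yet,
#    in which case one has to be created first (or this step run after a reboot).
ps("Get-CimInstance Win32_PageFileSetting | "
   f"Set-CimInstance -Property @{{InitialSize={INITIAL_MB}; MaximumSize={MAXIMUM_MB}}}")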
 
Hi all,

Until now I thought I understood how the pagefile works: I thought that when physical RAM runs out, data gets written to a part of the hard drive called the pagefile. Since I have 16GB of RAM I thought I could turn the pagefile off completely, but I set a fixed 4GB size for it just to be safe.

Yesterday, just out of curiosity, I monitored the pagefile usage via MSI Afterburner & RivaTuner Statistics Server while gaming. Looking at the log file, it moved between 2147MB and 9331MB, while physical RAM usage peaked at 5872MB. Wtf? :(
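For what it's worth, what these overlays usually report as "pagefile usage" is the system commit charge, i.e. memory that is merely backed by the page file rather than data actually written out to it, which is why the number can sit above your physical RAM usage. A rough Python sketch (Windows only, untested here, using the documented GlobalMemoryStatusEx call via ctypes) that logs the same two numbers once a second:

Code:
# Sketch: log physical RAM use vs. the commit charge once a second (Ctrl+C to stop).
# Uses GlobalMemoryStatusEx from kernel32 via ctypes; Windows only.
import ctypes
import time
from ctypes import wintypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", wintypes.DWORD),
        ("dwMemoryLoad", wintypes.DWORD),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),   # commit limit (RAM + page file)
        ("ullAvailPageFile", ctypes.c_ulonglong),   # commit limit minus commit charge
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

def snapshot():
    status = MEMORYSTATUSEX()
    status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))
    gib = 1024 ** 3
    phys_used = (status.ullTotalPhys - status.ullAvailPhys) / gib
    commit_used = (status.ullTotalPageFile - status.ullAvailPageFile) / gib
    return phys_used, commit_used

while True:
    phys, commit = snapshot()
    print(f"RAM in use: {phys:5.1f} GiB | commit charge: {commit:5.1f} GiB")
    time.sleep(1)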

Yap, my experience exactly.

On Windows you need a pagefile because that's how Windows has worked since the dawn of time. If you set a custom-sized pagefile like me (e.g. initial 4096 - maximum 8192 MB), you can experience weird behaviour from Windows, even crashes, black screens etc. The Opera browser crashes tabs even though my RAM is only 50% full, so I need to close some of the tabs and reload them again.

Moral of the story: have a large enough SSD and let Windows automatically manage the pagefile, so you won't experience weird behaviour from Windows and applications. And never put the pagefile on a mechanical disk, because it is terribly slow.

The pagefile size should not be messed with; I don't know why MS still lets people try to change it. Paging is a critical component of virtual memory. Contrary to the popular belief that you don't need it if you have enough RAM, any OS will always use virtual memory and therefore will always need paging.

Every OS (even some from the same branch) behaves more or less differently despite having similar logic. In Linux you can be perfectly fine without a dedicated swap partition or swap file (similar to how Windows has its pagefile), provided of course that you have a large amount of RAM, like 8 GB or more.

But you do need swap for hibernation, or when you don't have enough RAM on Linux, and performance is at its worst if the swap is on a mechanical drive.

Bill_Bright said:
It's not like page files fill up and stay filled up with old data.

It does, in my case. If I let Windows automatically manage the pagefile, it grows like mad, with no shrinking or flushing of garbage from the pagefile. This is maybe because I use the S3 state (suspend to RAM), so with long uptimes Windows doesn't flush the pagefile without a restart.
I have 16 GB of RAM and it is not enough. I have many tabs open in browsers, memory usage from the browsers is about 2 GB or more, and the pagefile keeps growing.

Bill_Bright said:
So I agree with those recommending leaving the defaults alone (all the defaults) and to just let Windows manage your page file(s). Contrary to what some seem to think, the army of PhDs, computer scientists and developers at Microsoft are not stupid. Those developers definitely want our systems to run optimally. And even the oft misguided marketing weenies and execs at Microsoft truly want our computers to run optimally too - if for no other reason than it would be bad publicity (thus bad for business) if they didn't.

Microsoft has decades of experience and exabytes of empirical data to draw from. And they have super computers to analyze that data to ensure Windows (7/8/10) uses the PF in the most efficient manner. I highly doubt any of us here are more knowledgeable in virtual memory management than the development team at MS.

Argument from authority. There are different approaches to managing memory, partitions, files etc.

Anyway, there is still the old Microsoft/Windows tradition of recommending that everything, at least their own stuff, be installed on C:, and even if you have the option to install on a different drive or partition, some of that stuff will install on C: no matter what.
I know there are reasons (pros and cons) behind it, but it is not flexible (as it is on Linux), especially if you have a small SSD system drive, let's say 60 GB.
 
Think of it like a temp folder: you don't really need it for most programs to run,
No! That's not the right way to think about it. Regardless of what your running programs need or do, the OS will use it to its advantage. And that's a good thing.
but you may run into issues if you disable it.
Right! You may - or may not. So since you "may", there's no reason to disable it.

And certainly, disabling it because "I didn't notice any difference" doesn't make sense either. I say use that same logic to leave it alone, enabled and system managed.
It does, in my case. If I let Windows automatically manage pagefile, it grows like mad, no shrinking or flushing garbage from pagefile and this is maybe because I use S3 state, suspend to RAM so if I have long uptime, Windows don't flush pagefile without restart.
Define, "grows like mad". Surely you are not suggesting if you don't reboot, the PF will eventually consume all your free disk space? If that is the case, then you are already critically low on free disk space, or there is a fault with your system somewhere that needs to be corrected. Either way, the solution is not circumvention.

You say the PF keeps growing. I say that's what it is supposed to do! But as seen here with Windows 10, it should not get larger than 4GB unless your system is experiencing a bunch of crash dumps - in which case you have other issues to deal with.

FTR, I also use S3 and I only reboot when some Windows or security program update forces me to. That means I could easily go several weeks without an actual reboot.
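As an aside, the "unless you're getting crash dumps" caveat hinges on what kind of dump the machine is configured to write, and that is just one registry value. A read-only Python sketch, assuming the usual CrashControl key layout:

Code:
# Sketch: read the configured crash dump type (read-only, no admin rights needed).
# A kernel or complete dump needs a boot-drive page file big enough to hold it,
# which is one reason Windows may keep the page file larger than expected.
import winreg

DUMP_TYPES = {0: "none", 1: "complete", 2: "kernel", 3: "small (minidump)", 7: "automatic"}

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                    r"SYSTEM\CurrentControlSet\Control\CrashControl") as key:
    value, _ = winreg.QueryValueEx(key, "CrashDumpEnabled")

print("Crash dump type:", DUMP_TYPES.get(value, f"unknown ({value})"))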

Argument from authority. There are different approaches to managing memory, partitions, files etc.
I agree. But what I am saying is, unless one is a true professional with advanced training in virtual memory management, it is highly unlikely one would know a better approach than the default - which is already totally capable regardless of how unique any particular scenario may be. In fact, the system-managed PF assumes each and every computer (even with identical hardware running identical programs) will have unique PF requirements, and then deals with them accordingly.
 
Define, "grows like mad". Surely you are not suggesting if you don't reboot, the PF will eventually consume all your free disk space? If that is the case, then you are already critically low on free disk space, or there is a fault with your system somewhere that needs to be corrected. Either way, the solution is not circumvention.

You say the PF keeps growing. I say that's what it is supposed to do! But as seen here with Windows 10, it should not get larger than 4GB unless your system is experiencing a bunch of crash dumps - in which case you have other issues to deal with.

FTR, I also use S3 and I only reboot when some Windows or security program update forces me to. That means I could easily go several weeks without an actual reboot.

Now that I've changed the PF to automatic, I currently have a 12800 MB PF (and I expect it will keep growing to who knows what size) with RAM 64% full (10.3/16 GB); browsers are eating memory like mad, by the way. It seems like Windows follows the old Linux rule of thumb, 2*RAM = swap/pagefile. So a 32 GB pagefile would be a safe bet. :)

True, there could be some issues I am unaware of. Either way, just like the OP, I expected different behaviour from Windows managing the PF, partly because on GNU/Linux I experienced different memory management and never saw this kind of weird behaviour, despite having only a small swap partition kept around just for the sake of it or out of tradition.

I agree. But what I am saying is, unless one is a true professional with advanced training in virtual memory management, it is highly unlikely one would know a better approach than the default - which is already totally capable regardless of how unique any particular scenario may be. In fact, the system-managed PF assumes each and every computer (even with identical hardware running identical programs) will have unique PF requirements, and then deals with them accordingly.

Fortunately I have 60GB of free space. The problem is, the way Windows manages memory is not flexible enough; you are forced to have a big system drive (C:). In Linux, the system (/) root partition can be about 20 GB tops, /tmp and swap can be on another drive, or you can use soft/hard links if you don't have enough space on the main drive. I know you can use soft and hard links on Windows, but I don't know, and have not tried, how well they work for system stuff. Windows is very picky and sensitive about system stuff.
 
It seems like Windows follows the old Linux rule of thumb, 2*RAM = swap/pagefile.
No, it does not follow any rule of thumb. If it did, my page file would be a whopping 64GB. But it is currently set to 4186MB.
The problem is, the way Windows manages memory is not flexible enough; you are forced to have a big system drive
Ummm, not true. First, the size needed is not determined by the way the OS "manages" memory. The size needed is determined simply by the size of the files that make up the OS.

Technically, you only need 20GB for 64-bit Windows 10 (though a minimum of 32GB is recommended). However, I would never recommend anything less than a 128GB drive for the OS as that gives Windows enough room for drivers, temporary files and the PF. But I would recommend a secondary drive for all installed applications if the boot drive is less than 250GB.

That said you can easily move your temp files location, Documents folder, and the PF to a different drive in Windows too. So your point there is invalid.
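For the curious, the per-drive page file layout ultimately lives in a single registry value, so "moving the PF" really just changes what is listed there. A read-only Python sketch (the usual way to change it is still the System Properties dialog or WMI):

Code:
# Sketch: show where page files are configured, per drive (read-only).
# The value is a multi-string with entries like "C:\pagefile.sys 0 0"; the two
# numbers are the initial and maximum size in MB ("0 0" = system managed), and
# "?:\pagefile.sys" typically appears when automatic management for all drives is on.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    entries, _ = winreg.QueryValueEx(key, "PagingFiles")

for entry in entries:   # REG_MULTI_SZ comes back as a list of strings
    print(entry)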

And for the record, a quick bit of homework with Google shows the minimum system requirements for Ubuntu Linux call for 25GB of disk space - 5 more than W10!
Windows is very picky and sensitive about system stuff.
No it's not! A little bit of homework and setting aside of biases is needed here.

Consider this. There are over 1.6 billion (with a "b") Windows computers out there. Virtually each and every one became a unique system within the first few minutes of being booted up the very first time! Users set up their accounts, personalization, networking, security apps, personal apps, and peripherals. And Windows supports them all.

With Windows, you can buy an ASUS motherboard, AMD processor, MSI graphics card, Western Digital hard drive, 8GB of Kingston RAM, put them in a Corsair case, power them with a Seasonic power supply, connect it to a 27" Acer monitor and Epson laser printer, connect to your network via Ethernet, install AVAST security, and install Microsoft Office on it to create your resume/CV, and it will work.

Or you can buy a Gigabyte motherboard, Intel processor, XFX graphics card, Samsung SSD, 16GB Crucial RAM, and put them in a Fractal Design case, power them with an EVGA power supply, connect them to two 24" LG monitors and HP ink jet AiO, connect to your network via wifi, use Windows Defender and install LibreOffice on it to create your resume/CV, and it will still work.

If you are a Toyota mechanic working in a Toyota dealer's service center, and you see nothing but broken-down Toyotas all day, if you don't keep an open mind and set aside any preconceived notions, you could easily start to believe Toyota makes lousy cars.
 
No, it does not follow any rule of thumb. If it did, my page file would be a whopping 64GB. But it is currently set to 4186MB.
Ummm, not true. First, the size needed is not determined by the way the OS "manages" memory. The size needed is determined simply by the size of the files that make up the OS.

It was meant as a joke on my part; that's why the smilie, to avoid confusion.

Recap: after my last reply, my pagefile grew to 18003 MB, and RAM is 71% full. I don't have anything else running but browsers, with more or fewer tabs open and running in the background. I should check for extensions or add-ons that hibernate or suspend background tabs, like the Vivaldi browser does out of the box, I think. I would expect Windows to flush unused garbage from the pagefile, but no, it keeps growing.

Besides crash dumps and hibernation, there is no reason to fill swap or a pagefile if there is enough RAM, because RAM is the fastest memory available. In the old days, when there were only mechanical disks, it would have been terrible to use the HDD as a RAM alternative; it would be like crutch RAM. If you have enough RAM, I don't see a reason why memory management wouldn't keep everything in RAM instead of on a much slower mechanical drive or SSD.

AmigaOS and Atari TOS could use a RAM disk for a reason. And I remember, by the way, that at least on the Atari ST you could press the hardware reset button and a program (a Macintosh emulator, if memory serves me well) would stay resident in memory; you needed to completely power off the computer to clear the program from memory.

Speaking of which, Gigabyte released the i-RAM a long time ago: a PCI card with DDR RAM slots serving as a RAM disk, way ahead of its time considering the low density/capacity of DDR back then.

I thought that 16 GB of (DDR4) RAM would be enough for browsing and casual computer use after I saw how 8GB of (DDR3) RAM on my other computer was not enough. I tend to have many tabs open in browsers, and probably many casual users never experience my scenario.

A desktop PC is not a server; one size does not fit all. For servers, swap or a pagefile has meaning; on desktops (or workstations), I dare say, not as much. I don't know how many desktop users and professionals would check logs and crash dumps for debugging or anything like that.

Technically, you only need 20GB for 64-bit Windows 10 (though a minimum of 32GB is recommended). However, I would never recommend anything less than a 128GB drive for the OS as that gives Windows enough room for drivers, temporary files and the PF. But I would recommend a secondary drive for all installed applications if the boot drive is less than 250GB.

You said it yourself. You recommend at least a 128GB drive for Windows, and I say a 40-60GB SSD or even less is enough for any GNU/Linux or FreeBSD distribution. I was forced to buy a 256GB SSD because anything smaller was a pain in the ass for using Windows 10, and now even 256 GB is too small. If you need Visual Studio and you have a 256 GB SSD you are screwed; 500 GB (NVMe) SSD here I come.

That said you can easily move your temp files location, Documents folder, and the PF to a different drive in Windows too. So your point there is invalid.

I moved the default \Documents, \Downloads etc. folders to another drive (D:). Having permission issues with some of those folders, or the files inside them, after a fresh install is another problem, despite using the same account. Maybe I did something wrong, I can't remember; that's another story.

And for the record, a quick bit of homework with Google shows the minimum system requirements for Ubuntu Linux call for 25GB of disk space - 5 more than W10!
No it's not! A little bit of homework and setting aside of biases is needed here.

Firstly, it was not my intention to start a GNU/Linux/Unix/FreeBSD vs Windows flame war. It was just an example of how things work, or could work, as an alternative point of view.
GNU/Linux has its own fair share of stupidity and complications, such as SysV vs systemd, and forkifications for the sake of it and ego trips.

Considering the Ubuntu recommendation, it is probably a safe bet for noobs, because I know my / (root) partition never gets bigger than 20GB (usually less) no matter what distribution I use, not to mention FreeBSD.

And you can be sure those 25GB will never be a problem for a GNU/Linux user, whereas a Windows user will need more than 20GB of space for Windows to run normally if everything is left at defaults after installation.

Consider this. There are over 1.6 billion (with a "b") Windows computers out there. Virtually each and every one became a unique system within the first few minutes of being booted up the very first time! Users set up their accounts, personalization, networking, security apps, personal apps, and peripherals. And Windows supports them all.

Argumentum ad populum.

If we count smartphones, supercomputers and servers, Linux or Unix-like OSs serve even more users, but this is irrelevant to the discussion.

But I get your point; it is complex to make a one-size-fits-all solution.

By the way, Microsoft had the opportunity with WP (I own a Lumia 640) to be a good alternative to iOS and Android, but they blew that opportunity through their own idiocy, and this is the problem with big companies: they become slow to change things and lose their focus. Just like IBM management did not know what to do with the PC, and Atari and later Commodore blew the Amiga project.

With Windows, you can buy an ASUS motherboard, AMD processor, MSI graphics card, Western Digital hard drive, 8GB of Kingston RAM, put them in a Corsair case, power them with a Seasonic power supply, connect it to a 27" Acer monitor and Epson laser printer, connect to your network via Ethernet, install AVAST security, and install Microsoft Office on it to create your resume/CV, and it will work.

Or you can buy a Gigabyte motherboard, Intel processor, XFX graphics card, Samsung SSD, 16GB Crucial RAM, and put them in a Fractal Design case, power them with an EVGA power supply, connect them to two 24" LG monitors and HP ink jet AiO, connect to your network via wifi, use Windows Defender and install LibreOffice on it to create your resume/CV, and it will still work.

You're comparing apples and oranges. Some of this stuff is about standards and some of it is about compatibility. TCP/IP, CUPS, .DOC, you know.
I haven't used any antivirus program besides the default Windows Defender since it became an option. I used Avast and Avira (on XP and Win 98) a long time ago; I never liked AVG, and the other better-known ones (NOD32, Kaspersky, BitDefender, you name it) were shareware or something with fewer options available.

By the way, Apple, before Steve Jobs came back in charge, tried this approach by allowing Macintosh clones. You had Radius (which specialized in Macintosh peripherals and accessories) making Macintosh clones, among others.

The same applies to GNU/Linux, FreeBSD and some other more "exotic" OSs, like HaikuOS. You need to comply with certain standards, and then you don't have a problem with compatibility.

And you could buy AMD (there were also Cyrix and NexGen in the old days) instead of Intel to run x86 instructions. Fortunately, thanks to advances in process technology and software development, there is a rise in OpenRISC and similar architectures, so people will have another alternative.

There is a reason the PC is/was called IBM PC compatible. All those manufacturers follow certain standards required to build IBM PC compatible computers.

Cases are irrelevant here (you can have computer hardware without them, but it is not practical or safe); they just need to follow certain form factors: ATX, EATX, ITX etc. They are just boxes where you put your hardware, no pun intended. :)

If you are a Toyota mechanic working in a Toyota dealer's service center, and you see nothing but broken-down Toyotas all day, if you don't keep an open mind and set aside any preconceived notions, you could easily start to believe Toyota makes lousy cars.

Automotive analogies for computers are usually awkward and lousy at best, but they are useful and simple for describing something to technologically inept people.

Cars don't change the way hardware and software do in the IT industry, and especially not according to Moore's law, otherwise we would have flying cars. Engines have the same working principle, be they petrol, diesel or electric. There are small variations, but there isn't as much room for advancement as in the IT industry.

For sure, if you are a Toyota mechanic repairing only Toyota cars every day, you could get the wrong impression, but you could share experiences with other mechanics repairing other cars and make some kind of comparison.

It is the same with laptops. Some laptops could have more returns than others, but you need to see whether that is the result of more usage (more buying and consequently more returns) or whether they are really more prone to failure.
 
The pagefile size should not be messed with; I don't know why MS still lets people try to change it. Paging is a critical component of virtual memory. Contrary to the popular belief that you don't need it if you have enough RAM, any OS will always use virtual memory and therefore will always need paging.

When the OS needs to allocate memory for a program, it does not hand the program a physical address; instead it does this through "memory pages" in a virtual address space that can be much larger than the physical one.

In other words, software always interacts with memory through these pages and never directly with physical locations, no matter how much memory is available, so you can see why there is no point in trying to mess with it.

So it doesn't matter what it does, just let it do its thing.
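To make that concrete, here's an illustrative Python/ctypes sketch (Windows only, 64-bit assumed; it relies on the documented VirtualAlloc/VirtualFree behaviour). Reserving address space costs nothing physical, committing it only charges against the commit limit (RAM plus page file), and physical pages are handed out only when the memory is actually touched:

Code:
# Sketch: virtual address space vs. physical memory, shown with VirtualAlloc.
import ctypes

MEM_RESERVE    = 0x00002000
MEM_COMMIT     = 0x00001000
MEM_RELEASE    = 0x00008000
PAGE_READWRITE = 0x04

kernel32 = ctypes.windll.kernel32
kernel32.VirtualAlloc.restype = ctypes.c_void_p
kernel32.VirtualAlloc.argtypes = [ctypes.c_void_p, ctypes.c_size_t,
                                  ctypes.c_ulong, ctypes.c_ulong]
kernel32.VirtualFree.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_ulong]

SIZE = 8 * 1024 ** 3   # 8 GiB of *virtual* address space

# Step 1: reserve only -- pure bookkeeping in this process's address space,
# no RAM and no page file consumed.
base = kernel32.VirtualAlloc(None, SIZE, MEM_RESERVE, PAGE_READWRITE)
print(f"Reserved 8 GiB at {base:#x} (no RAM, no page file used yet)")

# Step 2: commit a 64 MiB slice -- now it counts against the commit limit
# (RAM + page file), but still no physical pages are assigned.
committed = kernel32.VirtualAlloc(base, 64 * 1024 ** 2, MEM_COMMIT, PAGE_READWRITE)
print(f"Committed 64 MiB at {committed:#x} (commit charge rises, RAM still untouched)")

# Step 3: touch one page -- only now does Windows back it with physical memory.
ctypes.memset(committed, 0xAB, 4096)
print("Wrote to one page -- only touched pages need physical backing")

kernel32.VirtualFree(base, 0, MEM_RELEASE)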

A lot of us learn this the hard way. There will be some random thing that seems to benefit (I recall turning off the pagefile greatly improved one specific aspect of Skyrim performance), but then you slowly discover all these little things that depend on it. I also used to think you could get away with shrinking it down in the days of limited SSD sizes, but ultimately it's best not to touch it at all.
 
You say the PF keeps growing. I say that's what it is supposed to do! But as seen here with Windows 10, it should not get larger than 4GB unless your system is experiencing a bunch of crash dumps - in which case you have other issues to deal with.

You're always right but this is plain wrong.

I see much bigger page files on a daily basis in my own rig. And MS also agrees. I'm frequently looking at 11-13 GB page files. My OS disk is a 1TB SSD.

 
Just a little update: when my RAM usage was about 11 GB, the pagefile hit a whopping 20 GB. I closed some tabs, and when RAM was at 8 GB the pagefile shrank to 12800 MB. So I will correct myself: Windows does dynamically change the size of the pagefile within a session, but not as I expected or would like, considering my experience on GNU/Linux.

I really don't like how the pagefile is managed, because I am all for efficiency and against wasting resources, in this case valuable free space on storage drives, no matter how big they are.

A lot of us learn this the hard way. There will be some random thing that seems to benefit (I recall turning off the pagefile greatly improved one specific aspect of Skyrim performance), but then you slowly discover all these little things that depend on it. I also used to think you could get away with shrinking it down in the days of limited SSD sizes, but ultimately it's best not to touch it at all.

Correct. In the case of Windows, it is best practice not to mess with default system settings; you could make your system unstable or run into weird behaviour.
 
You're always right but this is plain wrong.
:roll: Yes, I am always right - except when I'm wrong. And that happens. But I tend to practice what I preach in this regard and do my homework before posting so it doesn't happen often.

So please note I specifically said it "should not get larger than...". And I was citing that Microsoft reference I linked to. It was not a claim I personally was making.

Still, to your example, I see nothing wrong with a 13GB PF on a 1TB SSD - except it does suggest, as your quote so notes, that you have been having some error issues and Windows is preparing for crash dumps. I suggest you keep an eye on your Event Viewer.

I also note if you manually set your PF using the old (and totally obsolete) rule of thumb, then according to your system specs, your PF would be 24GB in size.

I keep getting the impression some feel page files are evil. They're not. They are good things.

but not as I expected or would like, considering my experience on GNU/Linux.
:confused: Why would you expect (or like) the Page File on Windows to be sized the same as the swap file on GNU/Linux? They are totally different operating systems and surely you were running totally different programs. Frankly, I would be surprised if they were the same size.
 