
Pagefile "anomalies"?

I've never seen so much complete and utter crap about memory management and the pagefile/swap use on a single forum.

Let's go through all the falsehoods in this thread:

1) "Windows cannot work without pagefile as such a configuration causes BSODs and other bad things".

I've been running Windows/Linux without swap since I got a gig of RAM back around 2000 and I've never had a single issue because of that. None. Ever. Maybe my own example is not enough? OK, over the past 20 years I've managed over 200 workstations (including the ones used for 3D modelling/CAD/rendering/authoring) and over two dozen Windows servers most of which ran without a pagefile. Zero issues.

2) "Pagefile presence will always make your computer work/run faster".

Windows and other OSes may page out applications you're currently using. Imagine you've put some of them in the background: once you switch back to them, Windows has to read their code and data back from the pagefile - as a result you get delays and sometimes mild stuttering.

3) "The pagefile grows because ... reasons even when you have gigabytes of RAM still free".

No, no, no! The reason it grows is that Windows and other OSes may prioritize the disk cache over running applications, which means if you run a game that reads gigabytes of data from disk (textures, levels, animations, sounds, etc.), Windows often decides to ... page out other running applications, and your pagefile use will grow.
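You can actually watch this happen. Here's a minimal sketch, assuming the Win32 API (GetPerformanceInfo comes from psapi; link with psapi.lib) - run it before and during a game's loading phase and you can see the commit total climb while physical memory stays available:

```c
/* Minimal sketch, assuming the Win32 API; error handling kept short.
   Prints physical RAM, system cache and commit charge so you can watch
   the commit total climb while gigabytes of RAM are still "available". */
#include <windows.h>
#include <psapi.h>    /* GetPerformanceInfo; link with psapi.lib */
#include <stdio.h>

int main(void)
{
    PERFORMANCE_INFORMATION pi;
    pi.cb = sizeof(pi);
    if (!GetPerformanceInfo(&pi, sizeof(pi)))
        return 1;

    /* All counters are in pages; convert to MB. */
    SIZE_T pages_per_mb = (1024 * 1024) / pi.PageSize;
    printf("Physical total:     %zu MB\n", pi.PhysicalTotal / pages_per_mb);
    printf("Physical available: %zu MB\n", pi.PhysicalAvailable / pages_per_mb);
    printf("System cache:       %zu MB\n", pi.SystemCache / pages_per_mb);
    printf("Commit total:       %zu MB\n", pi.CommitTotal / pages_per_mb);
    printf("Commit limit:       %zu MB\n", pi.CommitLimit / pages_per_mb);
    return 0;
}
```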

Probably I haven't covered everything in this thread, but the bottom line is: you can perfectly well run your PC without a pagefile if you have enough RAM.

In Linux you need swap if you use hibernation. Other than that, again, there's no need to have it if you have enough RAM.

For me, I haven't run into any oddball issues with Windows 10 with the initial size of the pagefile set to 16 MB and the maximum size set to 8 GB.

4. If you're hellbent on having a pagefile, you must set the minimum and maximum sizes to the same value to avoid rampant fragmentation, which causes slowdowns and reduces the chances of restoring data successfully.

5. Also, you must create the pagefile right after installing Windows, because then Windows will most likely allocate one contiguous chunk of disk space for it.
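For what it's worth, you can check how the pagefile is currently configured without clicking through dialogs. A minimal sketch, assuming the Win32 registry API and the well-known "PagingFiles" value (link with advapi32.lib):

```c
/* Minimal sketch, assuming the Win32 registry API; link with advapi32.lib.
   Reads the REG_MULTI_SZ "PagingFiles" value, where Windows stores the
   pagefile configuration ("C:\pagefile.sys 16 8192" = path, initial MB,
   maximum MB; "0 0" means system-managed). */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[1024];
    DWORD size = sizeof(buf);
    LSTATUS rc = RegGetValueA(
        HKEY_LOCAL_MACHINE,
        "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
        "PagingFiles", RRF_RT_REG_MULTI_SZ, NULL, buf, &size);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "RegGetValueA failed: %ld\n", (long)rc);
        return 1;
    }

    /* REG_MULTI_SZ is a block of NUL-separated strings, double-NUL terminated. */
    for (const char *p = buf; *p; p += strlen(p) + 1)
        printf("%s\n", p);
    return 0;
}
```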
 
Sorry but the complete and utter crap is most of what you just said! :(

1. No one said Windows cannot work without a PF.
2. No one said the PF will make the computer work/run faster.
3. I don't know your reason for 3 but again, no one said you cannot run without a PF.
4. Well, there we agree - you don't need to set a "fixed" size.
5. For one, Windows will create it by default. And for another, a contiguous chunk only matters with hard drives.
 
It may seem random to you and me, but there are protocols and algorithms at work. Much depends on how fragmented the drive is and how much free space there is.

You are correct. Anyway, I don't remember the last time I defragmented a drive (considering the size of the drives we all have, it would be ludicrous, and I don't even partition my drives anymore; maybe I would if I had one large drive). NTFS and Unix-based or Unix-inspired file systems are more resistant to fragmentation, or they work in a way that means you don't need to defragment the drive routinely, as was recommended for FAT32, for example.


5.10.11. Fighting fragmentation?
When a file is written to disk, it can't always be written in consecutive blocks. A file that is not stored in consecutive blocks is fragmented. It takes longer to read a fragmented file, since the disk's read-write head will have to move more. It is desirable to avoid fragmentation, although it is less of a problem in a system with a good buffer cache with read-ahead.

Modern Linux filesystems keep fragmentation at a minimum by keeping all blocks in a file close together, even if they can't be stored in consecutive sectors. Some filesystems, like ext3, effectively allocate the free block that is nearest to the other blocks in a file. Therefore it is not necessary to worry about fragmentation in a Linux system.

In the earlier days of the ext2 filesystem, there was a concern over file fragmentation that led to the development of a defragmentation program called defrag. A copy of it can still be downloaded at http://www.go.dlr.de/linux/src/defrag-0.73.tar.gz. However, it is HIGHLY recommended that you NOT use it. It was designed for an older version of ext2 and has not been updated since 1998! I only mention it here for reference purposes.

There are many MS-DOS defragmentation programs that move blocks around in the filesystem to remove fragmentation. For other filesystems, defragmentation must be done by backing up the filesystem, re-creating it, and restoring the files from backups. Backing up a filesystem before defragmenting is a good idea for all filesystems, since many things can go wrong during the defragmentation.
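As an aside, you can measure fragmentation on Linux yourself. A minimal sketch, assuming a Linux system with kernel headers installed - it counts a file's extents via the FIEMAP ioctl, the same mechanism the filefrag(8) utility uses:

```c
/* Minimal sketch, assuming Linux with kernel headers installed.
   Counts a file's extents via the FIEMAP ioctl (what filefrag(8) uses);
   1 extent = fully contiguous, more extents = more fragments. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct fiemap fm;
    memset(&fm, 0, sizeof(fm));
    fm.fm_start  = 0;
    fm.fm_length = ~0ULL;   /* map the whole file */
    fm.fm_extent_count = 0; /* 0 = only count extents, don't return them */

    if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) { perror("FS_IOC_FIEMAP"); return 1; }
    printf("%s: %u extent(s)\n", argv[1], fm.fm_mapped_extents);
    close(fd);
    return 0;
}
```

One extent means the file is fully contiguous; the more extents, the more fragmented it is.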

The only "known" on where a Kernel puts stuff on mechanical drives is in the outer parts (larger radius = higher linear speed = lower access time).

That's why GNU/Linux distributions recommend putting /boot and the swap partition at the beginning of the disk drive (which maps to the outer tracks).

It is different for CD-ROMs, if memory serves me well: discs are read from the inner radius outward, and there were many approaches in the technology to optimizing this - CAV vs. CLV, Kenwood's TrueX.

So why not tinker with Windows? Worst case, you are going to reinstall it.

I don't want to bother with phone activation calls for Windows and keys for various software, and I don't want to bother with disk images. But I could play with Windows in VirtualBox, VMware and the like. I do like the Windows 10 reset option, though.

I like to tinker with things, I reinstall Manjaro every time it bricks itself

I expect a rolling-release distro to brick itself. :)

Edit: If Windows spreads its bits and pieces over all drives introduced to the system, enable SATA hotplug/hotswap in the BIOS. Windows will then reconsider and keep itself to C:

I enabled hotplug for safety reasons, in case I ever need to change a drive while the computer is on, or whatever.

Umm, nope. Not how it works. When the drive is formatted, the locations for the file allocation tables and partition tables are established in those outer tracks by the file system doing the formatting - not the "kernel". The kernel is part of the OS, remember.

The kernel is the brain of the OS - the most critical and important part of the OS as you know it.

But you can take a hard drive formatted by Windows and use it with Linux. FAT32 is supported by Windows, most Linux distributions, OpenBSD and Mac OS.

You can read and write NTFS with NTFS-3G, thanks to reverse engineering (not always recommended - I think it's still considered somewhat experimental, and you need to be careful not to use characters that are illegal in Windows filenames - but I never experienced problems). FAT32 is so old and irrelevant to Microsoft that every OS and then some can read, write and support it.

By the way, Linux (the kernel, for those who don't know the distinction - not the GNU userspace) supports many other file systems which you can use as your default file system, such as XFS (originally for IRIX, by Silicon Graphics) and JFS (for AIX, by IBM), and there are other file systems designed specifically with Linux in mind, like ReiserFS (the official file system of SuSE a long time ago, before they changed to btrfs), Reiser4, btrfs, and the most used and default ext family (currently ext4).

When the computer needs to store data on the drive, it doesn't just throw that data at the drive to have it saved in "random" locations. No! When new data is to be stored, the file table "map" is consulted to see which sectors are free. The controller is then instructed to move the R/W head to that specific (not random, not unknown) location and write the data in an orderly pattern. Then the file table maps are updated to show which locations are now unavailable.

Because of the interplay between file size and cluster size, one file can end up scattered around the disk - that's fragmentation. A large file with a small cluster size means many fragments.

"Random" would suggest it would be like throwing a dart at a dartboard after being spun around 3 times while blindfolded and then stuffing the data wherever the dart landed. No. The map is analyzed and then the storage location is selected based on those "known" available locations.

Maybe I am wrong, but when I say random, I mean the closest cluster usable for writing, so the head doesn't have to move around too much.

Now, reading the various segments of the stored data may be done in a somewhat "random" order, with all the bits then reassembled into the correct order in memory. This allows the read/write head to gather up the fragments more quickly, based on the proximity of their locations on the disk, rather than sequentially (for example, the next word in a sentence). This means the various "fragments" may be read into memory in no "apparent" order (which is where "random access" comes from). But "accessing" already saved file segments is not the same thing as "writing" new ones to disk.

So again, it may seem file segments are written randomly; they are not. And they are not stored in previously "unknown" locations either. The file allocation tables are actively and constantly keeping track of and mapping each and every available and used location.


In computing, Native Command Queuing (NCQ) is an extension of the Serial ATA protocol allowing hard disk drives to internally optimize the order in which received read and write commands are executed. This can reduce the amount of unnecessary drive head movement, resulting in increased performance (and slightly decreased wear of the drive) for workloads where multiple simultaneous read/write requests are outstanding, most often occurring in server-type applications.

The OS does not tell the drive where to put files. When you set the size manually, you certainly did not tell the drive where to put the file. The controller does that.


Input/output (I/O) scheduling is the method that computer operating systems use to decide in which order the block I/O operations will be submitted to storage volumes. I/O scheduling is sometimes called disk scheduling.
 
4. If you're hellbent on having a pagefile, you must set the minimum and maximum sizes to the same value to avoid rampant fragmentation, which causes slowdowns and reduces the chances of restoring data successfully.
On a hard disk drive you're correct; on a solid-state drive the page file can get away with expanding and contracting without any slowdown caused by fragmentation. That was the main reason I set it to 16 MB, although someone wanted to argue with me about why I was messing with the default system-managed settings.
 
Sorry but the complete and utter crap is most of what you just said! :(

1. No one said Windows cannot work without a PF.
2. No one said the PF will make the computer work/run faster.
3. I don't know your reason for 3 but again, no one said you cannot run without a PF.
4. Well, there we agree - you don't need to set a "fixed" size.
5. For one, Windows will create it by default. And for another, a contiguous chunk only matters with hard drives.

Ah, the guy who perpetuates falsehoods and generally talks complete nonsense has replied.

Everything that I contradicted is here in this topic in one way or another. Can you even read?

1. https://www.techpowerup.com/forums/threads/pagefile-anomalies.260180/post-4135172 https://www.techpowerup.com/forums/threads/pagefile-anomalies.260180/post-4134233
2. https://www.techpowerup.com/forums/threads/pagefile-anomalies.260180/post-4134469
3. Again, no one has said anything remotely correct.
4/5 https://www.techpowerup.com/forums/threads/pagefile-anomalies.260180/post-4137004

Again, what's your education? How many OSes have you ever run? I bet you started with Windows 10, right? Can you at least explain the meaning of all the columns in the Details view of Task Manager? Have you written a single line of code? Does malloc() mean anything to you? Or new?

I'll just laugh and leave this idiotic thread which is rife with "insightful" comments and excitement.

On a hard disk drive you're correct; on a solid-state drive the page file can get away with expanding and contracting without any slowdown caused by fragmentation. That was the main reason I set it to 16 MB, although someone wanted to argue with me about why I was messing with the default system-managed settings.

Which part of "reduces the chances of restoring data successfully" didn't you understand? Should I paraphrase it?

When your files are all over the disk (NTFS allocates in 4 KB clusters by default, i.e. a 4 MB file can have up to 1024 (!) fragments), you'll have a next-to-zero chance of restoring them once your MFT goes kaput, the disk itself dies, or you (or some malware) accidentally delete something and you don't particularly care about backups.
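You don't have to take defrag's word for it either - you can count a file's fragments yourself. A rough sketch, assuming the Win32 API (FSCTL_GET_RETRIEVAL_POINTERS is the documented control code defragmenters use; a real tool would loop on ERROR_MORE_DATA for heavily fragmented files):

```c
/* Rough sketch, assuming the Win32 API. Counts a file's fragments with
   FSCTL_GET_RETRIEVAL_POINTERS; each returned extent is one contiguous
   run of clusters. Tiny files resident inside the MFT fail with
   ERROR_HANDLE_EOF. */
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    HANDLE h = CreateFileA(argv[1], FILE_READ_ATTRIBUTES,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFileA failed: %lu\n", GetLastError());
        return 1;
    }

    STARTING_VCN_INPUT_BUFFER in = { 0 };   /* start from the first cluster */
    union {                                 /* keeps the output buffer aligned */
        RETRIEVAL_POINTERS_BUFFER rp;
        BYTE raw[64 * 1024];
    } out;
    DWORD bytes = 0;

    if (!DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS, &in, sizeof(in),
                         &out, sizeof(out), &bytes, NULL)) {
        fprintf(stderr, "FSCTL_GET_RETRIEVAL_POINTERS failed: %lu\n",
                GetLastError());
        return 1;
    }
    printf("%s: %lu extent(s)\n", argv[1], (unsigned long)out.rp.ExtentCount);
    CloseHandle(h);
    return 0;
}
```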

Also, with SSDs Windows disables defragmentation completely, and for fun I'd recommend that you run defrag in a console (e.g. defrag C: /A /V) and assess the level of fragmentation. You're in for a very big, very unpleasant surprise.


Enough with this circus. Keep on inventing wild theories and saying absolutely ridiculous things.
 
I've never seen so much complete and utter crap about memory management and the pagefile/swap use on a single forum.

Bill_Bright already answered, but I will respond to some of it.

1.) I could believe you up until the workstation part.

I have 16 GB of RAM - not a lot for this day and age, but not small either - and I experienced serious stability issues just from running browsers with multiple tabs open. Some of the tabs were multimedia-heavy, which I presume was the main reason for using so much RAM/pagefile - Opera, Brave, Firefox, you name it, though it doesn't matter much which browser, even if some of them "eat" more RAM than others. I wasn't even running a resource-hungry game, or any video game for that matter. Some DAWs with VSTs here and there, but nothing serious.

As I already said, Windows and Linux behave differently in memory management, and in everything else for that matter. It's apples and oranges. The pagefile (Windows) and swap (Linux) behave differently no matter how similar they are.
By the way, I believe you that you used GNU/Linux without swap and without issues, given enough RAM - but not that you used Windows without a pagefile for any serious or demanding work, even with enough RAM.

No, no, no! The reason it grows is that Windows and other OSes may prioritize the disk cache over running applications, which means if you run a game that reads gigabytes of data from disk (textures, levels, animations, sounds, etc.), Windows often decides to ... page out other running applications, and your pagefile use will grow.

You could be right, and this is what I think is happening with Windows and the pagefile. Currently, my RAM is 74% full (11.9 GB) and the pagefile is 21988 MB.
And this is my pet peeve about Windows and the pagefile.

you can perfectly well run your PC without a pagefile if you have enough RAM.

How much is enough - 640 kB or 64-128 GB? I tell ya, even if you have 128 GB of RAM, you need a pagefile; it is how Windows works. At least that is my case.

In Linux you need swap if you use hibernation. Other than that, again, there's no need to have it if you have enough RAM.

I agree, if you have 8-128 GB of RAM and you're not running a server or memory-intensive programs that exceed your RAM. And even then, an SSD is the safe bet, because a mechanical drive would be the worst option as backup for RAM.

As I said, a mechanical drive is the worst option for backing RAM (swap) or for hibernation. I tried it and it is terrible, no matter how fast the mechanical drive is. Maybe some RAID option would help in this case, but that is way beyond the scope of this issue.
 
"I experienced serious stability issues"

This sounds like crap, sorry. Either your system works or it doesn't. If you don't have enough virtual memory (RAM + pagefile), Windows will show you a yellow alert in the notification area, and it will close the applications that use too much RAM once you don't have any free virtual memory left.
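To make the "no free virtual memory left" case concrete: here's a toy sketch, assuming Windows commit semantics (on Linux, overcommit means the OOM killer usually terminates a process instead). The observable failure when RAM + pagefile runs out is malloc() returning NULL - not a BSOD:

```c
/* Toy sketch - don't run it on a machine doing real work. Allocates and
   touches 64 MB chunks until allocation fails (on Windows, when the
   commit limit of RAM + pagefile is exhausted). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const size_t chunk = 64u * 1024 * 1024;
    size_t total = 0;

    for (;;) {
        void *p = malloc(chunk);
        if (p == NULL) {
            printf("malloc failed after %zu MB\n", total / (1024 * 1024));
            return 0;
        }
        memset(p, 0xAB, chunk); /* touch the pages so they're actually backed */
        total += chunk;
    }
}
```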

Again, I see that many participants of this thread have a severe form of ADHD, so I will repeat: I've been running pagefile-less/swapless for over 15 years now, and I started with just 1 gig of RAM. And no, I don't have 128 GB of RAM: my laptop has 16, and my desktop up until two months ago also had just 16. I've now upgraded my desktop to 32 - not because I ran out of memory, but because I want to have a larger RAM disk.

On the laptop my memory use rarely goes above 6 GB, since I only use it for light web browsing. Again, I have 16 installed because I love having my temporary files (including browser cache) on a RAM disk. A RAM disk also lets you compile applications a lot faster.
 
"I experienced serious stability issues"

This sounds like crap, sorry. Either your system works or it doesn't. If you don't have enough virtual memory (RAM + pagefile), Windows will show you a yellow alert in the notification area, and it will close the applications that use too much RAM once you don't have any free virtual memory left.

I get black screens or my apps shut down unexpectedly. Maybe I have some issues besides Windows - compatibility, bad coding, whatever. By the way, I never experienced this problem with any of my Intel-based rigs previously, nor even with an s939 Athlon 64 + DFI LANParty nF4 SLI-DR, but I do with my Ryzen 7 on (Asus) B350 / (MSI) B450 chipset/UEFI boards. With the highly praised MSI B450 GAMING PRO CARBON AC I got S3 sleep issues with the official v16 UEFI (not to mention the fiasco, courtesy of AMD AGESA, that removed the Click BIOS 5 bling), and with the Asus TUF B350M-PLUS I had problems running my G.Skill Ripjaws V at 3200 MHz, while my friend runs his Kingston DDR4 at 3200 MHz on this board without problems.

By the way, the pagefile is not "virtual RAM" the way swap is on Linux: if it were, nothing would be written to the pagefile as long as you had enough free RAM, and that is not how Windows behaves. Even if you had 128 GB of RAM, a 1 MB pagefile would still fill up.
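You can see those numbers for yourself. A minimal sketch, assuming the Win32 API; note that ullTotalPageFile, despite its name, is really the commit limit (RAM + pagefile):

```c
/* Minimal sketch, assuming the Win32 API. Compares physical memory with
   the commit numbers - useful for spotting pagefile use while plenty of
   physical RAM is still free. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);
    if (!GlobalMemoryStatusEx(&ms))
        return 1;

    printf("Physical: %llu / %llu MB free\n",
           (unsigned long long)(ms.ullAvailPhys / (1024 * 1024)),
           (unsigned long long)(ms.ullTotalPhys / (1024 * 1024)));
    printf("Commit:   %llu / %llu MB available\n",
           (unsigned long long)(ms.ullAvailPageFile / (1024 * 1024)),
           (unsigned long long)(ms.ullTotalPageFile / (1024 * 1024)));
    printf("Memory load: %lu%%\n", ms.dwMemoryLoad);
    return 0;
}
```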

You are right - when I had a custom-sized pagefile, some of my browsers would sometimes crash all of a sudden.

I've now upgraded my desktop to 32 - not because I ran out of memory, but because I want to have a larger RAM disk.

I am a big fan of RAM disks, because of the speed. And I would like to see some form of PCIe card with RAM expansion slots, including a backup battery in case of a sudden loss of power.

On the laptop my memory use rarely goes above 6 GB, since I only use it for light web browsing.

No doubt about it.

Again, I have 16 installed because I love having my temporary files (including browser cache) on a RAM disk. A RAM disk also lets you compile applications a lot faster.

 
Anyway, I don't remember the last time I defragmented a drive
Well, unless you changed the defaults - and there's no reason to - you don't need to remember because Windows automatically defrags hard drives regularly anyway.
 
GTAV will hard crash if you don't use a pagefile.

I've yet to see a single crash but this thread is full of people making things up, so I'm not surprised.

Well, unless you changed the defaults - and there's no reason to - you don't need to remember because Windows automatically defrags hard drives regularly anyway.

Windows never defrags SSDs automatically.

Does this thread attract the people who know nothing about OSes/computing and have completely cocked up systems?

I get black screens or my apps shut down unexpectedly.

Well, here we go. You have issues either with your RAM modules (running memtest86 for a few hours is always a good idea), or with your GPU, GPU drivers, or some broken Windows drivers. Your issues have nothing to do with the pagefile or its size.

I have never in my entire life seen any "black screens" in Windows.

I wonder how many people in this thread mix up RAM/pagefile issues with something else entirely. Probably most if not all.
 
Ah, the guy who perpetuates falsehoods and generally talks complete nonsense has replied.

Now you have accused me of perpetuating falsehoods - show us where I made any of those claims you made in your #80 or where I perpetuated them.

Your links in post #80 just illustrate the point. It is your comments that are nonsense.

1. Again no one said Windows cannot work without a PF - including the people you accused of saying so in your links
2. HD64G did NOT say the PF will make the computer work/run faster as you accused him of doing.
3. You could not back up your claims so you didn't link to anything
4/5. Again, nobody said you must have a fixed size. And nobody said you must create the PF after installing Windows.

So, talk about falsehoods and nonsense. You made up everything in your post #76 because no one made those claims - then in post #80 you linked to several posters' posts and claimed they said something they didn't. All the while accusing me of perpetuating falsehoods when you clearly make them up! :mad:

I'm done here.
 
Windows never defrags SSDs automatically.
Windows does defrag the metadata files for the file system on an SSD after a fragmentation threshold is reached.
 
1. Again no one said Windows cannot work without a PF - including the people you accused of saying so in your links

From https://www.techpowerup.com/forums/threads/pagefile-anomalies.260180/post-4135172

Disabling your PF can also wreak havoc on multiple aspects of your system. Some games in particular, though they escape my memory as I refuse to mess with the PF these days (there's no reason to!) would crash upon launch without a PF. You can't just get rid of something that is there for the system to use.

From https://www.techpowerup.com/forums/threads/pagefile-anomalies.260180/post-4134233

The page file is also used for memory dumps. Windows will allocate enough space to do this. Don't mess with it, it's part of your system recovery.

From https://www.techpowerup.com/forums/threads/pagefile-anomalies.260180/post-4138132

GTAV will hard crash if you don't use a pagefile.

I will not reply to any of your complete and utter crap in this thread. Your knowledge of OS internals is minimal if any and your comprehension skills are missing altogether.

You're continuously embarrassing yourself, but again, given the utter computer-internals illiteracy in this thread (GTAV crashes without a pagefile, Windows automatically defrags volumes), I'm not surprised that no one is trying to contradict you. I will ignore notifications about this thread from now on, because I love this saying by Mark Twain: "Never argue with an idiot. They will drag you down to their level and beat you with experience". This thread is a perfect example of that saying.

Windows does defrag the metadata files for the file system on an SSD after a fragmentation threshold is reached.

There's no such thing as "metadata defragmentation". Windows has the MFT, and again, it is never defragmented on SSDs.
 
There's no such thing as "metadata defragmentation". Windows has the MFT, and again, it is never defragmented on SSDs.
That's not what this implies:
This kind of fragmentation still happens on SSDs, even though their performance characteristics are very different. The file system's metadata keeps track of fragments and can only keep track of so many. Defragmentation in cases like this is not only useful, but absolutely needed.
&
Storage Optimizer will defrag an SSD once a month if volume snapshots are enabled. This is by design and necessary due to slow volsnap copy on write performance on fragmented SSD volumes. It’s also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can’t represent any more file fragments) which will result in errors when you try to write/extend a file. Furthermore, more file fragments means more metadata to process while reading/writing a file, which can lead to slower performance.

 
file system doing the formatting - not the "kernel"
Wrong.
Directly from Microsoft:
[attached screenshot]

The only thing that talks to the hardware is the kernel or kernel modules (like drivers).
The file system only "rarely" gets involved, for latency reasons. All the virtual-to-physical address abstraction happens in the kernel. Otherwise every storage call would be 4 context switches (Application->Kernel->FileSystem->Kernel->Application) instead of 2 (Application->Kernel->Application).

Random means the closest known free physical address on the drive at the time of writing. As the outer parts pass under the R/W head more often, they are "preferred".
 
Use a pagefile... don't use it... who the hell gives a rip? I personally find that having it on means no issues like the ones I've run into before. You may never use a pagefile and your system may never run into problems - good for you. But systems are all different; the same thing can happen with video drivers. You could have the same build as someone else, but they have constant driver issues using the same GPU driver you have installed, and you don't run into any issues.

When I re-installed Windows 7 after picking up an SSD, Windows did not set up a pagefile by default; it had it turned off. This is the first time I've ever come across this in my experience. I wasn't looking at what the pagefile was set to after installing Windows 7, but I eventually checked because of an issue I was running into.

I had been doing fine, without issues: streaming, gaming, web browsing and so on for a while.

Fast forward about 6-8 months. I ended up picking up a copy of Shadow of Mordor for cheap, about a year after it came out. I ran the game and played fine for 30 minutes. Game looked good, ran good with all settings maxed out at 5760x1080 on my (still fairly new at the time) 980Ti. I didn't have any good amount of time to put into the game, just wanted to try it out.

The next evening, I start playing. I played for about 2 hours, went to bed.
The next night I started playing, played for about 30 minutes, and the game minimized to the desktop - a low-memory bubble on the taskbar. I turned the game off and turned on an OSD for memory and RAM use.
Started the game again and noticed the GPU memory was flirting with the maximum of 6 GB, but the RAM wasn't going much over 4 GB, and I have 16 GB installed. I played for an hour with no issue.
Played the next day. About an hour in, it minimized to the desktop with the bubble notification about low memory... said screw it and went to bed.
The following night, the same low-memory bubble notification after maybe an hour of playing...
Pulled up the pagefile settings and it was turned off.
I turned it on for Windows to manage, and my memory issues went away; I had no more problems playing the game.

Long story short, just leave it on. You'll never run into any issues if you do, but there's a chance you could if you have it turned off.
 
I've yet to see a single crash but this thread is full of people making things up, so I'm not surprised.
Psychological projection would have us believe you are the liar in this case: since none of your claims can be verified, you're just blowing smoke.

Here on the other hand is actual proof of my claim. GTAV will hard crash if you don't use a pagefile. This is 2 seconds after loading single player game with no pagefile.
 

[Attachment: crash.png - screenshot of the GTAV crash]
Crap continues unabated.

I'm too lazy to search today, so I'll just repost this:

From https://www.tenforums.com/performance-maintenance/80085-why-windows-defragging-my-ssd.html

I've read a lot of articles and threads on this and they all reference this one guy's blog post who works/worked at Microsoft and he's not in the storage division or really knows anything about storage, also this was posted over two years ago. Not one other person, article or employee at Microsoft has ever stated something similar (to my knowledge) and have always stated that Windows doesn't defragment SSDs, full stop. I now have first hand knowledge that it does. This is the one guy I've seen that actually has a clue: Why Windows 10, 8.1 and 8 defragment your SSD and how you can avoid this – Вадим Стеркин . Notice the tweet near the bottom that states " I just talked to that team. Bad message but no actual defragging happens." Yup, the same guy that everybody is referencing to explain that defrag does happen on SSDs that ended up changing his tune. It's my belief that Windows defrags SSDs just like HDDs and this "intelligent defragging" he posted is just pure BS.​
I've been running Windows 10 on two PCs both with SSDs for over a year now and none of their volumes have ever been defragmented.

Meanwhile, can anyone here find a confirmation on Microsoft.com that Windows 10 indeed defrags SSDs, or will you keep citing random dudes who ostensibly worked for the company in the past?

Psychological projection would have us believe you are the liar in this case: since none of your claims can be verified, you're just blowing smoke.

Here on the other hand is actual proof of my claim. GTAV will hard crash if you don't use a pagefile. This is 2 seconds after loading single player game with no pagefile.

Amazing! Anecdotal evidence from a single person in the entire world serves as confirmation that the pagefile is always necessary. I've LOLed.

Meanwhile, the fact that I've had over 200 PCs under my command (and over two dozen servers), most of which never had a pagefile enabled, and everything worked perfectly, is nothing to write home about.
 
Crap continues unabated.
My thoughts exactly - can you not shut up? People have been calling you out on your nonsense and you just keep coming back. I'm done here, since you're blind as well as stupid.
 
People with just 8 gigs of RAM say GTA5 runs better without a pagefile, but what do I know?


Again, it's funny how people who have no IT knowledge whatsoever, who've never coded, who've never run high-load servers, and who don't quite understand how virtual memory works try to argue with me. Never laughed so much, keep it up)))

@oobymach

So, instead of arguing you've resorted to calling names? How fitting for this discussion. Lol.

Meanwhile, no relevant links have been posted to counter any of my arguments. What a lovely - sorry, inane - discussion.
 
From what I gathered from researching and asking some questions (all the while with the page file off), I did quite a bit of learning. Mr. Bright actually made some very valid points. He's not perfect, but the way it's described made me wonder.

So really, it was the question "why turn it off if the system runs as intended with it on?" that caught my attention and got me to research this a bit, along with some good information - perhaps not all of it on the money, but let's see if I did learn something!!!

Firstly, the page file is reserved for when programs use up all the system RAM, so there is a place to put additional data - which, by the way, won't come from processes at normal or real-time priority.
So while your game uses up a bunch of RAM, the operating system will move lower-than-normal-priority programs to the page file instead of keeping them in system memory.

After all, the page file is referred to as virtual memory and is treated as such. Because of the slow access times of HDDs and the like, it's the lower-priority programs that get moved to the page file.

The interesting part.....

While doing some research, I found that if you have enough RAM installed, the page file may not really be used at all. The system never wants to use virtual memory unless it really, really needs to.

Either way, it seems that with the ability to have 64 GB of memory in a gaming rig - way more than enough - you could turn off the page file and it wouldn't make any difference, solely because you won't need that extra virtual memory space.

However, I did a little testing myself and found some interesting facts and issues that did occur, but it took some effort. Essentially, I opened a game in Steam and ran it in the background, basically idle but using system RAM, then opened up IE, put on some video, and opened another taxing game. NOW my system RAM was full. I got a pop-up mid-game that system memory was full, suggesting I close some programs. The game was still running just fine even on low system memory. So I closed IE and the game in the background, and the pop-up did not come back.

So then I turned the page file back on, realizing that @Bill_Bright was onto something. I was able to replicate running a lot of tasks and never saw a low-memory issue: the page file absorbed many background tasks and idle programs, allowing me to play the game undisturbed.

Other than that, I've had the page file off, or configured manually for certain tasks. But generally speaking, I've always had plenty of RAM not to need it, as long as I wasn't running a bunch of memory-hungry software. It seemed pointless to keep a lot of programs running in the background while I'm off gaming.

Then looking at virtual machines - yeah, running a page file is a good thing IMO. It will let the VM's OS use plenty of virtual memory as if it were RAM, and you'll never have an issue.

In short, there really is no point in turning off the page file, even if you have lots of memory. Obviously, if you need the disk space, it's time to invest in more drives.
 
BLAH BLAH BLAH. Who the heck cares? Either run a pagefile or don't - at the end of the day, it's your choice.
 
More garbage.
I did provide proof, but clearly you're too stupid to click a picture - photographic evidence of my claim - or maybe you're just blind and cannot see what is right in front of your face. Either way, I stand by my claim that you're blind as well as stupid.
 
Enough. Disagree constructively or find something better to do with your time.
 