
Win7 Memory/Cache further analyzing (mapped file)

Discussion in 'General Software' started by RuskiSnajper, Apr 26, 2011.

  1. RuskiSnajper

    RuskiSnajper

    Joined:
    Feb 18, 2009
    Messages:
    1,607 (0.79/day)
    Thanks Received:
    231
    Location:
    slovenia , europe
    Welcome to the thread where I try to discover something new, or just learn the possibilities that are already out there but hard to find.

    Basically first notes are:

    - Never trust Task Manager graphs/certain values; they are inaccurate and show something else (unknown)
    - Talking about multiple things separated with ---- (but it's basically the same topic / connected stuff)
    - I have mistakenly called Resource Monitor (shown in pics and talked about in this whole thread) "perfmon" ... PERFormance MONitor is something else ... not covered yet (but both have the same exe icon)


    The wrong Task Manager values got me confused again and I was making things up in my head ("oh, there's probably a hidden pagefile in Win7, dammit"), so I got onto checking the lore again ... and recalled my earlier findings ... a lot of typing saved.


    [hr]
    ------------------
    Anyway, there is this "trend" of inaccuracies between diagnostic/info programs and the Win7 Resource Monitor, but it's really that they're showing different stuff: some show "in use" memory, some show "commit charge". So it's all about digging up what is what and calculating.

    However, most programs have no capability to detect "NO PAGEFILE". If there's no pagefile to show, they will show fake/wrong information and somehow mix it weirdly. That goes for games too; SC2, for example, has no idea that I'm running without a pagefile, and when I sent an error report after a crash, the result was quite BIZARRE! (heh). I just can't find any diagnostic program that shows ACCURATE pagefile values when the pagefile is totally disabled. Games would have run much better if they could differentiate and maybe use a different approach with physical-only memory; for example, they could disable those "compensating" features, or whatever you want to call them, which would let the game just use as much physical memory as possible and run as fast as possible. >> id Tech 5 anyone :) ... we'll see.
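    To make the "what is what" digging concrete, here is a small illustrative sketch of why "in use" memory and "commit charge" can legitimately be different numbers, and why a tool labelling commit charge as "pagefile usage" is wrong when no pagefile exists. All figures are hypothetical, not read from any real Windows API:

```python
# Illustrative only: why "in use" RAM and "commit charge" disagree.
# All numbers are made-up examples, in MB.

physical_ram = 6144       # installed RAM
pagefile_size = 0         # pagefile disabled

# "In use" = physical pages currently backing working sets + kernel.
in_use = 2300

# Commit charge = total private memory the OS has PROMISED to back
# with RAM or pagefile, whether or not it is physically resident.
commit_charge = 3100

# Commit limit = physical RAM + total pagefile size.
commit_limit = physical_ram + pagefile_size

print(f"in use:        {in_use} MB")
print(f"commit charge: {commit_charge} MB")
print(f"commit limit:  {commit_limit} MB")

# With no pagefile, the commit limit collapses to physical RAM, so a
# tool that labels commit charge as "pagefile usage" is mislabelling.
assert commit_limit == physical_ram
```

So the same machine can simultaneously report 2300 MB "in use" and 3100 MB "pagefile" in two different tools without either number being a measurement of an actual pagefile.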


    You can find useful information in the other thread, which is also what this thread continues from. There are useful pictures there for explanation, but I could make some new ones for this topic (since that thread is mixed with SC2).
    (Ignore the Starcraft 2 connections; it was a "crossfire" that turned out to be the game's own problem. It's a very buggy, laggy and unoptimized game, plus the singleplayer memory management is screwed up; it makes a small memory leak noticeable only in certain circumstances (4GB RAM + disabled pagefile + system; the system base memory in my case was at a requirement of 700MB, while a default Win7 install is 1100MB ... I disabled useless services and had registry tweaks ... I could have tweaked to an even lower system memory requirement, but that would be impractical as a long-term solution for me ...)
    (more info about sc2 here, useful but not necessary)

    That "disabling of the Windows 7 standby memory (cache/mapped file)" was actually for "elimination of variables" so my Starcraft tests could go further; it was never meant to be used for PC tweaks/testing, since I learned what this cache really is (yes, it's the one in memory, and if apps need that memory it gets cleared, so it's practically "free" memory)


    [hr]

    Appreciated, and well, you had most of it correct while I wasn't sure at the time. Even though you are 100% correct, this time around I'm having no other issue with any game; I'm just trying to squeeze out as much performance as possible! 6GB RAM should be enough for anything (but I don't run video editing or anything heavy like that)




    Simple explanation before we go further:
    [​IMG]



    [hr]
    -------------------------------
    SOMETHING'S CONFUSING ME:


    (low-priority)
    How can a single file, loaded into memory, be 30% active with the rest on standby? ... That is just weird. So if that "standby space" gets needed, would half of the file just get cut off while the "active" part remains in memory (in use)? Is it physically separate or just "tagged" as standby? ... Maybe that standby doesn't correspond to "mapped file" in RAMMap, and the whole file is supposed to be "in use" (or Commit Charge, the stuff that's not free/available), and it's just another context of the word "standby" - I guess?

    [​IMG]


    ... to be continued (I have something to add) ... if there's a DUPLICATE of the same file inside "in use" and inside "standby" ... if that is true, then it pretty much breaks the purpose and makes caching useless.
    Let's leave this for ... later ... this is not a high priority now.
    (low-priority)
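    One way to picture the active/standby split (a toy model only, not how the Windows memory manager is actually implemented): a mapped file is tracked page by page, so part of it can sit in a process working set ("active") while the rest is only on the standby list. Standby pages can be repurposed at any time and re-read from the file later, so nothing is duplicated and nothing is lost:

```python
# Toy model of per-page states for one memory-mapped file.
# The states and numbers are hypothetical; real Windows bookkeeping
# is far richer than this.
PAGE_SIZE_KB = 4

def make_mapped_file(total_kb, active_kb):
    """Split a file's pages into 'active' (in a working set) and
    'standby' (cached, instantly reclaimable)."""
    pages = total_kb // PAGE_SIZE_KB
    active = int(pages * active_kb / total_kb)
    return {"active": active, "standby": pages - active}

def reclaim_standby(state, pages_needed):
    """Repurposing standby pages frees them without touching the
    active part; the data can always be re-read from the file."""
    taken = min(pages_needed, state["standby"])
    state["standby"] -= taken
    return taken

f = make_mapped_file(total_kb=1200, active_kb=360)   # ~30% active
print(f)                      # the 30/70 split from the screenshot
freed = reclaim_standby(f, pages_needed=500)
print(freed, f)               # standby freed; active pages untouched
assert f["active"] == 90
```

Under this model there is no duplicate copy: a page is either active or standby, and "cutting off" the standby part costs nothing because the file on disk is the backing store.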



    [hr]
    -----------------------
    NEW STUFF I FOUND:

    http://smallvoid.com/article/winnt-system-cache.html

    Probably valid for Win7 too, I think. Win7 just had all of that improved in terms of "using the useless free RAM" - but it's the control I'm after, settings to control it.


    Get DynCache here
    Get RAMMap here

    I don't think any of these ambitions would be possible without RAMMap; that's a golden app. Email those guys at Sysinternals for an updated version! (I will anyway.) The latest one is now a little less than a year old.


    [hr]
    --------------------------
    SOMETHING I FORGOT:

    Summary: LargeSystemCache analysis ... which I made because I forgot ... but now I recall: this should be disabled if it's not a server machine.

    (non-critical)
    Other people agreed too ... up until now I always got reports of "LargeSystemCache" being good ... that's the setting in advanced system settings about "optimize for Programs or background services" .... but now I'm reading and probably realizing, of course ... it is really an ILLUSION. It doesn't make a difference when there's no pagefile, because "LargeSystemCache" just filled up the pagefile back in those "single core" XP times. They thought a dual core wouldn't have this problem, but it turns out the lag came from filling up the pagefile, and Microsoft is always talking with a pagefile enabled in mind (I really hate this thing)

    UF... LargeSystemCache is not CPU scheduling; it's a similar setting but something else, and the dialog is missing - maybe that was in Vista. Basically, I have CPU scheduling set to "Programs" and LargeSystemCache disabled in the registry ... (I did this stuff a long time (months) ago, so I'm trying to recall everything :p)

    This is not present in Win7 as an option ... ah, the confusion, but I know it somehow shows up if you enable it or ... ah, forgot.
    [​IMG]

    ... SO basically I did research it back then, and I still had it disabled. Good ... so okay.

    This simple thing says it all (more explanation on that smallvoid site). And that's definitely what I'm trying to avoid. (performance!)
    (non-critical)
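    For reference, the setting in question lives in the registry under the Memory Management key (a .reg fragment; 0 = standard desktop caching, 1 = the large server-style system cache):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
; 0 = optimize for programs (desktop default), 1 = large system cache (server/file sharing)
"LargeSystemCache"=dword:00000000
```

This is the same value the old XP "System cache" radio button toggled; in Win7 only the registry value remains.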





    [hr]
    ------------------------------------------------
    A LOT OF WEBSHIT: (or honest mistakes :p)

    Summary: How the mainstream is wrong about Pagefile/Commit Charge, and the general inability to detect correct pagefile values when a pagefile doesn't exist.

    An example of a program that is a great illustration of how many, many programs have wrong "Pagefile" values, or wrong labels for the given value. This is probably reading DXDIAG info; the DirectX Diagnostic tool ITSELF reports wrong Pagefile values with or without a pagefile. That's because it's not the pagefile you're looking at - it's Commit Charge, a total difference.


    [​IMG]

    As you can see from the image, that program is really only linking all those settings into one place, and the funny part is that "Free memory" doesn't even work. It just zeroes some standby stuff, I guess, but it makes a mess of the graphs too.

    Here's the Vista task manager:
    [​IMG]
    For comparison (pagefile -> commit)



    [hr]
    -------------------------------------

    ADVANCED CACHING POSSIBILITIES:

    (high-priority! I want this badly! So far very doubtful whether it's possible)

    Summary: Rammap --> Disabling or having control of the caching automation systems?


    [​IMG]
    [Why only 188kb :p]

    Why wouldn't we be able to choose to "cache" a whole file, instead of that automated caching algorithm that takes small chunks of A LOT of files - pretty much stuff I used previously and haven't touched for hours? I know this has no negative effect, but the point here is: it could have A REALLY BIG POSITIVE effect if I had the customization to cache A WHOLE PROGRAM INSTALL FOLDER, MUHUHAHAAH !!!!

    I have a few questions, and I'll possibly do my own research into whether it is actually possible to customize the Win7 cache manager.

    Some of the most obvious things are:

    -selecting which files/folders NOT to cache
    -selecting which files/folders to FORCEFULLY ALWAYS cache
    -selecting what percent of total file/folder size to cache (if this thing works the way I see it now)


    I would simply select a file like the one above in the picture, set the "magic option" to 100%, and it would cache not just 188KB of that file but the whole 18.xxx MB of it :D. That would be awesome.

    You would just end up waiting maybe 20-30 seconds longer for Windows to load. If there were an option to set what stuff loads at what time (instead of the automatic gradual expanding it's doing now), I would set it not to cache ANYTHING else at all, only those folders/files that I specified, and those would be pre-loaded.

    If that's possible, then this should be really great. Just imagine.

    Which basically is: replacing 2.5 GB of "a lot of files with small caching" with "a lot fewer files with full caching" ... I can't see that breaking something or making Windows crash. It's so simple; the system won't see the difference :p

    So ... that's very similar to how SuperFetch works, but that's only for EXEs. Although it's a very similar thing ... maybe in the end we'll just find a way to tweak SuperFetch in that direction. Hmmm.



    [hr]
    ---------------------------------
    CUSTOMIZATION IN THE REGISTRY:

    There's something great I found named DynCache. It's a service that makes some more advanced registry edits, which is what I was looking for, but I'll work with this later. Enough for now...

    The other things I did before were SessionPoolSize, SessionViewSize and SystemPages increases. I forgot where the calculation was for how to set them properly. I didn't overkill it, of course; it was a slight increase that would make the system more flexible in low-memory situations - basically more memory "reservation/share" for Win7 than for programs/apps (or I just won't search through 200 bookmarks :p ... later)



    [hr]
    -----------------------
    CONCLUSION NOTES:

    In terms of what I'm working with: it's really not SuperFetch or LargeSystemCache at play here - that's all disabled. It's the low-level integrated memory management of Windows that's at play, and that's what I'm after, just so everyone is clear. It cannot be completely disabled, as I discovered at first ... but it's good stuff; [[[it just lacks options and features for customization!]]] <<<< off to actually find out!


    [hr]
    Specs:
    (pasted in)
    Win7 x64 ultimate
    NOD 32 Antivirus 4.0
    CPU: Intel Q9300 2.5ghz stock
    GPU: Sapphire ATI Radeon HD4870 512MB
    PSU: Enermax 620W Liberty DXX
    APU: Asus Xonar D1 PCI
    HDD: Western Digital Caviar Black 1000GB 64MB cache SATA3 (WD1002FAEX)
    RAM: 6GB - Corsair Dominator 1066mhz DDR2 CL5
    MOBO: Gigabyte P35-DS4 rev2.1 Bios-F14
    KEY: MS SideWinder X4
    MAU: MS SideWinderTM (latest, similar to X5)
    SND: Logitech X-540 5.1 Sorround System

    Settings Tweaks:
    - No Pagefile (HDD virtual memory disabled)
    - No Readyboot (not readyboost) - a kind of prefetcher working at boot*
    - No Superfetch
    - UAC disabled

    Registry Tweaks:
    -optimized memory management (nothing big; I just increased the possible paged/nonpaged pools for better flexibility in low-memory circumstances, so games/programs will fail first instead of the drivers/kernel)
    -Prefetcher disabled

    Service Tweaks:
    - Homegroup Disabled
    - Win Defender disabled
    - Win Firewall disabled
    - Win Search disabled (all indexing)
    - Win Update disabled (manual updates every month)


    [hr]
    ----------------
    Finishing remarks:


    I really hate Win7 Paint , it's quick but it's so awkward/annoying !

    The biggest hatred is probably the unresponsive UI - window lag. I'm trying to improve that as much as possible, because stuff like this annoys me more than anything else about PCs :p; I hate UI lag. (Of course Win7 is like a 90% improvement over XP, but only if you disable the pagefile does the difference become huge.)

    STATUS:
    - Currently getting the DynCache service to work
    Last edited: Apr 26, 2011
    hellrazor says thanks.
  2. streetfighter 2

    streetfighter 2 New Member

    Joined:
    Jul 26, 2010
    Messages:
    1,658 (1.11/day)
    Thanks Received:
    732
    Location:
    Philly
    Wowza, that's a long post! You might want to be more concise because your post is fairly difficult to read.

    Also use the [hr] tag to create line breaks for different sections of a large post.

    Commit charge is the amount of virtual memory allocated to a process. The pagefile is just a representation of virtual memory; in other words they're the same thing.

    I'm fairly certain of this, but it's really hot and my head hurts. :laugh:
  3. RuskiSnajper

    RuskiSnajper

    Joined:
    Feb 18, 2009
    Messages:
    1,607 (0.79/day)
    Thanks Received:
    231
    Location:
    slovenia , europe
    No - http://en.wikipedia.org/wiki/Commit_charge - here, it's wordplay. It's not really about "virtual address space" or anything virtual; it should be "logical", like in Intel's cores: 8 physical and 8 virtual cores make up 16 logical cores (but of course those virtual ones aren't as good as the physical ones)

    Basically, MS got it a bit wrong and never fixed the context. It is still "RAM + virtual (pagefile)"; in theory it should be "logical", because it's counting both as a total. However, this is not the same case as with CPU cores: in CPUs those virtual cores are a big boost, but when we come to RAM-vs-disk, it's a whole other story, since virtual RAM is so slow.

    Commit Charge Limit wouldn't show 6140 MB (= 6GB) if it were reading a real pagefile value; since no pagefile is present, this is showing correctly.



    About the post - well, yeah, it's separated a bit, but I'm just seeing what's possible and what's not. Just tell me which part is not understandable, or which context (quote it), and I'll gladly explain.

    ----------Pagefile Theory:

    In theory, the pagefile sounds like a good thing, and all those MS "explanations" ... are all theories on paper - as if the pagefile were really used to page out the very standby memory/cache that sits unused. In reality, you see that the theory doesn't work as well as a lot of people always say; it's not nearly as good as it sounds. If it would transfer the whole "mapped file" (cache/standby ... the large blue bar) to the pagefile on disk, that would probably be a lot better.

    And as for the side effects - UI lag, unresponsiveness, game issues - many things are because of crappy, unoptimized programs, which was the case with SC2: really bad lag with the pagefile, not to mention a huge 30-second lag after mission load and a 10-second temporary freeze every 3-4 minutes (during the 10-sec temporary freeze, HDD activity LED at 100% ...) etc. And all of this disappeared when I disabled the pagefile.


    A lot of points have been made about the pagefile "paging out the memory that's not active" --- yeah, but that's the "alternative theory", grabbing that chunk to save more RAM for other things. Why the heck can't we get rid of the mapped file altogether (low-mem systems, testing purposes, just proving whether it's possible!)?
    Last edited: Apr 26, 2011
  4. RuskiSnajper

    RuskiSnajper

    Joined:
    Feb 18, 2009
    Messages:
    1,607 (0.79/day)
    Thanks Received:
    231
    Location:
    slovenia , europe
    DynCache Service won't start ...


    Error: 1153 The specified program was written for an earlier version of windows.


    Guess this was a fix for Vista and of course for Windows Server 2008. Maybe it's integrated into Win7 already, but the problem is finding those registry entries.


    Ah now i found it:

    So, the big question: are the same OPTIONS integrated and available in Win7 without the service?
  5. THRiLL KiLL

    THRiLL KiLL

    Joined:
    Oct 26, 2009
    Messages:
    711 (0.40/day)
    Thanks Received:
    140
    Location:
    Seattle
    Correct me if I am wrong, but isn't pagefile memory evil?

    Yes, you would get more memory to play with, but at the same time it is slow as dirt, since it runs at the speed of your hard drive.

    With DDR3 dropping to bargain-bin prices (Newegg had 4GB x2 1600 G.Skill Ripjaws for $60 yesterday!)

    I can't see a system having less than 4GB as a minimum.

    I run 8GB on most of my systems, and I am going to bump my main system to 16GB soon, as I run a ton of virtual machines.
  6. RuskiSnajper

    RuskiSnajper

    Joined:
    Feb 18, 2009
    Messages:
    1,607 (0.79/day)
    Thanks Received:
    231
    Location:
    slovenia , europe
    Exactly, it's evil! :p Not by itself - it's a mix of multiple factors that make it seem evil; it's not only the pagefile that's guilty.

    Not only is it a badly implemented feature by MS (compared to Linux), but the fundamental idea of using "disk as RAM" is totally out of scope of any positive theory. As I said, what you find across the web is only "how it should work in theory", and everyone links to that because it sounds official, rather than coming from a random guy :shadedshu

    I'm not against it or anything - never said that - I'm just surprised it is still enabled by default in Win7; the OEM minimum memory is 4 GB currently, and it's been that way for quite some time now.

    Then come the side effects of improperly coded games and bugs, which make the pagefile even more exposed to such lag and similar things.

    The problems I'm more worried about are games/programs always being developed with a pagefile in mind. There's really no program/game I have ever seen that would use a different kind of approach when running on pure RAM ... those programs always report "Pagefile/virtual: xxxx" -> it's all wrong.

    The good side of the pagefile basically only appears when it comes to low RAM, but if you have low RAM, you surely aren't playing games in the first place, and surely aren't the type of user who even needs performance.

    Then there's Recovery and Diagnostics: without a pagefile, Windows cannot make Full Memory Dumps or Kernel Memory Dumps ... but AFAIK it can make small memory dumps --- nope, not even that, apparently ...

    [​IMG]
    So for a small dump you need a 1 MB pagefile or larger.
    For a kernel dump you need a 400MB pagefile ...
    For a full dump you obviously need a pagefile covering all of the physical RAM + 100 MB.

    The "Complete Memory Dump" option is not visible/selectable; a registry edit is required to enable it. (simple Google)

    Which comes down to a funny thing: the pagefile seems to be enforced in an ignorant way. They just chose to use the pagefile system for recovery/dumps when they could have made it separate. It's basically a joke that it works like that - that's computer common sense - but in reality it was designed like this for multiple, unknown reasons.

    One reason is ... the lazy option: saving time and cost.

    But maybe they wanted to hide from a lot of people what in the early days would probably have been constant low-memory errors (memory was expensive at the time), as well as to have a lot of computers with dumps for easier fixing (so if your Windows gets broken, the service shop has a dump to possibly find the problem, or can just send it to MS and call it a reformat) ...

    This is wild speculation, but Microsoft might still be doing it on purpose: forcing people to keep using the pagefile (the real source of the problem) and campaigning for it as something so, so good, in order to push the assumption that the user needs more RAM ("that's why it's lagging for you") ... so the memory companies sell more RAM? This also opens the market to a whole chain of hundreds of "3-click speed up your PC" products. It also makes developing games a lot easier: the nooby devs just throw everything into virtual pagefiles, never think of optimization, and you basically end up with a working but laggy game - and they HIDE the failure behind the pagefile ("oh, all of them have enough memory ... 2 GB + 4GB pagefile ... 6GB should be more than enough"). ---> Which really makes me think - you guessed it - buying more RAM wouldn't solve the lag problem! It's still a pile of mess with multiple buggy programs in the pagefile, which would prompt people to buy even more RAM?

    The pagefile is just making the devs' job easier and hiding a lot of problems. I wouldn't say that was the intent, but that's how it has been exploited. When will Microsoft stop supporting this pile of mess and force devs to write programs that actually have good memory management? Because seriously, with a pagefile those leaks can go there undetected, but without one, all those leaks would fill the RAM and bam ... I bet there would be tens of thousands of memory freezes around the world per day or week if they decided to disable the pagefile by default.

    And this "unintentional misinformation campaign" - I still don't accuse Microsoft or anyone, but this assumption was in the air that the pagefile is essential. Yes, of course everything breaks and programs lose stability - OF COURSE it all makes sense now - but in the end it only comes down to: THE PROGRAM SUCKS, NOT THE ABSENCE OF A PAGEFILE. IF A PROGRAM WORKS FINE WITHOUT LEAKS, BUY MORE RAM.

    It's the lazy way to dump stuff into the pagefile and call it a day ... that's where all those side effects come from. Even I didn't realize this until I did a little thinking.

    It's without the pagefile that I was able to discover the memory leak SC2's singleplayer has. Blizzard is without ANY response on that; they obviously want to protect the good image of their bnet 0.3 (still the best thing out there, but ... come on, it won't be a real 2.0 for another 2 years)


    Another thing I thought of, now that you understand: a simple theory emerges that no matter how much RAM you have, having a same-sized pagefile enabled will make NO difference versus having a lot less of both, as long as it's still enough.
    = For example, performance-testing 16GB RAM / 16GB pagefile VS 6/6 while running the same game (easier to detect changes)

    If somebody has 16 GB of RAM, or at least more than 8, they could make this visible for real; maybe they would need to ramp up the system load to the same ratio a 6/6 GB system would have. The test could be completed on the first try if the theory is 100% correct. (However, I'd need that person to have Starcraft 2 installed, as a standard example of a buggy/unoptimized game.)

    I can make a 6/6 GB test, but I only have access to 8 GB max (those extra 2GB aren't available for me to just grab on the fly, though; that would require hardware work)



    [hr]
    Yet another theory:
    There's a vicious circle; both sides are fueling each other. Microsoft listens to devs about performance and memory concerns (and nobody points out the pagefile), so Microsoft goes and optimizes the file system cache (standby/mapped file) - which would just minimize the pagefile mess a bit, but nothing actually IMPROVES beyond that line; I'm talking about the end result (user experience), even though they are totally separate things. So MS keeps fixing other stuff instead of the source (or could actually tell devs to stop relying on the pagefile when writing programs). On the other hand, Microsoft tells devs to keep using the pagefile, as disabling it "will cause stability problems and possibly loss of data, freezes, etc." ... so MS keeps pumping the pagefile as the default, and the devs take it for granted when making programs.


    The required specs about RAM are pretty much always specific to a standard they just created out of thin air (most developers test and base their "system requirements" on default settings!). There are so many factors at play in "how much RAM do I need to play this game" ... everybody has to calculate that on their own.

    But it's obvious that you will MOST PROBABLY need +100% of the RAM that's written in the game's system requirements, simply because the pagefile's default is even more than 150% of installed RAM (that 50% is overkill for standard PCs). So we end up at 4-6 GB of RAM to play any game you want. You can run anything with 5 too, but the point is dual channel.

    6 GB is the golden amount: you have dual channel in both DDR2 and DDR3 and can run any game. 4GB is not enough, as we can see ... the leaky SC2 singleplayer freezes after an hour.
    Last edited: Apr 26, 2011
  7. W1zzard

    W1zzard Administrator Staff Member

    Joined:
    May 14, 2004
    Messages:
    14,796 (3.93/day)
    Thanks Received:
    11,501
    using a pagefile will reduce memory usage.

    take the example of gpuz.. it comes with a logo for all graphics cards (intel, nvidia, s3, amd). if you have only one card you only need one logo. yet the binary contains all four and is loaded in memory. without a pagefile you always have to keep it in memory. with a pagefile windows puts the memory of the logos into the page file and the memory pages become available for other things

    every exe (including drivers and dlls) has sections that are unused or rarely used - these go into the pagefile to free up usable memory
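    One nuance worth adding (a simplified, portable sketch, not a claim about GPU-Z internals): read-only file-backed pages can always be dropped and re-read from the original file on disk, so it is chiefly private, writable memory that needs the pagefile as its backing store. The two kinds can be contrasted like this:

```python
import mmap
import os
import tempfile

# File-backed memory: the file itself is the backing store, so the
# OS can drop these pages any time and re-read them later -- no
# pagefile needed for them.
fd, path = tempfile.mkstemp()
os.write(fd, b"logo data " * 1000)
os.close(fd)
with open(path, "rb") as f:
    backed = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    assert backed[:9] == b"logo data"   # pages faulted in from the file
    backed.close()
os.remove(path)

# Private (anonymous) memory: no file behind it. With no pagefile,
# every byte of this must stay pinned in RAM until it is freed.
private = bytearray(b"decompressed logo " * 1000)
assert len(private) == 18 * 1000
```

So without a pagefile, it is data like those decompressed, written-to buffers that stays stuck in RAM, while the untouched parts of an exe's mapped image remain cheap either way.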

    as i mentioned before, on windows 7, the pagefile also serves as memory leak protection because it puts the leaked memory into the pagefile

    the general user blames microsoft for slow application performance, not the developers
  8. RuskiSnajper

    RuskiSnajper

    Joined:
    Feb 18, 2009
    Messages:
    1,607 (0.79/day)
    Thanks Received:
    231
    Location:
    slovenia , europe
    Of course, that's how it should be, and it's totally fine in theory. However, I honestly got more lag from using one; that's my personal opinion, no claims.

    This is a golden explanation, like a best-case scenario, but I doubt the pagefile system is advanced enough to know, for a million different kinds of programs, what's usable and what's not ... automation can't be correct for all programs, especially bugged ones. Sadly, I can't see it working in practice. What I'm talking about are all my first-hand experiences; I didn't hear or see this anywhere else ... take it with a grain of salt.


    EDIT:

    OH, I get it. Of course, it has to be something with flagging stuff as "rarely used" and "unused"; the problem is that the logic just doesn't know what will get used in the future. It probably starts to transfer things to the pagefile (pages out? ... correct term?) too early when a program launches.


    But knowing when to FLAG a piece of code as unused or rarely used, and being CORRECT, is hugely coincidental/circumstantial, and I very much doubt that can ever be 100% accurate. How can the pagefile system from Microsoft know - for games, which Microsoft doesn't even develop (they're external/licensed) - that when I run Starcraft 2 singleplayer I surely won't need to use beacons? So the pagefile system would put the beacon info/data/icons/sounds into the pagefile, and if I use a beacon - which according to the pagefile system I "wasn't supposed to use" - the game would need to load that from the pagefile/install, and that's exactly what makes lag (though not necessarily in the same exact way every time).

    The point is, the pagefile is not flagging pieces of memory as "rarely used" accurately enough ... it flags whatever seems not to have been used in the last X seconds/minutes (the delay is definitely too low) ... and I have no idea how it can work for big games. If it starts to determine and page out when you start the game, how can it know what I will press next in the menu, MP or SP? "Chosen by a fair dice roll, let's page out the MP button icon" - so I happen to "change" my mind and click to play MP, and there would be a slight lag/lockup, or maybe it would continue but the image wouldn't be shown (depends)

    But I'm realizing how it works: it's probably based on memory reads (whole mem/disk I/O for more accuracy/logic) - definitely memory reads.
    And there's a delay: once some bit/piece hasn't been read from memory for however many seconds/minutes, it gets pagefiled. This doesn't work at all in my head. Furthermore, I probably just realized that the timing of these operations once in a while makes the "pagefile eclipse": in some circumstance, many things get pagefiled (paged out, whatever) simultaneously, many things that were flagged "rarely used" are needed that very moment, and that makes a severe temporary freeze or multiple lags - and the hard drive can only fetch one thing at a time, not to mention pagefile fragmentation, which makes this even worse.

    But these should be small bits of files and pieces; there's probably some other factor of re-checking, making this whole process even harder for the HDD to chew.

    There are like a million possibilities, including AI code and what it does, what it might not do, and what it does rarely ... How can the pagefile system know whether Brood Lords won't be used in the next match?
    The Starcraft 2 AI might be making a zerg rush: "okay, let's switch to banelings ... wait, we need to load banelings from the pagefile." (Of course I mean the bits and pieces of memory data; this is just a simplified example.)

    It's definitely not on the level this makes it sound, but it's definitely a problem with defining what is and isn't rarely used ... the pagefile system does that to a point, but there's always the possibility it won't be right and won't always be accurate.
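    The "flag whatever wasn't touched for X seconds" idea described above is essentially a least-recently-used policy, and its failure mode is exactly the lag being described: a guess about the future turns out wrong and a slow disk read happens at the worst moment. A toy simulation (purely illustrative; Windows does working-set trimming, not this literal code):

```python
# Toy LRU pager: pages idle longer than IDLE_LIMIT get "paged out";
# touching a paged-out page costs a "fault" (the in-game hitch).
IDLE_LIMIT = 5  # hypothetical seconds of idleness before paging out

class ToyPager:
    def __init__(self):
        self.last_used = {}     # page name -> last access time
        self.paged_out = set()  # pages currently "on disk"
        self.faults = 0         # slow reads caused by wrong guesses

    def touch(self, page, now):
        if page in self.paged_out:
            self.faults += 1            # disk read: the lag spike
            self.paged_out.discard(page)
        self.last_used[page] = now

    def trim(self, now):
        """Page out everything idle longer than IDLE_LIMIT."""
        for page, t in self.last_used.items():
            if now - t > IDLE_LIMIT:
                self.paged_out.add(page)

p = ToyPager()
p.touch("beacon_icon", now=0)   # used once in the menu
p.touch("unit_ai", now=1)
p.trim(now=10)                  # both idle > 5s -> paged out
p.touch("beacon_icon", now=11)  # player uses a beacon after all
assert p.faults == 1            # the heuristic guessed wrong -> lag
```

The point of the toy: no fixed idle threshold can distinguish "never needed again" from "needed in ten seconds", which is why purely time-based eviction guesses wrong sometimes.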

    Nothing here is a claim; it's all opinion. This thread is not a tutorial or anything similar (yet :p)


    [hr]
    There would be the possibility of much more testing if you could see what is in the pagefile and actually control the pagefile systems.

    Linux has ... a pagefile in a separate HDD partition (swap). I never really checked for myself, but advanced Linux users say that implementation is better than the one in Windows. I really don't know the hard difference; I haven't researched that yet.
    Last edited: Apr 26, 2011
  9. RuskiSnajper

    RuskiSnajper

    Joined:
    Feb 18, 2009
    Messages:
    1,607 (0.79/day)
    Thanks Received:
    231
    Location:
    slovenia , europe
  10. RuskiSnajper

    RuskiSnajper

    Joined:
    Feb 18, 2009
    Messages:
    1,607 (0.79/day)
    Thanks Received:
    231
    Location:
    slovenia , europe
    I have posted on the Microsoft MSDN forums in hope of an answer on whether it is possible to adjust the same (or even more) parameters used in DynCache natively on Win7.

    because the DynCache service was a kind of hotfix for the cached-memory problems in Vista and Server 2008


    Asking the RAMMap guy at sysinternals.com is a good start too; he might know more about what's possible and what definitely isn't.


    Practically, it currently doesn't make much of a difference at all. What is 188 KB of a file that's 18 MB in total size (see the example picture of sc2editor.exe)? How is a measly 188 KB cached (pre-loaded) piece of an 18000 KB file going to help load it faster? That's like 0.001 seconds less to wait, pretty useless. :p

    In summary, the memory management automation just caches a lot of files it thinks you'll use. But where is the part where it decides what files to cache, based on what measure of "recent use" and some sort of calculation of the user's habits? If we could tweak the analyzer, we could steer the whole memory system; the problem is finding the analyzer, and from what I can tell so far it is definitely hardcoded ...
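To make the "analyzer" idea concrete, here is a toy sketch of the kind of frequency-plus-recency scoring such a component might use. Everything here is made up for illustration: the function name, the scoring formula, and the log format are my assumptions, not how Windows' real (undocumented) SuperFetch policy works.

```python
def pick_files_to_cache(access_log, now, top_n=3):
    """Toy cache-decision policy. access_log maps a file path to a
    list of access timestamps (seconds). A file scores higher the
    more often and the more recently it was used."""
    def score(path):
        stamps = access_log[path]
        recency = now - max(stamps)        # seconds since last use
        return len(stamps) / (1.0 + recency)
    return sorted(access_log, key=score, reverse=True)[:top_n]

# Hypothetical usage history (names are invented):
log = {
    "sc2.exe":      [100.0, 200.0, 290.0],  # used often, recently
    "old_tool.exe": [5.0],                  # used once, long ago
    "browser.exe":  [250.0, 295.0],         # used very recently
}
top = pick_files_to_cache(log, now=300.0, top_n=2)
# -> ["browser.exe", "sc2.exe"]
```

If the real analyzer were tweakable, this is the kind of knob (the scoring formula, the cutoff) you would want exposed.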

    A lot of files get cached, enough that it normally adds up to 2.5 GB. Imagine all the things I could do with that 2.5 GB: personally I would cache whole install folders and set them to load at Windows boot (so you don't actually wait at logon). When you opened one of those programs, its stuff would just move from the cache to the working set accordingly, then back to the cache when you stop using it.

    This could open up thousands of possibilities. You could cache all the startup programs and everything that normally comes up at logon, preferably loading it all into memory at Windows boot; that's practically so much. You normally turn on your PC, maybe go AFK, and come back after it has booted (at least I do). So what's the point of Microsoft's faster boot? It can't be faster than the speed of the HDD, and the system grows the cache gradually instead of all at once at boot. You actually end up with faster program loading only once you've used your computer for an hour or two.
    I would gladly sacrifice Windows boot time for a faster logon any time; I just hate it when the UI icons and desktop crawl in slowly, that's one of the most annoying things. Inside that 2.5 GB you can fit a ton of stuff. It would require some extensive configuration, but optimally logon would be ultra-fast: you could put in dozens of smaller programs, some music and a video ... whatever you like, as long as you have enough RAM.

    A small problem emerges, but that could be configurable too: a priority system for low-memory circumstances, i.e. "what gets cleared from the cache when memory runs too low" ...

    You would eliminate the slowest factor in the computer for exactly the moment when that stuff would otherwise need to be loaded from the HDD; instead it would come from cache already in memory.

    How long it would take comes down to: CPU power, RAM-to-CPU transfer rate and memory speed, as well as the software's code-specific performance (code technology and quality).

    No more waiting for the HDD to load all those files. An HDD just crawls when it needs to load TONS of small, scattered files: small files spread randomly across the disk mean more seek time (it's not contiguous, the HDD needs to spin more and you wait even longer). So that was an example of configuring for a faster logon.

    For a game, you would definitely want to cache ("preload" describes this more accurately, because these are not pieces but rather whole entities) the smaller files, like DLLs, configs and exes, plus some room for other data. That would clear the way for the HDD to load only the bigger files (which should also be more contiguous on a defragged partition).
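You can approximate this "preload the small files" idea today without any OS support: just read the files once, and the OS file cache (the mapped file / standby list this thread is about) will usually keep them. A minimal sketch, with the function name, size cutoff, and the example folder path all being my own assumptions:

```python
import os

def warm_cache(root, max_size=1 << 20):
    """Read every file under `root` smaller than max_size so the OS
    file cache holds it. Returns the total bytes read. The OS may
    still evict these pages under memory pressure -- this is a hint,
    not a pin."""
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getsize(path) > max_size:
                    continue  # leave big files to normal demand loading
                with open(path, "rb") as f:
                    while chunk := f.read(64 * 1024):
                        total += len(chunk)
            except OSError:
                pass  # file vanished or unreadable; skip it
    return total

# Hypothetical usage, e.g. before launching a game:
# warm_cache(r"C:\Games\StarCraft II\Mods")
```

Note the caveat in the docstring: unlike the configurable pinning imagined above, this only nudges the cache; Windows still decides what to evict.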


    And if that doesn't make sense, I'll quit PCs and go play underwater baseball with obstacles. :shadedshu Windows 7 is a $300 cosmetic patch.


    Use an SSD and the problem is solved ... ?
    -That's years away from being affordable
    -They aren't nearly as fast as RAM

    But you could do an SSD RAID, and with enough of them ... ?

    -Yeah, do you have $5000 lying around for that?
    Last edited: Apr 28, 2011
  11. streetfighter 2

    streetfighter 2 New Member

    Joined:
    Jul 26, 2010
    Messages:
    1,658 (1.11/day)
    Thanks Received:
    732
    Location:
    Philly
    I'm not seeing how anything of what I said is wrong. Wikipedia seems to confirm what I said, specifically:
    Commit charge is the amount of virtual memory allocated to a process.
    Though if a system has no virtual memory, then it is just a theoretical number.

    Microsoft's verbiage is crude, but it's better than nothing. Some memory items may be resident in both physical memory and the pagefile.

    Pagefile writing isn't an opaque process. Programmers can tell Windows when to push things into the pagefile by trimming the working set or by allocating virtual memory. Most of my programs will push the heap into virtual memory when it's not being used.
    http://msdn.microsoft.com/en-us/library/aa366533(VS.85).aspx
    http://msdn.microsoft.com/en-us/library/aa366781(v=vs.85).aspx

    The AI base code is probably allocated on the stack (because it's used so much) with any additional variables (such as the array for each unit) allocated on the heap, some of which is automatically paged by Windows.

    It's the latency that kills. Accessing on-die cache is, say, ~1 ns; memory is, say, ~100 ns; an HDD seek is in the milliseconds. :laugh: (All numbers guesstimated.)
    Last edited: May 1, 2011
  12. RuskiSnajper

    Wikipedia is wrong, that's why. The context in which it's said is too rough, and it's not properly formatted (missing context markers).

    Commit charge is the amount of virtual memory allocated to a process.

    That is only part of the description: they're talking about the commit charge for a single process there, while I'm talking about the total system commit charge, so basically everything the computer needs, whether physical or virtual.

    Why I'm talking about the total is simple: that's how Windows shows it. If you look at the value, it's higher than total physical memory even without a pagefile, which is a bit weird; but if you run out of commit, then you're out of memory.

    Since it's only physical RAM, it can't be committed anywhere else. But here's another catch: I have re-evaluated it and found an interesting thing.

    "virtual memory allocated to a process" -> it's virtual address space , not virtual memory, and that's something completely different. I will fix that on wiki.

    AH! I've got it now! The virtual address space is called virtual because it's not necessarily used; it's virtual, there for the application to think it has memory.
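This "not necessarily used" behavior can be demonstrated from Python with an anonymous memory map. A hedged sketch: on Linux, mapping memory grows the process's virtual size immediately while physical pages are only faulted in as they are touched; on Windows, `mmap` commits the region up front (it counts against commit charge), but the pages are still not physically touched until accessed.

```python
import mmap

# Map 256 MB of anonymous memory. The address space is handed to the
# process immediately, but physical pages only become "real" when a
# page is actually written -- this is what makes it "virtual".
SIZE = 256 * 1024 * 1024
region = mmap.mmap(-1, SIZE)

region[0] = 0x41        # touch one page: roughly 4 KB becomes resident
first_byte = region[0]  # the rest of the 256 MB is still untouched
region.close()
```

Watching a tool like RAMMap or `top` while a script like this runs (before and after touching all the pages) makes the working set vs. address space distinction visible.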

    "On systems with a pagefile, it may be thought of as the maximum potential pagefile usage." - this is also wrong , will fix that too.


    Definitely, this is it. Commit is something virtual, not-yet-filled memory: it can sit in the pagefile, but when you enable the pagefile it counts physical memory too. While it's just virtual address space, it is the program's "reservation" of free space, which may or may not get filled in the future.

    The problem arises when Commit is lower than the Working set for a process.

    Which makes Commit something completely different, it's just weird.

    But still, when commit runs out, the PC and programs will stop working. Normally it's higher than the "physical memory" counters.


    EDIT: "Commit charge is the amount of virtual memory allocated to a process "
    Correction: I can't find that statement on the wiki page. That statement is still wrong.
    --------------
    [hr]
    Not only does it take longer to access, it's also loading so much, so slowly, from it; a pagefile just means a lot of extra HDD operations.
    Last edited: May 1, 2011
  13. RuskiSnajper

    Yep I got it finally!

    http://social.technet.microsoft.com...n/thread/eb8457b4-d121-4572-95a6-ee58d6c44c64

    Holy post!

    So you see, they use quite a word play in context; it's their own jargon, and it's confusing for a lot of people.

    "page file space " ----> RAM + Pagefile , it's a microsoft geek term ! He explains it him self, just read the whole "best answer" post.

    That's because commit can be either on pagefile or physical ram.

    Not all the commit charge actually gets written to the pagefile, and it's NOT even used memory, it's RESERVED --> exactly what I predicted from my earlier findings, and the MSDN answer completely confirms it. It is virtual address space which is not necessarily used; that's why it's called virtual, it's not real.


    So basically, with a pagefile, some part of that Commit (KB) [in the Task Manager process columns] (not "Commit Charge", lol, even that is a different thing now :laugh:) gets written to the pagefile; how much "actually used commit" lands there is uncertain. Some data will get written there; the rest is reserved virtual address space, which isn't actually filled in physical memory either.


    So we can now say that the part of Commit (MB) [at the bottom of the Win7 Task Manager] that is a few hundred megs larger than physical memory used is actually reserved virtual address space, the "overhead" above the actual physical usage. I don't know if this is the proper explanation.

    The problem becomes the uncustomizable pagefile, performance-wise. Why doesn't the pagefile work so that it only "allocates" the virtual address space that sits inside RAM (supposedly taking up RAM) without writing any actual data to it? That would work, and that's how W1zzard and a few others explained it works; but it's definitely not a performance boost of any kind or any "help" for the RAM. Even if this virtual address space takes, say, 500 MB of my RAM, it's still worth it to run without the buggy and laggy pagefile system. Basically the whole memory management is closed down and works however Microsoft automated it, which is, to say the least, a huge pity; a proper config option would require some work on their end, I guess.

    EDIT:

    The grand rule: Virtual Memory != Virtual Address Space

    And I see you get the idea and understand it, but it's confusing for those who don't, and it was confusing me as well. You aren't wrong, but that statement is: "virtual memory" is usually a term for pagefile.sys. I get it, the terms get mixed up a lot; "virtual address space" is the 100% correct term for that statement.

    What if memory runs out? The virtual address space will obviously fill up, and it won't expand beyond the physical memory limits (without a pagefile).





    EDIT2:

    Also this:

    Pavel Lebedinsky [MSFT]


    Found it here: http://social.technet.microsoft.com...f/thread/7cf838b7-2ced-45a4-a348-3490a226c637
    Last edited: May 1, 2011
  14. RuskiSnajper

    Funny what gets cached into the mapped file:

    [​IMG]

    These are the bugs I meant, such inaccuracies and stupid behavior by the memory management; why on EARTH would caching Recycle Bin items "speed up" the system?



    Not to mention there is so much "useless" (subjectively) stuff cached into the mapped file just because it was used some 30 days ago or whatever ... it's mostly big program and game files, and I don't need more than half of those; I would use that space for what I actually use the most right now.
  15. RuskiSnajper

    I think I've got the understanding now: (TL;DR checkpoint - the summary, for those who want it)


    Commit (virtual address space; it should really be "logical" here) is the sum of all logical address space allocated by a process. Logical, because some of it is virtual (empty, reserved) and some of it is physical (real, used/active).

    It has nothing to do with the pagefile (swap file).

    The pagefile, in essence, helps loosen up the RAM that would otherwise back this "virtual address space", i.e. the reserved empty space.

    There is, surprisingly, no way to detect this virtual reserve, at least none I know of, but one way is to ...



    The problem is when Commit is lower than the Working Set; this is where this comes in:




    And that settles this for good, but I really have a hard time understanding this wacky Microsoft idea ... it clearly doesn't work for performance, it makes more lag.

    One rule still remains: when the commit charge (which is the sum across all processes; the commit limit being the sum of all logical memory (PF + RAM)) hits its peak, applications already start behaving like they ran out of memory and acting weird, even if that virtual reserve is still sitting unused in RAM: one app reserved it, so another app cannot fill it with its own data, because it's flagged as taken/restricted access.
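That rule can be captured in a toy model. This is an illustration of the accounting idea only, not Windows' real bookkeeping: the commit limit is RAM + pagefile, and an allocation counts against it the moment it is committed, even if no page is ever touched. The class and its numbers are invented.

```python
class CommitAccounting:
    """Toy model of commit accounting (illustration, not the real
    Windows algorithm). Committed-but-untouched memory still counts
    against the limit."""

    def __init__(self, ram_mb, pagefile_mb=0):
        self.limit = ram_mb + pagefile_mb   # the "commit limit"
        self.charge = 0                     # the "commit charge"

    def commit(self, mb):
        if self.charge + mb > self.limit:
            raise MemoryError("commit limit reached")
        self.charge += mb

system = CommitAccounting(ram_mb=4096, pagefile_mb=0)
system.commit(2000)   # e.g. a game's working set
system.commit(1500)   # a big committed-but-untouched heap
# system.commit(800) would now raise MemoryError, even though the
# untouched 1500 MB means physical RAM is not actually full.
```

Re-running the last line with `pagefile_mb=1024` succeeds, which is exactly why a pagefile "helps" here: it raises the limit that the untouched reservations are charged against.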

    Just now I had Firefox acting strangely, crashing many times with weird missing icons and UI items, only to see that Photoshop.exe had a 2000 MB working set and 2500 MB commit; those 500 MB of overhead were definitely this virtual empty reserve, which sits there no matter how much you need the memory.
    This is a FAIL on the program developers' side, but Microsoft has a problem if the whole world just makes CRAPPY programs; after all it's Microsoft who allowed the C++ calls for this silly reserved virtual memory in the first place. If it weren't for this, a lot fewer people would need a pagefile; so the pagefile is still bad for performance.

    If they couldn't get rid of the virtual reserve, then they could at least FIX the pagefile and give us the option to use it only for storing this silly reserved memory on the HDD, while no actual real data would be allowed to be written there.

    Which is probably very similar to how W1zzard tried to explain it; all credit for what was correct.

    But there's more.

    Yes, the pagefile always looks good on paper. In reality it also writes data there, and even though the Microsoft employee said "it's not extra memory", it does actually write real data into the pagefile; that is what pagefile% and pagefile usage show: the logical memory there, including the virtual allocations. What fraction of that value is actual real data is another story; it's not detectable and not shown by any program (neither on the PF side nor the RAM side). We can't tell how much of this virtual reserve sits in pagefile usage and how much in RAM; it's just not shown in any diagnostics (the freaking CAUSE of the confusion!!!). But we do have the sum -> Commit Charge :)

    In theory, this virtual reserve memory could now be calculated (without a pagefile): physical memory usage (active/working sets) should not be greater than or equal to the current Commit Charge. (*** it can be equal, but only in terribly coincidental or controlled circumstances)

    By calculating Commit Charge minus physical memory usage, we get a solid value for the size of the virtual reserve memory (if it really is that simple?).


    In my case I currently get 433 MB (supposedly this should be the virtual reserve memory/allocation), so these are the megabytes that are supposed to go to the pagefile (this is why people praise it so much). Not bad, but the pagefile has its own side effects, and a lot of badly developed programs are prone to slowdowns and lag with the pagefile enabled.
    Its problem is this: if it only worked that way, storing just this reserved space and nothing else, no actual data, then it would be headed in a much better direction.

    Although Windows 7 really nails it here, it has been hugely improved, so I might be over-reacting ... but explaining means taking a critical view. Watch out for bugged programs like SC2, which will lag a lot with the pagefile; the developers are to blame in that case.




    Note: Active memory = total used physical RAM is not only from working sets; the Metafile and Mapped File also hold some 300 megs of active (used) memory, plus kernel, pools, shared memory (some of it also modified) ... etc. (RAMMap at sysinternals.com)

    Note 2: there are some inconsistencies in the values between Task Manager, Resource Monitor and RAMMap; they all show slightly different numbers, and RAMMap stands out by showing everything a bit higher. I don't know which is the most accurate, though. But there's a little catch: one part that I always thought was "inaccurate" is the memory graph in Task Manager. That graph shows a green number at the bottom of the bars, and that value matches RAMMap most closely, not Resource Monitor, which shows about ~100 MB less. (In Use <compare> Active)

    Note3: that "commit" inside Commit Charge is also , virtual allocations from kernel and pf-backed stuff which MSFT explained, so it's not just from processes.


    If that's how the system behaves, then treating this empty reserved space as used memory is simply not worth differentiating for normal usage. As I said in the beginning, thankfully this rule still applies: when the Commit Charge hits its limit, you surely are out of memory. Nothing else really matters then (for general usage).
