
System unresponsive at max RAM usage

  • Thread starter: Deleted member 50521
Well maybe you are onto something with your below quote.


I would actually get hold of the vendor and see what they have to say, seeing as the job sets are so long. https://www.mothur.org/forum/memberlist.php?mode=contactadmin&sid=7fa22c67e332f791ca56d609fbf02f78

EDIT: I was reading in other forums that the first line of advice for this error is to make sure you are using the most recent version of Mothur.

EDIT 2: The most recent version is 1.39.5, located here: https://github.com/mothur/mothur/releases/tag/v1.39.5

Yeah, I am using 1.39.5. I tried all the different compiled versions as well.


Back in the day, the rule of thumb was: for the amount of memory you have, double it for the swap/paging file.

Windows 7 requires 2 GB itself, at least in my case it does. If there is a way to set your program to use a fixed amount of RAM, I would leave 1 GB to spare; so if you have 16 GB of RAM, set the program to use 13 GB at most.

So I assume I should look back at my log file to estimate the amount of disk space used for swapping, then take that, add 2 GB, and set it manually in Win10?

Have you had successful computation using this before? I was under the impression this was still a fairly beta feature.

Yes. I did a couple of RNA-seq analyses using different programs, but under Ubuntu for Win10 FCU. They ran just fine while using close to 90 GB of RAM.
 
The way it was then, if you only had 2,048 MB of RAM, you would set paging to 4,096 MB. Today, if you have 16,384 MB of RAM and did double that, you would be using 32,768 MB of storage for paging.

If there is a way to limit the program's RAM usage, I would.

As for the paging file itself, most users of W7-W10 typically let the system manage it. I set mine to 4096 (or was it 8192?) MB to reserve that space so it's not continually shrinking or expanding on the SSD (fewer writes).

You may have to find out if you need flags/command switches to force the program to use a set amount of RAM, and possibly even force it to use paging.
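If the program itself doesn't expose a memory cap, one generic approach under the Ubuntu-on-Windows shell is a ulimit in the launching shell. A minimal sketch, assuming a 16 GB box with headroom left for the OS as suggested above; the mothur launch line is illustrative, not a documented mothur flag:

```shell
# Cap everything launched from this shell at ~13 GB of virtual memory.
# ulimit -v takes kilobytes; allocations past the cap fail with an
# out-of-memory error instead of silently pushing the box into swap.
ulimit -v $((13 * 1024 * 1024))

# Illustrative launch -- substitute your real command/batch file:
# ./mothur my_batchfile
```

The limit is inherited by every process started from that shell, so it works even when the tool has no memory flag of its own.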
 
Got an email response from the developer team. It was short. Basically they told me to either man up and buy some expensive server-grade load-reduced 512 GB~1 TB RAM to pair with a Xeon if I want to run it locally, or sign up for Amazon Web Services.

Welp back to the drawing board.

I had high hopes that the computational power of today's HEDTs was finally good enough for demanding workloads. Guess we will have to wait a bit longer. :(
 
For me I'd love a dual TR setup but I haven't heard anything about it...
 
Go to your settings and set a large chunk of your SSD (256 GB, I think you said) as the page file.
Do not let the system manage it. Specify the size yourself: set your preferred size as the minimum, and the same size as the maximum, so the system always has it available.
See if that helps.
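On Win10 the same thing can be scripted from an elevated Command Prompt with the built-in wmic tool. A sketch; sizes are in MB, so adjust 32768 (= 32 GB) to whatever fixed size you settle on:

```shell
:: Stop Windows from managing the pagefile automatically.
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False

:: Pin a fixed-size pagefile on C: -- InitialSize equal to MaximumSize
:: so it never grows or shrinks (example: 32768 MB = 32 GB).
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=32768,MaximumSize=32768

:: Reboot for the change to take effect.
```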
 
Is computational power the issue though? You are running out of RAM...

That's what I think too. I think HEDT would be more than sufficient, but I would also be running this natively and not through the Linux subsystem in PowerShell. Remember, Windows pages; we don't have any idea what memory management is like in the Ubuntu subsystem. It may not be correctly conveying that it needs to page, since those systems are different.

Maybe try running it in an Ubuntu partition on the same unit?
 
Is computational power the issue though? You are running out of RAM...

The thing is, HEDT maxes out at 128 GB of RAM. Any larger RAM size requires going Xeon or EPYC, and the price for that kind of hardware just skyrockets compared to consumer HEDT.

Both a Xeon-based platform and AWS will have way more available RAM.

And of course, as with any piece of free Linux-based bioinformatics software, the optimization is usually nonexistent. It is likely just some poor grad students living on ramen noodles doing all the patching and optimization.
 
Right.. so it isn't computational power... just platform limitations.

Won't your employer pay for or at least subsidize a PC for this type of work? I don't understand exactly what you are doing but assumed you are getting paid for it by someone? Or is this college research???
 
An older server would cut it for you. What's your dataset size?
 
128 GB of RAM, but 158/174 GB committed; you are deep into swap somewhere.
 
Right.. so it isn't computational power... just platform limitations.

Won't your employer pay for or at least subsidize a PC for this type of work? I don't understand exactly what you are doing but assumed you are getting paid for it by someone? Or is this college research???


University HPC is pay-to-play. The price is not that cheap, plus I have to wait ~2 weeks just to get on the waiting queue.

And yeah, academic research. Basically trying to do the most amount of work with nothing, because of limited grant funding. :(
 
128 GB of RAM, but 158/174 GB committed; you are deep into swap somewhere.
He should just set aside the whole NVMe drive & call it a day, though I guess swapping large parts of memory in & out will run the drive into the ground. You still haven't told us what the other drives are (spinner vs. SSD) & what's on E:?
University HPC is pay-to-play. The price is not that cheap, plus I have to wait ~2 weeks just to get on the waiting queue.

And yeah, academic research. Basically trying to do the most amount of work with nothing, because of limited grant funding. :(
There's a chance TR might support RDIMMs, since it supports ECC. You could get in touch with AMD or someone at ASUS or ASRock & see if they could support RDIMMs on any of their boards.

 
E: was a WD 5 TB drive, a spinner for data storage. I am not rich enough to store data on an NVMe SSD.

Guess I should mention that all my drives are set to have the pagefile managed by the system.
 
The high disk activity on E: points to a pagefile being used constantly; the system normally doesn't become unresponsive when it runs out of memory. It's only paging that causes this effect, especially on a spinner.

Disable all your other pagefiles, set the one on C: to 32 GB (if you have the space, even more could be assigned), and see how that goes. Page thrashing on a spinner is horrendous; always has been.
 
You can also see it in perfmon; it will tell you how many hard faults/s you have, which basically means pages being read back in from disk.
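You can watch the same thing from the command line with the built-in typeperf tool; the counter names below are the standard Memory-object ones that perfmon and Resource Monitor draw from:

```shell
:: Sample paging counters once per second (Ctrl+C to stop). Sustained
:: high "Pages Input/sec" = hard faults being resolved by reading the
:: disk, i.e. the pagefile is getting hammered.
typeperf "\Memory\Pages Input/sec" "\Memory\Page Faults/sec" -si 1
```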

E: was a WD 5 TB drive, a spinner for data storage. I am not rich enough to store data on an NVMe SSD.

Guess I should mention that all my drives are set to have the pagefile managed by the system.

You can always RAID 5 those suckers for extra IOPS.
 
I do not believe system pagefile activity is shown in Process Explorer, so you would need to either log the hard page faults or use Performance Monitor to show it. And to agree with everyone else here who is right: yes, reading to and from disk is insanely slow for applications and will cause a system to hang or freeze. The latency and speed of the reads are snail-paced compared to RAM, and paging also multiplies the work: when memory is full, the system has to write pages out of RAM, clean up the prior RAM mapping, and then copy or read the needed data back in.

https://blogs.technet.microsoft.com/clinth/2013/10/16/tracking-page-file-reads-and-writes/
 
There's a chance TR might support RDIMMs, since it supports ECC. You could get in touch with AMD or someone at ASUS or ASRock & see if they could support RDIMMs on any of their boards.

Registered memory tends to be reserved for the server space (product segmentation). Many AMD CPUs have supported ECC unofficially; that's nothing new or special.
 
I had high hopes that the computational power of today's HEDTs was finally good enough for demanding workloads. Guess we will have to wait a bit longer. :(

You are already kind of pushing it beyond anything even remotely close to the typical use case for an HEDT.
 
This thread brought to you by downloadmoreram.com.

You should be running Windows Server.
 
Windows Server will handle being 'out of memory' a lot better than Pro.

The consumer OS is not configured for large workloads.

That, and Windows Server uses less RAM.

He's at the point where he needs to go through and disable any and all services not needed for his task.
 
I do have Ubuntu installed on my old MX200; I can pull that out and see if running in native Linux changes anything. Maybe not, who knows. I am just throwing mud at the wall and seeing what sticks.
 
Linux should be far superior for these workloads.
 