
Squabbling cores

Is the OS intelligent enough to impose the second, more efficient scenario?

No. Because there's no way to know whether those two other threads are in a priority-inversion situation. https://en.wikipedia.org/wiki/Priority_inversion

4 independent processes

There's no way for the OS to know that those processes are independent.

It's not that the OS "isn't smart enough"; it's that you don't understand all the possible process interdependencies. Processes/threads can be connected in extremely subtle ways, making most "smarts" completely irrelevant in the scope of operating-system design.
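
The flip side: if you, the human, know your processes really are independent, you can tell the scheduler outright instead of hoping it guesses. A rough Linux-only sketch using CPU affinity; the core number passed on the command line is just for illustration:

```c
/* Rough sketch: pin the current process to one core with sched_setaffinity.
 * Launch each of your independent processes with a different core number.
 * Linux-specific; the command-line core argument is illustrative. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <core-number>\n", argv[0]);
        return 1;
    }

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(atoi(argv[1]), &set);

    /* pid 0 means "this process"; the scheduler honors an explicit
     * affinity request even though it can't infer independence itself */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* ... run the independent workload here ... */
    return 0;
}
```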
 
Now we are talking; that's an answer!

Much appreciated; you have tamed that dragon.
 
Now we are talking; that's an answer!

Much appreciated.

Priority inversion is just one of the many possible issues that occur in OS design. There are many, many other scenarios to account for.

Windows "solves" priority inversion by forcibly letting all processes have a slice of the CPU every 2 seconds IIRC. I forget the Linux solution, but... you can probably get the gist of OS design when you consider these hacked-solutions / weird ad-hoc behavior.
 
Linux doesn't even know how much RAM its processes are using, because of its demand-paging system and OOM killer.

So even a statement like:

each taking 4GB in a machine

Linux literally doesn't know that. It's effectively impossible for Linux to know how much RAM its processes are using, have used, or will use.

Windows knows how much memory processes are currently using (i.e., malloc/new can fail on Windows due to out-of-memory). But in Linux's design, malloc/new essentially always succeed. Only when that RAM is first "touched" (written to) does Linux actually run the allocator and assign unique physical pages... because Linux has an over-subscription (overcommit) model for its processes.

Windows also has an oversubscription model, but has better tracking of each process's alleged RAM usage.
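
You can watch that Linux overcommit behavior for yourself. A rough sketch; the exact outcome depends on the vm.overcommit_memory setting, available RAM, and a 64-bit build:

```c
/* Sketch: demonstrate Linux overcommit. The huge malloc "succeeds"
 * immediately; physical pages are only assigned as each page is
 * first written. In heuristic mode (vm.overcommit_memory = 0) a
 * single absurdly large request may still be refused. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t size = (size_t)64 * 1024 * 1024 * 1024;  /* 64GB "allocated" */
    char *p = malloc(size);
    if (!p) {
        puts("malloc failed (overcommit heuristic refused the request)");
        return 1;
    }
    puts("malloc of 64GB succeeded; almost no RAM is in use yet");

    /* Touching pages is what actually consumes memory. Touch only a
     * little; touching all 64GB would invite the OOM killer. */
    for (size_t i = 0; i < (size_t)16 * 1024 * 1024; i += 4096)
        p[i] = 1;   /* one write per 4KB page */

    puts("touched 16MB worth of pages; only those are now resident");
    free(p);
    return 0;
}
```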
 
So many things I didn't know... that's why I ask.
 
I'd expect that excessive paging would more than halve the speed of a process
All the more reason to use SSDs.

It should be noted that SSDs are ideally suited for page files. See Support and Q&A for Solid-State Drives and scroll down to "Frequently Asked Questions: Should the pagefile be placed on SSDs?" While the article is getting old and was written for Windows 7, it applies to Windows 10/11 too, and even more so today, since the wear issues of early-generation SSDs are no longer a problem and each new generation of SSD just keeps getting better and better.

@Shrek As for setting a fixed PF size, again - see if you can find one true expert, an actual case study white paper, or KB article on memory management that recommends it.
 
So many things I didn't know... that's why I ask.

This stuff is at least 3rd-year material in a computer-science degree, and most people aren't paying attention at that point. Even if they are paying attention, it's easy to forget these details.

Don't sweat it.
 
All the more reason to use SSDs.

I would hate to page on an SSD knowing the wear it puts on such a drive.

This stuff is at least 3rd-year material in a computer-science degree, and most people aren't paying attention at that point. Even if they are paying attention, it's easy to forget these details.

Don't sweat it.

Got a text to recommend? I like reading.
 
Got a text to recommend? I like reading.


Did I say 3rd year college? Maybe 3rd year graduate degree, lol. Seriously though, most programmers don't ever get anywhere near this level of detail.

I too like learning the low-level details of systems, but it's quite rare for me to apply that knowledge in practice. I've skimmed the above book and have even read entire paragraphs sometimes. But I don't think it's reasonable to read such a book cover-to-cover. There are just so many details, and they're hardly applicable to anyone's actual programming job.
 
Nice, that should keep me quiet for a good while.
 
I would hate to page on an SSD knowing the wear it puts on such a drive.
:( But it doesn't. Did you read the section in the article I referenced? If you think wear is a problem in consumer computers, you are still stuck 10+ years in the past with first-generation SSDs. Wear is no longer a problem.

Here's the pertinent part.
Should the pagefile be placed on SSDs?

Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well.

In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD.
 
Hi,
I hate agreeing with the mad duck, but yeah, not a problem :p
 
And here was me thinking that wearing out an SSD was still an issue.
 
Hi,
All mine should technically be dead :laugh:

Especially my existing Win 7 SSDs.

Here's one of my oldest Win 7 SSDs.
Besides using the performance power plan, nothing else was done.


[screenshot: SSD health report showing total writes]
 
4TB of writes, I could imagine a lot more with paging; just thinking out loud.
i.e. it's been fully written over about 16 times and already some life is gone.

Worth saying again; I'm not here to argue, I'm here to learn.
 
And here was me thinking that wearing out an SSD was still an issue.

All electronics will wear out, eventually. But first, it is only the writes that matter. Reads don't cause any wear to the memory devices. Writes do cause wear, but the numbers needed to be significant are almost unimaginable.

4TB of writes, but how many times each storage location has been written is what really matters; wear leveling spreads those writes across the whole drive.

It is much more likely your motherboard, RAM, or just your computing needs or CPU capability will wear out before your SSD does. Meaning you are much more likely to replace your computer with something more modern and capable BEFORE your SSD wears out. In fact, in this 5-year-old computer, I have the SSDs from my last computer in here as secondary drives.

Note that many SSDs have a 5 year warranty. What does your drive have? 1 year? Maybe 3?

And do drives never wear out? Fail? Or fail prematurely?
 
10-year warranty; these 850 Pros are still expensive as hell :cool:
 
I would hate to page on an SSD knowing the wear it puts on such a drive.



Got a text to recommend? I like reading.
That can be a problem. IF you have enough RAM, pagefiles can be disabled without penalty. On my personal systems, the only reason I assign one is for older/legacy programs that I still use and which crash or refuse to run when virtual memory is not present.

The opinion expressed by Bill is not shared by a great many very experienced people.

I hate to cite LTT; however, he showcases a lot of good points.

Leo also makes some good points to consider.

A LOT of industry experts recommend setting a static pagefile to keep it from causing problems. Another configuration, one I personally use, is to move the pagefile to a separate drive in addition to setting a static size. Just because no Microsoft KB article exists doesn't mean it's a bad idea or doesn't carry benefits.
4TB of writes, I could imagine a lot more with paging; just thinking out loud.
This is why having a secondary SSD for temp files is a good idea. You don't need a big SSD; 8GB or 16GB will do the job well, and an inexpensive drive off eBay will work. You install Windows to your main SSD/HDD and then configure the pagefile and temp folders to point at the second drive (see the registry sketch below). It's an inexpensive SSD, so if it wears out or dies, no big deal; it's easily and cheaply replaced. It saves wear and tear on your main SSD, and if you use an HDD for a boot drive, it will drastically speed up pagefile access.

In short, leaving Windows to manage the pagefile will not break anything, but it will come with certain penalties and will not provide optimal system performance.
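
For the static-size-on-another-drive route, the pagefile location and bounds live in the documented PagingFiles registry value. A hedged sketch, assuming D: is the secondary SSD and 4096MB is the size you settled on; the same thing is normally done through the System Properties GUI, so treat this as illustrative only (run elevated, reboot to apply):

```c
/* Hedged sketch: set a static pagefile on a second drive by writing the
 * PagingFiles registry value (REG_MULTI_SZ, entries of "path min max"
 * in MB; min == max gives a static size). D: and 4096 are assumptions.
 * Link with -ladvapi32; requires administrator rights and a reboot. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* explicit '\0' plus the literal's terminator gives the required
     * double-null ending of a REG_MULTI_SZ */
    const char value[] = "d:\\pagefile.sys 4096 4096\0";
    HKEY key;

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
            "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
            0, KEY_SET_VALUE, &key) != ERROR_SUCCESS) {
        puts("open failed (run as administrator)");
        return 1;
    }

    if (RegSetValueExA(key, "PagingFiles", 0, REG_MULTI_SZ,
            (const BYTE *)value, sizeof(value)) != ERROR_SUCCESS)
        puts("write failed");
    else
        puts("static 4GB pagefile on D: configured; reboot to apply");

    RegCloseKey(key);
    return 0;
}
```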
 
The new 3D NAND flash is supposed to last a lot longer.
 
Hi,
I've always had 32GB of memory.
The only special sauce is disabling hibernation with cmd (powercfg /h off).
Otherwise I just use it, and run SSD Magician for TRIM once in a while.
 
I was talking only of the SSD. It's the reality of SSDs: they wear out. It is a scientific certainty.

I was just being silly or as close to silly as I can manage today.

My latest USB thumb drive is 3D NAND and I hope it lasts.
 
4TB of writes, I could imagine a lot more with paging; just thinking out loud.
i.e. it's been fully written over about 16 times and already some life is gone.

Worth saying again; I'm not here to argue, I'm here to learn.

Here's the endurance and warranty information from Crucial on their MX500 SATA SSDs:

They have a 5-year warranty or the following TBW, whichever comes first. It's a lot of writes.

Endurance - Total Bytes Written (TBW):

250GB drive: 100TB (equal to 54GB per day for 5 years)
500GB drive: 180TB (equal to 98GB per day for 5 years)
1TB drive: 360TB (equal to 197GB per day for 5 years)
2TB drive: 700TB (equal to 383GB per day for 5 years)
4TB drive: 1000TB (equal to 547GB per day for 5 years)
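
Quick sanity check on those numbers (my own arithmetic, not Crucial's): every TBW figure works out to almost exactly the 5-year warranty period at the stated daily write rate.

```c
/* Sketch: verify the TBW math from the quoted Crucial specs.
 * lifetime in years = (TBW in GB) / (GB written per day) / 365 */
#include <stdio.h>

int main(void)
{
    double tbw_tb[]   = { 100, 180, 360, 700, 1000 };  /* rated TBW  */
    double daily_gb[] = { 54, 98, 197, 383, 547 };     /* GB per day */
    const char *cap[] = { "250GB", "500GB", "1TB", "2TB", "4TB" };

    for (int i = 0; i < 5; i++) {
        double years = (tbw_tb[i] * 1000.0) / daily_gb[i] / 365.0;
        printf("%-6s drive: %4.0f TBW / %3.0f GB/day = %.1f years\n",
               cap[i], tbw_tb[i], daily_gb[i], years);
    }
    return 0;   /* every row prints roughly 5.0 years */
}
```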
 
I was just being silly or as close to silly as I can manage today.
Fair enough. I didn't pick up on that.

Here's the endurance and warranty information from Crucial on their MX500 SATA SSDs:

They have a 5-year warranty or the following TBW, whichever comes first. It's a lot of writes.
That is one of the reasons I like Crucial drives; they state detailed specs and information for making an informed buying choice.
 