Win 7: many hard page faults with 8 GB of RAM

Hello. I’m using Win 7 x64 with 8 GB of RAM, and I noticed that apps take a while to start (the first time), so I looked at Resource Monitor and noticed that whenever I start a program, the hard-fault monitor maxes out (above 100). Whenever I start a program for the first time, its page-fault counter (I assume average per minute) goes up to 250. During system start-up, the monitor is almost constantly maxed out (above 100). I read that a hard page fault happens when a program asks for data that isn’t in RAM, so it has to be fetched from the pagefile. I have 8 GB of RAM and a 1 GB (fixed size) pagefile, so why the heck do programs (appear to) use the pagefile so much when they have tons of free RAM?

So what can I do to minimize usage of the pagefile and force programs to use RAM as much as possible?

Please read What are Hard Faults per Second? for reference.

As long as performance isn’t an issue, don’t worry:

When this happens a lot, it causes slowdowns and increased hard disk activity. When it happens an awful lot, the possibility of hard disk thrashing arises.

For reference on how big the pagefile should be, see: Page file size on a Windows 7 64 bit installation?
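As an aside, if you want to log the hard-fault rate over time rather than eyeball Resource Monitor, the built-in `typeperf` tool can sample a performance counter. This is only a sketch: `\Memory\Pages Input/sec` is my assumption for the counter closest to what Resource Monitor labels “Hard Faults/sec”.

```python
# Sketch, Windows-only: sample a performance counter with the built-in
# typeperf tool. The counter name is an assumption -- "\Memory\Pages
# Input/sec" is the usual proxy for hard faults served from disk.
import subprocess

def sample_counter(counter=r"\Memory\Pages Input/sec",
                   interval_s=1, samples=10):
    """Return typeperf's CSV output for `samples` readings of `counter`."""
    return subprocess.check_output(
        ["typeperf", counter, "-si", str(interval_s), "-sc", str(samples)],
        text=True)

def parse_typeperf_csv(csv_text):
    """Pull the numeric readings out of typeperf's quoted-CSV output."""
    values = []
    for line in csv_text.splitlines()[1:]:      # first line is the header
        parts = [p.strip('"') for p in line.split('","')]
        if len(parts) == 2:                     # "timestamp","value"
            try:
                values.append(float(parts[1]))
            except ValueError:
                pass                            # skip blank/malformed rows
    return values
```

On a Windows box you would run `parse_typeperf_csv(sample_counter())`; readings consistently above ~100 would match the “maxed out” graph described in the question.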

Paging is done by the OS, not by individual programs. Also note that a hard fault isn’t necessarily served from the pagefile: when a program starts for the first time, its executable and DLLs are memory-mapped files, and reading those pages in from disk counts as hard faults too, no matter how much free RAM you have. That’s why first launches and system start-up show so many hard faults.

Are you facing performance issues with your system?

I’ve set the pagefile to “system managed” and then ran a full defrag on all internal drives, as well as an offline defrag for C: (took hours in total), and everything is notably faster now. Except I now have a pagefile of about 7.9 GB, which I think is overkill and a waste of disk space. I need to know how much of the pagefile the system actually uses at most (I think it’s called peak commit) under heavy load (like after playing a game or two with high resource usage). I remember I could see that in XP’s Task Manager, but I can’t find peak commit in Win 7’s Task Manager or Resource Monitor. Do I need a separate program for that, or did they rename “peak commit” in Win 7?

So the question is: how can I find out the peak pagefile use in Win 7?

If it helps, ever since reading Pushing the Limits of Windows: Virtual Memory I’ve followed Mark’s advice:

So how do you know how much commit charge your workloads require? You might have noticed in the screenshots that Windows tracks that number and Process Explorer shows it: Peak Commit Charge. To optimally size your paging file you should start all the applications you run at the same time, load typical data sets, and then note the commit charge peak (or look at this value after a period of time where you know maximum load was attained). Set the paging file minimum to be that value minus the amount of RAM in your system (if the value is negative, pick a minimum size to permit the kind of crash dump you are configured for). If you want to have some breathing room for potentially large commit demands, set the maximum to double that number.
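That rule is simple arithmetic; here is a minimal sketch of it (sizes in MB; the 1024 MB crash-dump floor is an arbitrary assumption of mine, not a value from the article):

```python
def recommended_pagefile_mb(peak_commit_mb, ram_mb, dump_floor_mb=1024):
    """Sizing rule from the quote above:
    minimum = peak commit - RAM; if that comes out negative, fall back
    to a floor big enough for your configured crash dump (assumed 1 GB
    here); maximum = double the minimum, for breathing room."""
    minimum = peak_commit_mb - ram_mb
    if minimum <= 0:
        minimum = dump_floor_mb
    return minimum, 2 * minimum

# With 8 GB of RAM and a 10 GB commit peak:
# recommended_pagefile_mb(10 * 1024, 8 * 1024) -> (2048, 4096)
```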

You can use Process Explorer for the commit values; look under System Information → Memory.

Umm… just to make it clear: peak commit is the most RAM + pagefile (together) used at any point? And the commit limit is basically RAM + pagefile in total? How do I know how much of the pagefile (only) is used at most?
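For the pagefile-only numbers, one option (a sketch, not mentioned in the thread) is the Win32_PageFileUsage WMI class, whose CurrentUsage and PeakUsage fields report the pagefile’s own usage in MB; here it’s queried through the built-in `wmic` tool (my assumption being that wmic’s `pagefile` alias maps to that class):

```python
# Sketch, Windows-only: read pagefile usage via WMIC (Win32_PageFileUsage).
# AllocatedBaseSize, CurrentUsage and PeakUsage are reported in MB.
import subprocess

def parse_wmic_list(text):
    """Turn `wmic ... /format:list` output ("Key=Value" lines) into a dict."""
    info = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            info[key.strip()] = value.strip()
    return info

def pagefile_usage():
    """Query the pagefile's allocated size, current usage and peak usage."""
    out = subprocess.check_output(
        ["wmic", "pagefile", "get",
         "AllocatedBaseSize,CurrentUsage,PeakUsage", "/format:list"],
        text=True)
    return parse_wmic_list(out)
```

If that mapping holds, `PeakUsage` is the number you’re after: the most pagefile actually used since boot, separate from total commit.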

I decided to do it this way: I set the pagefile to a manual size, min 2 GB and max 3 GB. I read that Windows can only dynamically increase the pagefile size (until restart, at which point it’s reset to the minimum size). So if after some heavy use the pagefile remains at 2 GB, then that’s what I’ll leave it at.