Win 10 Memory Management

Patsy Murphy

I found this to be a very interesting article about what appears to be one of the hidden features in Win 10. It seems that over the coming months more and more will be revealed.


Regards Patsym
 
Thank you, bookmarked for future study.

At first glance, I'm thinking it looks like this could make small, low-powered, limited-memory multicore processors like the Z37xx Atoms more useful.

I'd seen them featured in some really low-cost mini laptops, and wondered how well they'd work; maybe better than I thought.
 
Thank you, bookmarked for future study.

At first glance, I'm thinking it looks like this could make small, low-powered, limited-memory multicore processors like the Z37xx Atoms more useful.
I am highly skeptical.

I've looked at using memory compression in some applications and it really isn't viable. Compression and decompression just burns too many cycles. Furthermore, memory is still getting larger/smaller/cheaper but the rate at which CPU speeds are increasing has slowed down quite a bit.
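Just to put a rough number on that cycle cost, here's a quick back-of-envelope Python sketch (my own toy test using zlib; Windows would use its own, presumably faster, algorithm):

```python
# Back-of-envelope test: how long does zlib take to compress and
# decompress a 4 KB "page"?  Illustrative only -- Windows' store
# manager uses its own algorithm, not zlib.
import time
import zlib

PAGE_SIZE = 4096
page = bytes(range(256)) * (PAGE_SIZE // 256)  # semi-compressible dummy data

N = 10_000
start = time.perf_counter()
for _ in range(N):
    compressed = zlib.compress(page, level=1)  # fastest setting
compress_us = (time.perf_counter() - start) / N * 1e6

start = time.perf_counter()
for _ in range(N):
    zlib.decompress(compressed)
decompress_us = (time.perf_counter() - start) / N * 1e6

print(f"compress:   {compress_us:.1f} us/page (ratio {len(compressed) / PAGE_SIZE:.2f})")
print(f"decompress: {decompress_us:.1f} us/page")
```

Whatever the exact numbers on your machine, the point stands: those cycles aren't free, and they're spent on every page that goes in and out of the store.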

I think these folks get that and so they have a slightly different twist. They're trying to compress those memory structures which are rarely used so less data needs to be written to the page file. So maybe this will help on systems that are paging heavily.

But the performance of systems that are paging significantly kinda sucks anyway. This may make them suck a little less. But the real solution is to entirely deallocate that which isn't being used.

Applications programmers need to code more thoughtfully.

End users can do a couple of things:

1. Close applications that aren't being used or add RAM. If you allow Windows to start using the swap file, the performance is going to take a nose dive. So just don't go there.

2. Trim their autostarts to get rid of all those background tasks that vendors like to keep running. e.g. GWX.EXE. That's the "Get Windows 10" advert in your system tray! There's no need to burn CPU cycles compressing the thing and writing it to swap. Just nuke the sucker.
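If you're curious what's lurking in there, here's a small Python sketch that lists one common autostart location, the per-user Run key (Windows-only, and just one of several places programs can autostart from):

```python
# List per-user autostart entries from the registry Run key.
# Note: this is only one autostart location -- services, scheduled
# tasks and the Startup folder are others.  Uses the stdlib winreg
# module, so it's Windows-only.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
    index = 0
    while True:
        try:
            name, command, _type = winreg.EnumValue(key, index)
        except OSError:
            break  # no more values
        print(f"{name}: {command}")
        index += 1
```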
 
Thank you, bookmarked for future study.

At first glance, I'm thinking it looks like this could make small, low-powered, limited-memory multicore processors like the Z37xx Atoms more useful.
I am highly skeptical.

I've looked at using memory compression in some applications and it really isn't viable. Compression and decompression just burns too many cycles. Furthermore, memory is still getting larger/smaller/cheaper but the rate at which CPU speeds are increasing has slowed down quite a bit.

I think these folks get that and so they have a slightly different twist. They're trying to compress those memory structures which are rarely used so less data needs to be written to the page file. So maybe this will help on systems that are paging heavily.
The Atoms I was referring to are hard limited to 2GB, so they're candidates for such paging. Searching now, I see there are 4GB Z37xx Atoms, but I wasn't aware of that; AFAIK they aren't used in the supercheapies I would casually buy on a whim anyway.
But the performance of systems that are paging significantly kinda sucks anyway. This may make them suck a little less. But the real solution is to entirely deallocate that which isn't being used.

Applications programmers need to code more thoughtfully.
Right, but that won't help legacy programs.
End users can do a couple of things:

1. Close applications that aren't being used or add RAM. If you allow Windows to start using the swap file, the performance is going to take a nose dive. So just don't go there.
Can't add RAM to the ones I was looking at. :-(
2. Trim their autostarts to get rid of all those background tasks that vendors like to keep running. e.g. GWX.EXE. That's the "Get Windows 10" advert in your system tray! There's no need to burn CPU cycles compressing the thing and writing it to swap. Just nuke the sucker.
One thing about these particular Atoms that struck me when I was thinking about getting an ultralight, ultracheap laptop that uses them, is that they are quad-core.

I have not read the entire article in detail yet, but when I scanned it one phrase I noticed stated that "it’s simultaneously decompressing the data it just read in parallel using multiple CPUs."

Those extra cores might as well be doing something useful, I figure.
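Just to illustrate the general idea (a toy Python sketch of my own, nothing to do with how Windows actually implements it), decompressing independent chunks across several cores is straightforward:

```python
# Toy illustration of multi-core decompression: compress independent
# chunks, then decompress them in parallel across worker processes.
# Not how Windows' memory manager actually works -- just the concept.
import zlib
from concurrent.futures import ProcessPoolExecutor

def build_chunks(chunk_size=64 * 1024):
    data = b"some moderately compressible payload " * 20_000
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return data, [zlib.compress(c, 1) for c in chunks]

if __name__ == "__main__":
    original, compressed = build_chunks()
    with ProcessPoolExecutor() as pool:  # one worker per core by default
        restored = list(pool.map(zlib.decompress, compressed))
    assert b"".join(restored) == original
    print(f"decompressed {len(compressed)} chunks across multiple cores")
```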

But after tempting myself with a visit to Fry's I've decided that I have too many PCs already, and just can't justify another toy. Yet. :-D

Oh, and I ain't got no steenkin' "Get Windows 10" advert anymore--this IS a Windows 10 machine. ;-)
 
I've read about that as well, and it will be a great feature of Windows 10. I hope Microsoft will have better power management in Windows 10 - that's another thing I don't like about Win 10 so far.
 
I've read about that as well, and it will be a great feature of Windows 10. I hope Microsoft will have better power management in Windows 10 - that's another thing I don't like about Win 10 so far.
I noticed that too. I haven't checked power management since all of the updates that have been installed since the release of the OS.
 
But the performance of systems that are paging significantly kinda sucks anyway. This may make them suck a little less. But the real solution is to entirely deallocate that which isn't being used.
Applications programmers need to code more thoughtfully.
Right, but that won't help legacy programs.
True, but I think the legacy programs are less of a problem compared with some of the bloatware being churned out today :-(
End users can do a couple of things:

1. Close applications that aren't being used or add RAM. If you allow Windows to start using the swap file, the performance is going to take a nose dive. So just don't go there.
Can't add RAM to the ones I was looking at. :-(
Fair point.
I have not read the entire article in detail yet, but when I scanned it one phrase I noticed stated that "it’s simultaneously decompressing the data it just read in parallel using multiple CPUs."
Those extra cores might as well be doing something useful, I figure.
Yes, multi-core decompression is nice but keep in mind it's eating battery life on a mobile device.
But after tempting myself with a visit to Fry's I've decided that I have too many PCs already, and just can't justify another toy. Yet. :-D
Chicken! You weren't shopping with my wife, were you? That's what she always says, but without the "yet" :-)
 
I noticed that too. I haven't checked power management since all of the updates that have been installed since the release of the OS.
If you suffer from power management problems, it's always worth checking for BIOS updates.
 
But the performance of systems that are paging significantly kinda sucks anyway. This may make them suck a little less. But the real solution is to entirely deallocate that which isn't being used.
Applications programmers need to code more thoughtfully.
Right, but that won't help legacy programs.
True, but I think the legacy programs are less of a problem compared with some of the bloatware being churned out today :-(
I've gotten so used to multi-gigabyte DRAM and multi-terabyte HDs. Compared to the computers of the past, I confess I haven't been troubled by bloat in a long time; I know I am showing a Bad Attitude here. ;-)
End users can do a couple of things:

1. Close applications that aren't being used or add RAM. If you allow Windows to start using the swap file, the performance is going to take a nose dive. So just don't go there.
Can't add RAM to the ones I was looking at. :-(
Fair point.
I have not read the entire article in detail yet, but when I scanned it one phrase I noticed stated that "it’s simultaneously decompressing the data it just read in parallel using multiple CPUs."

Those extra cores might as well be doing something useful, I figure.
Yes, multi-core decompression is nice but keep in mind it's eating battery life on a mobile device.
True. These little puppies have a minuscule TDP, but they have small batteries to match.
But after tempting myself with a visit to Fry's I've decided that I have too many PCs already, and just can't justify another toy. Yet. :-D
Chicken! You weren't shopping with my wife, were you? That's what she always says, but without the "yet" :-)
Hah, my lady is like that too. Always asking awkward questions like, "What are you going to do with another computer?".

They just don't understand us. Or maybe they understand us all too well. ;-)
 
I think this has to be a win-win. Remember, what happens here replaces what used to be a simple process (memory gets low, so write some pages to disk and free them) with a two-stage one: memory gets low, so compress some pages into the compression store and free them; memory gets lower still, so write the compressed pages to disk and free them.

So page swapping becomes either in-memory compression (faster than disk I/O) or swapping compressed memory (reducing disk I/O).

So once you have to swap, you do it better. Seems like it should help portable devices with low RAM a lot, especially if compression also uses less power than disk I/O.
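Here's the flow as I read it, sketched in Python (my paraphrase of the article, not actual Windows internals):

```python
# Simplified sketch of the two-stage eviction described in the article.
# My own paraphrase -- real page sizes, policies and data structures in
# Windows are far more involved.
import zlib

ram_pages = {}          # page number -> raw bytes
compression_store = {}  # page number -> compressed bytes (still in RAM)
pagefile = {}           # page number -> compressed bytes ("on disk")

def memory_low(page_no):
    """Stage 1: memory getting low -- compress the page, free the original."""
    compression_store[page_no] = zlib.compress(ram_pages.pop(page_no), 1)

def memory_lower(page_no):
    """Stage 2: memory getting lower -- spill the compressed page to disk.
    The win: the pagefile write is the compressed size, not the full page."""
    pagefile[page_no] = compression_store.pop(page_no)

def page_fault(page_no):
    """Bring a page back from whichever tier holds it."""
    if page_no in compression_store:
        blob = compression_store.pop(page_no)
    else:
        blob = pagefile.pop(page_no)
    ram_pages[page_no] = zlib.decompress(blob)

ram_pages[7] = bytes(4096)      # a resident page
memory_low(7)                   # pressure: compress it in place
memory_lower(7)                 # more pressure: spill compressed copy to disk
page_fault(7)                   # touch it again: read back and decompress
print("page restored:", len(ram_pages[7]), "bytes")
```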
 
I think this has to be a win-win. Remember, what happens here replaces what used to be a simple process (memory gets low, so write some pages to disk and free them) with a two-stage one: memory gets low, so compress some pages into the compression store and free them; memory gets lower still, so write the compressed pages to disk and free them.

So page swapping becomes either in-memory compression (faster than disk I/O) or swapping compressed memory (reducing disk I/O).

So once you have to swap, you do it better. Seems like it should help portable devices with low RAM a lot, especially if compression also uses less power than disk I/O.
It might help in some cases. But I just don't think it's the right answer.

Swapping is basically bad and typically makes performance take a nose dive. It's much, much better to avoid swapping in the first place by tuning the application and/or the processes that are running simultaneously. Of course, there are some exceptions to every rule but I really don't think memory compression is going to move us along very far at all.
 
The feature it looks like Windows is implementing (compressing infrequently used pages in memory) is what Linux calls zram. It's been around in Android, Chrome OS and Linux for a while now.

More about zram here:

https://en.wikipedia.org/wiki/Zram
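For anyone who wants to try it, setting up a zram swap device on Linux is mostly a matter of poking sysfs. A rough Python sketch (assumes root, and that the zram module is already loaded with modprobe zram; normally you'd just use a couple of shell commands or the zramctl tool):

```python
# Rough sketch: configure /dev/zram0 as a compressed swap device.
# Assumes root privileges and that the zram kernel module is loaded
# (modprobe zram).  Normally done with shell commands or zramctl.
import subprocess

def setup_zram(size_bytes=512 * 1024 * 1024):
    # Tell the kernel how big the compressed block device should appear.
    with open("/sys/block/zram0/disksize", "w") as f:
        f.write(str(size_bytes))
    # Format it as swap and enable it at a higher priority than disk swap,
    # so the kernel prefers the compressed RAM device.
    subprocess.run(["mkswap", "/dev/zram0"], check=True)
    subprocess.run(["swapon", "--priority", "10", "/dev/zram0"], check=True)

if __name__ == "__main__":
    setup_zram()
```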

But it's not used very often with Linux distros, even though it was accepted into the kernel in March of last year (and was used for niche purposes for years before that as compcache, prior to being merged into the Linux kernel as zram).

Here's an article from 2013 discussing its use, as Lubuntu decided to enable it by default.

http://linuxvillage.org/en/2013/10/next-lubuntu-provided-with-zram-enabled/

Some of the discussion there is about other parameters you can control in Linux, like the swappiness setting, which would be used in conjunction with RAM compression techniques to conserve memory without swapping out to a physical disk. I notice the Microsoft article goes into how pages are prioritized for swapping, and Linux's swappiness setting controls that kind of thing.

More about swappiness here (how aggressively an OS will swap out pages in memory to swap space).

https://en.wikipedia.org/wiki/Swappiness
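Reading or changing it is trivial; for example, in Python (writing requires root; it's equivalent to sysctl -w vm.swappiness=N):

```python
# Read (and optionally set) vm.swappiness on Linux.
# Writing requires root; equivalent to `sysctl -w vm.swappiness=N`.
SWAPPINESS = "/proc/sys/vm/swappiness"

def get_swappiness() -> int:
    with open(SWAPPINESS) as f:
        return int(f.read())

def set_swappiness(value: int) -> None:
    with open(SWAPPINESS, "w") as f:
        f.write(str(value))

if __name__ == "__main__":
    print("current swappiness:", get_swappiness())
    # set_swappiness(10)  # e.g. prefer keeping pages in RAM over swapping out
```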

You'll see articles about Linux from over a decade ago debating swapping algorithms and which settings are most useful for various system types. I've seen debates from almost that long ago about swapping pages to compressed RAM, too. But zram didn't make it into the mainline Linux kernel until March of last year, although Google was using it before that with Android and Chrome OS.

With lower-resource machines (in terms of the amount of memory installed), you'd need a good balance between the CPU cycles used for compression/decompression and the memory saved by compressing infrequently used pages and keeping them in a compressed store instead of writing them out to disk.

Google uses it with Chrome OS and Android, since those are typically devices without a lot of memory installed, in order to avoid swapping out to what is usually very slow internal storage (the type of non-volatile memory in most of those devices is relatively slow compared to the faster hard drives or SSDs used in higher-end laptops and desktops), so using RAM for swap space can help with performance in some cases.

But, again, you'd need to reach a balance between the amount of CPU used for compression/decompression and the amount of memory saved by compressing pages into memory versus saving those pages to disk.
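As a back-of-envelope illustration of that balance (every number below is a made-up placeholder; you'd have to measure on real hardware):

```python
# Back-of-envelope balance: is compress-then-write cheaper than writing
# a raw page to slow flash?  All constants are made-up placeholders for
# illustration -- measure on real hardware before drawing conclusions.
PAGE = 4096               # bytes per page
COMPRESS_US = 30.0        # assumed time to compress one page on a slow core
RATIO = 0.40              # assumed compressed size as fraction of original
FLASH_WRITE_MBPS = 40.0   # assumed write speed of cheap eMMC storage

def write_us(nbytes):
    """Microseconds to write nbytes at the assumed flash speed."""
    return nbytes / (FLASH_WRITE_MBPS * 1e6) * 1e6

raw = write_us(PAGE)                               # just write the raw page
compressed = COMPRESS_US + write_us(PAGE * RATIO)  # compress, then write less
print(f"write raw page:   {raw:6.1f} us")
print(f"compress + write: {compressed:6.1f} us")
print("compression wins" if compressed < raw else "raw write wins")
```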

For most of us, it wouldn't make any difference, since a typical photography enthusiast is going to have enough memory installed that swap space on disk (virtual memory) isn't needed very often.

But swapping out to compressed pages in memory might help in some cases with lower-end machines (such as the cheaper Windows devices with 1GB or 2GB of RAM and 32GB of slow flash storage, like the new Intel Compute Sticks or similar devices).

That's where it's been most useful from what I can find out about it (low end embedded devices or netbooks with very little memory installed, like you'd find Chrome OS or Android running on).

But, other than Android and Chrome OS (which both use a Linux kernel), I'm not aware of any mainstream Linux distro that enables zram by default, unless Lubuntu (a distro using the LXDE desktop, designed for low-resource machines) still does.

--
JimC
------
 