Virtual memory compression (also referred to as RAM compression and memory compression) is a memory management technique that utilizes data compression to reduce the size or number of paging requests to and from the auxiliary storage. In a virtual memory compression system, pages to be paged out of virtual memory are compressed and stored in physical memory, which is usually random-access memory (RAM), or sent compressed to auxiliary storage such as a hard disk drive (HDD) or solid-state drive (SSD). In both cases the virtual memory range whose contents have been compressed is marked inaccessible, so that attempts to access compressed pages trigger page faults and a reversal of the process (retrieval from auxiliary storage and decompression). The footprint of the data being paged is reduced by the compression process; in the first instance, the freed RAM is returned to the available physical memory pool, while the compressed portion is kept in RAM. In the second instance, the compressed data is sent to auxiliary storage, but the resulting I/O operation is smaller and therefore takes less time.

In some implementations, including zswap, zram and Helix Software Company's Hurricane, the entire process is implemented in software. In other systems, such as IBM's MXT, the compression process occurs in a dedicated processor that handles transfers between a local cache and RAM.

Virtual memory compression is distinct from garbage collection (GC) systems, which remove unused memory blocks and in some cases consolidate used memory regions, reducing fragmentation and improving efficiency. It is also distinct from context switching systems, such as Connectix's RAM Doubler (though that product also performed online compression) and Apple OS 7.1, in which inactive processes are suspended and then compressed as a whole.
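
The core mechanism can be illustrated with a short sketch. The following Python program is a minimal illustration, not a model of any particular kernel; the class and method names are invented for this example. It keeps evicted pages in an in-RAM compressed store and decompresses them again on access, standing in for the page-fault path described above.

    import zlib

    PAGE_SIZE = 4096  # bytes; a typical page size

    class CompressedPageStore:
        """Toy model of an in-RAM compressed swap cache (zram/zswap-like)."""

        def __init__(self):
            self.store = {}  # page number -> compressed bytes

        def page_out(self, page_no, data):
            # Compress the evicted page and keep it in RAM; the original
            # page frame can then be returned to the free pool.
            assert len(data) == PAGE_SIZE
            self.store[page_no] = zlib.compress(data)

        def page_in(self, page_no):
            # Called from the page-fault handler: decompress the page so
            # it can be mapped back into the faulting process.
            return zlib.decompress(self.store.pop(page_no))

    # Usage: evict a highly compressible page, then fault it back in.
    cache = CompressedPageStore()
    page = b"A" * PAGE_SIZE
    cache.page_out(7, page)
    print(len(cache.store[7]), "bytes stored for a 4096-byte page")
    assert cache.page_in(7) == page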


Benefits

By reducing the I/O activity caused by paging requests, virtual memory compression can produce overall performance improvements. The degree of performance improvement depends on a variety of factors, including the availability of any compression co-processors, spare bandwidth on the CPU, speed of the I/O channel, speed of the physical memory, and the compressibility of the physical memory contents. On multi-core, multithreaded CPUs, some benchmarks show performance improvements of over 50%.

In some situations, such as in embedded devices, auxiliary storage is limited or non-existent. In these cases, virtual memory compression can allow a virtual memory system to operate where virtual memory would otherwise have to be disabled. This allows the system to run certain software which would otherwise be unable to operate in an environment with no virtual memory.

Flash memory has certain endurance limitations on the maximum number of erase cycles it can undergo, which can be as low as 100 erase cycles. In systems where flash memory is used as the only auxiliary storage system, implementing virtual memory compression can reduce the total quantity of data being written to auxiliary storage, improving system reliability.


Shortcomings


Low compression ratios

One of the primary issues is the degree to which the contents of physical memory can be compressed under real-world loads. Program code and much of the data held in physical memory is often not highly compressible, since efficient programming techniques and data architectures are designed to automatically eliminate redundancy in data sets. Various studies show typical data compression ratios ranging from 2:1 to 2.5:1 for program data, similar to the compression ratios typically achievable with disk compression.
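
The achievable ratio is easy to estimate empirically. The sketch below is illustrative only (real memory pages vary widely); it compresses sample buffers with zlib and reports the ratio of uncompressed to compressed size, the figure of merit used by the studies above.

    import os
    import zlib

    def compression_ratio(data: bytes) -> float:
        """Uncompressed size divided by compressed size."""
        return len(data) / len(zlib.compress(data))

    # Random bytes model incompressible data; repeated text models highly
    # redundant data. Real program data typically falls in between.
    samples = {
        "random (incompressible)": os.urandom(4096),
        "repetitive text": b"the quick brown fox " * 205,
    }
    for name, data in samples.items():
        print(f"{name}: {compression_ratio(data):.2f}:1")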


Background I/O

In order for virtual memory compression to provide measurable performance improvements, the throughput of the virtual memory system must be improved when compared to the uncompressed equivalent. Thus, the additional amount of processing introduced by the compression must not increase the overall latency. However, in I/O-bound systems or applications with highly compressible data sets, the gains can be substantial.
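
A back-of-the-envelope break-even condition makes this concrete: compressing a page pays off only when the time to compress it plus the time to transfer the smaller result is less than the time to transfer the full page. The figures in the following sketch are illustrative assumptions, not measurements.

    PAGE = 4096            # page size in bytes
    RATIO = 2.0            # assumed compression ratio
    COMPRESS_US = 15.0     # assumed cost to compress one page (microseconds)
    IO_US_PER_KB = 25.0    # assumed auxiliary-storage transfer cost

    def page_out_us(compressed: bool) -> float:
        """Latency of writing one page to auxiliary storage."""
        size_kb = (PAGE / RATIO if compressed else PAGE) / 1024
        return (COMPRESS_US if compressed else 0.0) + size_kb * IO_US_PER_KB

    plain = page_out_us(compressed=False)
    packed = page_out_us(compressed=True)
    print(f"uncompressed: {plain:.0f} us, compressed: {packed:.0f} us")
    # Compression wins only while COMPRESS_US is smaller than the I/O time
    # saved; on a fast SSD (small IO_US_PER_KB) the verdict can flip.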


Increased thrashing

The physical memory used by a compression system reduces the amount of physical memory available to processes that a system runs, which may result in increased paging activity and reduced overall effectiveness of virtual memory compression. This relationship between the paging activity and available physical memory is roughly exponential, meaning that reducing the amount of physical memory available to system processes results in an exponential increase of paging activity. In circumstances where the amount of free physical memory is low and paging is fairly prevalent, any performance gains provided by the compression system (compared to paging directly to and from auxiliary storage) may be offset by an increased page fault rate that leads to thrashing and degraded system performance. In the opposite state, where enough physical memory is available and paging activity is low, compression may not impact performance enough to be noticeable. The middle ground between these two circumstances (low RAM with high paging activity, and plenty of RAM with low paging activity) is where virtual memory compression may be most useful. However, the more compressible the program data is, the more pronounced are the performance improvements, as less physical memory is needed to hold the compressed data. For example, in order to maximize the use of its cache of compressed pages, Helix Software Company's Hurricane 2.0 provides a user-configurable compression rejection threshold. By compressing the first 256 to 512 bytes of a 4 KiB page, this virtual memory compression system determines whether the configured compression level threshold can be achieved for a particular page; if achievable, the rest of the page is compressed and retained in a compressed cache, and otherwise the page is sent to auxiliary storage through the normal paging system. The default setting for this threshold is an 8:1 compression ratio.
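
The test-compression idea can be sketched as follows, assuming a zlib-style compressor (Hurricane used Stac's LZS algorithm; the function names here are invented for illustration).

    import os
    import zlib

    PAGE_SIZE = 4096
    PROBE_SIZE = 512       # bytes of the page used for the trial compression
    REJECT_RATIO = 8.0     # threshold ratio, per Hurricane 2.0's default

    def should_keep_compressed(page: bytes) -> bool:
        """Trial-compress a prefix of the page to predict compressibility."""
        probe = page[:PROBE_SIZE]
        return len(probe) / len(zlib.compress(probe)) >= REJECT_RATIO

    def page_out(page: bytes):
        if should_keep_compressed(page):
            return ("compressed cache", zlib.compress(page))
        return ("auxiliary storage", page)  # normal paging path

    # A page of zeros sails past an 8:1 threshold; random data does not.
    print(page_out(b"\x00" * PAGE_SIZE)[0])    # compressed cache
    print(page_out(os.urandom(PAGE_SIZE))[0])  # auxiliary storage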


Price/performance issues

In hardware implementations, the technology also relies on price differentials between the various components of the system, for example, the difference between the cost of RAM and the cost of a processor dedicated to compression. The relative price/performance differences of the various components tend to vary over time. For example, the addition of a compression co-processor may have minimal impact on the cost of a CPU.


Prioritization

In a typical virtual memory implementation, paging happens on a least recently used basis, potentially causing the compression algorithm to use up CPU cycles dealing with the lowest-priority data. Furthermore, program code is usually read-only, and is therefore never paged out. Instead, code is simply discarded, and re-loaded from the program's auxiliary storage file if needed. In this case the bar for compression is higher, since the I/O cycle it is attempting to eliminate is much shorter, particularly on flash memory devices.
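
The resulting eviction policy can be summarized with a short sketch; this is illustrative Python that mirrors no specific kernel.

    from dataclasses import dataclass

    @dataclass
    class Page:
        number: int
        is_clean_code: bool  # read-only, file-backed executable code?
        data: bytes

    def evict(page: Page) -> str:
        """Decide what to do with the least-recently-used page."""
        if page.is_clean_code:
            # Code backed by the executable file can simply be dropped; a
            # later fault re-reads it, so compressing it buys little.
            return "discard"
        # Dirty data must be preserved somewhere, so compression competes
        # here with a plain write to the swap device.
        return "compress or swap"

    print(evict(Page(1, True, b"")))    # discard
    print(evict(Page(2, False, b"x")))  # compress or swap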


Compression using quantization

Accelerator designers exploit quantization to reduce the bitwidth of values and reduce the cost of data movement. However, any value that does not fit in the reduced bitwidth results in an overflow (such values are referred to as outliers). Therefore, accelerators use quantization for applications that are tolerant of overflows. In most applications the rate of outliers is low, and values are often within a narrow range, providing the opportunity to exploit quantization in general-purpose processors. However, a software implementation of quantization in general-purpose processors has three problems. First, the programmer has to manually implement conversions and the additional instructions that quantize and dequantize values, imposing programming effort and performance overhead. Second, to cover outliers, the bitwidth of the quantized values often becomes greater than or equal to that of the original values. Third, the programmer has to use standard bitwidths; otherwise, extracting non-standard bitwidths (i.e., 1–7, 9–15, and 17–31 bits) for representing narrow integers exacerbates the overhead of software-based quantization. Hardware support for quantization in the memory hierarchy of general-purpose processors can solve these problems. Such hardware support allows values to be represented by a small and flexible number of bits while storing outliers in their original format in a separate space, preventing any overflow. It minimizes metadata and the overhead of locating quantized values by means of a software-hardware interaction that transfers quantization parameters and data layout to hardware. As a result, transparent hardware-based quantization has three advantages over cache compression techniques: (i) less metadata, (ii) a higher compression ratio for floating-point values and cache blocks with multiple data types, and (iii) lower overhead for locating compressed blocks.
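
The outlier-aware scheme can be modeled in a few lines of software. The code below uses invented names and is only a sketch of the idea (real proposals implement it in the cache hierarchy): values are packed into a narrow bitwidth, and any value that does not fit is diverted to a side table in its original format.

    BITS = 4                 # narrow bitwidth for the common case
    LO, HI = -(1 << (BITS - 1)), (1 << (BITS - 1)) - 1  # -8 .. 7

    def quantize_block(values):
        """Pack values into BITS-bit slots; overflowing values become outliers."""
        packed, outliers = [], {}
        for i, v in enumerate(values):
            if LO <= v <= HI:
                packed.append(v)
            else:
                packed.append(0)   # placeholder in the narrow array
                outliers[i] = v    # full-width side storage, no overflow
        return packed, outliers

    def dequantize_block(packed, outliers):
        return [outliers.get(i, v) for i, v in enumerate(packed)]

    block = [1, -3, 200, 5, -128, 7]  # two values overflow 4 bits
    packed, outliers = quantize_block(block)
    assert dequantize_block(packed, outliers) == block
    # Storage for this block: six 4-bit slots plus two full-width
    # outliers, versus six full-width values uncompressed.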


History

Virtual memory compression has gone in and out of favor as a technology. The price and speed of RAM and external storage have plummeted due to Moore's law and improved RAM interfaces such as DDR3, thus reducing the need for virtual memory compression, while multi-core processors, server farms, and mobile technology, together with the advent of flash-based systems, make virtual memory compression more attractive.


Origins

Acorn Computers' Unix variant, RISC iX, was supplied as the primary operating system for its R140 workstation released in 1989. RISC iX provided support for demand paging of compressed executable files. However, the principal motivation for providing compressed executables was to accommodate a complete Unix system on a hard disk of relatively modest size. Compressed data was not paged out to disk under this scheme.

Paul R. Wilson proposed compressed caching of virtual memory pages in 1990, in a paper circulated at the ACM OOPSLA/ECOOP '90 Workshop on Garbage Collection ("Some Issues and Strategies in Heap Management and Memory Hierarchies"), which appeared in ACM SIGPLAN Notices in January 1991.

Helix Software Company pioneered virtual memory compression in 1992, filing a patent application for the process in October of that year. In 1994 and 1995, Helix refined the process using test-compression and secondary memory caches on video cards and other devices. However, Helix did not release a product incorporating virtual memory compression until July 1996 and the release of Hurricane 2.0, which used the Stac Electronics Lempel–Ziv–Stac compression algorithm and also used off-screen video RAM as a compression buffer to gain performance benefits.

In 1995, RAM cost nearly $50 per megabyte, and Microsoft's Windows 95 listed a minimum requirement of 4 MB of RAM. Due to the high RAM requirement, several programs were released which claimed to use compression technology to gain "memory". Most notorious was the SoftRAM program from Syncronys Softcorp. SoftRAM was revealed to be "placebo software" which did not include any compression technology at all. Other products, including Hurricane and MagnaRAM, included virtual memory compression, but implemented only run-length encoding, with poor results, giving the technology a negative reputation.

In its 8 April 1997 issue, PC Magazine published a comprehensive test of the performance-enhancement claims of several software virtual memory compression tools. In its testing, PC Magazine found a minimal (5% overall) performance improvement from the use of Hurricane, and none at all from any of the other packages. However, the tests were run on Intel Pentium systems which had a single core and were single-threaded, so compression directly impacted all system activity.

In 1996, IBM began experimenting with compression, and in 2000 IBM announced its Memory eXpansion Technology (MXT). MXT was a stand-alone chip which acted as a CPU cache between the CPU and the memory controller. MXT had an integrated compression engine which compressed all data heading to and from physical memory. Subsequent testing of the technology by Intel showed 5–20% overall system performance improvement, similar to the results obtained by PC Magazine with Hurricane.


Recent developments

* In early 2008, a Linux project named zram (originally called compcache) was released; in a 2013 update, it was incorporated into ChromeOS and Android 4.4.
* In 2010, IBM released Active Memory Expansion (AME) for AIX 6.1, which implements virtual memory compression.
* In 2012, some versions of the POWER7+ chip included AME hardware accelerators, using the 842 compression algorithm, for virtual memory compression on AIX. More recent POWER processors continue to support the feature.
* In December 2012, the zswap project was announced; it was merged into the Linux kernel mainline in September 2013.
* In June 2013, Apple announced that it would include virtual memory compression in OS X Mavericks, using the Wilson–Kaplan WKdm algorithm.
* A 10 August 2015 "Windows Insider Preview" update for Windows 10 (build 10525) added support for RAM compression.


See also

* Disk compression
* Swap partitions on SSDs

