My Blog

Secure allocation of memory

Introduction

With the use of increasingly sophisticated encryption systems, an attacker wishing to gain access to sensitive data is forced to look elsewhere for information. One avenue of attack is the recovery of supposedly erased data from magnetic media (e.g. hard disks) or random-access memory (RAM).

The Problem

The easiest way to solve the problem of erasing sensitive information from magnetic media is to ensure that it never reaches the media in the first place. Although this is not practical for general data, it is often worthwhile to take steps to keep particularly important information such as encryption keys from ever being written to disk. Keys typically end up on disk when the memory containing them is paged out by the operating system; they can then be recovered at a later date, either manually or using software which is aware of the in-memory data format and can locate it automatically in the swap file (for example, there exists software which will search the Windows swap file for keys from certain DOS encryption programs). An even worse situation occurs when the data is paged over a network, allowing anyone with a packet sniffer or similar tool on the same subnet to observe the information.

Contrary to conventional wisdom, volatile semiconductor memory does not entirely lose its contents when power is removed. Both static (SRAM) and dynamic (DRAM) memory retain some trace of the data that was stored in them while power was still applied. SRAM is particularly susceptible to this problem: storing the same data in it over a long period of time alters the preferred power-up state towards the state which was stored when power was removed. Older SRAM chips could often remember the previously held state for several days. In fact, it is possible to manufacture SRAMs which always have a certain state on power-up, but which can be overwritten later on, in effect a kind of writeable ROM. DRAM can also remember the last stored state, though in a slightly different way.

Locking Memory in an Operating System

To solve these problems, the memory pages containing the information can be locked to prevent them from being paged to disk or transmitted over a network. This approach is taken by at least one encryption library, which allocates all keying information inside protected memory blocks visible to the user only as opaque handles, and then optionally locks the memory (provided the underlying OS allows it) to prevent it from being paged [1]. The exact details of locking pages in memory depend on the operating system being used. Many Unix systems now support the mlock()/munlock() calls, or have some alternative mechanism hidden among the mmap()-related functions, which can be used to lock pages in memory. Unfortunately, these operations require superuser privileges because of their potential impact on system performance if large ranges of memory are locked. Other systems such as Microsoft Windows NT allow user processes to lock memory with the VirtualLock()/VirtualUnlock() calls, but limit the total number of regions which can be locked. It also seems that Windows NT only guarantees that the memory will not be paged while a thread in the process is running; when all threads are preempted, the memory is still a candidate for paging [2]. Under Windows 95 it gets even worse, as VirtualLock() is simply defined as return TRUE;. Windows 2000 defines a new function, AllocateUserPhysicalPages(), which, as far as anyone knows [2], actually does allocate non-pageable memory; it too limits the total number of pages available.
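As a concrete illustration, the Unix approach might be sketched as follows, assuming POSIX mlock()/munlock() are available. The alloc_locked/free_locked names are made up for this example and do not come from any particular library; note that mlock() commonly fails without privileges, and some systems require the address to be page-aligned:

```c
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Allocate a buffer for key material and try to lock it in RAM.
   Returns the buffer even if locking fails; *locked reports whether
   mlock() succeeded (it often fails for unprivileged processes, or
   on systems that insist on page-aligned addresses). */
unsigned char *alloc_locked(size_t len, int *locked)
{
    unsigned char *buf = malloc(len);
    if (buf == NULL)
        return NULL;
    *locked = (mlock(buf, len) == 0);
    return buf;
}

/* Wipe, unlock, and free the buffer. Writing through a volatile
   pointer makes it harder for the compiler to optimize the wipe
   away as a "dead store". */
void free_locked(unsigned char *buf, size_t len, int locked)
{
    volatile unsigned char *p = buf;
    size_t i;
    for (i = 0; i < len; i++)
        p[i] = 0;
    if (locked)
        munlock(buf, len);
    free(buf);
}
```

A caller that considers a failed lock fatal can simply check *locked and wipe and free the buffer immediately.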

In practice, neither of these locking strategies seems to cause any real problems. Precise measurements are very difficult to perform, since the results vary wildly depending on the amount of physical memory present, the paging strategy, the operating system, and the system load; in practice, however, locking a dozen 1K regions of memory (which might be typical of a system on which a number of users are running programs such as mail-encryption software) produced no performance degradation observable by system-monitoring tools. On machines such as network servers handling large numbers of secure connections (for example, an HTTP server using SSL), the effect of locking large numbers of pages may be more noticeable.

The most practical solution to the problem of DRAM data retention is therefore to constantly flip the bits in memory to ensure that a memory cell never holds a charge long enough for it to be “remembered”. While not practical for general use, it is possible to do this for small amounts of very sensitive data such as encryption keys.

This is particularly advisable where keys are stored in the same memory location for long periods of time and control access to large amounts of information, such as keys used for transparent encryption of files on disk drives. The bit-flipping also has the convenient side-effect of keeping the page containing the encryption keys at the top of the queue maintained by the system’s paging mechanism, greatly reducing the chances of it being paged to disk at some point.
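The technique can be sketched as follows; the guarded_key type and function names are hypothetical, chosen only to illustrate the idea. The stored bits are complemented on each flip, and an inverted flag records the current polarity so the real key can always be recovered:

```c
#include <stddef.h>

/* Hypothetical key container: the key bits plus a flag recording
   whether they are currently stored complemented. */
typedef struct {
    unsigned char data[32];
    int inverted;   /* nonzero when the stored bits are complemented */
} guarded_key;

/* Invert every bit of the stored key. Called periodically (e.g. from
   a timer or background thread) so that no DRAM cell holds the same
   value long enough to be "remembered". */
void guarded_flip(guarded_key *k)
{
    size_t i;
    for (i = 0; i < sizeof k->data; i++)
        k->data[i] ^= 0xFF;
    k->inverted = !k->inverted;
}

/* Read one byte of the real key regardless of the current polarity. */
unsigned char guarded_get(const guarded_key *k, size_t i)
{
    return k->inverted ? (unsigned char)(k->data[i] ^ 0xFF)
                       : k->data[i];
}
```

In a real system guarded_flip() would be invoked on a regular schedule, so each memory cell spends roughly equal time holding each polarity.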

The following sections briefly describe what some products do to lock memory.

Pretty Good Privacy

The Pretty Good Privacy (PGP) software for Windows 9x and NT implements its own device driver for locking memory. It does this by stealing pages from the kernel memory pool, which are certain to stay in memory. The solution is very practical, but a bit troublesome because of the device driver. The source code is available at [3].

GnuPG

The Open Source alternative to PGP, GnuPG, which currently only runs under Unix, uses mlock(), munlock() and mmap(). The source code is available at [4].

Cryptlib

Cryptlib uses a combination of the VirtualLock() family under Windows NT and the mlock() family under Unix. Furthermore, it always has a thread running in the background which periodically flips the bits in memory. The source code for Cryptlib is available at [1].
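A background flipping thread of the kind described might look roughly like this under POSIX threads. This is a sketch, not Cryptlib's actual code; the flipper/key_lock names are invented for the example, and the iteration count is bounded only so the demonstration terminates, where a real implementation would loop for the lifetime of the key:

```c
#include <pthread.h>
#include <stddef.h>
#include <string.h>
#include <unistd.h>

/* A shared key buffer, and a mutex so that readers never observe a
   half-flipped buffer. */
static unsigned char key[32];
static pthread_mutex_t key_lock = PTHREAD_MUTEX_INITIALIZER;

/* Worker thread: complement every bit of the key at a fixed interval. */
static void *flipper(void *arg)
{
    int iterations = *(int *)arg;   /* bounded so the demo can end */
    size_t i;

    while (iterations-- > 0) {
        pthread_mutex_lock(&key_lock);
        for (i = 0; i < sizeof key; i++)
            key[i] ^= 0xFF;
        pthread_mutex_unlock(&key_lock);
        usleep(1000);   /* a real interval would be much longer */
    }
    return NULL;
}
```

Readers would take the same mutex and, as in the earlier sketch, track the current polarity so the true key value can be recovered at any time.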

References

  1. cryptlib Security Toolkit (http://www.cs.auckland.ac.nz/~pgut001/cryptlib).
  2. Peter Gutmann, Personal communication, 2002.
  3. Pretty Good Privacy (http://www.pgpi.com).
  4. GnuPG (http://www.gnupg.org).