How to Emulate Persistent Memory on an Intel® Architecture Server


This tutorial provides a method for setting up persistent memory (PMEM) emulation using regular dynamic random access memory (DRAM) on an Intel® processor running a Linux* kernel version 4.3 or higher. The article covers the hardware configuration and walks you through setting up the software. After following the steps in this article, you'll be ready to try the PMEM programming examples in the NVM Library.

Why use Emulation?

If you’re a software developer who wants to get started early developing software or preparing your applications to have PMEM awareness, you can use this emulation for development before PMEM hardware is widely available.

What is persistent memory?

Traditional applications organize their data between two tiers: memory and storage. Emerging PMEM technologies introduce a third tier. This tier can be accessed like volatile memory, using processor load and store instructions, but it retains its contents across power loss like storage. Note that because this emulation uses DRAM, data will not be retained across power cycles.

Hardware and System Requirements

Emulation of persistent memory is based on DRAM memory that will be seen by the operating system (OS) as a Persistent Memory region. Because it is a DRAM-based emulation it is very fast, but will lose all data upon powering down the machine. The following hardware was used for this tutorial:

CPU and Chipset

Intel® Xeon® processor E5-2699 v4, 2.2 GHz

  • # of cores per chip: 22 (only used single core)
  • # of sockets: 2
  • Chipset: Intel® C610 chipset, QS (B-1 step)
  • System bus: 9.6 GT/s Intel® QuickPath Interconnect


Platform: Intel® Server System R2000WT product family (code-named Wildcat Pass)

  • BIOS: GRRFSDP1.86B.0271.R00.1510301446 ME:V03.01.03.0018.0 BMC:1.33.8932
  • DIMM slots: 24
  • Power supply: 1x1100W


Memory

  • Size: 256 GB (16 × 16 GB) DDR4 2133P
  • Brand/model: Micron* MTA36ASF2G72PZ2GATESIG

Storage

  • Brand and model: 1 TB Western Digital* (WD1002FAEX)

Operating system

CentOS* 7.2 with kernel 4.5.3

Table 1 - System configuration used for the PMEM emulation.

Linux* Kernel

Linux Kernel 4.5.3 was used during development of this tutorial. Support for persistent memory devices and emulation has been present in the kernel since version 4.0; however, a kernel newer than 4.2 is recommended for easier configuration. The emulation should work with any Linux distribution able to run an official kernel. To configure the proper driver installation, run make nconfig and enable the driver. Figures 1 through 5 show the correct settings for NVDIMM Support in the Kernel Configuration menu.

$ make nconfig

        -> Device Drivers -> NVDIMM Support ->

                    <M>PMEM; <M>BLK; <*>BTT
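After saving the configuration, you can optionally confirm that the choices were recorded in the kernel tree's .config file. The check below is illustrative: it greps a simulated config fragment, since option values (=m vs. =y) depend on your selections and kernel version; in a real kernel tree you would grep .config directly.

```shell
# Simulated .config fragment showing what the nconfig choices above produce.
cat > /tmp/sample.config <<'EOF'
CONFIG_BLK_DEV_PMEM=m
CONFIG_ND_BLK=m
CONFIG_BTT=y
EOF

# In a real kernel source tree, run this against .config instead:
grep -E 'CONFIG_(BLK_DEV_PMEM|ND_BLK|BTT)=' /tmp/sample.config
```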

Figure 1: Set up the device drivers.

Figure 2: Set up the NVDIMM device.

Figure 3: Set up the file system for Direct Access support.

Figure 4: Set up for Direct Access (DAX) support.

Figure 5: NVDIMM Support property.

The kernel will offer these regions to the PMEM driver so they can be used for persistent storage. Figures 6 and 7 show the correct settings for the processor type and features in the Kernel Configuration menu.

$ make nconfig

        -> Processor type and features

                      <*>Support non-standard NVDIMMs and ADR protected memory


Figure 6: Set up the processor to support NVDIMMs.

Figure 7: Enable non-standard NVDIMMs and ADR protected memory.

Build your Kernel Now

Now you are ready to build your kernel using the instructions below.

$ make -jX

        Where X is the number of cores on the machine

Compiling the new kernel in parallel yields a significant performance benefit. In our experiments, scaling from one thread to multiple threads made the compilation up to 95 percent faster than a single-threaded build, so the whole kernel setup goes much faster. Figures 8 and 9 show the CPU utilization and the performance gain chart for compiling with different numbers of threads.
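Rather than hard-coding X, you can derive the job count from the machine itself, for example with nproc from GNU coreutils:

```shell
# Report the number of available cores; use one make job per core.
nproc

# Typical invocation from the kernel source tree (shown as a comment
# because it requires the kernel sources to be present):
# make -j"$(nproc)"
```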

Figure 8: Compiling the kernel sources.

Figure 9: Performance gain for compiling the source in parallel.

Install the Kernel

# make modules_install install

Figure 10: Installing the kernel.

Next, reserve a memory region by adding a memmap kernel command line parameter so that the OS sees it as a persistent memory location. The parameter has the form memmap=nn[KMG]!ss[KMG]: it reserves a region of size nn starting at offset ss, where the [KMG] suffix stands for kilo, mega, or giga.

For example, memmap=4G!12G reserves 4 GB of memory between the 12th and 16th GB. Configuration is done within GRUB and varies between Linux distributions. Here are two examples of a GRUB configuration.
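As a sanity check on that arithmetic, here is a small illustrative helper (not part of any kernel tooling) that prints the range a memmap entry reserves:

```shell
# Given the size and start (both in GB) of a memmap=<size>!<start> entry,
# print the GB boundaries of the reserved persistent memory region.
memmap_range() {
    size_gb=$1
    start_gb=$2
    end_gb=$((start_gb + size_gb))
    echo "memmap=${size_gb}G!${start_gb}G reserves GB ${start_gb}-${end_gb}"
}

memmap_range 4 12   # the example from the text
```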

GRUB Configuration Under CentOS 7.0

# vi /etc/default/grub
On BIOS-based machines:
# grub2-mkconfig -o /boot/grub2/grub.cfg
On UEFI-based machines:
# grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
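The memmap entry is appended to the GRUB_CMDLINE_LINUX line in /etc/default/grub. The fragment below is illustrative: the size and offset come from the earlier example, and the other parameters shown are typical CentOS defaults that you should keep as your system already has them.

```shell
# Fragment of /etc/default/grub (illustrative values):
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet memmap=4G!12G"
```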

Figure 11 shows the added PMEM statement in the GRUB file. Figure 12 shows the command that generates the GRUB configuration.

Figure 11: Define PMEM regions in the /etc/default/grub file.

Figure 12: Generate the boot configuration file based on the grub template.

After the machine reboots, you should be able to see the emulated devices as /dev/pmem0…pmem3. Reserving memory regions for persistent memory emulation can result in split memory ranges defining the persistent (type 12) regions, as shown in Figure 13. A general recommendation is to either use memory above the 4 GB boundary (memmap=nnG!4G) or to check the e820 memory map up front and fit the reservation within it. If you don't see the device, verify the memmap setting in the grub file as shown in Figure 11, then analyze the dmesg(1) output as shown in Figure 13. You should be able to see the reserved ranges in the dmesg output.

Figure 13: Persistent memory regions are highlighted as (type 12).

You'll see that there can be multiple non-overlapping regions reserved as persistent memory. Specifying multiple memmap="...!..." entries on the kernel command line results in multiple devices exposed by the kernel, visible as /dev/pmem0, /dev/pmem1, /dev/pmem2, and so on.
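For instance, a command line with four non-overlapping entries (sizes and offsets are illustrative) would yield the four devices mentioned above:

```shell
# Fragment of /etc/default/grub: four reservations produce four devices,
# /dev/pmem0 through /dev/pmem3, after reboot (illustrative values).
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet memmap=2G!12G memmap=2G!14G memmap=2G!16G memmap=2G!18G"
```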

DAX - Direct Access Extensions

The DAX (direct access) extensions to the filesystem create a PMEM-aware environment. Some distros, such as Fedora* 24 and later, already have DAX/PMEM built in as a default, and have NVML available as well. One quick way to check whether the kernel has DAX and PMEM built in is to grep the kernel's config file, which is usually provided by the distro under /boot. Use the command below:

# egrep '(DAX|PMEM)' /boot/config-`uname -r`

The result should be something like this (the exact set of options varies by kernel version and configuration):

CONFIG_X86_PMEM_LEGACY=y
CONFIG_BLK_DEV_PMEM=m
CONFIG_FS_DAX=y
To create and mount a filesystem with DAX (available today for ext4 and xfs):

# mkdir /mnt/pmemdir
# mkfs.ext4 /dev/pmem3
# mount -o dax /dev/pmem3 /mnt/pmemdir
Now files can be created on the freshly mounted partition, and given as an input to NVML pools.
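For example, a file that could later back an NVML pool might be preallocated like this (the path matches the mount point above; the file name and size are arbitrary, and the commands require root and the mount from the previous step):

```shell
# Preallocate a 1 GB file on the DAX-mounted filesystem; such a file can
# then be handed to NVML pool-creation APIs.
truncate -s 1G /mnt/pmemdir/mypool
ls -lh /mnt/pmemdir/mypool
```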

Figure 14: Persistent memory blocks.

Figure 15: Making a file system.

It is additionally worth mentioning that you can emulate persistent memory with a ramdisk (e.g., /dev/shm), or force PMEM-like behavior by setting the environment variable PMEM_IS_PMEM_FORCE=1. This eliminates the performance hit caused by msync(2).
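A minimal sketch of the tmpfs approach (the pool path and size are arbitrary, and the application name is hypothetical):

```shell
# Back a pool with a tmpfs file and tell NVML to treat it as real PMEM,
# skipping msync(2)-based flushing. Data does not survive a reboot.
export PMEM_IS_PMEM_FORCE=1
truncate -s 64M /dev/shm/mypool

# ./my_pmem_app /dev/shm/mypool   # hypothetical application using the pool
```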


By now, you know how to set up an environment where you can build a PMEM application without actual PMEM hardware. With the additional cores on an Intel® architecture server, you can quickly build a new kernel with PMEM support for your emulation environment.


Persistent Memory Programming


Thai Le is a software engineer focusing on cloud computing and performance computing analysis at Intel Corporation.

For more complete information about compiler optimizations, see our Optimization Notice.