Quick Start Guide: Provision Intel® Optane™ DC Persistent Memory Modules

Introduction

Intel® Optane™ DC persistent memory is a disruptive technology that creates a new tier between memory and storage. Intel® Optane™ DC memory modules support two modes—Memory mode for volatile use cases, and App Direct mode, which provides byte-addressable persistent storage. More information about each operating mode can be found in the article Intel Optane DC Persistent Memory Operating Modes Explained.

In this guide, we provide instructions for configuring and managing Intel Optane DC memory modules with the ipmctl utility, and for exposing persistent memory namespaces to applications with operating system-specific namespace management tools.

ipmctl is an open source utility created and maintained by Intel specifically for configuring and managing Intel Optane DC persistent memory modules. It is available for both Linux* and Windows* from the ipmctl project page on GitHub*.

There are vendor-agnostic tools available in both Linux and Windows to manage non-volatile dual in-line memory modules (NVDIMMs). Non-volatile Device Control (ndctl) is an open-source utility used for managing namespaces in Linux, and Microsoft* has created PowerShell* cmdlets for persistent memory namespace management.

Persistent Memory Provisioning Concepts

This section describes the basic terminology and concepts that apply to configuration and management of NVDIMMs.

Region

A region is a group of one or more NVDIMMs, also known as an interleaved set. Regions can be created as either n-way interleaved or non-interleaved sets. Interleaving is a technique that makes multiple persistent memory devices appear as a single logical virtual address space. It spreads adjacent virtual addresses within a page across multiple memory devices. This hardware-level parallelism increases the available bandwidth from the devices. Regions on Intel Optane DC persistent memory modules can only be created or changed using ipmctl.

Label

Intel Optane DC memory modules support labels, which allow regions to be further divided into namespaces. A label contains metadata stored on an NVDIMM. A label is analogous to a partition table, while a namespace is analogous to a partition.

Namespace

A namespace defines a contiguously addressed range of non-volatile memory conceptually similar to a hard disk partition, SCSI logical unit (LUN), or an NVM Express* namespace. It is the unit of persistent memory storage that appears in /dev as a device used for input/output (I/O). Intel recommends using the ndctl utility for creating namespaces for the Linux operating system.

Figure 1. Persistent memory provisioning options. Regions are created within interleaved or non-interleaved sets (interleaving implies n-way mapping). Interleaving creates a contiguous physical address space and provides striped reads and writes for better throughput. Similar to an SSD, the raw capacity of a region is partitioned into one or more logical devices called namespaces.

DAX

Direct Access (DAX) is a mechanism that allows applications to access persistent media directly from the CPU (through loads and stores), bypassing the traditional I/O stack (page cache and block layer). File systems that have been extended for DAX-enabled persistent memory include Linux ext4 and XFS, and Windows NTFS. These file systems bypass the I/O subsystem and use persistent memory directly as byte-addressable load/store memory, the fastest and shortest path to data stored in persistent memory. In addition to eliminating I/O operations, this path enables small data writes to execute faster than writes to traditional block storage devices.

Which Mode Should I Use?

Memory mode is volatile and is all about providing a large main memory at a cost lower than dynamic random-access memory (DRAM) without any changes to the application, which usually results in cost savings. There can be a performance advantage if you are able to fit your working set in memory instead of paging it to disk.

App Direct mode is for persistence, where you displace traditional non-volatile storage, such as solid-state drives (SSDs) and NVMe devices, with considerably faster persistent memory. This is usually a big performance improvement, but not always. If an application tends to page data in from disk and then use that data for quite some time while the page sits in DRAM, it gets DRAM speeds most of the time. With a DAX file system, the page cache is bypassed, providing direct access to the underlying persistent memory, so you get the performance of the persistent memory media itself.

Therefore, it is difficult to say whether the volatile Memory mode or App Direct mode is faster. Performance is best when the application developer determines which data structures belong in each storage tier: DRAM, persistent memory, or non-volatile storage.

Profiling tools are available to characterize application workloads. This helps to evaluate which mode best fits the application. Applications should be tested in both modes to fully determine what optimizations are necessary to achieve the maximum performance and benefits of persistent memory. For more information on the available persistent memory analysis tools, visit the Tools page on the Intel® Developer Zone PMEM site.

Introduction to Provisioning Utilities

ipmctl: Intel® Optane™ DC Persistent Memory Configuration Utility

The ipmctl utility is used to configure and manage Intel Optane DC memory modules. It is available in several distributions of Linux and in Microsoft Windows Server 2019 or later.

At a high level, ipmctl supports the following functionality:

  • Discovery
  • Configuration
  • Firmware management
  • Security functionality management
  • Health monitoring
  • Performance tracking
  • Debugging and troubleshooting

Installing ipmctl

Refer to the instructions in the ipmctl README.md on the ipmctl GitHub project page.
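
On Linux distributions that package it, ipmctl can typically be installed directly from the package manager. A minimal sketch, assuming the package is named ipmctl in your distribution's repositories:

# Fedora/RHEL/CentOS
sudo dnf install ipmctl
# Debian/Ubuntu
sudo apt install ipmctl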

Getting Help

For a comprehensive list of commands and their descriptions, run man ipmctl or ipmctl help.

ndctl: Namespace Management in Linux*

ndctl is a vendor-neutral Linux-only utility used to manage namespaces in Linux. It is designed to work with NVDIMMs from different vendors, including Intel Optane DC persistent memory modules. We will use ndctl for creating and managing namespaces in Linux.

ndctl supports the following functionality:

  • Show persistent memory module information
  • Manage namespaces and configuration labels
  • Monitor health
  • Manage security - passphrases and secure erase
  • Error injection/testing

Installing ndctl

The ndctl utility is available in most Linux package repositories, or you can download and compile the source code, which is available on the ndctl GitHub project page. Additional information can be found in the NDCTL User Guide.
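
As a sketch, on common package-based distributions (package names can vary):

# Fedora/RHEL/CentOS
sudo dnf install ndctl
# Debian/Ubuntu
sudo apt install ndctl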

Getting Help

For a comprehensive list of commands and their descriptions, run man ndctl or ndctl help.

PowerShell Cmdlets: Namespace Management in Windows*

Microsoft has introduced PowerShell* cmdlets for persistent memory namespace management. Following is a list of the most commonly used commands:

Get-PmemDisk

Returns one or more logical persistent memory disks. The returned object has information about size, atomicity type, health status, and underlying physical devices.

Get-PmemPhysicalDevice

Returns one or more physical persistent memory devices. The returned object has information about size(s), RFIC, device location, and health/operational status.

New-PmemDisk

Creates a new disk out of a given unused region. Writes out the labels to create the namespace, then rebuilds the SCM stacks to expose the new logical device.

Remove-PmemDisk

Removes the given persistent memory disk.

Get-PmemUnusedRegion

Returns aggregate persistent memory regions available for provisioning a logical device.

Initialize-PmemPhysicalDevice

Writes zeros to the label storage area, writes new label index blocks, and then rebuilds the SCM stacks to reflect the changes.

Getting Help

Refer to the Interleaved sets section of the Understand and deploy persistent memory documentation for the Windows persistent memory PowerShell cmdlets.

Comparison of ipmctl, ndctl, and PowerShell Cmdlets Features

 Feature              | ipmctl | ndctl          | PowerShell* Cmdlets
=======================================================================
 Vendor               | Intel  | Vendor-neutral | Vendor-neutral
 Linux*               | Yes    | Yes            | No
 Windows*             | Yes    | No             | Yes
 Manage goals/regions | Yes    | No             | No
 Manage namespaces    | No     | Yes            | Yes
 Health/SMART         | Yes    | Yes            | Yes
 Performance          | Yes    | No             | No
 Security             | Yes    | Yes            | No

System Requirements

The following hardware and software components are required:

Hardware

2nd generation Intel® Xeon® Scalable processor-based platforms populated with Intel Optane DC memory modules and DRAM are widely available. These platforms generally come in four major configurations, designated by the number of memory slots on each memory controller’s three channels.

Figure 2. Platform configurations

Installing Intel Optane DC Persistent Memory

To configure a 2nd generation Intel Xeon Scalable platform with Intel Optane DC persistent memory modules and get the best performance, watch the short installation video.

Software

This section provides an overview of the software ecosystem enabled for Intel Optane DC persistent memory.

Operating System Support for Intel Optane DC Persistent Memory

Several distributions of Linux include support for both App Direct and Memory mode. See Operating System Support for Intel Optane DC Persistent Memory.

Note: Starting with Red Hat* Enterprise Linux 7.3, persistent memory support is enabled as a technology preview for both the ext4 and XFS file systems. Refer to Red Hat documentation for more information.

Linux Kernel Support

The Linux NVDIMM/persistent memory drivers were enabled by default starting with Linux kernel 4.2. We currently recommend kernel version 4.19 or later.

Custom Kernel

If you compile or build custom kernels, verify that support for persistent memory is enabled. The following parameters need to be enabled in the kernel configuration file, usually found at /boot/config-$(uname -r); a quick check is shown after the list:

  • CONFIG_ZONE_DEVICE=y
  • CONFIG_TRANSPARENT_HUGEPAGE=y
  • CONFIG_ACPI_NFIT=m
  • CONFIG_LIBNVDIMM=m
  • CONFIG_BLK_DEV_PMEM=m
  • CONFIG_ND_BLK=m
  • CONFIG_BTT=y
  • CONFIG_NVDIMM_PFN=y
  • CONFIG_NVDIMM_DAX=y
  • CONFIG_DEV_DAX_PMEM=m
  • CONFIG_FS_DAX=y
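
A quick way to verify these options on a running system is to search the kernel configuration file; a minimal sketch, assuming the configuration file is present under /boot:

grep -E "CONFIG_(ZONE_DEVICE|TRANSPARENT_HUGEPAGE|ACPI_NFIT|LIBNVDIMM|BLK_DEV_PMEM|ND_BLK|BTT|NVDIMM_PFN|NVDIMM_DAX|DEV_DAX_PMEM|FS_DAX)=" /boot/config-$(uname -r)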

Provisioning Persistent Memory Modules through the BIOS

Intel Optane DC memory modules can be provisioned using options provided in the BIOS. Please refer to support provided by your system vendor.

Provisioning Persistent Memory Modules through UEFI

ipmctl can be launched from a Unified Extensible Firmware Interface (UEFI) shell. The UEFI and operating system versions of ipmctl support the same features. The full list of commands can be seen by running ipmctl help from the shell.

Note: ipmctl can be used for namespace creation and management at the UEFI level.

Provisioning Persistent Memory Modules through the Operating System

Using ipmctl

ipmctl can be launched from a Unified Extensible Firmware Interface (UEFI) shell or from a terminal window in an operating system, and the same features are supported in both environments. The full list of commands can be seen by running ipmctl help from the command line. You need root privilege to run ipmctl in the operating system.

All the commands described in this section are demonstrated on a two-socket system with 6 tebibytes (TiB) of persistent memory and 384 gibibytes (GiB) of DRAM installed.

Discovery

Before configuring Intel Optane DC memory modules, you can discover the current module status through a set of show commands. The output below is from a fully populated two-socket system, commonly referred to as a 2-2-2 configuration, with a total of twelve 32 GiB DDR4 memory modules (DIMMs) and twelve 512 GB Intel Optane DC persistent memory modules.

Show Topology

The show -topology command displays both the Intel Optane DC persistent memory modules and the DDR4 DIMMs discovered in the system by enumerating the SMBIOS Type 17 tables. For more information, refer to the ACPI Specification v6.0 or later for NFIT table information.

ipmctl show -topology

 DimmID | MemoryType                  | Capacity  | PhysicalID| DeviceLocator
==============================================================================
 0x0001 | Logical Non-Volatile Device | 502.6 GiB | 0x0028    | CPU1_DIMM_A2
 0x0011 | Logical Non-Volatile Device | 502.6 GiB | 0x002c    | CPU1_DIMM_B2
 0x0021 | Logical Non-Volatile Device | 502.6 GiB | 0x0030    | CPU1_DIMM_C2
 0x0101 | Logical Non-Volatile Device | 502.6 GiB | 0x0036    | CPU1_DIMM_D2
 0x0111 | Logical Non-Volatile Device | 502.6 GiB | 0x003a    | CPU1_DIMM_E2
 0x0121 | Logical Non-Volatile Device | 502.6 GiB | 0x003e    | CPU1_DIMM_F2
 0x1001 | Logical Non-Volatile Device | 502.6 GiB | 0x0044    | CPU2_DIMM_A2
 0x1011 | Logical Non-Volatile Device | 502.6 GiB | 0x0048    | CPU2_DIMM_B2
 0x1021 | Logical Non-Volatile Device | 502.6 GiB | 0x004c    | CPU2_DIMM_C2
 0x1101 | Logical Non-Volatile Device | 502.6 GiB | 0x0052    | CPU2_DIMM_D2
 0x1111 | Logical Non-Volatile Device | 502.6 GiB | 0x0056    | CPU2_DIMM_E2
 0x1121 | Logical Non-Volatile Device | 502.6 GiB | 0x005a    | CPU2_DIMM_F2
 N/A    | DDR4                        | 32.0 GiB  | 0x0026    | CPU1_DIMM_A1
 N/A    | DDR4                        | 32.0 GiB  | 0x002a    | CPU1_DIMM_B1
 N/A    | DDR4                        | 32.0 GiB  | 0x002e    | CPU1_DIMM_C1
 N/A    | DDR4                        | 32.0 GiB  | 0x0034    | CPU1_DIMM_D1
 N/A    | DDR4                        | 32.0 GiB  | 0x0038    | CPU1_DIMM_E1
 N/A    | DDR4                        | 32.0 GiB  | 0x003c    | CPU1_DIMM_F1
 N/A    | DDR4                        | 32.0 GiB  | 0x0042    | CPU2_DIMM_A1
 N/A    | DDR4                        | 32.0 GiB  | 0x0046    | CPU2_DIMM_B1
 N/A    | DDR4                        | 32.0 GiB  | 0x004a    | CPU2_DIMM_C1
 N/A    | DDR4                        | 32.0 GiB  | 0x0050    | CPU2_DIMM_D1
 N/A    | DDR4                        | 32.0 GiB  | 0x0054    | CPU2_DIMM_E1
 N/A    | DDR4                        | 32.0 GiB  | 0x0058    | CPU2_DIMM_F1

Show DIMM Information

The show -dimm command displays the persistent memory modules discovered in the system and verifies that software can communicate with them. Among other information, this command outputs each DIMM ID, capacity, health state, and firmware version.

ipmctl show -dimm

 DimmID | Capacity  | HealthState | ActionRequired | LockState | FWVersion
==============================================================================
 0x0001 | 502.6 GiB | Healthy     | 0              | Disabled  | 01.02.00.5367
 0x0011 | 502.6 GiB | Healthy     | 0              | Disabled  | 01.02.00.5367
 0x0021 | 502.6 GiB | Healthy     | 0              | Disabled  | 01.02.00.5367
 0x0101 | 502.6 GiB | Healthy     | 0              | Disabled  | 01.02.00.5367
 0x0111 | 502.6 GiB | Healthy     | 0              | Disabled  | 01.02.00.5367
 0x0121 | 502.6 GiB | Healthy     | 0              | Disabled  | 01.02.00.5367
 0x1001 | 502.6 GiB | Healthy     | 0              | Disabled  | 01.02.00.5367
 0x1011 | 502.6 GiB | Healthy     | 0              | Disabled  | 01.02.00.5367
 0x1021 | 502.6 GiB | Healthy     | 0              | Disabled  | 01.02.00.5367
 0x1101 | 502.6 GiB | Healthy     | 0              | Disabled  | 01.02.00.5367
 0x1111 | 502.6 GiB | Healthy     | 0              | Disabled  | 01.02.00.5367
 0x1121 | 502.6 GiB | Healthy     | 0              | Disabled  | 01.02.00.5367

Show Provisioned Capacity

To check the capacity provisioned for use in different operating modes, use the show -memoryresources command. The MemoryCapacity and AppDirectCapacity values can be used to determine if the system was configured in Memory mode, App Direct mode, or mixed mode. The example below shows that the persistent memory modules are currently configured in App Direct mode.

ipmctl show -memoryresources
Capacity=6031.2 GiB
MemoryCapacity=0.0 GiB
AppDirectCapacity=6024.0 GiB
UnconfiguredCapacity=0.0 GiB
InaccessibleCapacity=7.2 GiB
ReservedCapacity=0.0 GiB

Provisioning With ipmctl

Provisioning Intel Optane DC persistent memory is a two-step process. First, a goal is created and stored on the persistent memory modules; then, on the next reboot, the BIOS reads the goal and configures the modules accordingly. A goal configures Intel Optane DC memory modules in Memory mode, App Direct mode, or both.

Create a Configuration Goal

Memory Mode

Any percentage of Intel Optane DC persistent memory module capacity across sockets can be provisioned in Memory mode, as described below. In this example, 100% of the available persistent memory capacity is provisioned in Memory mode. You can use the -f option to overwrite any existing goal, which is a destructive operation (a sketch follows the output below).

ipmctl create -goal memorymode=100

The following configuration will be applied:
 SocketID | DimmID | MemorySize | AppDirect1Size | AppDirect2Size
==================================================================
 0x0000   | 0x0001 | 502.0 GiB  | 0.0 GiB        | 0.0 GiB
 0x0000   | 0x0011 | 502.0 GiB  | 0.0 GiB        | 0.0 GiB
 0x0000   | 0x0021 | 502.0 GiB  | 0.0 GiB        | 0.0 GiB
 0x0000   | 0x0101 | 502.0 GiB  | 0.0 GiB        | 0.0 GiB
 0x0000   | 0x0111 | 502.0 GiB  | 0.0 GiB        | 0.0 GiB
 0x0000   | 0x0121 | 502.0 GiB  | 0.0 GiB        | 0.0 GiB
 0x0001   | 0x1001 | 502.0 GiB  | 0.0 GiB        | 0.0 GiB
 0x0001   | 0x1011 | 502.0 GiB  | 0.0 GiB        | 0.0 GiB
 0x0001   | 0x1021 | 502.0 GiB  | 0.0 GiB        | 0.0 GiB
 0x0001   | 0x1101 | 502.0 GiB  | 0.0 GiB        | 0.0 GiB
 0x0001   | 0x1111 | 502.0 GiB  | 0.0 GiB        | 0.0 GiB
 0x0001   | 0x1121 | 502.0 GiB  | 0.0 GiB        | 0.0 GiB
Do you want to continue? [y/n]
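
If a goal already exists on the modules, the same request can be forced with the -f flag. A sketch of the destructive overwrite:

ipmctl create -f -goal MemoryMode=100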

App Direct Mode

Intel Optane DC memory modules can be provisioned in App Direct mode with interleaving enabled or disabled. As described in the Persistent Memory Provisioning Concepts section above, interleaving increases the throughput of reads and writes to persistent memory.

Configure App Direct Mode with Interleaved Modules

The command below sets a goal to create a persistent memory region that is interleaved across all the modules on each CPU socket. Creating an interleaved set that spans multiple CPU sockets is not allowed.

The default create goal command creates an interleaved region configured for App Direct mode. The following two commands are equivalent:

ipmctl create -goal
ipmctl create -goal PersistentMemoryType=AppDirect
The following configuration will be applied:
 SocketID | DimmID | MemorySize | AppDirect1Size | AppDirect2Size
==================================================================
 0x0000   | 0x0001 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0000   | 0x0011 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0000   | 0x0021 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0000   | 0x0101 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0000   | 0x0111 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0000   | 0x0121 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1001 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1011 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1021 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1101 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1111 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1121 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
Do you want to continue? [y/n]

Configure App Direct Mode without the Interleaved Option

To create a goal for a persistent memory region that is not interleaved, specify the PersistentMemoryType to be AppDirectNotInterleaved.

ipmctl create -goal persistentmemorytype=appdirectnotinterleaved
The following configuration will be applied:
 SocketID | DimmID | MemorySize | AppDirect1Size | AppDirect2Size
==================================================================
 0x0000   | 0x0011 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0000   | 0x0021 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0000   | 0x0001 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0000   | 0x0111 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0000   | 0x0121 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0000   | 0x0101 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1011 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1021 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1001 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1111 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1121 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1101 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
Do you want to continue? [y/n] y
Created following region configuration goal
 SocketID | DimmID | MemorySize | AppDirect1Size | AppDirect2Size
==================================================================
 0x0000   | 0x0011 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0000   | 0x0021 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0000   | 0x0001 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0000   | 0x0111 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0000   | 0x0121 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0000   | 0x0101 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1011 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1021 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1001 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1111 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1121 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
 0x0001   | 0x1101 | 0.0 GiB    | 502.0 GiB      | 0.0 GiB
A reboot is required to process new memory allocation goals.

Mixed Mode

Intel Optane DC persistent memory can be configured such that part of the capacity is assigned to Memory mode and the rest to App Direct mode. When part or all of the persistent memory module capacity is set to Memory mode, all DRAM capacity is hidden from applications and is used as a last-level cache.

The following example assigns 60% of the available persistent memory capacity to Memory mode. The rest is configured as an interleaved set for App Direct mode.

ipmctl create -goal MemoryMode=60
The following configuration will be applied:
 SocketID | DimmID | MemorySize | AppDirect1Size | AppDirect2Size
==================================================================
 0x0000   | 0x0011 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0000   | 0x0021 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0000   | 0x0001 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0000   | 0x0111 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0000   | 0x0121 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0000   | 0x0101 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0001   | 0x1011 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0001   | 0x1021 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0001   | 0x1001 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0001   | 0x1111 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0001   | 0x1121 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0001   | 0x1101 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
Do you want to continue? [y/n] y
Created following region configuration goal
 SocketID | DimmID | MemorySize | AppDirect1Size | AppDirect2Size
==================================================================
 0x0000   | 0x0011 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0000   | 0x0021 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0000   | 0x0001 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0000   | 0x0111 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0000   | 0x0121 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0000   | 0x0101 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0001   | 0x1011 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0001   | 0x1021 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0001   | 0x1001 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0001   | 0x1111 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0001   | 0x1121 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
 0x0001   | 0x1101 | 310.0 GiB  | 192.0 GiB      | 0.0 GiB
A reboot is required to process new memory allocation goals.

Change Configuration Goal

When creating a goal, ipmctl checks whether the NVDIMMs are already configured. If they are, it prints a message stating that the existing goal must be deleted before a new one can be created.

ipmctl create -goal MemoryMode=100
Create region configuration goal failed: Error (115) - A requested DIMM already has a configured goal. Delete this existing goal before creating a new one

ipmctl delete -goal
Delete allocation goal from DIMM 0x0001: Success
Delete allocation goal from DIMM 0x0011: Success
Delete allocation goal from DIMM 0x0021: Success
Delete allocation goal from DIMM 0x0101: Success
Delete allocation goal from DIMM 0x0111: Success
Delete allocation goal from DIMM 0x0121: Success
Delete allocation goal from DIMM 0x1001: Success
Delete allocation goal from DIMM 0x1011: Success
Delete allocation goal from DIMM 0x1021: Success
Delete allocation goal from DIMM 0x1101: Success
Delete allocation goal from DIMM 0x1111: Success
Delete allocation goal from DIMM 0x1121: Success

Dump Current Goal

ipmctl dump -destination testfile -system -config

Successfully dumped system configuration to file: testfile

Create a Goal from a Configuration File

Goals can be provisioned using a configuration file with the load -source <file> -goal command. To save the current configuration to a file, use the dump -destination <file> -system -config command. This allows the same configuration to be applied to multiple systems or restored to the same system.

# ipmctl dump -destination testfile -system -config
# ipmctl load -source myPath/testfile -goal

Show Current Goal

To see the current goal, if there is one, use the show -goal command.

ipmctl show -goal

There are no goal configs defined in the system.
Please use 'show -region' to display currently valid persistent memory regions.

Confirm Mode Change

So far, we have seen how a goal can be set for different modes. After rebooting, run the following commands to verify that the mode was applied correctly. The output below shows that the goal was set to configure the entire capacity in Memory mode.

ipmctl show -memoryresources
Capacity=6031.2 GiB
MemoryCapacity=6024.0 GiB
AppDirectCapacity=0.0 GiB
UnconfiguredCapacity=0.0 GiB
InaccessibleCapacity=7.2 GiB
ReservedCapacity=0.0 GiB

If the mode is changed from Memory mode to App Direct mode, a single region per socket is created upon reboot. If the mode is changed from App Direct mode to Memory mode, no regions are created. Use the following command to see the regions created:

ipmctl show -region

 SocketID | ISetID             | PersistentMemoryType | Capacity   | FreeCapacity | HealthState
================================================================================================
 0x0000   | 0xa0927f48a8112ccc | AppDirect            | 3012.0 GiB | 3012.0 GiB   | Healthy
 0x0001   | 0xf6a27f48de212ccc | AppDirect            | 3012.0 GiB | 3012.0 GiB   | Healthy

Delete Configuration

The current configuration can be deleted by first disabling and destroying the namespaces and then disabling the active regions, as the sketch below shows.
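
A minimal sketch of the sequence (the namespace and region names are assumptions, and all data on them is destroyed):

ndctl disable-namespace namespace0.0
ndctl destroy-namespace namespace0.0
ndctl disable-region region0
ipmctl delete -goal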

Namespace Management in Linux

In this section, we show how to use ndctl commands to manage namespaces. For more information, refer to Creating Namespaces in the ndctl User Guide.

List Active Namespaces

See below for an example of how to list active namespaces.

ndctl list -N
[
  {
    "dev":"namespace0.0",
    "mode":"fsdax",
    "map":"mem",
    "size":4294967296,
    "sector_size":512,
    "blockdev":"pmem0"
  },
  {
    "dev":"namespace1.0",
    "mode":"fsdax",
    "map":"dev",
    "size":3183575302144,
    "uuid":"c8a9751c-0d5b-47ab-b45f-70fe54f7ce43",
    "sector_size":512,
    "align":2097152,
    "blockdev":"pmem1"
  }
]

Create Namespace

The ndctl create-namespace command:

  • Creates a new namespace in fsdax mode by default. The size of the namespace is the region size minus the size of the metadata. Other supported modes include sector, devdax, and raw.
  • Creates a new /dev/pmem{X[.Y]} device
    • The X value represents the region in which the namespace is created, starting at zero (0). If more than one namespace is created within a region, the naming convention pmem{X.Y} is used, where Y is a sequentially increasing integer for each new namespace.
    • If multiple namespaces are created within a region, the first namespace keeps its original pmem{X} name (for example, pmem0); it is not re-enumerated to pmem{X.0} like subsequent namespaces.
# ndctl create-namespace
{
 "dev":"namespace0.0",
 "mode":"fsdax",
 "map":"dev",
 "size":"123.04 GiB (132.12 GB)",
 "uuid":"a60e6a4f-274d-4cd5-8d39-c8dd263345e2",
 "raw_uuid":"8d28948f-5434-4c8d-8efa-581eacad265a",
 "sector_size":512,
 "blockdev":"pmem0",
 "numa_node":0
}

On Linux, run the following command to verify that the device was created. Then create a DAX-enabled file system, such as XFS or ext4, on the new persistent memory device; a sketch follows the listing.

# ls -l /dev/pmem*
brw-rw----. 1 root disk 259, 0 Jul 9 10:42 /dev/pmem0
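
A minimal sketch of creating and mounting a DAX-capable file system on the new device (the device name and mount point are assumptions):

# mkfs.ext4 /dev/pmem0
# mkdir -p /mnt/pmem0
# mount -o dax /dev/pmem0 /mnt/pmem0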

Create a 50 GiB 'fsdax' Mode Namespace

The size value provided to ndctl includes space required for metadata, so the resulting available capacity for a file system will be smaller, as the following example shows:

# ndctl create-namespace -m fsdax -s 50G
{
 "dev":"namespace0.0",
 "mode":"fsdax",
 "map":"dev",
 "size":"49.22 GiB (52.85 GB)",
 "uuid":"638d67f3-4c18-4b2e-a6f2-a044bdc82253",
 "raw_uuid":"6c6dfdad-e12f-418d-9fb0-3a75032dd9de",
 "sector_size":512,
 "blockdev":"pmem0",
 "numa_node":0
}

Note: If the remaining capacity needs to be assigned to another namespace using the same or a different mode, it can be assigned to the new namespace without specifying the -s <size> option. For example, executing ndctl create-namespace after the command above creates a new namespace in the same region using all remaining capacity (~74 GiB usable):

# ndctl create-namespace
{
 "dev":"namespace0.1",
 "mode":"fsdax",
 "map":"dev",
 "size":"73.83 GiB (79.27 GB)",
 "uuid":"d7f9473e-97aa-48cf-aefa-128797c83e88",
 "raw_uuid":"6c116e57-19dd-43d8-ae03-039f1588a23a",
 "sector_size":512,
 "blockdev":"pmem0.1",
 "numa_node":0
}

Create a Namespace with a Friendly Name (Tag)

Tagging the namespace with a friendly name or description using the -n, --name option can be useful to show what a namespace is used for. This is particularly useful when provisioning space for multiple end users or applications, or to identify a namespace with production or test/dev environments. Creating a namespace with a tag/name/description can be done as follows:

# ndctl create-namespace -n "PROD Web DB 1"
{
 "dev":"namespace0.0",
 "mode":"fsdax",
 "map":"dev",
 "size":"123.04 GiB (132.12 GB)",
 "uuid":"6a0abb59-5279-4731-921a-0099101e17f2",
 "raw_uuid":"03b40e23-56e1-407a-b5d1-f1ec929645c1",
 "sector_size":512,
 "blockdev":"pmem0",
 "name":"PROD Web DB 1",
 "numa_node":0
}

List Namespaces in a Region

ndctl list -RuN -r region1
{
  "regions":[
    {
      "dev":"region1",
      "size":"3012.00 GiB (3234.11 GB)",
      "available_size":0,
      "max_available_extent":0,
      "type":"pmem",
      "iset_id":"0xa0927f48a8112ccc",
      "persistence_domain":"memory_controller",
      "namespaces":[
        {
          "dev":"namespace1.0",
          "mode":"fsdax",
          "map":"dev",
          "size":"2964.94 GiB (3183.58 GB)",
          "uuid":"c8a9751c-0d5b-47ab-b45f-70fe54f7ce43",
          "sector_size":512,
          "align":2097152,
          "blockdev":"pmem1"
        }
      ]
    }
  ]
}

Disable Namespace

Warning: Disabling a namespace while it is mounted or in use results in undefined behavior for the application(s) using it. Always stop any applications and unmount any file systems on the namespace before disabling it.
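
For example, assuming a file system on the namespace's block device is mounted at /mnt/pmem1 (the path is an assumption), unmount it before disabling:

# umount /mnt/pmem1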

ndctl disable-namespace namespace1.0
disabled 1 namespace

Destroy Namespace

The disable-namespace and destroy-namespace commands are called to clear existing namespaces before creating a new goal.

ndctl destroy-namespace namespace0.0
destroyed 1 namespace

Disable Region

$ ndctl disable-region region0
disabled 1 region

Namespace Management in Windows Server

This section is an introduction to the Windows PowerShell cmdlets for namespace management. Refer to Understand and deploy persistent memory (https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/deploy-pmem) for recent updates on provisioning.

PowerShell cmdlets can be used to list App Direct regions, and create, delete and manage namespaces.

Device Manager View

Device Manager provides a list of persistent memory devices available in the system.

Figure 3. This Device Manager display example shows four Intel® Optane™ DC memory modules and a persistent memory disk.

Listing Regions

To list the regions, run the following command from PowerShell. If the output of this command is empty, no regions exist on the persistent memory modules. Recall that ipmctl is used to create regions.

The PowerShell command examples are run on a server in a 2-2-1 configuration.

PS C:\Windows\system32> Get-PmemUnusedRegion

RegionId TotalSizeInBytes DeviceId
-------- ---------------- --------
       1    2156073582592 {1, 101, 11, 111}

After creating regions using ipmctl and then rebooting, the newly created interleave sets are represented as persistent memory unused regions. These are the regions that are not assigned to a logical persistent memory device on the system.

Creating Namespaces or Persistent Memory Disk

PS> New-PmemDisk
Creating new persistent memory disk. This may take a few moments

Windows allows a namespace to be created on a region and the capacity to be visible to the Windows operating system and used by applications. Just as an SSD can be carved into partitions, persistent memory namespaces represent the unit of storage that appears as a device that can be used for I/O.

When the new persistent memory disk is created, go to the Computer Management > Storage > Disk Management console to view the new disk. You must initialize the disk using MBR or GPT partitioning before the logical disk manager can access it.

[Figure: Disk Management listing showing the new persistent memory disk]

Create Storage over an App Direct Namespace

For use as a traditional block storage device with power-fail write atomicity, set AtomicityType to BlockTranslationTable (BTT).

PS> New-PmemDisk -RegionId <Id> -AtomicityType BlockTranslationTable

List Persistent Memory Disk (Namespace)

PS C:\Windows\system32> Get-PmemDisk

DiskNumber Size    HealthStatus AtomicityType CanBeRemoved PhysicalDeviceIds UnsafeShutdownCount
---------- ----    ------------ ------------- ------------ ----------------- -------------------
1          2008 GB Healthy      None          True         {1, 101, 11, 111} 2

PS C:\Windows\system32> Get-PhysicalDisk | where MediaType -Eq SCM

Number FriendlyName           SerialNumber                             MediaType CanPool OperationalStatus HealthStatus
------ ------------           ------------                             --------- ------- ----------------- ------------
1      Persistent memory disk 03018089022aedcb17c3fe499155db38a408de05 SCM       True    OK                Healthy

Get DIMM Information

PS > Get-PmemPhysicalDevice

Delete Namespace

PS C:\Windows\system32> Get-PmemDisk | Remove-PmemDisk
This will remove the persistent memory disk(s) from the system and will result in data loss.
Remove the persistent memory disk(s)?
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"): A

Expose Persistent Memory Regions to an Application

We have shown how ipmctl is used to create regions, and ndctl to create one or more namespaces within the regions to expose persistent memory devices to applications.

Though ipmctl supports creating namespaces, Intel recommends using vendor-agnostic tools like ndctl on Linux, and PowerShell persistent memory cmdlets to create and manage namespaces on Windows.

On Linux, when a namespace is created, a persistent memory device is created: /dev/pmem{n}, where n starts with 0. On Windows, an entry for the newly created persistent memory disk appears in the Device Manager and Storage > Disk Management windows.

FSDAX Namespace

File system DAX (FSDAX) mode is the default namespace mode when creating namespaces using ndctl. If you specify ndctl create-namespace with no options, it creates a device (/dev/pmemX[.Y]) that supports the DAX capabilities of Linux file systems. Both XFS and EXT4 support the DAX feature. DAX removes the page cache from the I/O path and allows mmap(2) to establish direct mappings to persistent memory media.

As shown in the figure below, when persistent memory modules are configured in FSDAX mode, applications have the ability to: 1) memory map a file on a DAX-mounted file system and then perform direct load and store access to the persistent memory region, or 2) continue to use standard file APIs, which requires no changes to the application.

[Figure: Persistent memory in FSDAX mode]

To mount a DAX-enabled file system, use the -o dax mount option. The following steps show the process of configuring the persistent memory modules for App Direct using ipmctl, then creating a single namespace on one of the regions, and finally creating and mounting the DAX-enabled file system. Run the commands shown in the example below as root.

1.	Create a goal to create a region in App Direct mode on reboot
        a.	ipmctl create -goal PersistentMemoryType=AppDirect
2.	Show the goal
        a.	ipmctl show -goal
3.	Reboot
4.	Create a namespace using ndctl. The default mode is fsdax.
        a.	ndctl create-namespace -r region2
        {
        "dev":"namespace2.0",
        "mode":"fsdax",
        "map":"dev",
        "size":"2964.94 GiB (3183.58 GB)",
        "uuid":"41d252c8-55b3-4683-989a-d131e8136870",
        "sector_size":512,
        "align":2097152,
        "blockdev":"pmem2"
        }
5.	Check the pmem device that was created
        a.	ls -l /dev/pmem*
6.	Create a file system on the pmem device
        a.	mkfs.ext4 /dev/pmem<x>
7.	Create a directory to use as the mount point
        a.	mkdir /mnt/mypmem
8.	Mount the file system with the -o dax option
        a.	mount -o dax /dev/pmem<x> /mnt/mypmem
9.	Create a file on the file system and memory map it from the application (a sketch follows)
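
As a sketch of the last step (the file name and size are assumptions), create a file and confirm the file system is mounted with the dax option; an application can then map the file with mmap(2):

# dd if=/dev/zero of=/mnt/mypmem/myfile bs=1M count=100
# mount | grep mypmem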

Sector Mode Namespace (Storage over App Direct)

For file systems or applications that use a traditional storage API, create a namespace with sector mode.

Persistent memory-based storage can perform I/O at byte, or more accurately, cache line granularity. However, exposing such storage as a traditional block device does not guarantee atomicity and requires the block drivers for persistent memory to provide this support.

Traditional SSDs typically provide hardware protection against torn sectors by completing in-flight block writes. With persistent memory, by contrast, if a write is in progress when a power failure occurs, the block may contain a mix of old and new data. Existing applications may not be prepared to handle such a scenario, since they are unlikely to protect against torn sectors or metadata.

The Block Translation Table (BTT) provides atomic sector update semantics for persistent memory devices so that applications that rely on sector writes not being torn can continue to do so.

As shown in the diagram below, the BTT manifests itself as a stacked block device and reserves a portion of the underlying storage for its metadata. At the heart of it is an indirection table that re-maps all of the blocks on the volume. It can be thought of as an extremely simple file system that only provides atomic sector updates. The DAX feature is not supported in this mode.

[Figure: BTT stacked block device over an App Direct region]

In the example shown below, the default sector size used is 4 KB. Sector atomicity is provided by kernel drivers.

Commands to Create Persistent Storage over App Direct

1.	Create a goal using the ipmctl command
        a.	ipmctl create -goal PersistentMemoryType=AppDirect
2.	Show the goal
        a.	ipmctl show -goal
3.	Reboot
4.	Create a namespace with sector mode using ndctl
        a.	# ndctl create-namespace -m sector
        {
        "dev":"namespace0.0",
        "mode":"sector",
        "size":"124.88 GiB (134.09 GB)",
        "uuid":"08f4e273-bbdd-4d1d-85e8-cf1f847e1df7",
        "raw_uuid":"f3bd03ad-4225-4919-a370-bb6293180e4d",
        "sector_size":4096,
        "blockdev":"pmem0s",
        "numa_node":0
        }
5.	Run # ls -l /dev/pmem* to view the block device
        a.	brw-rw----. 1 root disk 259, 0 Jul 9 10:47 /dev/pmem0s
6.	Use an unmodified application (using the standard file API)

Configure Intel Optane DC Persistent Memory in Virtualized Environments

Documentation for virtualized and container environments is provided by the software vendors of the virtualization technologies that support persistent memory.

Emulate Persistent Memory

If you are a software developer who wants to get started developing software or modifying an application to have persistent memory awareness, but you do not have access to a system with persistent memory hardware, it is possible to emulate it. Emulation is not intended for performance or benchmarking as it cannot simulate the performance characteristics. Several approaches used to emulate persistent memory are described below.

Using System DRAM

The article How to Emulate Persistent Memory Using Dynamic Random-access Memory (DRAM) describes the requirements and steps for using system DRAM to emulate persistent memory. This approach does not provide persistence: when the system is rebooted, all configuration and data are lost.
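
The approach described in that article reserves a range of DRAM through the memmap kernel parameter; a minimal sketch, where the size and offset are assumptions that must fit within your system's memory map:

# Added to the kernel command line (for example, GRUB_CMDLINE_LINUX in /etc/default/grub):
memmap=16G!16G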

Using KVM/QEMU

KVM and QEMU can be used to create virtual machines that run their own dedicated operating system with emulated persistent memory regions. This approach allows the guest virtual machines (VMs) to use ndctl as if they had access to real persistent memory. The host can provide persistent storage behavior by backing the emulated persistent memory devices with a file on a non-volatile storage device such as an SSD. This approach most closely matches real persistent memory hardware.

On a host system with persistent memory, it is possible to directly assign a persistent memory device or file residing on a DAX mounted file system to the guest, and for that guest to mount the device with DAX support.
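
A minimal sketch of attaching an emulated NVDIMM to a QEMU guest; the paths, sizes, and remaining VM options are assumptions:

qemu-system-x86_64 -machine pc,nvdimm=on \
    -m 4G,slots=2,maxmem=36G \
    -smp 2 -enable-kvm \
    -drive file=/path/to/guest.img,format=qcow2 \
    -object memory-backend-file,id=mem1,share=on,mem-path=/path/to/pmem-backing.img,size=32G \
    -device nvdimm,id=nvdimm1,memdev=mem1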

Refer to Using QEMU Virtualization for more information.

Frequently Asked Questions on Provisioning

This section describes some of the issues and messages that are commonly seen when configuring Intel Optane DC persistent memory.

When I configure in App Direct mode, why do I see part of the capacity as “Inaccessible” and “Unconfigured”?

Intel Optane DC persistent memory reserves a certain amount of capacity for metadata. For first-generation Intel Optane DC persistent memory, it is approximately 0.5% per DIMM.

Unconfigured capacity is the capacity that is not mapped into the system’s physical address space. It could also be the capacity that is inaccessible because the processor does not support the entire capacity on the platform.

ipmctl show -memoryresources

Capacity=6031.2 GiB
MemoryCapacity=0.0 GiB
AppDirectCapacity=6024.0 GiB
UnconfiguredCapacity=0.0 GiB
InaccessibleCapacity=7.2 GiB
ReservedCapacity=0.0 GiB

How do I find the firmware version on the modules?

ipmctl show -dimm

 DimmID | Capacity  | HealthState | ActionRequired | LockState | FWVersion
==============================================================================
 0x0001 | 502.6 GiB | Healthy     | 0              | Disabled  | 01.02.00.5367

My processor does not allow me to configure the entire Intel Optane DC memory module capacity on the platform. How do I find the maximum supported capacity of the processor?

Run the following command to display the maximum amount of memory that can be mapped into the system physical address space for each processor, based on the processor SKU:

ipmctl show -d TotalMappedMemory -socket
---SocketID=0x0000---
   TotalMappedMemory=3203.0 GiB
---SocketID=0x0001---
   TotalMappedMemory=3202.5 GiB

How do I unlock the encrypted modules?

The ability to enable security, unlock a persistent memory region, and change the passphrase is available starting with the Linux 5.0 kernel driver, and is supported by ndctl.

Does Intel Optane DC persistent memory support SMART errors?

Yes.

How do I find a module’s lifetime?

The remaining persistent memory module life as a percentage value of factory-expected life span can be found by running the following command:

ipmctl show -o text -sensor percentageremaining -dimm
 DimmID | Type                | CurrentValue | CurrentState
============================================================
 0x0001 | PercentageRemaining | 100%         | Normal
 0x0011 | PercentageRemaining | 100%         | Normal
 0x0021 | PercentageRemaining | 100%         | Normal

When configured in Mixed mode, is the DRAM capacity seen by the operating system and applications?

No. When part of the capacity is configured in Memory mode, the DRAM is hidden from the operating system and acts as a cache for the Memory mode capacity, which becomes the system's volatile memory. The rest of the capacity is exposed in App Direct mode.

I am adding two new DIMMs and would like to erase all the existing configurations and safely reconfigure the hardware, but I was unable to delete regions.

To create a new configuration, you do not need to delete regions; delete the existing goal and create a new one as shown below. The create -goal -f command overwrites the existing goal, and on reboot you will see the new configuration.

1.	ndctl destroy-namespace namespace0.0
or
ndctl destroy-namespace -f all
2.	ipmctl delete -goal
Delete memory allocation goal from DIMM 0x0001: Success
Delete memory allocation goal from DIMM 0x0011: Success
Delete memory allocation goal from DIMM 0x0101: Success
Delete memory allocation goal from DIMM 0x0111: Success
3.	ipmctl create -f -goal persistentmemorytype=appdirect

Why am I seeing a certain amount of capacity shown as reserved?

Reserved capacity is total system Intel Optane DC persistent memory capacity that is reserved. This capacity is the persistent memory partition capacity (rounded down for alignment) less any App Direct capacity. Reserved capacity typically results from a Memory Allocation Goal request that specified the Reserved property. This capacity is not mapped to system physical address (SPA) space.

For a comprehensive list of debugging and troubleshooting commands, refer to the options listed in Appendix A.

What is the list of supported configurations, and what happens if I am not using one of them?

A list of validated and popular configurations is provided in the System Requirements section of this document. Other configurations may not be supported by the BIOS and may not boot.

Managing Namespaces

For troubleshooting namespace configuration issues with ndctl, refer to the NDCTL User Guide.

Summary

This quick start guide describes methods to provision Intel Optane DC persistent memory modules using ipmctl and vendor-neutral, operating system-specific utilities. ipmctl is necessary for discovering Intel Optane DC memory module resources, creating goals and regions, updating the firmware, and debugging issues with these persistent memory modules. Learn more about each of these tools and get started with persistent memory programming at Intel® Developer Zone (Intel® DZ) – Persistent Memory.

Resources

Persistent Memory Resources at Intel Developer Zone

GitHub repository for ipmctl

GitHub repository for ndctl

ipmctl manpages

What is Non-volatile Device Control (ndctl)

Appendix A: ipmctl Commands

Memory Provisioning

  • create -goal
  • delete -goal
  • show -goal
  • dump -goal
  • load -goal
  • show -region

Health and Performance

  • show -performance
  • show -sensor
  • change -sensor

Security

  • change -device-passphrase
  • change -device-security
  • enable -device-security
  • erase -device-data

Device Discovery

  • show -device
  • show -dimm
  • show -memoryresources
  • show -socket
  • show -system -capabilities
  • show -topology

Support and Maintenance

  • dump -support
  • show -firmware
  • show -host
  • acknowledge -event
  • update -firmware
  • change -preferences
  • show -preferences
  • help

Debugging

  • dump -debug-log
  • inject -error
  • run -diagnostic
  • show -acpi
  • show -pcd
  • show -nfit
  • show -error-log
  • show -event