Enhance Your VMware* VMs with Intel® Optane™ DC Persistent Memory

Overview

Eduardo Berrocal shares a step-by-step process for accessing Intel® Optane™ DC persistent memory directly from your virtual machines running on a VMware* ESXi hypervisor.

Transcript

Hi. I'm Eduardo Berrocal. In this video, I will show step by step how you can access Intel® Optane™ DC persistent memory directly from your virtual machines running on a VMware* ESXi Hypervisor. Ready? Let's do it.

The first thing you need to do to make sure your virtual machines running on an ESXi server can access Intel Optane DC persistent memory is to configure that memory in app-direct mode. There are three ways this can be accomplished. First, through the ipmctl management utility at the OS level. Second, through the same ipmctl utility, but at the EFI level. Or third, through the system's BIOS, if the BIOS provides the capability to do so.

In this video, I will leave out the BIOS option, given that BIOSs differ by platform. If you want to use this option, please consult the documentation provided by your platform's manufacturer. I will also leave out the configuration at the OS level and assume that no OS is installed on the system.

Boot into your EFI shell and run drivers -b, pressing Enter to page through the output until you can see the Intel® Optane™ DC persistent memory drivers. Check the version of the drivers; it should exactly match the version of your ipmctl tool. If it does not, you need to unload the existing drivers first and then load the ones whose version matches the ipmctl tool. To unload a driver, you will need the hexadecimal value in the first column, the one called DRV.

In our case, for example, those values are 1F9 and 1FA. The command to unload a driver is unload. Next, load the drivers shipped with your ipmctl tool. For that, you can use the command load. Once the drivers are loaded, run ipmctl to show the available regions in the system.
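The EFI shell sequence above can be sketched as follows. The driver handles (1F9, 1FA) and the driver file name are illustrative assumptions; they will differ on your system, so use the values and file names your own drivers -b listing and ipmctl package show.

```shell
# Run from the EFI shell, not a POSIX shell.
drivers -b                      # page through and locate the Intel PMem drivers
unload 1F9                      # unload by the hex value in the DRV column
unload 1FA                      # repeat for each mismatched driver
load fs0:\NvmDimmDriver.efi     # hypothetical file name: load the driver
                                # shipped with your ipmctl release
ipmctl show -region             # list the regions currently defined
```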

If your Intel Optane DC persistent memory modules are configured in memory mode, no regions will be defined. Configure your whole persistent memory capacity in app-direct interleaved mode by running ipmctl create -f -goal. Once the goal is defined, reboot the system.
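A minimal sketch of the goal-creation step, assuming you are still in the EFI shell (app-direct interleaved is ipmctl's default goal type):

```shell
ipmctl create -f -goal          # -f skips the confirmation prompt
ipmctl show -goal               # review the pending goal before rebooting
reset                           # reboot from the EFI shell so the goal is applied
```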

After rebooting the system, run ipmctl show -region again to check your newly created app-direct regions. You should see one region per CPU socket. At this point, you are ready to install VMware ESXi in the system. The minimum version required is 6.7.0.
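After the reboot, the verification looks like this; the exact column layout depends on your ipmctl version:

```shell
ipmctl show -region
# Expect one AppDirect region per CPU socket, e.g. two regions
# on a two-socket server, with free capacity equal to their size.
```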

Install VMware ESXi as you would for any other server. No special options are required during the installation phase. After the installation is complete, boot the server and access the web console. You can check that ESXi is detecting your persistent memory capacity by looking at the persistent memory value on the Hardware tab.

A special datastore should also have been created, which you can check by going to Storage in the left-hand side menu. Awesome. Now that you have persistent memory properly configured in ESXi, you can add it to your virtual machines through vCenter. As with ESXi, you should make sure that you are running version 6.7.0 or higher of vCenter.

Persistent memory is added to VMs in the form of virtual NVDIMMs. To add a virtual NVDIMM, go to the VM Settings. Once there, click on Add New Device, and then on NVDIMM. If you expand the NVDIMM controller, you should see all the available persistent memory capacity.

Choose a value for the size of the NVDIMM, in our case 192 GB, and then click OK. At this point, persistent memory is attached to the VM as a special block device. Boot into your VM and check that /dev/pmem0 exists. If you attach multiple virtual NVDIMMs, you should see multiple pmem devices under /dev.
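Inside the guest, a quick check might look like this; pmem0 is the typical name the kernel assigns to the first device, but the numbering on your VM may differ:

```shell
ls -l /dev/pmem*                # one entry per virtual NVDIMM attached
lsblk | grep pmem               # size should match the NVDIMM you configured
```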

We are not done yet, though. To accomplish the final step, you need to have the ndctl tool installed in the VM. You can install it using the package manager of your favorite Linux* distribution. Run ndctl list -RuN, that is, uppercase R, lowercase u, and uppercase N, to list the current regions and namespaces configured in the system. You should see at least one namespace within one region.
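The listing step, sketched with the flags spelled out:

```shell
# -R lists regions, -N lists namespaces, -u prints human-readable sizes.
ndctl list -RuN
# Expect at least one namespace inside one region; a freshly attached
# virtual NVDIMM typically comes up with its namespace in "raw" mode.
```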

As currently configured, the pmem devices can only be used for regular I/O. This is because their namespace is configured in raw mode, which does not support the dax option. The dax option is what allows us to memory-map persistent memory and access it directly through loads and stores from user space. Without it, there is really no persistent memory programming.

If you have only one namespace, you can check its mode by running ndctl list -RuN and grepping for mode. As you can see, the namespace is in raw mode. To change the mode, run ndctl create-namespace -f -e with the namespace name, specifying the mode fsdax. When the command completes, you can check whether your namespace is in fsdax mode by running ndctl list -RuN and grepping for mode again.
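The reconfiguration step can be sketched as follows. "namespace0.0" is an assumption, the typical name of the first namespace; substitute the name that ndctl list reported on your VM:

```shell
ndctl list -RuN | grep mode                         # shows "raw" before the change
# -e reconfigures the existing namespace in place; -f forces it
# even if the namespace is currently active.
ndctl create-namespace -f -e namespace0.0 --mode=fsdax
ndctl list -RuN | grep mode                         # should now show "fsdax"
```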

If your namespace is in fsdax mode, you are done. The only thing left is creating the file system and mounting it. To do that, first create a mount point for your persistent memory. Next, create a file system on the device using either ext4 or XFS. And finally, mount the device at that mount point with the dax option.
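Those three steps can be sketched as shown below, using ext4; /dev/pmem0 and /mnt/pmem are illustrative names, so substitute your own device and mount point:

```shell
mkdir -p /mnt/pmem                  # create the mount point
mkfs.ext4 /dev/pmem0                # or mkfs.xfs for an XFS file system
mount -o dax /dev/pmem0 /mnt/pmem   # dax enables direct load/store access
mount | grep dax                    # confirm the dax option is active
```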

And that's it. Your virtual machine is ready, and your persistent memory applications can access Intel Optane DC persistent memory modules directly and without any modifications, exactly the same as if they were running on a bare-metal server. If you would like to learn more or expand on the topics covered in this video, follow the links. Thanks for watching.

Product and Performance Information

1

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804