For decades, developers had to balance data in memory for performance with data in storage for persistence. Memory is fast, but it is also expensive and limited in capacity. Storage tends to be slower, but compared to memory, storage is cheaper, with virtually unlimited capacity.
Today’s enterprise organizations need to support highly data-centric solutions such as in-memory databases, multimedia delivery, real-time data analytics, and more. In addition, cloud service providers need to deliver growing amounts of infrastructure, platforms, and software through massive compute capacity and virtualization, while emerging 5G communications and real-time AI solutions will require high performance, high availability, and data persistence.
A Major Advance in Memory & Storage Architecture
To meet these challenges, Intel has invested in direct load/store access to a byte-addressable persistent memory space based on 3D XPoint memory technology. Intel® Optane™ DC Persistent Memory will introduce a new, flexible tier within the memory/storage hierarchy, applicable to workloads across cloud, in-memory computing, and storage.
This disruptive technology will deliver persistence at memory bus speeds, reducing the need for persistence in storage. The new memory modules can be configured at up to 3TB per CPU socket (in addition to the DRAM in the system). That means fewer I/O trips, and lower latency, for accelerated performance. In addition, the new media will offer a lower-cost alternative compared to DRAM.
Three Groups of Use Cases
Address space on Intel® Optane™ DC memory modules can be partitioned as volatile main memory, as persistent memory, or as a combination of both. Further, the persistent memory address space can be accessed by applications using direct load/store accesses, or using standard storage APIs like file open/close/read/write.
In Memory Mode, applications get a high-capacity main memory solution at substantially lower cost and power, with performance that can be close to DRAM performance, depending on the workload. No modifications to the application are required; the operating system sees the Intel Optane DC memory module capacity as the system main memory. For example, on a common two-socket system, Memory Mode can provide 6TB of main memory, something very difficult and expensive to do with DRAM, if it is possible at all. In Memory Mode, the DRAM installed in the system acts as a cache to deliver DRAM-like performance for this high-capacity main memory.
Although the 3D XPoint media is persistent, Memory Mode makes the capacity appear volatile to application software. Data stored on the memory modules is protected with 256-bit AES-XTS encryption. When Memory Mode is selected, the controller on the module cryptographically erases the data between power cycles, mimicking the volatile nature of DRAM.
A compelling use case for Memory Mode is running more virtual machines (VMs) than a traditional server system can: you no longer have to starve one process of memory to spin up a new VM. Cloud service providers and enterprise IT shops will benefit by meeting Service Level Agreements (SLAs) at lower cost. In traditional virtualized systems, memory is oversubscribed, which forces repeated reads and writes to storage to meet SLAs. The added memory capacity eliminates that need, improving performance and lowering the system cost to support more or bigger VMs.
Just as some or all of the capacity of the Intel Optane DC memory modules can be provisioned as Memory Mode, as described above, some or all of the capacity can be provisioned as persistent memory. This is known as App Direct Mode, which gives software a byte-addressable way to access the persistent memory capacity. There are many ways for applications to use App Direct Mode without any modifications. For example, the operating system may use App Direct while providing standard storage interfaces to the application. Similarly, some middleware libraries may use App Direct while continuing to present existing interfaces, so applications do not need to change. ISVs may, however, choose to modify their applications to use App Direct Mode directly and get the best value from the Intel Optane DC memory modules. OS vendors such as Microsoft*, Red Hat*, Canonical*, SUSE*, and VMware* have done the enabling work required to give software direct access to persistent memory.
With App Direct Mode, in-memory database (IMDB) restart time can be significantly reduced because applications no longer have to reload data from storage into main memory. In one of our lab tests, we measured the reboot time of a particular large IMDB at 35 minutes. Assuming a typical system is rebooted every couple of weeks to install security patches and updates, that translates to an expected availability of 99.8%. By adding persistent memory, data structures such as indexes were made persistent even though they live in memory, eliminating the time to rebuild them; the reboot time for the same large database dropped to 17 seconds, for an expected service availability of 99.999%. In addition, our testing shows that using large-capacity persistent memory with an in-memory database enables multi-TB capacity for large data sets without having to move from a two-socket server to a more expensive four-socket server.
For Automated Trading Systems (ATSs), the database can put its transaction logs in persistent memory, so in the event of an outage, the database can be rebuilt based on the log. In addition, a transaction is considered “complete” when it is written to a persistent medium. With persistent memory, your transaction is done as soon as it is written—even though it later gets transferred to a “warm” or “cold” storage device.
Storage over App Direct Mode
As described above, applications can access the persistent memory address space using direct load/store accesses in App Direct Mode. The same persistent memory address space can also be accessed using standard file APIs in Storage over App Direct Mode.
This allows existing storage-based applications to access the App Direct region of Intel Optane DC memory modules without any modifications to the applications or to the file systems that expect block storage devices. Storage over App Direct Mode provides high-performance block storage without the latency of moving data to and from the I/O bus. This mode does require NVDIMM drivers, which are already included in the Linux* kernel (since version 4.2) and in Windows* Server 2016.
Ecosystem Partnerships Drive Persistent Memory Solutions
Intel is partnering with multiple industry groups and industry leaders to provide an ecosystem and updated specifications for using persistent memory:
- Standards bodies such as the Storage Networking Industry Association (SNIA), ACPI, UEFI, and DMTF
- Operating system vendors such as Microsoft*, Red Hat*, SUSE*, and Canonical*
- Virtualization providers such as VMware*, KVM, and Xen*
- Java* vendors such as Oracle*
- Database and enterprise application vendors such as SAP*, Oracle*, Microsoft* SQL Server, RocksDB*, Redis*, Apache* Cassandra, and more
- Data analytics vendors such as Cloudera*
Persistent Memory Programming Model
The software interface for using Intel Optane DC Persistent Memory was designed in collaboration with dozens of companies to create a unified programming model for persistent memory. The Storage Networking Industry Association (SNIA) formed a technical workgroup that has published a specification of the model. This software interface is independent of any specific persistent memory technology and can be used with Intel Optane DC Persistent Memory or any other persistent memory technology.
The model exposes three main capabilities:
- The management path allows system administrators to configure persistent memory products and check their health
- The storage path supports the traditional storage APIs where existing applications and file systems need no change; they simply see the persistent memory as very fast storage
- The memory-mapped path exposes persistent memory through a persistent memory-aware file system so that applications have direct load and store access to the persistent memory. This direct access bypasses the page cache used by traditional file systems and has been named DAX (direct access) by the operating system vendors.
When an independent software vendor (ISV) decides to fully leverage what persistent memory can do, converting the application to memory-map persistent memory and place data structures in it can be a significant change. Keeping track of persistent memory allocations, and changing data structures as transactions so they remain consistent in the face of power failure, is complex programming that has not been required for volatile memory and is done differently for block-based storage.
The Persistent Memory Development Kit (PMDK) provides libraries meant to make persistent memory programming easier. Software developers only pull in the features they need, keeping their programs lean and fast on persistent memory.
These libraries are fully validated and performance-tuned by Intel. They are open source and product neutral, working well on a variety of persistent memory products. The PMDK contains a collection of open source libraries which build on the SNIA programming model. The PMDK is fully documented and includes code samples, tutorials and blogs. Language support for the libraries exists in C and C++, with support for Java*, Python*, and other languages in progress.
Opportunity for Software Developers
Discover memory that's big, affordable, and persistent. Visit the Intel® Developer Zone (Intel® DZ) to learn how Intel® persistent memory can transform your applications and your data center. You will find the following resources to help you begin adapting your code:
- Video training on persistent memory technology and programming techniques
- Code samples, tutorials, and links to persistent memory programming information
- Information on enabling for persistent memory done by OS vendors
- Simulators and other tools to help you test and benchmark your code
So get started. Set up your development environment, and learn how to use the PMDK to create a new application or update existing programs to use persistent memory.
Andy Rudoff is a Senior Principal Engineer at Intel, focusing on non-volatile memory programming. He is a contributor to the SNIA NVM Programming Technical Work Group and author of the Persistent Memory Development Kit hosted at pmem.io. His more than 30 years of industry experience includes design and development work in operating systems, file systems, networking, and fault management at companies large and small, including Sun Microsystems and VMware. Andy has taught various operating systems classes over the years and is a co-author of the popular UNIX Network Programming textbook.