How NUMA Affects Your Workloads: Intel® VTune™ Amplifier

Overview

Many modern multi-socket systems are based on non-uniform memory access (NUMA), where access latency and bandwidth depend on where physical memory sits relative to the cores that use it. Effective memory object placement on a NUMA system comes down to finding access patterns that can drive placement heuristics. New memory hierarchy elements, such as the high-speed, on-package MCDRAM in Knights Landing, add another NUMA factor that a performance-minded engineer needs to account for. Understanding how memory object placement affects the memory subsystem is key to extracting the best performance from your platform. We will demonstrate how to use Intel® VTune™ Amplifier to analyze memory objects (dynamic, global, and stack), understand the effects of your data placement choices on a per-object basis, and extract the best possible performance from your system.

Download Slides [PDF 1.47MB]
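
To make the per-object placement question concrete, here is a minimal sketch (not taken from the slides) that uses libnuma to allocate one buffer on the local NUMA node and one on a remote node. Profiling a program like this with VTune Amplifier's memory-access collection, with memory object analysis enabled, is one way to see each object's latency and bandwidth cost. The command line, buffer sizes, and node numbers are illustrative assumptions, not part of the presentation.

/* Sketch: place two buffers on different NUMA nodes with libnuma and
 * stream through both. Profiling this with a memory-access collection,
 * e.g. (command line is an assumption, check your VTune version):
 *   amplxe-cl -collect memory-access -knob analyze-mem-objects=true -- ./a.out
 * should attribute the higher access latency to the remote-node object.
 * Build with: gcc numa_demo.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define BUF_SIZE (256UL * 1024 * 1024)   /* 256 MiB per buffer */

static long sum(const char *buf, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i += 64)   /* touch one byte per cache line */
        s += buf[i];
    return s;
}

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int last_node = numa_max_node();

    /* One object bound to node 0, one to the highest-numbered node; on a
     * two-socket system these are typically local vs. remote for a thread
     * running on socket 0. */
    char *local  = numa_alloc_onnode(BUF_SIZE, 0);
    char *remote = numa_alloc_onnode(BUF_SIZE, last_node);
    if (!local || !remote) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    /* First touch actually backs the pages on their bound nodes. */
    memset(local, 1, BUF_SIZE);
    memset(remote, 1, BUF_SIZE);

    long s = sum(local, BUF_SIZE) + sum(remote, BUF_SIZE);
    printf("checksum: %ld\n", s);

    numa_free(local, BUF_SIZE);
    numa_free(remote, BUF_SIZE);
    return 0;
}

On a two-socket machine with the process pinned to socket 0 (for example via numactl --cpunodebind=0), the remote buffer should show noticeably higher average load latency than the local one in the per-object view, which is the kind of difference the analysis in the slides is meant to expose.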

