Intel® MPI Library Release Notes for Linux* OS

By Dmitry Sivkov

Published: 09/11/2018   Last Updated: 07/21/2020

Overview

Intel® MPI Library for Linux* OS is a high-performance, interconnect-independent, multi-fabric library implementation of the industry-standard Message Passing Interface, v3.1 (MPI-3.1).

To receive technical support and updates, you need to register your product copy. See Technical Support below.

Key Features

This release of the Intel® MPI Library supports the following major features:

  • MPI-1, MPI-2.2 and MPI-3.1 specification conformance
  • Interconnect independence
  • C, C++, Fortran 77, Fortran 90, and Fortran 2008 language bindings

Product Contents

  • The Intel® MPI Library Runtime Environment (RTO) contains the tools you need to run programs, including the scalable process management system (Hydra), supporting utilities, and shared (.so) libraries.
  • The Intel® MPI Library Development Kit (SDK) includes all of the Runtime Environment components and compilation tools: compiler wrapper scripts (mpicc, mpiicc, etc.), include files and modules, static (.a) libraries, debug libraries, and test codes. A minimal build-and-run example follows below.

You can redistribute the library under conditions specified in the License.
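
For illustration, a minimal C program built with the SDK compiler wrappers and launched with Hydra might look like the following sketch; the file name, rank count, and the choice of mpiicc versus mpicc are examples only, not requirements.

    /* hello_mpi.c - minimal MPI example (illustrative).
     * Possible build (SDK): mpiicc hello_mpi.c -o hello_mpi   (or mpicc with GNU compilers)
     * Possible run (Hydra): mpiexec -n 4 ./hello_mpi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }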

What's New

Intel® MPI Library 2019 Update 9

  • MPI_Comm_accept/connect/join support for the Mellanox* provider (the connect/accept model is sketched after this list)
  • mpitune_fast functionality improvements
  • Intel® Ethernet 800 Series support
  • Intel GPU buffers support enhancements (I_MPI_OFFLOAD) (technical preview)
  • I_MPI_ADJUST_SENDRECV_REPLACE optimization
  • oneAPI compiler support in mpicc/mpif90/mpif77 wrappers
  • Fixed MPI-IO operations on the Lustre* filesystem for files larger than 2 GB
  • Bug fixes
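
The connect/accept support mentioned above follows the standard MPI dynamic-connection pattern. The sketch below is a generic illustration of that pattern, not Intel-specific code; the executable name, argument handling, and the way the port name reaches the client are assumptions for illustration.

    /* connect_accept.c - sketch of the MPI_Comm_accept/MPI_Comm_connect model.
     * Possible usage (illustrative):
     *   server: mpiexec -n 1 ./connect_accept server
     *   client: mpiexec -n 1 ./connect_accept client "<port name printed by the server>"
     */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        MPI_Comm intercomm = MPI_COMM_NULL;
        char port[MPI_MAX_PORT_NAME];

        MPI_Init(&argc, &argv);

        if (argc > 1 && strcmp(argv[1], "server") == 0) {
            MPI_Open_port(MPI_INFO_NULL, port);        /* obtain a port name */
            printf("port name: %s\n", port);           /* hand it to the client out of band */
            MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);
            MPI_Close_port(port);
        } else if (argc > 2 && strcmp(argv[1], "client") == 0) {
            strncpy(port, argv[2], MPI_MAX_PORT_NAME - 1);
            port[MPI_MAX_PORT_NAME - 1] = '\0';
            MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);
        }

        if (intercomm != MPI_COMM_NULL) {
            /* ... communicate over the intercommunicator, then disconnect ... */
            MPI_Comm_disconnect(&intercomm);
        }
        MPI_Finalize();
        return 0;
    }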

Intel® MPI Library 2019 Update 8

  • InfiniBand* support enhancements for all supported platforms
  • Amazon* AWS/EFA, Google* GCP support enhancements
  • Intel GPU pinning support (I_MPI_OFFLOAD_TOPOLIB, I_MPI_OFFLOAD_DOMAIN_SIZE, I_MPI_OFFLOAD_CELL, I_MPI_OFFLOAD_DEVICES, I_MPI_OFFLOAD_DEVICE_LIST, I_MPI_OFFLOAD_DOMAIN) (technical preview)
  • Distributed Asynchronous Object Storage (DAOS) file system support
  • Intel® Xeon® Platinum 9282/9242/9222/9221 family optimizations and platform recognition
  • ILP64 support improvements
  • PMI2 spawn support
  • impi_info tool extensions (-e|-expert option)
  • Bug fixes

Intel® MPI Library 2019 Update 7

  • Performance optimizations for Intel® Xeon® Platinum 9200 (formerly Cascade Lake-AP)
  • Implemented dynamic process support in the OFI/mlx provider
  • Added integrity checks for the parameters of the Fortran ILP64 interface in the debug library
  • Added PMI2 support
  • Fixed issue with MPI_Allreduce at large scale
  • Fixed issue with MPI-IO operations on GPFS
  • Fixed issue with MPI-IO with 2+ GiB files on NFS
  • Bug fixes

Intel® MPI Library 2019 Update 6

  • Improved Mellanox* InfiniBand* EDR/HDR interconnect support.
  • Improved Amazon* Elastic Fabric Adapter (EFA) support.
  • Added performance optimizations for Intel® Xeon® Platinum 9200 (formerly Cascade Lake-AP).
  • Added non-blocking collective operations support for Autotuner.
  • Bug fixes.

Intel® MPI Library 2019 Update 5

  • Added autotuner functionality (I_MPI_TUNING_MODE, I_MPI_ADJUST_<opname>_LIST).
  • Added basic “Wait Mode” support (I_MPI_WAIT_MODE).
  • Added AWS EFA (Elastic Fabric Adapter) support.
  • Added OFI/mlx provider as a technical preview for Mellanox EDR/HDR (FI_PROVIDER=mlx).
  • Added Mellanox HCOLL support (I_MPI_COLL_EXTERNAL).
  • Added shared memory allocator (I_MPI_SHM_HEAP, I_MPI_SHM_HEAP_VSIZE, I_MPI_SHM_HEAP_CSIZE, I_MPI_SHM_HEAP_OPT).
  • Added transparent Singularity (3.0+) containers support.
  • Added dynamic I_MPI_ROOT path for bash shell.
  • Improved memory consumption of OFI/verbs path (FI_PROVIDER=verbs).
  • Improved single node startup time (I_MPI_FABRICS=shm).
  • Disabled environment variables spellchecker by default (I_MPI_VAR_CHECK_SPELLING, I_MPI_REMOVED_VAR_WARNING).
  • Bug fixes.

Intel® MPI Library 2019 Update 4

  • Multiple Endpoints (Multi-EP) support for InfiniBand* and Ethernet (a thread-multiple usage sketch follows this list).
  • Implemented the NUMA-aware SHM-based Bcast algorithm (I_MPI_ADJUST_BCAST).
  • Added the application runtime autotuning (I_MPI_TUNING_AUTO).
  • Added the -hosts-group option to set node ranges using square brackets, commas, and dashes (for example, nodeA[01-05],nodeB).
  • Added the ability to terminate a job if it has not been started successfully during a specified time period in seconds (I_MPI_JOB_STARTUP_TIMEOUT).
  • Added support for trusting the IBM POE* process placement.
  • Bug fixes.
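
Multi-EP targets multithreaded ranks in which several threads communicate concurrently. The sketch below shows only the generic MPI_THREAD_MULTIPLE pattern such applications use; the OpenMP thread count and the self-directed exchange are illustrative assumptions, and any Intel-specific run-time controls are outside this sketch.

    /* multiep_pattern.c - generic thread-multiple communication pattern (illustrative).
     * Compile with an OpenMP flag (for example, mpiicc -qopenmp multiep_pattern.c).
     */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "MPI_THREAD_MULTIPLE is not available\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel num_threads(4)
        {
            int tid = omp_get_thread_num();
            int sendbuf = rank * 100 + tid, recvbuf = -1;

            /* each thread issues its own communication, distinguished by tag */
            MPI_Sendrecv(&sendbuf, 1, MPI_INT, rank, tid,
                         &recvbuf, 1, MPI_INT, rank, tid,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }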

Intel® MPI Library 2019 Update 3

  • Performance improvements.
  • A custom memory allocator is added and enabled by default in the release and debug configurations (I_MPI_MALLOC).
  • MPI-IO enhancements (I_MPI_EXTRA_FILESYSTEM).
  • Bug fixes.

Intel® MPI Library 2019 Update 2

  • Intel® MPI Library 2019 Update 2 includes functional and security updates. Users should update to the latest version.

Intel® MPI Library 2019 Update 1

  • Performance improvements.
  • Conditional Numerical Reproducibility feature is added (I_MPI_CBWR variable); the motivation is sketched after this list.
  • Customized Libfabric 1.7.0 alpha sources and binaries are updated.
  • Internal OFI distribution is now used by default (I_MPI_OFI_LIBRARY_INTERNAL=1).
  • OFI*-capable Network Fabrics Control is partially restored (I_MPI_OFI_MAX_MSG_SIZE, I_MPI_OFI_LIBRARY).
  • OFI/tcp provider is added as a technical preview feature.
  • Platform recognition is restored (I_MPI_PLATFORM* variables).
  • Spellchecker is added for I_MPI_* variables (I_MPI_VAR_CHECK_SPELLING variable).
  • Multiple bug fixes.
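
As background on why the I_MPI_CBWR control matters: floating-point reductions are not associative, so the bit-for-bit result of a collective can depend on the algorithm and process topology chosen at run time. The sketch below is a generic illustration of such a reduction, not Intel-specific code.

    /* reduce_repro.c - floating-point reduction whose low-order bits can vary
     * with the reduction order chosen by the collective algorithm (illustrative).
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        double local, global;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        local = 1.0 / (rank + 1);   /* contributions of very different magnitude */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %.17g\n", size, global);

        MPI_Finalize();
        return 0;
    }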

Intel® MPI Library 2019

  • Customized Libfabric 1.6.1 sources are included.
  • Customized Libfabric 1.6.1 binaries with the sockets, psm2, and verbs providers are included.
  • PSM2 Multiple Endpoints (Multi-EP) support. 
  • Asynchronous progress is added as a technical preview feature.
  • Multiple bug fixes.

Intel® MPI Library 2018 Update 5

  • Bug fixes.

Intel® MPI Library 2018 Update 4

  • Bug fixes.

Intel® MPI Library 2018 Update 3

  • Performance improvements

Intel® MPI Library 2018 Update 2

  • Improved shm performance with collective operations (I_MPI_SCHED_YIELD, I_MPI_SCHED_YIELD_MT_OPTIMIZATION).
  • Intel® MPI Library is now available for installation from YUM and APT repositories.

Intel® MPI Library 2018 Update 1

  • Improved startup performance on multi-core and many-core systems (I_MPI_STARTUP_MODE).
  • Bug fixes.

Intel® MPI Library 2018

  • Improved startup times for Hydra when using shm:ofi or shm:tmi.
  • Hard finalization is now the default.
  • The default fabric list is changed when Intel® Omni-Path Architecture is detected.
  • Added environment variables: I_MPI_OFI_ENABLE_LMT, I_MPI_OFI_MAX_MSG_SIZE, I_MPI_{C,CXX,FC,F}FLAGS, I_MPI_LDFLAGS, I_MPI_FORT_BIND.
  • Removed support for the Intel® Xeon Phi™ coprocessor (code named Knights Corner).
  • I_MPI_DAPL_TRANSLATION_CACHE, I_MPI_DAPL_UD_TRANSLATION_CACHE and I_MPI_OFA_TRANSLATION_CACHE are now disabled by default.
  • Deprecated support for the IPM statistics format.
  • Documentation is now online.

Intel® MPI Library 2017 Update 4

  • Performance tuning for processors based on Intel® microarchitecture codenamed Skylake and for Intel® Omni-Path Architecture.

Intel® MPI Library 2017 Update 3

  • Hydra startup improvements (I_MPI_JOB_FAST_STARTUP).
  • Default value change for I_MPI_FABRICS_LIST.

Intel® MPI Library 2017 Update 2

  • Added environment variables I_MPI_HARD_FINALIZE and I_MPI_MEMORY_SWAP_LOCK.

Intel® MPI Library 2017 Update 1

  • PMI-2 support for SLURM*, improved SLURM support by default.
  • Improved mini help and diagnostic messages, man1 pages for mpiexec.hydra, hydra_persist, and hydra_nameserver.
  • Deprecations:
    • Intel® Xeon Phi™ coprocessor (code named Knights Corner) support.
    • Cross-OS launches support.
    • DAPL, TMI, and OFA fabrics support.

Intel® MPI Library 2017

  • Support for the MPI-3.1 standard.
  • New topology-aware collective communication algorithms (I_MPI_ADJUST family).
  • Effective MCDRAM (NUMA memory) support. See the Developer Reference, section Tuning Reference > Memory Placement Policy Control for more information.
  • Controls for asynchronous progress thread pinning (I_MPI_ASYNC_PROGRESS).
  • Direct receive functionality for the OFI* fabric (I_MPI_OFI_DRECV).
  • PMI2 protocol support (I_MPI_PMI2).
  • New process startup method (I_MPI_HYDRA_PREFORK).
  • Startup improvements for the SLURM* job manager (I_MPI_SLURM_EXT).
  • New algorithm for MPI-IO collective read operation on the Lustre* file system (I_MPI_LUSTRE_STRIPE_AWARE).
  • Debian Almquist shell (dash) support in compiler wrapper scripts and mpitune.
  • Performance tuning for processors based on Intel® microarchitecture codenamed Broadwell and for Intel® Omni-Path Architecture (Intel® OPA).
  • Performance tuning for Intel® Xeon Phi™ Processor and Coprocessor (code named Knights Landing) and Intel® OPA.
  • OFI latency and message rate improvements.
  • OFI is now the default fabric for Intel® OPA and Intel® True Scale Fabric.
  • MPD process manager is removed.
  • Dedicated pvfs2 ADIO driver is disabled.
  • SSHM support is removed.
  • Support for Intel® microarchitectures older than the generation codenamed Sandy Bridge is deprecated.
  • Documentation improvements.

Known Issues and Limitations

Intel® MPI Library 2019 Update 9

  • To use shared memory only and avoid network initialization for single-node runs, explicitly set I_MPI_FABRICS=shm.

Intel® MPI Library 2019 Update 8

  • ILP64: not supported by the MPI modules for Fortran 2008
  • ILP64: MPI_SIZEOF and the MPI_XXX_c2f/MPI_XXX_f2c conversions are not supported
  • Intel GPU pinning: MPI + direct L0 interoperability only
  • csh scripts are not supported as an application for mpiexec
  • OFI/mlx: in some cases, explicitly setting FI_MLX_TLS=auto may produce better results
  • OFI/mlx: dynamic processes do not support the connect/accept connection management model
  • OFI/psm2: there may be a negative performance impact on MPI_Gather/MPI_Allgather-like communication patterns. To work around the potential issue, set the following environment variable: MPIR_CVAR_CH4_OFI_ENABLE_DATA=0
  • An application may hang during finalization with the LSF job manager if the number of nodes is more than 16. As a workaround, set -bootstrap=ssh or -branch-count=-1.
  • Incorrect process pinning with I_MPI_PIN_ORDER=spread: some of the domains may share common sockets.
  • MPI-IO operations on the Lustre* filesystem may lead to a crash for files larger than 2 GB.
  • Nonblocking MPI-IO operations on the NFS filesystem may work incorrectly for files larger than 2 GB (a sketch of this large-offset access pattern follows this list).
  • Some MPI-IO features may not work on NFS v3 mounted without the "lock" option.
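
For reference, the large-file issues above concern accesses at offsets beyond 2 GB. A minimal sketch of such an access pattern is shown below; the file name and offset are arbitrary examples, and MPI_Offset is used because it is a 64-bit type.

    /* bigfile_io.c - MPI-IO write at an offset beyond 2 GiB (illustrative). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value;
        MPI_File fh;
        MPI_Offset offset;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_File_open(MPI_COMM_WORLD, "big_test_file",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* a 64-bit offset past the 2 GiB boundary, shifted per rank */
        offset = (MPI_Offset)3 * 1024 * 1024 * 1024 + rank * (MPI_Offset)sizeof(int);
        value  = rank;
        MPI_File_write_at(fh, offset, &value, 1, MPI_INT, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }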

Intel® MPI Library 2019 Update 7

  • There may be a negative performance impact on MPI_Gather/MPI_Allgather-like communication patterns with Intel® OPA. To work around the potential issue, set the following environment variable: MPIR_CVAR_CH4_OFI_ENABLE_DATA=1
  • OFI/mlx: dynamic processes do not support the connect/accept connection management model.
  • PMI2 does not support dynamic processes.
  • An application may hang during finalization with the LSF job manager if the number of nodes is more than 16. As a workaround, set -bootstrap=ssh or -branch-count=-1.
  • Incorrect process pinning with I_MPI_PIN_ORDER=spread: some of the domains may share common sockets.
  • MPI-IO operations on the Lustre* filesystem may lead to a crash for files larger than 2 GB.
  • Nonblocking MPI-IO operations on the NFS filesystem may work incorrectly for files larger than 2 GB.
  • Some MPI-IO features may not work on NFS v3 mounted without the "lock" option.
  • The following features have not yet been implemented:
    • Unified Memory Management support
    • Timer control (I_MPI_TIMER_KIND variable)
    • Fault Tolerance Support (I_MPI_FAULT_CONTINUE variable)
    • Debug information (I_MPI_PRINT_VERSION, I_MPI_OUTPUT_CHUNK_SIZE variables)
    • Polling mode select (-demux and I_MPI_HYDRA_DEMUX)

Intel® MPI Library 2019 Update 6

  • Intel MPI Library may crash or behave unexpectedly during MPI-IO operations on the GPFS filesystem in certain cases (specific combinations of file size and number of ranks) when I_MPI_EXTRA_FILESYSTEM=1 is set.

Intel® MPI Library 2019 Update 5

  • The OFI/mlx provider does not support RMA and spawn operations. OFI/mlx is not selected by default (OFI/verbs is the default path). OFI/mlx requires UCX* 1.5 or newer.
  • The stability of IB/iWarp/RoCE interconnects may depend on the Libfabric version.
    • RMA window allocation with zero size may not work over IB/iWarp/RoCE interconnects.
  • If mpivars.sh is sourced from another script with no explicit parameters, it inherits the parent script's options and may process matching ones.
  • stdout and stderr redirection may cause problems with LSF's blaunch.
    • The -verbose option may cause a crash with LSF's blaunch. Do not use the -verbose option, or set -bootstrap=ssh.
  • The SLURM* option --cpus-per-task in combination with the Hydra option -bootstrap=slurm leads to incorrect pinning. Setting I_MPI_PIN_RESPECT_CPUSET=disable may fix this issue.
  • The environment variable spellchecker may cause crashes (it is disabled by default).
  • Intel MPI Library may crash or behave unexpectedly during MPI-IO operations on the GPFS filesystem in certain cases (specific combinations of file size and number of ranks) by default. As a workaround, disable filesystem recognition: I_MPI_EXTRA_FILESYSTEM=0.
  • The following features have not yet been implemented:
    • Unified Memory Management support
    • Timer control (I_MPI_TIMER_KIND variable)
    • Fault Tolerance Support (I_MPI_FAULT_CONTINUE variable)
    • OFI*-capable Network Fabrics Control (I_MPI_OFI_DSEND, I_MPI_OFI_DIRECT_RMA variables)
    • Debug information (I_MPI_PRINT_VERSION, I_MPI_OUTPUT_CHUNK_SIZE variables)
    • PMI-2 support (use PMI-1 until PMI-2 is implemented)
    • Polling mode select (-demux and I_MPI_HYDRA_DEMUX)

Intel® MPI Library 2019 Update 4

  • A buffered send may hang in some scenarios. As a workaround for a hang during an MPI_Bsend/MPI_Ibsend call, use the debug or release_mt version of the library (the affected call pattern is sketched after this list).
  • Topology detection works improperly on machines with SNC (sub-NUMA clustering) enabled. Set I_MPI_HYDRA_TOPOLIB=ipl to work around the problem.
  • To make the variables below work properly, set the I_MPI_HYDRA_TOPOLIB=ipl environment variable:
    • I_MPI_PIN_DOMAIN=node
    • I_MPI_PIN_DOMAIN=<domain_size>:scatter
    • I_MPI_PIN_DOMAIN=<domain_size>:platform
  • The -genvexcl option or a combination of the -genv and -genvall options may cause a crash. Use the -genv option without -genvall, or edit your run script to exclude the variables listed for -genvexcl.
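
For reference, the buffered-send pattern mentioned in the first item is sketched below; the buffer size and message are arbitrary examples, and the notes above suggest the debug or release_mt library if a hang is observed with this pattern.

    /* bsend_pattern.c - MPI_Bsend with an attached user buffer (illustrative).
     * Run with at least 2 ranks, for example: mpiexec -n 2 ./bsend_pattern
     */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, msg = 0;
        int bufsize = MPI_BSEND_OVERHEAD + sizeof(int);
        void *buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);      /* buffer used by MPI_Bsend */

        if (rank == 0) {
            msg = 42;
            MPI_Bsend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        MPI_Buffer_detach(&buf, &bufsize);    /* completes any pending buffered sends */
        free(buf);
        MPI_Finalize();
        return 0;
    }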

Intel® MPI Library 2019 Update 3

  • One of the Allreduce algorithms may produce incorrect results or cause an application hang. If the problem can be resolved with I_MPI_ADJUST_ALLREDUCE=0, use the updated tuning configuration file.

Intel® MPI Library 2019 

  • The 2019.0-045 APT package cannot be installed on Ubuntu* OS. Please use the Intel® MPI Library 2019 Update 1 (2019.1-053) package instead.
  • The stability of IB/iWarp/RoCE interconnects may depend on the Libfabric version.
    • RMA window allocation with zero size may not work over IB/iWarp/RoCE interconnects.
  • The following features have not yet been implemented:
    • Unified Memory Management support.
    • PMI-2 support. Use PMI-1 until PMI-2 is implemented.
    • Timer control (I_MPI_TIMER_KIND variable).
    • Fault Tolerance Support (I_MPI_FAULT_CONTINUE variable).
    • OFI*-capable Network Fabrics Control (I_MPI_OFI_DSEND, I_MPI_OFI_DIRECT_RMA variables).
    • Debug information (I_MPI_PRINT_VERSION, I_MPI_OUTPUT_CHUNK_SIZE variables).

Removals

Starting with Intel® MPI Library 2019, the deprecated symbolic links and legacy directory structure have been removed. If your application still depends on the old directory structure and file names, you can restore them using the provided script.

Intel® MPI Library 2019 Update 7

  • Intel® Xeon Phi™ 72xx processor (formerly Knights Landing or KNL) support (since Intel® MPI Library 2019 Update 6)

Intel® MPI Library 2019 Update 5

  • Intel® Xeon Phi™ 72xx processor (formerly Knights Landing or KNL) support (since Intel® MPI Library 2019 Update 6)

Intel® MPI Library 2019 Update 4

  • The -binding command-line option and the machine file parameter.
  • Red Hat* Enterprise Linux* 6 support.

Intel® MPI Library 2019 Update 1

  • SLURM startup improvement (I_MPI_SLURM_EXT variable).
  • I_MPI_OFI_ENABLE_LMT variable.

Intel® MPI Library 2019

  • Intel® True Scale Fabric Architecture support.
  • Removed the single-threaded library.
  • Parallel file systems (GPFS, Lustre, PanFS) are now supported natively; the bindings libraries were removed (removed the I_MPI_EXTRA_FILESYSTEM* and I_MPI_LUSTRE* variables).
  • Llama support (removed I_MPI_YARN variable).
  • Wait Mode, Mellanox Multirail*, and Checkpoint/Restart* features that depended on the replaced fabrics, along with the related variables (I_MPI_CKPOINT*, I_MPI_RESTART, I_MPI_WAIT_MODE).
  • Hetero-OS support.
  • Support of platforms older than Sandy Bridge.
  • Multi-threaded memcpy support (removed I_MPI_MT* variables).
  • Statistics (I_MPI_STATS* variables).
  • Switch pinning method (removed I_MPI_PIN_MODE variable).
  • Process Management Interface (PMI) extensions (I_MPI_PMI_EXTENSIONS variable).

System Requirements

Hardware Requirements

  • Systems based on the Intel® 64 architecture, in particular:
    • Intel® Core™ processor family or higher
    • Intel® Xeon® Scalable processor family is recommended
  • 1 GB of RAM per core (2 GB recommended)
  • 1 GB of free hard disk space
  • Intel® Xeon Phi™ Processor (formerly Knights Landing) based on Intel® Many Integrated Core Architecture

Software Requirements

(Installation issues may occur with operating systems released after the current Intel MPI Library release.)

  • Operating systems:
    • Amazon Linux 2
    • Clear Linux
    • Debian* 8 (deprecated), 9.x, 10.x
    • Fedora* 27 (deprecated), 28 (deprecated), 30.x
    • Red Hat Enterprise Linux* 6 (deprecated), 7.x, 8.x (equivalent CentOS versions supported, but not separately tested)
    • SUSE Linux Enterprise Server* 12.x, 15.x
    • Ubuntu* 16.04, 18.04, 19.04
  • Compilers:
    • GNU*: C, C++, Fortran 77 (3.3 or newer), Fortran 95 (4.4.0 or newer)
    • Intel® C++/Fortran Compiler 17.0 or newer
  • Debuggers:
    • Rogue Wave* Software TotalView* 6.8 or newer
    • Allinea* DDT* 1.9.2 or newer
    • GNU* Debuggers 7.4 or newer
  • Batch systems:
    • Platform* LSF* 6.1 or newer
    • Altair* PBS Pro* 7.1 or newer
    • Torque* 1.2.0 or newer
    • Parallelnavi* NQS* V2.0L10 or newer
    • NetBatch* v6.x or newer
    • SLURM* 1.2.21 or newer
    • Univa* Grid Engine* 6.1 or newer
    • IBM* LoadLeveler* 4.1.1.5 or newer
    • Platform* Lava* 1.0
  • Fabric software:
    • Intel® Omni-Path Software 10.5 or later: 
      https://downloadcenter.intel.com/product/92003/Intel-Omni-Path-Host-Fabric-Interface-Products
    • Open Fabric Interface (OFI): https://github.com/ofiwg/libfabric or ${I_MPI_ROOT}/libfabric/src.tgz
      • Minimum: OFI 1.5.0 (the OFI/verbs provider is unstable)
      • Recommended: the latest OFI "master" branch
      • Build and install instructions: https://software.intel.com/en-us/articles/a-bkm-for-working-with-libfabric-on-a-cluster-system-when-using-intel-mpi-library
  • Supported languages:
    • For GNU* compilers: C, C++, Fortran 77, Fortran 95
    • For Intel® compilers: C, C++, Fortran 77, Fortran 90, Fortran 95, Fortran 2008
  • Clustered File Systems:
    • IBM Spectrum Scale* (GPFS*)
    • LustreFS*
    • PanFS*
    • NFS* v3 or newer

Notes for Cluster Installation

When installing the Intel® MPI Library on all the nodes of your cluster without using a shared file system, you need to establish a passwordless SSH connection between the cluster nodes. This process is described in detail in the Intel® Parallel Studio XE Installation Guide (see section 2.1).

Legal Information

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.

Intel technologies features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at Intel.com, or from the OEM or retailer.

Intel, the Intel logo, VTune, Xeon, and Xeon Phi are trademarks of Intel Corporation in the U.S. and/or other countries.

Optimization Notice

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804

* Other names and brands may be claimed as the property of others.

Copyright 2003-2019 Intel Corporation.

This software and the related documents are Intel copyrighted materials, and your use of them is governed by the express license under which they were provided to you (License). Unless the License provides otherwise, you may not use, modify, copy, publish, distribute, disclose or transmit this software or the related documents without Intel's prior written permission.

This software and the related documents are provided as is, with no express or implied warranties, other than those that are expressly stated in the License.

Technical Support

Every purchase of an Intel® Software Development Product includes a year of support services, which provides Priority Support at our Online Service Center web site.

In order to get support you need to register your product in the Intel® Registration Center. If your product is not registered, you will not receive Priority Support.
