Deliver scalable parallel code that performs on existing and future hardware from Intel.

Intel® C++ Compiler

  • Speed up applications with industry-leading, standards-based C and C++ tools.
  • Experience seamless compatibility with popular compilers, development environments, and operating systems.
  • Get superior vectorization and parallelization capabilities, including Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instructions, using the latest OpenMP* 5.0 parallel programming model (see the sketch after this list).

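As a rough illustration of the programming model described above, the sketch below shows a loop that is both threaded and vectorized with a single OpenMP* directive. The function name, array names, and problem size are illustrative, not part of any Intel API; with the Intel® C++ Compiler, OpenMP and AVX-512 code generation are typically enabled with options such as -qopenmp and -xCORE-AVX512 (exact flags depend on the compiler version).

    // Minimal sketch: one OpenMP directive both threads and vectorizes the loop.
    // Names and sizes are illustrative only.
    #include <cstddef>
    #include <vector>

    void scale_add(std::vector<float>& y, const std::vector<float>& x, float a) {
        const std::size_t n = y.size();
        // Iterations are split across threads, and each thread's chunk is
        // vectorized (e.g., with Intel AVX-512 when the target supports it).
        #pragma omp parallel for simd
        for (std::size_t i = 0; i < n; ++i) {
            y[i] += a * x[i];
        }
    }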

Intel® Fortran Compiler

  • Gain superior Fortran application performance.
  • Get extensive support for the latest Fortran standards (including full Fortran 2008 and expanded Fortran 2018), while remaining backward compatible with Fortran 77.
  • Boost Single Instruction Multiple Data (SIMD) vectorization and threading capabilities (including Intel AVX-512 instructions) using the latest OpenMP parallel programming model.

Intel® Distribution for Python*

  • Accelerate Python* application performance with NumPy, SciPy, and scikit-learn*—all optimized for Intel® processors.
  • Leverage the power of native performance libraries, multithreading, and the latest vectorization instructions for faster computing. Scale with mpi4py. Supports Python 3.x.

Intel® Math Kernel Library

  • Accelerate math processing routines with the fastest, most-used math library for Intel® and compatible processors.
  • Maximize performance with highly optimized, threaded, and vectorized math functions that scale on current and future Intel® platforms.
  • Use de facto standard APIs for simple code integration (see the sketch after this list).

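As a rough sketch of the de facto standard APIs mentioned above, the example below calls the CBLAS interface to MKL's double-precision matrix multiply (cblas_dgemm). The matrix sizes and values are made up for illustration; linking options vary by platform and threading layer (Intel provides a link-line advisor for this).

    // Minimal sketch: C = alpha*A*B + beta*C through the standard CBLAS interface.
    // Matrix dimensions and contents are illustrative.
    #include <mkl.h>
    #include <vector>

    int main() {
        const int m = 2, n = 2, k = 2;
        std::vector<double> A = {1.0, 2.0, 3.0, 4.0};  // m x k, row-major
        std::vector<double> B = {5.0, 6.0, 7.0, 8.0};  // k x n, row-major
        std::vector<double> C(m * n, 0.0);             // m x n result

        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    m, n, k, 1.0, A.data(), k, B.data(), n, 0.0, C.data(), n);
        return 0;
    }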

Intel® Data Analytics Acceleration Library

  • Includes highly optimized machine learning and analytics functions.
  • Simultaneously ingests data and computes results for the highest throughput performance.
  • Supports batch, streaming, and distributed usage models to meet a range of application needs.

Intel® Integrated Performance Primitives

  • Deliver highly optimized image and signal processing, data compression, and cryptography applications using Intel® Streaming SIMD Extensions 2, 3, and 4, and Intel® Advanced Vector Extensions (Intel® AVX) instruction sets.
  • Multicore, multi-OS, and multiplatform ready. Plug in and use APIs to quickly improve applications (see the sketch after this list).
  • Reduces development time and costs.

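As a small illustration of the plug-in style of these APIs, the sketch below adds two short signals with one IPP primitive (ippsAdd_32f from the signal-processing domain). Buffer lengths and values are illustrative; the build normally just adds the IPP headers and libraries.

    // Minimal sketch: element-wise addition of two signals with an IPP primitive.
    // Buffer sizes and values are illustrative.
    #include <ipps.h>

    int main() {
        const int len = 4;
        Ipp32f src1[len] = {1.0f, 2.0f, 3.0f, 4.0f};
        Ipp32f src2[len] = {10.0f, 20.0f, 30.0f, 40.0f};
        Ipp32f dst[len];

        // The library dispatches to the best SSE/AVX code path for the CPU at run time.
        IppStatus st = ippsAdd_32f(src1, src2, dst, len);
        return (st == ippStsNoErr) ? 0 : 1;
    }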

Intel® Threading Building Blocks

  • Specify tasks instead of manipulating threads. Map your logical tasks onto threads with full support for nested parallelism (see the sketch after this list).
  • Load balance and cut task execution time with proven, efficient parallel patterns and work stealing.
  • Use open source and licensed versions for Linux*, Windows*, macOS*, and Android* that are compatible with multiple compilers and Intel processors.
  • Simplify threading parallelism with a flow graph feature that expresses dependency and data flow graphs intuitively and easily.
  • Available for C++ only.

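As a rough sketch of the task-based model described in this list, the example below uses tbb::parallel_for over a blocked_range instead of creating threads by hand; the data and loop body are illustrative. The flow graph interface mentioned above (tbb::flow::graph in tbb/flow_graph.h) builds on the same scheduler for dependency- and data-flow-style parallelism.

    // Minimal sketch: expressing work as tasks with tbb::parallel_for
    // rather than creating and joining threads manually.
    #include <tbb/blocked_range.h>
    #include <tbb/parallel_for.h>
    #include <cstddef>
    #include <vector>

    int main() {
        std::vector<float> data(1000, 1.0f);

        // The runtime splits the range into chunks, schedules them as tasks,
        // and balances load across worker threads via work stealing.
        tbb::parallel_for(tbb::blocked_range<std::size_t>(0, data.size()),
                          [&](const tbb::blocked_range<std::size_t>& r) {
                              for (std::size_t i = r.begin(); i != r.end(); ++i)
                                  data[i] *= 2.0f;
                          });
        return 0;
    }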

Analyze

Get more information about the Intel® Parallel Studio XE 2020 Professional Edition.

Scale

Find out more about the Intel® Parallel Studio XE 2020 Cluster Edition.

Product and Performance Information

1. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information, see Performance Benchmark Test Disclosure.

2. Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804