Intel® oneAPI Base Toolkit Release Notes

By Jennifer L Jiang

Published: 07/10/2019   Last Updated: 09/04/2020

Intel® oneAPI Base Toolkit supports both direct programming and API-based programming, delivering a unified language and libraries that offer full native code support across a range of hardware, including Intel® and compatible processors, Intel® Processor Graphics Gen9 and Gen11, and Intel® Arria® 10 and Intel® Stratix® 10 SX FPGAs. It also contains analysis and debug tools for development and performance tuning.

Major Features Supported

New in Beta Update 9

Features at toolkit level

  • Beta09 of the Intel oneAPI Base Toolkit requires the latest GPU driver. On Windows, drivers with version 27.20.100.8587 or older will not work with beta09. Please follow the Installation Guide for Intel oneAPI Toolkits to install the driver.
  • Inside the Intel® oneAPI Base Toolkit, the Intel® oneAPI DPC++/C++ Compiler contains a C/C++ compiler driver (icx) and a DPC++ compiler driver (dpcpp); a minimal example follows this list.
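
A minimal DPC++ source file shows what the dpcpp driver builds (plain C or C++ sources without SYCL code can be compiled with icx instead). This is a sketch; the file name, kernel name, and build line are illustrative only:

    // simple.cpp -- minimal DPC++ program; illustrative build line: dpcpp simple.cpp -o simple
    #include <CL/sycl.hpp>
    #include <iostream>

    int main() {
      using namespace cl::sycl;
      int data[4] = {1, 2, 3, 4};
      queue q;                                  // default device selection
      {
        buffer<int, 1> buf(data, range<1>(4));
        q.submit([&](handler &h) {
          auto acc = buf.get_access<access::mode::read_write>(h);
          h.parallel_for<class doubler>(range<1>(4),
                                        [=](id<1> i) { acc[i] *= 2; });
        });
      } // buffer destruction waits for the kernel and writes results back
      std::cout << data[0] << ' ' << data[3] << '\n';  // prints: 2 8
      return 0;
    }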

Intel® oneAPI DPC++/C++ Compiler

  • The -g option temporarily suppresses -fsycl-early-optimizations in beta09; see the DPC++ Release Notes for details
  • Support for a subset of OpenMP 5.0 (see the offload sketch after this list)
  • Explicit SIMD programming support inside parallel_for kernels through the sycl::intel::gpu::simd extension template
  • Buffer allocation for FPGA
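
Because only a subset of OpenMP 5.0 is covered, any particular construct may or may not be accepted by this beta. As a rough sketch of a standard OpenMP target-offload loop (the build flags in the comment are illustrative and can differ between releases):

    // vec_add.cpp -- minimal OpenMP 5.0 target-offload sketch.
    // Illustrative build line: icx -fiopenmp -fopenmp-targets=spir64 vec_add.cpp
    #include <cstdio>

    int main() {
      const int n = 1024;
      float a[1024], b[1024], c[1024];
      for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

      // Offload the loop to the default device; map the arrays to/from device memory.
      #pragma omp target teams distribute parallel for map(to: a, b) map(from: c)
      for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];

      std::printf("c[0] = %f\n", c[0]);  // expected: 3.000000
      return 0;
    }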

Intel® oneAPI DPC++ Library

  • STDLib:
    • Ahead-Of-Time (AOT) support for the device library
    • Some C++ standard header facilities moved to the oneapi::dpl:: and oneapi::std:: namespaces (see the usage sketch after this list)
    • Improved performance in the device library
  • PSTL:
    • Improved performance in parallel_transform_scan
    • Support for range-based APIs
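
For reference, a minimal use of the oneapi::dpl:: namespace noted above runs a standard algorithm on the default DPC++ device through the oneDPL device policy. The dpcpp_default policy and the buffer iterators from oneapi/dpl/iterator are assumed to be available, as in the oneDPL getting-started samples:

    #include <oneapi/dpl/execution>
    #include <oneapi/dpl/algorithm>
    #include <oneapi/dpl/iterator>
    #include <CL/sycl.hpp>

    int main() {
      cl::sycl::buffer<int, 1> buf{cl::sycl::range<1>(1000)};
      // std::fill dispatches to the device implementation via the oneDPL policy.
      std::fill(oneapi::dpl::execution::dpcpp_default,
                oneapi::dpl::begin(buf), oneapi::dpl::end(buf), 42);
      return 0;
    }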

Intel® DPC++ Compatibility Tool

  • Updates to dpct:: headers
  • Migration code improvements based on customer feedback

Intel® oneAPI Math Kernel Library

  • Added DPC++ support for select Host, CPU & GPU functions.
  • Added DPC++ support for select CPU & GPU functions.
  • Improved GPU performance for select functions.

Intel® oneAPI Threading Building Blocks

  • Improvements in scheduler
  • Improvements in tools support
  • Performance optimization
  • Note: deprecated functionality has been removed as of beta08

Intel® Distribution for GDB*

  • Important bug fixes:
    • Fixed a problem that caused breakpoints to be missed if the kernel name is too long.
    • Fixed a problem that prevented defining a breakpoint using the kernel function name in OpenCL.
  • For OpenMP CPU offload debugging, it is no longer necessary to set the CL_CONFIG_CPU_ENABLE_NATIVE_SUBGROUPS and CL_CONFIG_CPU_VECTORIZER_TYPE environment variables.
  • The DCD no longer needs to be loaded manually after a reboot; it is loaded automatically and recompiled on kernel upgrade.

Intel® Integrated Performance Primitives

  • Bug fixes

Intel® oneAPI Collective Communications Library

  • GPU API extension (event-based API, focusing on allreduce, bcast, and allgatherv)
  • Support for an application launch mechanism
  • Bug fixes

Intel® oneAPI Deep Neural Networks Library

  • No change

Intel® oneAPI Data Analytics Library

  • Data sources, data management, and K-Means API extensions to align with the oneDAL specification
  • Support for the Level Zero v1.0 specification
  • Support for oneAPI C++ interfaces on macOS*
  • Deprecating 32-bit support

Intel® oneAPI Video Processing Library

  • New features supported on CPU only:
    • H.264 & MJPEG SW Decode/Encode
    • DPC++ interoperability
    • VPP
    • Support for internally allocated buffers

Intel® Distribution for Python*

  • Experimental Windows support for auto-offloading data parallel kernels inside Numba functions onto Intel GPUs
  • Initial alpha release of PyDPPL, a Python wrapper for SYCL and OpenCL, on Windows
  • scikit-ipp support for multi-threading of transform functions and partial multi-threading of filters using OpenMP
  • mkl_sparse and mkl_umath are now supported on macOS
  • Latest CVE patches have been applied

Intel® Advisor

  • Technical Preview: New and improved Intel® Advisor user interface workflows and toolbars, incorporating Roofline analysis for GPUs and Offload Advisor.
  • Technical Preview: New recommendations for data transfer optimizations.

Intel® VTune™ Profiler

  • Improved I/O analysis that can identify where slow MMIO writes are made
  • When optimizing FPGA software performance, VTune can now report stall and data transfer data for each compute unit in the FPGA.

Intel® FPGA Add-On for oneAPI Base Toolkit (Optional)

  • Seamless installer: the FPGA add-on is integrated into the Base Toolkit online/offline installer as an optional component
  • Linux repository (YUM/APT) distribution for FPGA add-on packages

New in Beta Update 8

Features at toolkit level

  • macOS* support for the CPU target with 8 components; see details in the Intel oneAPI Base Toolkit System Requirements

  • The Intel® C++ Compiler is now included, with the clang-based icx driver

  • The default installation path has changed:
    • Linux or macOS: /opt/intel/oneapi
    • Windows: C:\Program Files (x86)\Intel\oneAPI

Intel® oneAPI DPC++ Compiler & Intel® C++ Compiler

  • Intel® oneAPI DPC++ Compiler
    • Performance improvements for sycl_benchmarks on CPU
    • Manual LSU control
    • Support for the equivalent of CL_CHANNEL_*_INTELFPGA, forwarded to the clCreateBuffer call
  • The Intel® C++ Compiler is now included with the following features:
    • clang-based icx driver
    • Vectorization unrolling
    • OpenMP* SIMD ORDERED
    • Initial OpenMP* 5.0 support

Intel® oneAPI DPC++ Library

  • RNG package integration
  • parallel_transform_scan refactoring
  • Public interface refactoring
  • parallel_sort optimization for non-arithmetic types and for comparators other than '<' and '>'
  • Range-based API support
  • Performance optimization

Intel® DPC++ Compatibility Tool

  • Improved migration coverage for Nvidia* Thrust*
  • More migration coverage for CUDA* libraries (cuBLAS, cuSPARSE, cuSOLVER, and cuRAND)
  • Updates to DPCT header files (dpct::)

Intel® oneAPI Math Kernel Library

  • Added DPC++ support for Host, CPU & GPU for select functions.
  • Added DPC++ support for Host & CPU for select functions.
  • Added MKL Graph support for GraphBLAS
  • Added support for Windows DPC++ dynamic linking

Intel® oneAPI Threading Building Blocks

  • Reworked (revamped) TBB to align with modern C++
  • Added NUMA API support (see the sketch after this list)
  • Removed deprecated functionality
  • Improvements in TBB malloc and the scheduler
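
A minimal sketch of the NUMA API support mentioned above, following the documented tbb::info and task_arena::constraints pattern (header paths and exact behavior may differ slightly between beta drops):

    #include <oneapi/tbb/info.h>
    #include <oneapi/tbb/task_arena.h>
    #include <oneapi/tbb/task_group.h>
    #include <oneapi/tbb/parallel_for.h>
    #include <vector>

    int main() {
      // Create one arena per NUMA node, each constrained to that node.
      std::vector<tbb::numa_node_id> nodes = tbb::info::numa_nodes();
      std::vector<tbb::task_arena> arenas(nodes.size());
      std::vector<tbb::task_group> groups(nodes.size());

      for (std::size_t i = 0; i < nodes.size(); ++i) {
        arenas[i].initialize(tbb::task_arena::constraints(nodes[i]));
        arenas[i].execute([&groups, i] {
          groups[i].run([] {
            tbb::parallel_for(0, 1000, [](int) { /* per-node work */ });
          });
        });
      }
      for (std::size_t i = 0; i < nodes.size(); ++i)
        arenas[i].execute([&groups, i] { groups[i].wait(); });
      return 0;
    }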

GDB*

  • Support C++ OpenMP* offload debugging on Linux*.
  • Support non-stop mode to inspect a stopped GPU thread while the other CPU & GPU threads are running. The application debugger does not stop the other threads in non-stop mode.
  • Converging CPU debug capabilities, including:
    • Intel(R) Processor Trace Write (ptwrite),
    • Visual Function Call History (func_call),
    • Transactional Synchronization Extensions (TSX),
    • BFLOAT16, etc.

Intel® Integrated Performance Primitives

  • Added IPP Cryptography domain
  • Extended optimizations for the Intel® IPP Cryptography AES cipher and RSA support on the 10th Generation Intel® Core™ processor family.
  • Added a new universal CRC function to compute CRC8, CRC16, CRC24, and CRC32 checksums
  • Reinstated the ippiComplement function, optimized for Intel® AVX-512, AVX2, and SSE4.2

Intel® oneAPI Collective Communications Library

  • GPU scale-out
  • Sparse buffers
  • Horovod on Spark

Intel® oneAPI Data Analytics Library

  • Added new APIs for the PCA, K-Means, KNN, Random Forest, and SVM algorithms
  • Added full integrated oneTBB support
  • Performance optimizations

Intel® oneAPI Deep Neural Networks Library

  • Extended GEN12LP optimizations
  • Initial int8 SPR optimizations
  • Extended the eltwise primitive with support for the 'round' operation
  • Linear-before-reset (LBR) GRU for GPU

Intel® oneAPI Video Processing Library

  • New oneVPL library and reference implementation with backward compatibility with Intel® Media SDK
  • CPU plugin implementation with H.265 & AV1 SW Decode/Encode
  • oneVPL dispatcher with C API
  • First release to the Open Source repository

Intel® Distribution for Python*

  • Initial support for Intel GPU execution of Python/Numba code on Linux OS
  • Initial release of PyDPPL, a Python wrapper for SYCL and OpenCL
  • Released the mkl_sparse package for MKL-powered sparse matrices and the mkl_umath package for Intel® technologies-powered NumPy universal functions

Intel® Advisor

  • Added memory-level Roofline analysis, which helps pinpoint exact memory hierarchy bottlenecks (L1, L2, L3, or DRAM)
  • Technical Preview: New and improved Intel® Advisor user interface workflows and toolbars, incorporating Roofline analysis for GPUs and Offload Advisor
  • Technical Preview: Offload Advisor introduces new data transfer metrics, hinting at potential bottlenecks

Intel® VTune™ Profiler

  • Refined analysis for GPU accelerators with the addition of OpenMP* offload pragma-aware metrics to HPC performance analysis
  • Added a Performance Snapshot as a first profiling step to suggest the detailed analyses (memory, threading, etc.) that offer the most optimization opportunity
  • Level Zero support on Windows

Intel® FPGA Add-On for oneAPI Base Toolkit (Optional)

  • New custom platform add-on containing a newer version of the Intel® Quartus® software

New in Beta Update 7

Features at toolkit level

  • The Level Zero runtime is supported and enabled on Windows for some components
  • "module" support on Linux*

Intel® oneAPI DPC++ Compiler

  • Complex data types support
  • The compiler switches to Level Zero by default on Windows
  • C++17 is enabled by default

Intel® oneAPI DPC++ Library

  • Implemented "device_vector" and "host_vector"
  • Implemented C99 complex APIs for accelerators on Windows

Intel® DPC++ Compatibility Tool

  • Migration improvements for Math and Texture/Surface API calls
  • Thrust* migration improvements
  • Updates to dpct:: headers

Intel® oneAPI Math Kernel Library

  • Level Zero is supported by default on Windows*

For GPU:

  • RNGs – Multiple functions added.
  • Vector Math - USM support for device-only pointers.

For Host, CPU & GPU:

  • BLAS - strided batch AXPY
  • FFT – 2D & 3D R2C OpenCL kernels
  • LAPACK – USM interface for STEQR.

For Host, CPU & GPU:

  • LAPACK – Multiple pointer-based USM interfaces to functions
  • Summary Statistics – Multiple functions added.

GDB*

  • Support debugging kernels using OpenMP offload directives
  • Support for SIMD lane syntax for the "thread apply" commands

Intel® oneAPI Threading Building Blocks

  • No change

Intel® Integrated Performance Primitives

  • Bug fixes

Intel® oneAPI Collective Communications Library

  • Enabled deep learning frameworks on Gen9

Intel® oneAPI Data Analytics Library

  • Multiple new GPU Machine Learning algorithms
  • Several CPU performance optimizations
  • Level Zero is supported as the default on Windows

Intel® oneAPI Deep Neural Networks Library

  • Support for Arm* 64-bit architecture (AArch64) and other non-x86 processors
  • Provided APIs for primitive cache management
  • Level Zero is the default on Windows

Intel® oneAPI Video Processing Library

  • Expanded codecs (AV1, VP9 decode+encode)
  • API enhancements for composition, async, sharpen, ProcAmp, and JSON parameters

Intel® Advisor

Offload Advisor:

  • Support for GPU performance projection on a Windows host
  • Capable of detecting data transfers of stack-allocated objects

Intel® Distribution for Python*

  • Released scikit-ipp 1.0.0 for image warping, image filtering, and morphological operations
  • Implemented performance optimizations for the K-Means algorithm in scikit-learn
  • Initial support for scikit-learn with the device context (GPU)

Intel® VTune™ Profiler

  • Intel® VTune™ Profiler now supports the latest Intel GPUs, including Gen9, Gen11, and Gen12
  • Added an improved GPU Memory Hierarchy diagram annotated with GPU hardware performance metrics
  • I/O analysis is improved with a better summary and data presentation, with additional DDIO metrics for Skylake and Cascade Lake servers
  • Level Zero support on Linux*

Intel® FPGA Add-On for oneAPI Base Toolkit (Optional)

  • No change

New in Beta Update 6

Features at toolkit level

  • Added command prompt menu item for each Microsoft Visual Studio* edition installed
  • Level Zero runtime support for GPU on Linux*

Intel® oneAPI DPC++ Compiler

  • The compiler switches to Level Zero by default on Linux*
  • Limited interoperability between Level Zero and OpenCL*
  • Specialization constants

Intel® oneAPI DPC++ Library

  • Support parallel_sort (radix sort) optimization
  • Support for std::complex on Windows*

Intel® DPC++ Compatibility Tool

  • USM-enabled migration of cuRAND API calls (note: not all API calls can be migrated by the tool)
  • More memory management API migration coverage, improved stream management API migration
  • Updates to dpct:: headers

Intel® oneAPI Math Kernel Library

  • Unified Shared Memory (USM) support for a limited set of LAPACK functions
  • DPC++ functionality (only USM interfaces) with CPU/GPU support for HPCG benchmark
  • DPC++ functionality with GPU support for MCG31 and MCG59 engines

GDB*

  • Support for Intel integrated graphics Gen11

Intel® Integrated Performance Primitives

  • Support for BZIP2 v1.0.8
  • Optimized Resize 8u for Ice Lake
  • Developed ippsFIRSparse_32fc

Intel® oneAPI Collective Communications Library

  • Enabled oneDAL on Spark*
  • Enabled BigDL

Intel® oneAPI Data Analytics Library

  • DBSCAN algorithm
  • SVM algorithm

Intel® oneAPI Deep Neural Networks Library

  • LSTM with projection for CPU
  • Eigen threadpool support for CPU
  • Level Zero runtime support for GPU

Intel® oneAPI Video Processing Library

  • Expanded codecs (10-bit SW encode, JPEG/MJPEG)
  • API enhancements

Intel® Advisor

  • Offload Advisor: support for GPU performance projection on a Windows host

Intel® Distribution for Python*

  • Decision Tree Classifier & prediction in scikit-learn
  • AdaBoost ensembles for scikit-learn

Intel® VTune™ Profiler

  • System Overview hardware tracing improvements: represent module entry points, user/kernel metrics, and interrupts
  • Improved handling of collections that are too short for PMU-based analysis

Intel® FPGA Add-On for oneAPI Base Toolkit (Optional)

  • Support for 3 FPGA boards (including PAC A10, PAC S10, and a custom platform) with 3 add-on installers.

 

New in Beta Update 5

Features at toolkit level

  • Automatic detection of the supported GPU driver, with a warning message printed if the minimum required GPU driver version is not installed.
  • If beta04 or the older beta03 is installed on the system, please first uninstall it by following the Intel® oneAPI Toolkits Installation Guide, then install beta05. Side-by-side installation support is not fully implemented in this beta05 release.
  • Two new plugins for Microsoft* Visual Studio Code* (VS Code), a Sample Browser plugin for Intel® oneAPI Toolkits and a Launcher plugin for Intel® oneAPI Analyzers, are available on the VS Code Marketplace at https://marketplace.visualstudio.com/publishers/intel-corporation

Intel® oneAPI DPC++ Compiler

  • Initial multi-tile GPU management
  • Device code split into modules
  • FPGA diagnostic improvements
  • Basic DPC++ & OpenMP* composability

Intel® oneAPI DPC++ Library

  • Improved support for Unified Shared Memory (USM) pointers
  • Extension APIs to support USM version of device_ptr

Intel® DPC++ Compatibility Tool

  • Updates to dpct:: headers
  • Improved migration of cuRAND APIs to oneMKL RNG
  • USM-enabled migration of BLAS APIs

Intel® oneAPI Math Kernel Library

Introduced DPC++ support for the following:

  • For Host, CPU & GPU:
    • Triangular system solver (USM interfaces)
    • Input data to all Sparse BLAS functionality (USM interfaces)
    • Sparse GEMM function (buffer-based and USM interfaces)
  • For Host & CPU:
    • Select LAPACK functions (USM interfaces)
    • Non-deterministic random number generator
  • For GPU:
    • Orthogonal & unitary matrix multiplication (buffer-based interfaces)

Added the following OpenMP GPU offload features:

  • Asynchronous offload capability for BLAS functions
  • Triangular system solver (TRTRS)

Intel® oneAPI Collective Communications Library

  • Added bfloat16 datatype support (except ccl_sparse_allreduce)
  • Added OFI/SHM provider support
  • Added Alltoallv collective

Intel® oneAPI Data Analytics Library

New CPU functionality:

  • Elastic Net algorithm with L1 and L2 regularization in batch computation mode.
  • Probabilistic classification for the Decision Forest Classification algorithm with a choice of voting method to calculate probabilities.

Intel® oneAPI Deep Neural Networks Library

  • No new features

Intel® oneAPI Video Processing Library

  • New features added:
    • Encode() to CPU extension
    • SVT-HEVC Encoding
    • 10-bit Decode support (CPU, GEN)
  • Enhanced VPL Memory

Intel® Advisor

  • GTPin 2.6 integration
  • First Bottleneck visualization for Integrated Roofline chart
  • Integrated Roofline single kernel view and guidance
  • Bug fixes

Intel® Distribution for Python*

  • daal4py accelerates the DBSCAN algorithm in scikit-learn.
  • Added support for the Elastic Net algorithm and classification probabilities for the Decision Forest algorithm in daal4py.

Intel® VTune™ Profiler

  • Easier workflow for FPGA performance analysis
  • Additional GPU performance metrics

Intel® FPGA Add-On for oneAPI Base Toolkit (Optional)

  • No change

New in Beta Update 4

Features at toolkit level

  • In this release, the Intel® oneAPI Base Toolkit supports co-existence with Intel® Parallel Studio XE or Intel® System Studio on Windows*, from the command line or in Visual Studio 2017* or Visual Studio 2019*.
  • The GPU driver for Ubuntu OS is no longer in the Intel® oneAPI Base Toolkit package; please read the Installation Guide for Intel® oneAPI Toolkits for GPU driver installation instructions.

Intel® oneAPI DPC++ Compiler

  • Visual Studio integration supports the Intel® Performance Libraries (oneMKL, oneTBB, IPP, and oneDAL).
  • "printf" support in device code
  • Introduced the infrastructure for supporting standard C/C++ libraries in device code.
  • queue::submit now throws synchronous exceptions (see the sketch after this list)
  • The cl::sycl::pipe class moved to the cl::sycl::intel namespace
  • Added single_task and parallel_for shortcut methods to cl::sycl::ordered_queue.
  • Bug fixes, diagnostic improvements, and other enhancements. See more in the Intel® oneAPI DPC++ Compiler Release Notes.
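
A minimal sketch of catching the synchronous exceptions now thrown by queue::submit, together with an async_handler for asynchronous errors (the surrounding error-handling details beyond what the bullet states are assumptions):

    #include <CL/sycl.hpp>
    #include <iostream>

    int main() {
      namespace s = cl::sycl;
      // Asynchronous errors are delivered later through the queue's async_handler.
      auto async_err_handler = [](s::exception_list errors) {
        for (std::exception_ptr e : errors) {
          try { std::rethrow_exception(e); }
          catch (const s::exception &ex) { std::cerr << "async: " << ex.what() << '\n'; }
        }
      };
      s::queue q{s::default_selector{}, async_err_handler};

      try {
        // queue::submit reports synchronous failures by throwing.
        q.submit([&](s::handler &cgh) {
          cgh.parallel_for<class noop>(s::range<1>(16), [=](s::id<1>) { /* no-op */ });
        });
        q.wait_and_throw();  // surfaces asynchronous errors as well
      } catch (const s::exception &ex) {
        std::cerr << "sync: " << ex.what() << '\n';
        return 1;
      }
      return 0;
    }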

Intel® oneAPI DPC++ Library

  • Added 64-bit atomics support.
  • Added <complex>, most functions in <cmath> (GNU libstdc++), and <ratio>.
  • Bug fixes and enhancements. See more in Intel® oneAPI DPC++ Library Release Notes.

Intel® DPC++ Compatibility Tool

  • More texture API migration coverage
  • More memory management API migration coverage
  • Added the migration of 14 cuRAND APIs to oneMKL RNG
  • Many improvements to the readability and maintainability of generated code
  • Bug fixes and improvements. See more in Intel® DPC++ Compatibility Tool Release Notes.

Intel® oneAPI Math Kernel Library

  • BLAS: pointer-based (USM) interface support; C/C++ OpenMP* GPU offload support (see the sketch after this list).
  • LAPACK: pointer-based (USM) interface support for POTRF and POTRS; C/C++ OpenMP GPU offload support for xGETRF; GPU device support for POTRI.
  • Sparse: Optimized triangular system solver (TRSV) for matrices in CSR format on Gen9.
  • FFT: added support for C/C++ OpenMP offload for 1D C2C FFTs (single/double precision).
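
A sketch of the pointer-based (USM) BLAS interface mentioned above. The oneapi::mkl namespace, the <oneapi/mkl.hpp> header, and the gemm argument order follow later oneMKL documentation and are assumptions for this beta, as are the malloc_shared/free USM helpers:

    #include <CL/sycl.hpp>
    #include <oneapi/mkl.hpp>
    #include <cstdint>

    int main() {
      namespace s = cl::sycl;
      s::queue q;
      const std::int64_t m = 64, n = 64, k = 64;

      // USM allocations addressable from both host and device.
      float *A = s::malloc_shared<float>(m * k, q);
      float *B = s::malloc_shared<float>(k * n, q);
      float *C = s::malloc_shared<float>(m * n, q);
      for (std::int64_t i = 0; i < m * k; ++i) A[i] = 1.0f;
      for (std::int64_t i = 0; i < k * n; ++i) B[i] = 1.0f;
      for (std::int64_t i = 0; i < m * n; ++i) C[i] = 0.0f;

      // Column-major GEMM on raw USM pointers; returns an event to wait on.
      auto done = oneapi::mkl::blas::gemm(q,
          oneapi::mkl::transpose::nontrans, oneapi::mkl::transpose::nontrans,
          m, n, k, 1.0f, A, m, B, k, 0.0f, C, m);
      done.wait();

      s::free(A, q); s::free(B, q); s::free(C, q);
      return 0;
    }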

Intel® oneAPI Collective Communications Library

  • Support for RHEL 7.x (CPU only) and CentOS 7.x (CPU only)
  • Added 2D and pipelined ring allreduce algorithms (CCL_ALLREDUCE=2d, CCL_ALLREDUCE=ring).
  • Bug fixes, stability and performance improvements.

Intel® oneAPI Data Analytics Library

  • Initial support for heterogeneous input data formats
  • Initial support for Unified Shared Memory (USM) for Homogeneous data
  • New GPU single-node algorithms, such as Gradient Boosted Trees, Linear Regression, and K-Nearest Neighbors
  • Bug fixes and stability improvements.

Intel® oneAPI Deep Neural Networks Library

  • New supported primitives: MatMul, LogSoftmax, and Resampling
  • New functionality: bfloat16 support for RNN on GPU; asymmetric quantization support for MLPs
  • Integration with the linux-perf profiling tool
  • Performance improvements on CPU and GPU
  • Bug fixes

Intel® oneAPI Video Processing Library

  • Basic Encode (AVC, HEVC) for GEN9
  • Transcode (AVC, HEVC)
  • Enhanced VPL Memory; Improved extension design and Decode (AVC, HEVC)

Intel® Advisor

  • GPU roofline collection supports Windows in this release.
  • Migrated to Python 3.*
  • Enhancements and bug fixes.

Intel® Distribution for Python*

  • Optimized sklearn.cluster.DBSCAN using DAAL for automatic and brute force methods.

Intel® VTune™ Profiler

  • Simplified system configuration requirements for GPU analysis. GPU utilization analysis is now available without a prerequisite of rebuilding the Linux kernel. See more in the VTune Profiler Release Notes below.
  • The System Overview analysis type contains CPU and GPU Concurrency analysis.
  • Enhancements and bug fixes.

Intel® FPGA Add-On for oneAPI Base Toolkit (Optional)

  • Improved the installation script to work with Ubuntu OS and to remove duplicated library files.
  • Resolved a license check issue, caused by an expired license, that made the FPGA Add-On stop working in 2020.

Features in initial beta (update 3)

  • In this release, the following features are provided:
    • Windows* 10 x64 is supported. FPGA support on Windows is limited to the emulator only.
    • Integration with Microsoft* Visual Studio* 2017 and 2019
  • Intel® oneAPI DPC++ Compiler supports the SYCL specification 1.2.1 revision 5 plus extensions, including:
    • Unified Shared Memory - Explicit and Restricted capabilities are supported (see the sketch after this list)
    • Sub-groups for NDRange Parallelism
  • Intel® DPC++ Compatibility Tool supports:
    • Partial migration of kernel definitions/calls, memory management (including unified memory), device management, data types, error handling, math, events, streams, and more.
  • GDB* supports:
    • Breakpoints inside a kernel
    • Source-level stepping
  • Intel® oneAPI Math Kernel Library (oneMKL) supports:
    • BLAS, LAPACK, FFT, Sparse, Vector math, and Random number generator
  • Intel® oneAPI Data Analytics Library supports:
    • K-Means Clustering
    • Principal Components Analysis (PCA)
  • Intel® oneAPI Deep Neural Network Library provides:
    • Support for DPC++ compiler
    • SYCL API extensions and interoperability with SYCL code
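
As an illustration of the Unified Shared Memory support listed under the compiler above, a minimal shared-allocation kernel might look like the following. The malloc_shared and free entry points follow the Intel USM extension and later SYCL naming, so treat the exact spellings as assumptions for this early beta:

    #include <CL/sycl.hpp>
    #include <iostream>

    int main() {
      namespace s = cl::sycl;
      s::queue q;

      // Shared USM: one pointer usable on host and device, with no buffers or accessors.
      const int n = 16;
      int *data = s::malloc_shared<int>(n, q);
      for (int i = 0; i < n; ++i) data[i] = i;

      q.submit([&](s::handler &cgh) {
        cgh.parallel_for<class scale>(s::range<1>(n),
                                      [=](s::id<1> i) { data[i] *= 2; });
      });
      q.wait();  // the host may read the shared pointer once the kernel finishes

      std::cout << data[n - 1] << '\n';  // expected: 30
      s::free(data, q);
      return 0;
    }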

System Requirements

Please see the Intel oneAPI Base Toolkit System Requirements.

Installation Instructions

Please visit the Installation Guide for Intel oneAPI Toolkits.

How to Start Using the Tools

Please reference:

Known Issues and Workarounds

  1. In the beta09 release, the upgrade installation on Linux* via APT has a known issue and fails to install with the following error:
    • Errors were encountered while processing: /tmp/apt-dpkg-install-VIzkXS/59-intel-oneapi-dpcpp-debugger-eclipse-cfg-10.0-146.beta09_all.deb
    • The workaround is:
      1. Uninstall the previous version by following the Installation Guide for Intel oneAPI Toolkits.
      2. Reinstall beta09 via the APT installation.
  2. Running any GPU code on a Virtual Machine is not supported at this time.
  3. If you have chosen to download the Get Started Guide to use offline, viewing it in Chrome may cause the text to disappear when the browser window is resized. To fix this problem, resize your browser window again, or use a different browser.
  4. On Linux Platform:
    • Eclipse* 4.12: a code sample project created from a Makefile by the IDE plugin will not build. This is a known issue with Eclipse 4.12. Please use Eclipse 4.9, 4.10, or 4.11.
    • Eclipse plugin specific: If Intel® Parallel Studio XE (IPSXE) is installed together with the Intel C++ Compiler's Eclipse plugin, the oneAPI Toolkit can be installed but its Eclipse plugin installation will fail. The workaround is:
      • Uninstall the existing Eclipse plugin from IPSXE before installing the oneAPI Toolkit.
  5. On Windows platform:
    • If you encounter a runtime error such as "... ... sycl.dll was not found. ... ..." or a similar "Unable to start program" message when running your program within Visual Studio, follow the instructions below to update the project property "Debugging > Environment" so the program can run:
      • Open the "Debugging > Environment" project property, click the drop-down on the right, and select Edit.
      • Copy and paste the default PATH environment variable value from the lower section to the upper section. This step is very important because of how Visual Studio 2017 or newer handles additional directories in the "PATH" environment variable.
      • Add any additional directories needed by the program for its DLL files to the path.

    • Because of the two known issues with Visual Studio 2019 listed below, the beta release is limited to supporting Visual Studio 2019 16.3.0 to 16.3.3.
      • error: expected an attribute name _NODISCARD _Check_return_
        This issue is fixed in Visual Studio 2019 16.4 preview.
      • error LNK2005: "bool const std::_Is_integral<bool >" (??$_Is_integral@_N@std@@3_NB) already defined in init-62f295.obj
        This issue only exists with Visual Studio 2019 16.2.5 or older; it is fixed in Visual Studio 2019 16.3.0 or newer.
    • Code samples for Visual Studio 2017 are created based on Windows SDK 10.0.17763.0. If you see the error below when building a code sample, follow the instructions from the error message to fix the issue:
        error MSB8036: The Windows SDK version 10.0.17763.0 was not found. Install the required version of Windows SDK or change the SDK version in the project property pages or by right-clicking the solution and selecting "Retarget solution".
          
    • Error when running a code sample program within Visual Studio: unable to start program 'xxx.exe'.
      Please follow the instructions below for the workaround:
      • Open the Tools > Options dialog, select the Debugging tab, and select the "Automatically close the console when debugging stops" check box.

Release Notes for All Tools included in Intel® oneAPI Base Toolkit

Notices and Disclaimers

Intel technologies may require enabled hardware, software or service activation.

No product or component can be absolutely secure.

Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

Product and Performance Information

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804