Technical Library

January 19, 2021
PMem Learn More Series Part 3
This is Part 3 of an introduction to Persistent Memory (PMem). In this section, we introduce the user to the basics of Persistent Memory architecture and some basic terminology and concepts.

January 19, 2021
PMem Learn More Series Part 2
This is Part 2 of an introduction to Persistent Memory (PMem). In this section, we introduce the user to the basics of Persistent Memory architecture and some basic terminology and concepts.

January 19, 2021
PMem Learn More Series Part 1
This is Part 1 of an introduction to Persistent Memory (PMem). In this section, we introduce the user to the basics of Persistent Memory architecture and some basic terminology and concepts.

January 19, 2021
Intel® Distribution of OpenVINO™ toolkit
The Intel® Distribution of OpenVINO™ toolkit enables cross-architecture inference execution on GPUs, VPUs, and FPGAs with its improved Model Server.

January 19, 2021
Intel® oneAPI Math Kernel Library
The Intel® oneAPI Math Kernel Library has been optimized for cross-architecture performance, enabling complex math-processing routines to run on CPUs, GPUs, FPGAs, and other accelerators.
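For context, here is a minimal sketch of calling one of those routines from C++ through the library's standard CBLAS interface; the matrix sizes and values are illustrative, and the sketch assumes the classic CPU-side API linked against oneMKL rather than the DPC++ interfaces.

    #include <mkl.h>
    #include <cstdio>

    int main() {
        // Small row-major matrices; values are placeholders.
        const int m = 2, n = 2, k = 2;
        double A[] = {1, 2, 3, 4};   // m x k
        double B[] = {5, 6, 7, 8};   // k x n
        double C[] = {0, 0, 0, 0};   // m x n, receives the result

        // C = 1.0 * A * B + 0.0 * C
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    m, n, k, 1.0, A, k, B, n, 0.0, C, n);

        std::printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
        return 0;
    }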

Intel® Open Image Denoise is an open source library of high-performance, high-quality, machine-learning-based denoising filters for images rendered with ray tracing.
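As a rough illustration of how the library is used, the sketch below denoises a single color buffer through the C API; the buffer pointers and image size (colorPtr, outputPtr, width, height) are hypothetical, and only the minimum filter setup from the 1.x API is shown.

    #include <OpenImageDenoise/oidn.h>

    void denoise(float* colorPtr, float* outputPtr, int width, int height) {
        // Create and commit a device (CPU by default in OIDN 1.x).
        OIDNDevice device = oidnNewDevice(OIDN_DEVICE_TYPE_DEFAULT);
        oidnCommitDevice(device);

        // "RT" is the generic ray-tracing denoising filter.
        OIDNFilter filter = oidnNewFilter(device, "RT");
        oidnSetSharedFilterImage(filter, "color",  colorPtr,  OIDN_FORMAT_FLOAT3,
                                 width, height, 0, 0, 0);
        oidnSetSharedFilterImage(filter, "output", outputPtr, OIDN_FORMAT_FLOAT3,
                                 width, height, 0, 0, 0);
        oidnCommitFilter(filter);
        oidnExecuteFilter(filter);

        oidnReleaseFilter(filter);
        oidnReleaseDevice(device);
    }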

How to simplify and expedite use of both a CPU and an Xe GPU using OpenMP*.
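A minimal sketch of the offload pattern behind that article follows; the array names and sizes are illustrative, and it assumes a compiler with OpenMP* GPU offload enabled (for example, icpx with -fiopenmp -fopenmp-targets=spir64).

    #include <cstdio>

    int main() {
        const int n = 1024;
        float a[1024], b[1024], c[1024];

        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // Run the loop on the GPU; the map clauses copy the inputs to
        // device memory and the result back to the host.
        #pragma omp target teams distribute parallel for map(to: a, b) map(from: c)
        for (int i = 0; i < n; ++i)
            c[i] = a[i] + b[i];

        std::printf("c[0] = %g\n", c[0]);
        return 0;
    }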

January 19, 2021
Profile DPC++ and GPU Workload Performance with the Intel® VTune™ Profiler
Demonstrates how to analyze and optimize offload performance using the Intel® oneAPI version of Intel® VTune™ Profiler.

January 19, 2021
Scientific Visualization and Photo-Realistic Design With Intel® oneAPI Rendering Toolkit
Learn how Intel® OSPRay and Intel® OSPRay Studio open new doors for developers to bring scalable, interactive rendering to large, complex data sets, with the Intel® oneAPI Rendering Toolkit.

Offload your code from CPU to GPU and optimize it with Intel® Advisor.

January 19, 2021
Migrating Your Existing CUDA* Code to Data Parallel C++
Find out how to migrate CUDA* code to Data Parallel C++ (DPC++) using the Intel® DPC++ Compatibility Tool, a one-time migration engine that ports both kernels and API calls.
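The Compatibility Tool's real output is more detailed than this, but as a hand-written sketch of where a trivial element-wise CUDA kernel ends up, the DPC++ equivalent looks roughly like the following; the pointers are assumed to be USM (unified shared memory) allocations.

    #include <sycl/sycl.hpp>

    // Stands in for a CUDA kernel of the form c[i] = a[i] + b[i]
    // launched with one thread per element.
    void vector_add(sycl::queue& q, const float* a, const float* b, float* c, int n) {
        q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> idx) {
            const size_t i = idx[0];
            c[i] = a[i] + b[i];
        }).wait();
    }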

The Intel® oneAPI HPC Toolkit is a new workhorse product for high-performance computing.