Article

Intel® IPP - Threading / OpenMP* FAQ

This page contains common questions and answers about multi-threading in the Intel® IPP library.
Last updated on 06/23/2019 - 18:50
Article

OpenMP* and the Intel® IPP Library

How to configure OpenMP* in the Intel® IPP library to maximize the multi-threaded performance of Intel IPP primitives.
Last updated on 06/23/2019 - 18:50
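Both of these articles concern controlling the number of OpenMP threads used by the threaded Intel IPP layer. As a minimal, illustrative sketch of the general technique (not code from the articles), the Python snippet below runs a hypothetical IPP-based binary, ./ipp_demo, under different OMP_NUM_THREADS settings; the binary name and thread counts are assumptions for illustration.

    import os
    import subprocess

    # The threaded Intel IPP layer is built on OpenMP, so its thread count
    # can be influenced through OMP_NUM_THREADS. "./ipp_demo" is a
    # hypothetical binary that calls threaded IPP primitives.
    for threads in (1, 2, 4, 8):
        env = dict(os.environ, OMP_NUM_THREADS=str(threads))
        print(f"Running with OMP_NUM_THREADS={threads}")
        subprocess.run(["./ipp_demo"], env=env, check=True)

The articles above cover the library's own threading controls and recommended settings in more detail.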
Article

Code Sample: Exploring MPI for Python* on Intel® Xeon Phi™ Processor

Learn how to write an MPI program in Python* and take advantage of Intel® multicore architectures using OpenMP* threads and Intel® AVX-512 instructions.
Authored by Nguyen, Loc Q (Intel). Last updated on 07/06/2019 - 16:30
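As a hedged, minimal sketch of the kind of MPI program the article introduces (written with mpi4py; this is not the article's own code, and the problem size is a placeholder), the snippet below computes a sum across ranks with a reduction.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank sums a strided slice of the range; partial sums are
    # combined on rank 0 with an MPI reduction.
    local = sum(range(rank, 1_000_000, size))
    total = comm.reduce(local, op=MPI.SUM, root=0)

    if rank == 0:
        print("total =", total)

Launched, for example, with mpirun -n 4 python reduce_demo.py; within each rank, OpenMP threads or vectorized libraries can then use the cores assigned to that rank.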
Article

Fine-Tuning Optimization for a Numerical Method for Hyperbolic Equations Applied to a Porous Media Flow Problem with Intel® Tools

This paper presents an analysis of potential optimizations for a Godunov-type semi-discrete central scheme, applied to a particular hyperbolic problem in porous media flow, using OpenMP* and Intel® Advanced Vector Extensions 2.
Last updated on 07/03/2019 - 20:00
File Wrapper

Parallel Universe Magazine - Issue 22, September 2015

Authored by admin. Last updated on 12/12/2018 - 18:08
Article

Programming for Multicore and Many-core Products including Intel® Xeon® processors and Intel® Xeon Phi™ X100 Product Family coprocessors

The programming models used for multicore processors today are available for many-core coprocessors as well. Therefore, the best way to explain how to program both Intel Xeon processors and Intel Xeon Phi coprocessors is to explain the options for parallel programming. This paper provides the foundation for understanding how multicore processors and many-core coprocessors are...
Authored by James R. (Blackbelt). Last updated on 06/14/2019 - 12:10
File Wrapper

Parallel Universe Magazine - Issue 27, January 2017

Authored by admin. Last updated on 03/21/2019 - 12:00
File Wrapper

Parallel Universe Magazine - Issue 24, March 2016

Authored by admin. Last updated on 12/12/2018 - 18:08
Article

Maximize TensorFlow* Performance on CPU: Considerations and Recommendations for Inference Workloads

This article describes performance considerations for CPU inference using Intel® Optimization for TensorFlow*.
Authored by Nathan Greeneltch (Intel). Last updated on 04/01/2019 - 13:01
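As an illustrative sketch of the threading knobs such an article typically discusses (assuming the TensorFlow 2.x API; the thread counts and environment values below are placeholders, not the article's recommendations):

    import os

    # Environment variables commonly tuned for MKL-backed TensorFlow builds;
    # set them before TensorFlow is imported. Values here are placeholders.
    os.environ.setdefault("OMP_NUM_THREADS", "8")
    os.environ.setdefault("KMP_BLOCKTIME", "1")

    import tensorflow as tf

    # TF 2.x threading API: intra-op threads parallelize a single op,
    # inter-op threads run independent ops concurrently.
    tf.config.threading.set_intra_op_parallelism_threads(8)
    tf.config.threading.set_inter_op_parallelism_threads(2)

Appropriate values depend on the physical core count and on whether the inference workload is latency- or throughput-bound.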
Article

Caffe* Optimized for Intel® Architecture: Applying Modern Code Techniques

This paper demonstrates a special version of Caffe* — a deep learning framework originally developed by the Berkeley Vision and Learning Center (BVLC) — that is optimized for Intel® architecture.
Last updated on 07/06/2019 - 16:40