
Article

Intel® IPP - Threading / OpenMP* FAQ

This page contains common questions and answers about multi-threading in the Intel® IPP library.
Last updated on 09/02/2019 - 10:11
Article

OpenMP* and the Intel® IPP Library

How to configure OpenMP* in the Intel® IPP library to maximize the multi-threaded performance of the Intel® IPP primitives; a brief code sketch follows this entry.
Last updated on 07/31/2019 - 14:30
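
The entry above concerns controlling the OpenMP* threading inside the Intel® IPP library. As an illustration only (not code from the article), the sketch below assumes one of the older threaded Intel IPP builds, whose core library exports ippInit, ippGetNumThreads, and ippSetNumThreads; recent IPP releases deprecate internal threading, so treat this as a sketch for the legacy threaded libraries.

/* Minimal sketch: capping the OpenMP thread count used by the legacy
 * threaded Intel IPP libraries. Assumes an IPP version whose core
 * library still exports ippSetNumThreads/ippGetNumThreads. */
#include <stdio.h>
#include <ipp.h>

int main(void)
{
    int nThreads = 0;

    ippInit();                    /* dispatch to the best CPU-specific code path */

    ippGetNumThreads(&nThreads);  /* how many threads IPP would use by default */
    printf("Default IPP thread count: %d\n", nThreads);

    ippSetNumThreads(4);          /* cap internal OpenMP threading at 4 threads */
    ippGetNumThreads(&nThreads);
    printf("IPP thread count now: %d\n", nThreads);

    return 0;
}

With OpenMP-threaded libraries, the OMP_NUM_THREADS environment variable can cap the thread count in the same way without any code change.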
Article

Programming for Multicore and Many-core Products including Intel® Xeon® processors and Intel® Xeon Phi™ X100 Product Family coprocessors

The programming models used for multicore processors every day are available for many-core coprocessors as well. Therefore, explaining how to program both Intel Xeon processors and Intel Xeon Phi coprocessors is best done by explaining the options for parallel programming. This paper provides the foundation for understanding how multicore processors and many-core coprocessors are...
Authored by James R. (Blackbelt) Last updated on 06/14/2019 - 12:10
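
To illustrate the paper's point that one parallel programming model serves both multicore processors and many-core coprocessors, here is a minimal OpenMP* sketch (not taken from the paper); the array size and loop body are arbitrary, and only the compilation target would change between an Intel Xeon processor and an Intel Xeon Phi coprocessor.

/* Minimal sketch: the same OpenMP* source serves both multicore Intel Xeon
 * processors and many-core Intel Xeon Phi coprocessors; only the compile
 * target changes. N and the loop body are arbitrary illustrations. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N], c[N];

    /* Thread-parallel, vectorizable loop: one programming model for both targets. */
    #pragma omp parallel for simd
    for (int i = 0; i < N; ++i) {
        a[i] = 1.0;
        b[i] = 2.0;
        c[i] = a[i] + b[i];
    }

    printf("c[0] = %f, threads available: %d\n", c[0], omp_get_max_threads());
    return 0;
}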
File Wrapper

Parallel Universe Magazine - Issue 24, March 2016

Authored by admin Last updated on 12/12/2018 - 18:08
Article

Maximize TensorFlow* Performance on CPU: Considerations and Recommendations for Inference Workloads

This article describes performance considerations for CPU inference using Intel® Optimization for TensorFlow*.
Authored by Nathan Greeneltch (Intel) Last updated on 08/09/2019 - 02:02
Article

Using Intel® IPP Threaded Static Libraries

Q: How do I get the Intel® Integrated Performance Primitives (Intel® IPP) static threaded libraries?

Last updated on 07/31/2019 - 14:30
Article

Intel® MKL 11.3.3 patch

Two recently discovered limitations of Intel® Math Kernel Library (Intel® MKL) 11.3 Update 3 are listed below.

Authored by Gennady F. (Blackbelt) Last updated on 03/27/2019 - 12:20
Article

Putting Your Data and Code in Order: Data and Layout - Part 2

Apply the concepts of parallelism and distributed memory computing to your code to improve software performance. This paper expands on the concepts discussed in Part 1 to consider parallelism: both vectorization (single instruction, multiple data, SIMD) and shared memory parallelism (threading), as well as distributed memory computing. A short data-layout sketch follows this entry.
Authored by David M. Last updated on 07/06/2019 - 16:40
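
To make the data-and-layout theme concrete, the sketch below contrasts an array-of-structures (AoS) layout with a structure-of-arrays (SoA) layout. It is an illustrative example under assumed type and function names (PointAoS, PointsSoA, scale_soa, scale_aos), not code from the paper.

/* Minimal sketch of the data-layout idea: a structure-of-arrays (SoA)
 * layout keeps like elements contiguous, which is friendlier to SIMD
 * vectorization than an array-of-structures (AoS). Names are illustrative. */
#include <stddef.h>

#define N 1024

/* Array of structures: x, y, z are interleaved in memory. */
typedef struct { float x, y, z; } PointAoS;

/* Structure of arrays: each coordinate is a contiguous stream. */
typedef struct { float x[N], y[N], z[N]; } PointsSoA;

/* Scaling with SoA: unit-stride accesses that vectorize cleanly. */
void scale_soa(PointsSoA *p, float s)
{
    #pragma omp simd
    for (size_t i = 0; i < N; ++i) {
        p->x[i] *= s;
        p->y[i] *= s;
        p->z[i] *= s;
    }
}

/* Same operation with AoS: strided accesses, harder for the vectorizer. */
void scale_aos(PointAoS *p, float s, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        p[i].x *= s;
        p[i].y *= s;
        p[i].z *= s;
    }
}

Unit-stride SoA accesses are what auto-vectorizers and the omp simd pragma handle best, which is the layout lesson the paper builds on.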
Article

Maximize TensorFlow* Performance on CPU: Considerations and Recommendations for Inference Workloads

This article describes performance considerations for CPU inference using Intel® Optimization for TensorFlow*.
Authored by Nathan Greeneltch (Intel) Last updated on 07/31/2019 - 12:11
Article

Benefits of Intel® Optimized Caffe* in comparison with BVLC Caffe*

Overview
Authored by JON J K. (Intel) Last updated on 05/30/2018 - 07:00