
Article

Combining Linux* Message Passing and Threading in High Performance Computing

An article addressing thread and task parallelism and how message passing and threading can be combined in high-performance computing applications. Written by Andrew Binstock, Principal Analyst at Pacific Data Works LLC and lead author of "Practical Algorithms for Programmers."
Last updated on 07/06/2019 - 16:22.
Article

Getting Started with OpenMP*

Last updated on 07/08/2019 - 15:10.
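As a taste of the topic, here is a minimal OpenMP "hello world" sketch (not taken from the article itself). It assumes any compiler with OpenMP support, e.g. gcc -fopenmp or icc -qopenmp:

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        /* The parallel directive forks a team of threads; each thread
           executes the following statement once. */
        #pragma omp parallel
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
        return 0;
    }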
Article

Intel® 64 Architecture Processor Topology Enumeration

Download Code Package: 20160519-cpuid_topo.tar.gz
Last updated on 07/05/2019 - 20:39.
Article

Automatic Parallelization with Intel® Compilers

With automatic parallelization, the compiler detects loops that can be safely and efficiently executed in parallel and generates multithreaded code.
Authored by admin. Last updated on 07/04/2019 - 21:33.
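To make the idea concrete, here is a sketch of the kind of loop auto-parallelization targets: no iteration reads a value written by another iteration, so the compiler can split the range across threads. The restrict qualifiers help it prove the arrays do not alias; the -parallel flag shown is the classic Intel C/C++ compiler option (check your compiler version):

    /* Candidate for auto-parallelization: c[i] depends only on a[i]
       and b[i], so iterations are independent of one another. */
    void vector_add(const double *restrict a, const double *restrict b,
                    double *restrict c, int n)
    {
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

    /* e.g.: icc -parallel -qopt-report vector_add.c
       asks the compiler to auto-parallelize and report what it did. */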
Article

Predicting and Measuring Parallel Performance

The success of parallelization is typically quantified by measuring the speedup of the parallel version relative to the serial version. It is also useful to compare that measured speedup to its theoretical upper limit.
Authored by admin. Last updated on 07/05/2019 - 10:33.
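The standard upper limit comes from Amdahl's law: if a fraction p of the runtime is parallelizable and N threads are used, the speedup is bounded by

    S(N) = \frac{1}{(1 - p) + p/N},
    \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}.

For example, if 90% of the work parallelizes (p = 0.9), no thread count can push the speedup past 10x.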
Article

Loop Modifications to Enhance Data-Parallel Performance

When confronted with nested loops, the granularity of the computations that are assigned to threads will directly affect performance. Loop transformations such as splitting and merging nested loops can make parallelization easier and more productive.
Authored by admin. Last updated on 07/05/2019 - 14:47.
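As an illustration (a sketch, not code from the article), merging a nested loop pair with OpenMP's collapse clause turns a small outer iteration count into one large, easily divided iteration space:

    #define M 4          /* too few outer iterations to feed many threads */
    #define N 100000

    static double a[M][N];

    void fill(void)
    {
        /* collapse(2) merges the i and j loops into a single M*N
           iteration space before it is divided among the threads. */
        #pragma omp parallel for collapse(2)
        for (int i = 0; i < M; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = (double)i * j;
    }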
Article

Granularity and Parallel Performance

One key to attaining good parallel performance is choosing the right granularity for the application. Granularity is the amount of real work in the parallel task. If granularity is too fine, then performance can suffer from communication overhead.
Authored by admin. Last updated on 07/05/2019 - 19:52.
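For instance (a sketch; the chunk size 1024 is an arbitrary illustrative value), OpenMP's schedule clause is one direct way to control granularity. Each chunk is the unit of work a thread claims, so tiny chunks trade scheduling overhead for load balance:

    double sum_squares(const double *x, int n)
    {
        double s = 0.0;
        /* chunk = 1024: large enough to amortize per-chunk scheduling
           overhead, small enough to balance load across the threads. */
        #pragma omp parallel for schedule(dynamic, 1024) reduction(+:s)
        for (int i = 0; i < n; i++)
            s += x[i] * x[i];
        return s;
    }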
Article

Expose Parallelism by Avoiding or Removing Artificial Dependencies

Many applications and algorithms contain serial optimizations that inadvertently introduce data dependencies and inhibit parallelism. One can often remove such dependencies through simple transformations, or even avoid them altogether.
Authored by admin. Last updated on 07/05/2019 - 19:49.
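A common case (sketched below, not taken from the article) is a running pointer kept as a serial optimization; recomputing the offset from the loop counter removes the dependency:

    /* Serial version: p carries state from one iteration to the next,
       so iteration i depends on iteration i-1. */
    void scale_serial(const double *in, double *out, int n, int stride)
    {
        double *p = out;
        for (int i = 0; i < n; i++) {
            *p = 2.0 * in[i];
            p += stride;              /* artificial dependency */
        }
    }

    /* Computing the offset directly from i makes the iterations
       independent, so the loop can be parallelized safely. */
    void scale_parallel(const double *in, double *out, int n, int stride)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            out[(long)i * stride] = 2.0 * in[i];
    }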
Article

OpenMP* and the Intel® IPP Library

How to configure OpenMP in the Intel IPP library to maximize multi-threaded performance of the Intel IPP primitives.
Last updated on 07/31/2019 - 14:30.
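A minimal sketch, assuming the threaded IPP libraries (ippSetNumThreads and ippGetNumThreads live in ippcore and were deprecated in later IPP releases, so check your version):

    #include <stdio.h>
    #include <ipp.h>

    int main(void)
    {
        int n = 0;
        ippInit();              /* select the best CPU-specific code path */

        /* Cap the threads IPP primitives may create internally, e.g. to
           avoid oversubscription when calling IPP from an OpenMP region. */
        ippSetNumThreads(4);
        ippGetNumThreads(&n);
        printf("IPP will use at most %d threads\n", n);
        return 0;
    }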