Explicit Vector Programming – Best Known Methods

Vectorizing improves performance, and higher performance can reduce power consumption. An introduction to tools for vectorizing compute-intensive processing.
Author Last updated 24/04/2019 - 11:25

A Parallel Stable Sort Using C++11 for TBB, Cilk Plus, and OpenMP

This article describes a parallel merge sort and explains why it is more scalable than parallel quicksort or parallel samplesort. The code relies on C++11 "move" semantics.

Author Last updated 01/08/2019 - 09:30

Getting Better Performance on Dijkstra’s Shortest Path Graph Algorithm using the Intel® Compiler

We optimized a version of Dijkstra’s shortest path graph algorithm using a combination of Intel® Cilk™ Plus array notation and OpenMP* parallel for.
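To illustrate the kind of optimization the article describes: in a dense-graph Dijkstra, the edge-relaxation step is a flat loop over all vertices, which the compiler can vectorize. This sketch (not the article's code) uses OpenMP's `simd` pragma where the article uses Cilk Plus array notation; the choice of `INT_MAX / 2` as infinity is an assumption made here to avoid overflow during relaxation:

```cpp
#include <cassert>
#include <climits>
#include <vector>

// Dijkstra on a dense weight matrix w (w[u][v] == INF means no edge).
// The inner relaxation loop is the vectorization target.
std::vector<int> dijkstra(const std::vector<std::vector<int>>& w, int src) {
    const int n = static_cast<int>(w.size());
    const int INF = INT_MAX / 2;  // large enough, safe to add to
    std::vector<int> dist(n, INF);
    std::vector<char> done(n, 0);
    dist[src] = 0;
    for (int iter = 0; iter < n; ++iter) {
        // Pick the closest unvisited vertex (serial scan).
        int u = -1;
        for (int v = 0; v < n; ++v)
            if (!done[v] && (u < 0 || dist[v] < dist[u])) u = v;
        if (u < 0 || dist[u] == INF) break;
        done[u] = 1;
        const int du = dist[u];
        // Relax every edge out of u; branch-free enough to vectorize.
        #pragma omp simd
        for (int v = 0; v < n; ++v) {
            int cand = du + w[u][v];
            if (cand < dist[v]) dist[v] = cand;
        }
    }
    return dist;
}
```

With nonnegative weights, relaxing already-finalized vertices in the SIMD loop is harmless (their distances cannot improve), which is what makes the loop a clean vectorization candidate.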

Author Last updated 04/03/2019 - 13:33
File Wrapper

Parallel Universe Magazine - Issue 18, June 2014

Author admin Last updated 16/05/2019 - 11:39

Efficient Parallelization

This article is part of the Intel® Modern Code Developer Community documentation, which helps developers improve application performance through a systematic, step-by-step optimization methodology. This article addresses thread-level parallelization.
Author Ronald W Green (Blackbelt) Last updated 30/09/2019 - 17:28

Vectorization Essentials

Vectorization essentials for effectively using the vector features of the Intel® Xeon® product family.
Author admin Last updated 02/10/2019 - 15:11

Choosing the right threading framework

This is the second article in a series about high-performance computing with the Intel® Xeon Phi™.

Author Last updated 15/10/2019 - 16:40

Hybrid Parallelism: A MiniFE* Case Study

This case study examines the situation where the problem decomposition is the same for threading as it is for Message Passing Interface* (MPI); that is, the threading parallelism is elevated to the same level as MPI parallelism.
Author David M. Last updated 15/10/2019 - 16:40

Putting Your Data and Code in Order: Data and layout - Part 2

Apply the concepts of parallelism and distributed memory computing to your code to improve software performance. This paper expands on concepts discussed in Part 1 to consider parallelism: both vectorization (single instruction, multiple data, or SIMD) and shared-memory parallelism (threading), as well as distributed-memory computing.
Author David M. Last updated 15/10/2019 - 16:40