OpenMP*

Parallel algorithm for Sorted Sequence Search (Vincent Zhang)

The included source code implements a parallel search algorithm on an input sorted sequence of strings, as described in the included problem description text file. The included write-up of the parallel algorithm sets forth the pros and cons of using either OpenMP or Pthreads to implement the parallelism. The implemented Pthreads code attempts to overlap I/O with the parallel searches to boost performance. The code was written for Linux and includes a makefile to build the application.

Parallel algorithm for Sorted Sequence Search (Dmitry Vyukov)

The included source code implements a parallel search algorithm on an input sorted sequence of strings, as described in the included problem description text file. The included write-up considers two approaches to the parallel algorithm and examines the reasons for their lack of performance. To overcome the overhead of random memory accesses, a pipelined algorithm is employed to preload the cache. OpenMP is used to implement the parallelism. The code was written for Windows and includes Microsoft Visual Studio solution and project files to build the application.

Parallel algorithm for Sorted Sequence Search (Deng Hui)

The included source code implements several different serial and parallel search algorithms on an input sorted sequence of strings, as described in the included problem description text file. Besides the standard binary search algorithm, the code includes a hash-map-based search, implemented in two ways: one with Intel Threading Building Blocks (TBB) and the other with OpenMP. The search algorithm can be selected on the command line, making it easy to compare the execution performance of the different algorithms on given data sets.

Classroom challenge: Matrix Multiplication, Performance and Scalability in OpenMP

A simple, widely known and studied problem was posed to the students: matrix multiplication. We ran an internal contest to produce the fastest serial code, through which the students learned a great deal about compiler optimizations and, even more, about the effect of caches on code performance. The objective of the contest was then to extrapolate this exercise to a massively multicore architecture. Students were given kickstart code, a naive C implementation of the problem using OpenMP, along with a series of rules.

Threaded Programming Methodology with Parallel Studio

In this 3-hour module, participants will learn about the evolution of parallel processing architectures. After completing this module, a student should be able to describe how threading architectures relate to software development, rapidly estimate the effort required to thread time-consuming regions, and prototype the solution.


Programming for Multicore Processors w/ Win Threads & OpenMP (Cairo University)

These modules are designed for an instructor-led course that introduces basic concepts for writing parallel programs that exploit the parallel execution capability of a multi-core processor.

Topics include:

Introduction to threading
Win32 Threading API
Fork/Join Threading Model

OpenMP Part 1
Parallel Block
Parallel for
Scheduling
Scopes

OpenMP Part 2
Reduction
Parallel Sections
Parallel Tasks

OpenMP - Open Multi-Processing (French, IDRIS-CNRS, Campus universitaire d'Orsay)

Objective: immediate hands-on practice of OpenMP through an example-driven approach. The many diagrams in this course, supported by a detailed oral explanation, clearly illustrate the concepts behind this mode of parallelization, which is relatively efficient on shared-memory multiprocessor machines. Intended audience: anyone wishing to parallelize an existing or newly designed application for a shared-memory multiprocessor architecture. Prerequisites: basic Fortran and Unix.
Duration: 2 days. Maximum class size: 18 people.

Parallel Programming with OpenMP 3.0 (Intel)

This hands-on module introduces OpenMP* 3.0 directives to parallelize common functions and loops. The first section of the module introduces the most common feature of OpenMP - work sharing for loops. The second section demonstrates how to exploit non-loop parallelism, including the new task constructs in OpenMP 3.0. The final section discusses the usage of synchronization methods, library functions, and environment variables.

A Hands-On Introduction to OpenMP

This is the latest in a 10-year series of tutorials on OpenMP. The approach is grounded in recent research in learning theory and built around an active learning program. The goal is to talk as little as possible and to present most of the material through hands-on exercises. This is actually much harder to deliver than a traditional “lecture style,” as it requires a lecturer who stays actively engaged with his or her learners … letting them get frustrated enough to entrench what they learn in long-term memory, but not so frustrated that their ability to learn degrades.

Subscribe to OpenMP*