The Parallel Universe Issue #28: Parallel Languages, Language Extensions, and Application Frameworks

Back in the days of nonstandard programming languages and immature compilers, parallel computing as we know it today was still far over the horizon. It was still a niche topic, so practitioners were content with language extensions and libraries to express parallelism (e.g., OpenMP*, Intel® Threading Building Blocks (Intel® TBB), MPI*, pthreads*). Programming language design and parallel programming models were separate problems, so they continued along distinct research tracks for many years. These tracks would occasionally cross with varying degrees of success (e.g., High-Performance Fortran*, Unified Parallel C*), and there were frequent debates about whether the memory models of popular languages even allowed parallelism to be implemented safely. However, much was learned during this time of debate and experimentation.

Today, parallel computing is so ubiquitous that we’re beginning to see parallelism become a standard part of mainstream programming languages. This issue’s feature article, Parallel STL: Boosting Performance of C++ STL Code, gives an overview of the Parallel Standard Template Library in the upcoming C++ standard (C++17) and provides code samples illustrating its use.
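As a small taste of what the article covers, here’s a minimal sketch (mine, not taken from the article) that sorts a large vector with a C++17 parallel execution policy. It assumes a compiler and standard library that already ship the <execution> header, which was still rolling out as C++17 was being finalized.

    #include <algorithm>
    #include <execution>
    #include <random>
    #include <vector>

    int main() {
        // Fill a large vector with random values.
        std::vector<double> data(10'000'000);
        std::mt19937 gen(42);
        std::uniform_real_distribution<double> dist(0.0, 1.0);
        for (auto& x : data) x = dist(gen);

        // C++17: pass an execution policy to request a parallel sort.
        std::sort(std::execution::par, data.begin(), data.end());
        return 0;
    }

The only change from serial code is the std::execution::par argument; the library decides how to divide the work across threads.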

Though OpenMP isn’t a parallel language in and of itself, we’re celebrating 20 years of this gold standard for portable, vendor-neutral parallel programming directives. In the last issue of The Parallel Universe, Michael Klemm (the current CEO of the OpenMP Architecture Review Board) gave an overview of the newest OpenMP features. In this issue, industry insider Rob Farber takes a retrospective look at OpenMP’s development and its modern usage in Happy 20th Birthday, OpenMP.
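For readers who haven’t used OpenMP, here’s a minimal sketch of what its directives look like in C++ (my example, not Rob’s). It assumes a compiler invoked with OpenMP support enabled (e.g., -fopenmp):

    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1000000;
        std::vector<double> a(n, 1.0), b(n, 2.0);
        double sum = 0.0;

        // One directive parallelizes the loop and combines the
        // per-thread partial sums into a single reduction variable.
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; ++i)
            sum += a[i] * b[i];

        std::printf("dot product = %f\n", sum);
        return 0;
    }

Without OpenMP support, the pragma is simply ignored and the loop runs serially, which is part of what has kept these directives so portable.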

I rely on R for certain tasks, but I won’t lie to you: it’s not my favorite programming language. I never would have thought to use R for high-performance computing (HPC), but Drew Schmidt from the University of Tennessee, Knoxville, makes the case for using this popular statistics language in HPC with R: The Basics. Drew’s article is helping to make an R believer out of me.

New Software for Machine Learning

There’s no denying that machine learning, and its perhaps-more-glamorous nephew, deep learning, are consuming a lot of computing cycles these days. Intel continues to add solutions to its already robust machine learning portfolio. The latest offering, BigDL, is designed to facilitate deep learning within big data environments. BigDL: Optimized Deep Learning on Apache Spark* will help you get started with this new framework. Solving Real-World Machine Learning Problems with the Intel® Data Analytics Acceleration Library (Intel® DAAL) walks through classification and clustering using this library, illustrating both with problems taken from the Kaggle predictive modeling and analytics platform and comparing the results to Python* and R alternatives.

Coming Attractions

Future issues of The Parallel Universe will contain articles on a wide range of topics. Stay tuned for articles on the Julia* programming language, working with containers in HPC, fast data compression for cloud and IoT applications, Intel® Cluster Checker, and much more.

