The Parallel Universe Issue #31: Happy New Year, Happy Parallel Computing

Welcome to our first issue of 2018. I thought about making some predictions about hardware and software trends, but I'm more of a fast follower than a computing visionary. After all, my academic background is in biotechnology and data science, not computer science. In the last issue, I made a not-so-bold prediction about the heterogeneous parallel computing future. I've been worrying about heterogeneous parallelism since 2009, and real visionaries were worrying about it years before that. It's not a prediction when there's evidence of a trend all around you, and FPGAs are the next phase in this evolution toward heterogeneity.

It won't be long before FPGAs are as ubiquitous as multicore processors, but few of us can program them effectively. So we welcome back James Reinders, the founding editor of The Parallel Universe, to tell us more about FPGA programming. In the last issue, we discussed FPGA programming from a software development perspective. This time, James and Tom Hill from Intel's Programmable Logic Group give a detailed, practical guide to help you get started: FPGA Programming with OpenCL*. OpenCL (Open Computing Language) is an "open standard for parallel programming of heterogeneous systems." For many of us, our first foray into FPGA programming is likely to be with OpenCL.
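
If you've never looked at OpenCL before, the sketch below shows the device-side piece, a kernel, which is what ultimately gets compiled for the FPGA (or a GPU or CPU). It's a generic illustration rather than code from the article; the kernel name and arguments are invented for this example, and real use also requires host code to set up the device, create buffers, and launch the kernel.

    // Minimal OpenCL C kernel sketch (hypothetical example, not from the article).
    // Each work-item computes one element of the output vector, so the
    // parallelism comes from the index space the host launches, not from
    // an explicit loop here.
    __kernel void vector_add(__global const float *a,
                             __global const float *b,
                             __global float *c)
    {
        size_t i = get_global_id(0);  // this work-item's position in the 1-D index space
        c[i] = a[i] + b[i];
    }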

In 2018, we'll continue to cover programming tools and programming models. Two articles in this issue discuss new features in Intel® Software Development Tools: Speeding Algebra Computations with the Intel® Math Kernel Library (Intel® MKL) Vectorized Compact Matrix Functions and Gaining Performance Insights Using the Intel® Advisor Python API. The former describes a new data layout designed to improve the performance of applications that compute over large groups of small matrices. The latter explores a new API to directly access the Intel Advisor database to do custom analytics or create custom visualizations of your application's performance.
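
To give a flavor of what such a data layout means, the sketch below contrasts a conventional array-of-matrices layout with an interleaved layout in which corresponding elements of consecutive matrices sit next to each other in memory, so the inner loop over matrices has unit stride and is easy for the compiler to vectorize across the batch. This is purely conceptual, with made-up names and sizes; it is not the Intel MKL compact API, which the article describes in detail.

    #include <stddef.h>

    #define N 4        /* dimension of each small matrix (hypothetical value) */
    #define BATCH 1024 /* number of matrices in the group (hypothetical value) */

    /* Conventional layout: matrices stored one after another.
       Element (i,j) of matrix m lives at conventional[m][i*N + j]. */
    double conventional[BATCH][N * N];

    /* Interleaved layout: element (i,j) of matrices m, m+1, m+2, ... occupy
       consecutive memory locations, so one SIMD load can pull in the same
       element from several matrices at once.
       Element (i,j) of matrix m lives at interleaved[i*N + j][m]. */
    double interleaved[N * N][BATCH];

    /* Scale every matrix in the batch by alpha.  The inner loop over m is
       unit-stride in the interleaved layout, which vectorizes naturally. */
    static void scale_batch(double alpha)
    {
        for (size_t e = 0; e < N * N; ++e)
            for (size_t m = 0; m < BATCH; ++m)
                interleaved[e][m] *= alpha;
    }

    int main(void)
    {
        scale_batch(2.0);  /* globals are zero-initialized; this just shows the call */
        return 0;
    }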

Java* is one of the most popular programming languages in the world, but we don't often discuss it in this magazine. That's going to change in 2018. Intel® Parallel Studio XE is improving its Java tuning support, and the JVM is improving its support for vector computation. Boosting Java Performance in Big Data Applications describes the latter enhancements.

Artificial intelligence (AI) isn't going away in 2018. Autonomous driving is becoming a reality because of advances in AI, but it requires high-performance computing. Accelerating the Eigen* Math Library for Automated Driving Workloads shows how to improve the performance of an important computational kernel. Finally, Welcome to the Intel® AI Academy gives an overview of Intel's new, comprehensive program for AI education, tools, and technology.

Future issues of The Parallel Universe will bring you articles on a wide range of topics, including the effective use of new non-volatile memory, tuning code for non-uniform memory access (NUMA) architectures, best practices for productivity languages like Python* and R (maybe even some articles about Go* and Julia*), and much more.

We're looking forward to another year of exploring the exciting future of software development with you.


Henry A. Gabb

January 2018
