New Parallel Studio: Intel Parallel Studio 2011

This month, we introduced Intel Parallel Studio 2011. It is a worthy successor to the original Intel Parallel Studio, expanding both the tooling and the parallel programming models it offers.

On the tooling side, we have the Intel Parallel Advisor tool. It is an exciting tool, and a joy to use, when deciding where to add parallelism to an existing program. It has a straightforward interface for finding "hot spots" and adding annotations describing what you are considering doing to the program. Specifically, you can say "I'm thinking of running this region in parallel" and "I'll put some sort of a lock around this code." Adding such annotations takes a few mouse clicks and no fussing over syntax. Parallel Advisor then offers interactive estimates of speed-up and options to improve it, as well as feedback on the correctness of the algorithm. If you forget a lock, you may still see great speed-up estimates, but you will get precise feedback on where race conditions would exist (errors!). A tool like this can change the lives of programmers adding parallelism to programs. The five steps to success in the tool are:

    1. Survey Target – This step helps you focus on the hot call trees and loops as the locations to experiment with parallelism.

    2. Annotate Sources – Here you add Advisor annotations to your source code to describe the parallel experiment. The annotations are inert markers, so they do not change your program's behavior.

    3. Check Suitability – This step evaluates the performance of each parallel experiment. It displays a performance projection for each parallel site and shows how each one impacts the entire program. This way you can pick the areas with the most performance impact.

    4. Check Correctness – Identifies data-sharing issues (races) in each parallel experiment so you can fix them before committing your changes to code.

    5. Add Parallel Framework – After you have corrected any correctness issues, you replace the Advisor annotations with real parallel code, using any of a variety of parallel frameworks.



The other BIG addition in Intel Parallel Studio 2011 is the expansion of programming model support. We have introduced an umbrella project called Intel Parallel Building Blocks (Intel PBB). It is a collection of three offerings that include and build upon Intel Threading Building Blocks (Intel TBB). Intel TBB is in its fifth year and is more popular than ever. Intel TBB, by design, leaves two opportunities for us to address with complementary models. First, we introduce Intel Cilk Plus to show what can be done by implementing extensions in a compiler, instead of the compiler-independent (and highly portable) approach used by Intel TBB. Second, we introduce Intel Array Building Blocks (Intel ArBB) to tackle data parallelism directly. Specifically, Intel ArBB focuses on using SIMD parallelism (such as SSE and AVX) in conjunction with multicore parallelism. In other words, it takes simple-looking programs and automatically vectorizes and parallelizes the work to be done. Previously, getting that effect usually meant making your source code complex and difficult to read.

Intel Cilk Plus is the product of combining our compiler efforts with the team acquired from Cilk Arts a year ago, all based on the award-winning Cilk research that began around 1995 at MIT.

Intel Array Building Blocks is the result of combining the Intel Ct research project with the RapidMind team, also acquired a year ago. The product experience of the RapidMind team forms a solid foundation for this new offering. Intel ArBB is in "beta" - and anyone can ask to join our beta.

Intel Parallel Studio 2011 maintains full compatibility with Microsoft Visual Studio 2005 and 2008, and adds support for Visual Studio 2010, which Microsoft released earlier this year.

I look forward to your experiences and feedback. There is a lot more to write about the gems of this release... I'll post more thoughts and experiences in the future.

By the way - I was on sabbatical this summer, hence my being behind on answering email and calls. I'm catching up. Ask again if you don't hear back soon!

For more complete information about compiler optimizations, see our Optimization Notice.