Parallel Programming Talk #68 - Data Parallelism (Ct & RapidMind) with Intel Engineer Mike McCool

Welcome to Show 68 of Parallel Programming Talk, broadcast on March 16th.

On this episode Clay and Aaron talk Data Parallelism (Ct & RapidMind) with Intel Engineer Dr. Mike McCool.

Download the MP4 Video (Large):

Download the MP3 Audio:

On Today's Show:

Data Parallelism (Ct & RapidMind) with Intel Engineer Mike McCool

What is Ct?

Ct allows programming parallel applications relatively easily in a high-level language. The Intel and RapidMind teams have joined forces to accelerate the integration of the RapidMind technology into the Ct-based product.

Both companies' technologies are very similar and designed to address the same problems: write your program once and run it anywhere. Neither Ct nor RapidMind is architecture specific, and they have similar high-level specifications. The ultimate goal is to spend your time writing what you want to do, not specifying how you want to do it (managing threads and intrinsics). The language was designed to let developers express the algorithm and the inherent parallelism of the problem, and then transform that expression into an efficient application on many-core processors.

Ct was built with Intel Architecture in mind, but the RapidMind team brings its knowledge of other architectures to Intel as well, allowing a stronger, more broadly based implementation for a wider variety of platforms. Ct and RapidMind both apply to a wide range of applications, such as graphics (ray tracers and shaders), biomedical imaging, and finance. In the past, data parallelism was associated with narrow applicability because of its association with domain-specific processors, but the current approach allows data-parallel techniques to be applied to a wide range of problems with really good performance on multi-core processors.

There is a lot of technology being cross-leveraged to bring the best performance to Ct. The Ct runtime layer is based on Threading Building Blocks (TBB), leveraging its scheduler and providing interoperability between the task-parallel patterns available in TBB and the data parallelism you get out of Ct. Intel has also recently acquired the Cilk technology and is looking at how best to integrate it, to give developers the best of both worlds.

Both Ct and RapidMind take standard C++ compilers and layer the technology on top, allowing developers to continue using the tools they have worked with throughout their careers. The data-parallel technology works with the Intel C/C++ compiler, Parallel Composer, Microsoft Visual Studio, and GCC. The technology takes advantage of the wide vector units in Intel AVX and other new hardware, but also maintains its independence so that it works on existing architectures.

Keep up to date on the latest developments for Ct and data-parallel programming on our web site and by subscribing to the Ct newsletter. You are also encouraged to sign up for beta consideration.

Where can you go to learn more?

If you have questions you'd like to see us discuss, ideas for show topics, or just want to send fan mail....

Our Next Parallel Programming Talk will be Tuesday at 8:00AM PT. Mark your calendars and tune in!

And remember, let's be thread safe out there.

Details on compiler optimization can be found in our Optimization Notice.