# Calling all parallel languages & libraries!

As I mentioned in my previous post, I have a set of lectures on pthreads that I've reworked to try to challenge students, since it turned out that several of my colleagues and I were covering almost the same content several classes in a row.  There's another thing I do there, though, that I thought I'd save for a separate post.

I also have a section where we go through and look at alternative ways to write a simple data-parallel code -- the core of which comes from work that Michael Wrinn and his team put together. Nothing makes the trade-off between ease of programming and ease of implementation clearer than looking at a set of alternatives side by side (or paging through multiple pages of pthreads code versus fitting the whole OpenMP version on one screen).

As I tell the class, everyone knows that the integral from 0 to 1 of 4/(1+x^2) is pi, and pi is just 3.1415926535897932384626 and a bit. (Yes, that's from memory.  And yes, it was a *very* long study hall my freshman year of high school.)  Since they've survived Calc 1 as well, the idea of a Riemann sum should also be old hat.  So we take a look at what that simple numerical integration code looks like in pthreads, OpenMP, CUDA, MPI, and so on.
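For anyone who wants the one-line justification for that identity, it's just the arctangent antiderivative:

```latex
\int_0^1 \frac{4}{1+x^2}\,dx \;=\; 4\,\arctan x \,\Big|_0^1 \;=\; 4\left(\frac{\pi}{4} - 0\right) \;=\; \pi
```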

The core pseudo-C looks like this:

```c
double sum = 0.0, x, pi;
double dx = 1.0 / 1000000;

for (int i = 0; i < 1000000; i++) {
    x = (i + 0.5) * dx;          /* midpoint of interval i */
    sum += 4.0 / (1.0 + x * x);
}

pi = dx * sum;
```

I'm working on adding a couple more during this break -- maybe a SysV shared-memory IPC version, maybe brushing up on my UPC.  But I'll open up the call -- anyone want to submit an example in OpenCL, Ct, Concurrent Collections, X10, ... ?


## 1 comment


Excellent Matt - I am looking forward to seeing who takes up the challenge and what languages they choose.
