mpi on single machine Dual Core or Quad Core

I currently use MKL on a machine that has 2 Dual Core Xeon processors. I also have a machine with 2 Quad Core Xeons. I want to begin learning MPI so I can use ScaLAPACK. Is it possible for me to install the Intel Cluster Toolkit (i.e., MPI and the other libraries) on these machines and leverage the ~4 and ~8 cores?


Yes, I frequently work on the performance of MPI applications on a Core 2 Duo. MPI is often recommended over OpenMP threading for better performance when running on an 8-core machine.
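To see that MPI runs fine on a single multi-core box, here is a minimal hedged sketch (it assumes an MPI implementation such as Intel MPI or MPICH is installed; `mpicc` and `mpirun` are the usual wrapper names, but yours may differ):

```c
/* Minimal MPI "hello" sketch. Compile with:  mpicc hello.c -o hello
   Run on the 8-core machine with e.g.:       mpirun -np 8 ./hello
   Each of the 8 MPI processes is scheduled onto one of the cores. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count   */

    printf("process %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

With `-np 8` on the Quad Core machine you should see eight lines of output, one per process.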

Thanks a lot for answering my question! This leads me to two other very naive questions, given that I'm primarily interested in ScaLAPACK operations on large matrices.

First: assuming I get the Cluster Toolkit installed on these machines, which should theoretically perform the operations more quickly: 1) the machine with 2 Xeon Dual Core 3.0 GHz (i.e., 4 CPUs); or 2) the machine with 2 Xeon Quad Core 2.33 GHz (i.e., 8 CPUs)?

Second: Does the icc compiler have some maximum limit on the size of matrices allocated in the code (assuming a 64-bit Linux machine)?

Thanks again!

There isn't enough information to answer either of those questions. On the first, it may be a close call. On the second, (for x86-64) if you address the array using int indices, that would limit it to 2G elements times sizeof(data type), or 8GB for an array of 32-bit objects. With long indices, it could be far larger, possibly even up around a terabyte. This assumes dynamic (heap) allocation. If, instead, you mean a static array, it would be limited to 2GB under the default (small) memory model, regardless of the data types. The limits are the same as for other compilers for x86-64.

> I want to begin learning MPI so I can use ScaLAPACK.

You don't really have to learn MPI to be able to use ScaLAPACK. All interprocess communication in ScaLAPACK is handled by the BLACS (which is built on top of MPI). So, with the possible exception of setting the problem up, knowing MPI won't be of much use in the ScaLAPACK context.
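To illustrate how little MPI surfaces, here is a hedged sketch of the setup a ScaLAPACK driver typically does through the standard C BLACS wrappers (the 2x2 grid shape and the link/compile details are assumptions; your build may differ):

```c
/* BLACS setup sketch for a ScaLAPACK program: note that no raw
   MPI call appears, even though the BLACS sit on top of MPI.
   Assumes the C BLACS wrappers from a ScaLAPACK/BLACS build. */
#include <stdio.h>

extern void Cblacs_pinfo(int *mypnum, int *nprocs);
extern void Cblacs_get(int icontxt, int what, int *val);
extern void Cblacs_gridinit(int *icontxt, char *layout,
                            int nprow, int npcol);
extern void Cblacs_gridinfo(int icontxt, int *nprow, int *npcol,
                            int *myrow, int *mycol);
extern void Cblacs_gridexit(int icontxt);
extern void Cblacs_exit(int cont);

int main(void)
{
    int iam, nprocs, ctxt;
    int nprow = 2, npcol = 2;      /* assumed 2x2 process grid */
    int myrow, mycol;

    Cblacs_pinfo(&iam, &nprocs);   /* who am I, how many of us */
    Cblacs_get(-1, 0, &ctxt);      /* default system context   */
    Cblacs_gridinit(&ctxt, "Row", nprow, npcol);
    Cblacs_gridinfo(ctxt, &nprow, &npcol, &myrow, &mycol);

    printf("process %d of %d at grid position (%d,%d)\n",
           iam, nprocs, myrow, mycol);

    /* ... build array descriptors and call ScaLAPACK routines
       such as pdgesv_ here ... */

    Cblacs_gridexit(ctxt);
    Cblacs_exit(0);
    return 0;
}
```

Everything after this point is descriptor setup and calls to the ScaLAPACK routines themselves; the MPI layer stays hidden underneath the BLACS.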

I have some rough notes in case you're interested in building it from source.

By the way, I don't really deal with large dense matrices.

Thanks a lot for your help. I just assumed that MPI was a prerequisite for using ScaLAPACK. Also, thanks for your notes.
Kind regards
