Intel MPI and Infiniband: MTU optimisation doc ?

Hi,

We have a small cluster:
- 1 master
- 8 nodes with 12 cores per node
- InfiniBand ConnectX DDR 2X
- Linux CentOS 5.5
- InfiniBand stack from OFED
- Intel MPI

At the moment we have a standard, basic install of the InfiniBand packages, so we are running in connected mode with an MTU of ~65 KB. Does anybody have information, documentation, or links on optimizing the MTU? We are using the cluster for Computational Fluid Dynamics computations.
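For reference, the current IPoIB mode and MTU can be checked from sysfs, and the link-level MTU of the HCA with the OFED `ibv_devinfo` tool. The interface name `ib0` is an assumption; adjust it for your system:

```shell
# IPoIB mode: "connected" allows the large (~65 KB) MTU,
# "datagram" is limited to the InfiniBand link MTU.
cat /sys/class/net/ib0/mode
cat /sys/class/net/ib0/mtu

# Link-level MTU reported by the HCA (typically 2048 bytes on ConnectX DDR):
ibv_devinfo | grep -i mtu
```

Note that the IPoIB MTU only affects traffic over the IP interface; MPI traffic over DAPL/OFA verbs does not go through IPoIB.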

Thx in advance,
Best regards,
Guillaume


Hi Guillaume,

It seems to me that you would be better off asking this question on a forum dedicated to CFD; perhaps people working on MTU tuning can help you there. Here you are mostly asking Intel engineers familiar with an Intel product.

You can use the Intel MPI Library with default settings; that should be fine. But to be sure that you are using the fast fabric, you can set I_MPI_FALLBACK=0 and I_MPI_FABRICS=shm:dapl.
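As a sketch, these settings can be exported before launching the job. The application name and rank counts below are placeholders for an 8-node x 12-core run:

```shell
# Select shared memory intra-node and DAPL inter-node; with fallback
# disabled, Intel MPI aborts instead of silently dropping to TCP if
# the DAPL provider cannot be initialised.
export I_MPI_FALLBACK=0
export I_MPI_FABRICS=shm:dapl

# Hypothetical launch (96 ranks, 12 per node):
# mpirun -np 96 -ppn 12 ./my_cfd_app
```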

Regards!
Dmitry

Hi Dmitry,

A lot of CFD engineers are using Intel MPI... so perhaps one or two will look at this thread ;)

thx for your answer.
regards
Guillaume

Hi Dmitry,

I have another question: I'm using the OFED stack for InfiniBand. Which I_MPI_FABRICS should I choose, shm:dapl or shm:ofa?

Thx,
best regards,
Guillaume

Hi Guillaume,

You can use either.
The OFA fabric gives you the ability to use the multi-rail feature (if you have two IB cards per node, or cards with two ports). Just compare the performance of dapl and ofa with your application and use the faster one.
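One way to compare the two fabrics is to run the same benchmark or application once with each setting. A minimal sketch, assuming the Intel MPI Benchmarks (IMB-MPI1) are available and using placeholder rank counts:

```shell
# Fail loudly rather than falling back to a slower fabric.
export I_MPI_FALLBACK=0

# Run 1: shared memory + DAPL
export I_MPI_FABRICS=shm:dapl
# mpirun -np 96 -ppn 12 IMB-MPI1 PingPong SendRecv   # hypothetical launch

# Run 2: shared memory + OFA (verbs); also enables multi-rail
export I_MPI_FABRICS=shm:ofa
# mpirun -np 96 -ppn 12 IMB-MPI1 PingPong SendRecv
```

Comparing the PingPong latency and SendRecv bandwidth of the two runs should show which fabric is faster for your workload.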

Regards!
Dmitry
