MPI one sided communication with malloc vs MPI_Alloc_mem

Hello,
Does Intel MPI provide support for one-sided communication (MPI_Win_lock, MPI_Put, MPI_Win_unlock) with memory from malloc, or should the allocation be done with MPI_Alloc_mem? The standard leaves much of this to the implementation, and I couldn't find any guidance in the forums or documentation.
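
For context, the access pattern I have in mind is a passive-target epoch roughly like the following (just a sketch; the window, buffer, element count, and target rank are placeholders from my own code):

MPI_Win_lock(MPI_LOCK_EXCLUSIVE, target_rank, 0, win);
MPI_Put(local_buf, nElements, MPI_FLOAT, target_rank, 0, nElements, MPI_FLOAT, win);
MPI_Win_unlock(target_rank, win);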

Thanks,
CSN

Hi CSN,

Either method of allocation works in the Intel MPI Library. In a small test (2 processes on the same computer, tried on Windows* and Linux*, just transferring a single float array), I did not see a performance difference between MPI_Alloc_mem and malloc. This could very well change for a different scenario, but switching between the two is not difficult:

float *a;
int nElements;

#ifdef USE_MPI_MALLOC
   /* Allocate through MPI */
   MPI_Alloc_mem(nElements*sizeof(float), MPI_INFO_NULL, &a);
#else
   /* Allocate with standard malloc */
   a = (float*) malloc(nElements*sizeof(float));
#endif
...
#ifdef USE_MPI_MALLOC
   /* Memory from MPI_Alloc_mem must be released with MPI_Free_mem */
   MPI_Free_mem(a);
#else
   free(a);
#endif
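
For reference, either buffer can then be exposed through a window and used with the one-sided calls you mention. A minimal sketch (rank is assumed to hold the result of MPI_Comm_rank, and error handling is omitted):

MPI_Win win;

/* Expose the allocated buffer, whichever way it was obtained */
MPI_Win_create(a, nElements*sizeof(float), sizeof(float), MPI_INFO_NULL, MPI_COMM_WORLD, &win);

/* Passive-target epoch: rank 0 puts its array into rank 1's window */
if (rank == 0)
{
   MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);
   MPI_Put(a, nElements, MPI_FLOAT, 1, 0, nElements, MPI_FLOAT, win);
   MPI_Win_unlock(1, win);
}

MPI_Win_free(&win);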

Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools

Hi James,
Thanks for the quick response, this is great news. I have my own array library that needs to support up to 5D arrays, so I didn't want to rewrite it without confirmation one way or the other.

Cheers,
C.S.N
