Java* MPI Applications Support
- Source `mpivars.sh` from the Intel® MPI Library package to set up all required environment variables, including `LIBRARY_PATH` and `CLASSPATH`.
- Build your Java MPI application as usual.
- Update `CLASSPATH` with the path to your application jar file, or pass it explicitly with the `-cp` option of the `java` command.
- Run your Java MPI application using the following command:
  `$ mpirun <options> java <app>`
  where:
  - `<options>` is a list of `mpirun` options
  - `<app>` is the main class of your Java application
  For example:
  `$ mpirun -n 8 -ppn 1 -f ./hostfile java mpi.samples.Allreduce`
- To reduce the memory footprint, you can use Java direct buffers as buffer parameters of collective operations, in addition to Java arrays. This approach lets you allocate memory outside the JVM heap and avoid an extra memory copy when the buffer pointer is passed from the JVM to the native layer.
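The buffer handling behind the direct-buffer approach can be sketched in plain Java. The MPI calls themselves are omitted here; in a real application a buffer like `sendBuf` below would be passed to a collective operation, and the class name and data values are illustrative:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.DoubleBuffer;

// Sketch: preparing a direct buffer for use as a collective-operation
// buffer parameter. Only standard Java NIO is used; the MPI call that
// would consume the buffer is not shown.
public class DirectBufferDemo {
    public static void main(String[] args) {
        int count = 8;  // number of doubles to exchange (arbitrary)

        // Direct allocation: the memory lives outside the JVM heap, so
        // the native layer can access it without an extra copy.
        ByteBuffer raw = ByteBuffer.allocateDirect(count * Double.BYTES)
                                   .order(ByteOrder.nativeOrder());
        DoubleBuffer sendBuf = raw.asDoubleBuffer();

        for (int i = 0; i < count; i++) {
            sendBuf.put(i, (double) i);  // fill with sample data
        }

        // A heap-backed buffer (or a Java array), by contrast, may be
        // copied when handed to native code.
        System.out.println("direct = " + raw.isDirect());
        System.out.println("last   = " + sendBuf.get(count - 1));
    }
}
```

Note the `ByteOrder.nativeOrder()` call: a freshly allocated `ByteBuffer` defaults to big-endian, so matching the platform byte order avoids surprises when the native layer reads the data.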
- When you create Java MPI entities such as `Group`, `Comm`, `Datatype`, and similar, memory is allocated on the native layer and is not tracked by the garbage collector. This memory must therefore be released explicitly. Pointers to the allocated memory are stored in a special pool and can be deallocated using one of the following methods:
- `entity.free()`: frees the memory backing the `entity` Java object, which can be an instance of `Comm`, `Group`, and so on.
- `AllocablePool.remove(entity)`: frees the memory backing the `entity` Java object, which can be an instance of `Comm`, `Group`, and so on.
- `AllocablePool.cleanUp()`: explicitly deallocates the memory backing all Java MPI objects created up to that point.
- `MPI.Finalize()`: implicitly deallocates the memory backing all Java MPI objects that have not been explicitly deallocated by that point.
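The pool-based scheme above can be modeled in plain Java. The `Pool` and `Entity` classes below are simplified stand-ins written for this sketch, not the Intel MPI Library API; real Java MPI objects hold pointers to native memory, which is mocked here with ordinary Java objects:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Illustrative model of the deallocation pool described above.
// NOT the Intel MPI Library API: Entity mimics a Java MPI object
// (Comm, Group, ...) and Pool mimics the tracking pool.
public class PoolDemo {

    // Stand-in for a Java MPI entity backed by native memory.
    static class Entity {
        boolean freed = false;
        void free() {                  // mirrors entity.free()
            Pool.remove(this);
        }
    }

    // Stand-in for the pool: tracks every live entity.
    static class Pool {
        static final Set<Entity> live = new LinkedHashSet<>();
        static Entity allocate() {
            Entity e = new Entity();
            live.add(e);               // registered on creation
            return e;
        }
        static void remove(Entity e) { // mirrors AllocablePool.remove(entity)
            if (live.remove(e)) e.freed = true;
        }
        static void cleanUp() {        // mirrors AllocablePool.cleanUp()
            for (Entity e : live) e.freed = true;
            live.clear();
        }
    }

    public static void main(String[] args) {
        Entity comm = Pool.allocate();
        Entity group = Pool.allocate();

        comm.free();                   // explicit per-object release
        System.out.println("live after free: " + Pool.live.size());

        Pool.cleanUp();                // release everything still tracked
        System.out.println("live after cleanUp: " + Pool.live.size());
        System.out.println("group freed: " + group.freed);
    }
}
```

The point of the model: because the garbage collector never sees the native allocations, an object that simply goes out of scope would leak its native memory; either the per-object release or the pool-wide cleanup (which `MPI.Finalize()` performs implicitly) must run.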