Intel MPI Benchmarks (IMB) Result Problem with PCI-passthrough via FDR InfiniBand


Hi everyone, I am still evaluating cluster performance. I have now moved on to virtualization with PCI-passthrough of an FDR InfiniBand HCA on the KVM hypervisor.

My problem is that Sendrecv throughput drops by up to half compared with the physical machines when running 1 rank per node (a sketch of the launch command follows the table below). For example:

Nodes    Bare-metal (MB/s)    PCI-passthrough (MB/s)
  2           14,600                 13,000
  4           14,500                 12,000
 16           14,300                 11,000
 32           14,290                 10,000
 64           14,200                  7,100
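For reference, a 1-rank-per-node IMB Sendrecv run with Open MPI is typically launched roughly like this; the hostfile name, node count, and binary path below are placeholders rather than my exact command:

    # one rank per node across the nodes listed in the hostfile
    mpirun -np 64 --map-by ppr:1:node -hostfile hosts ./IMB-MPI1 Sendrecv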

What do you think about this behavior? Is it caused by the Mellanox software stack, virtualization overhead, or something else?

Thank you

Cartridge Carl


I'll check with our developers and see if they have any ideas.

What fabric are you using?  Can you attach output with I_MPI_DEBUG=5?
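For reference, the variable only needs to be set for the launch; with Intel MPI, either of the following works (the rank counts and benchmark binary here are just an example):

    # export it before the run ...
    export I_MPI_DEBUG=5
    mpirun -n 2 -ppn 1 ./IMB-MPI1 Sendrecv

    # ... or pass it per job on the command line
    mpirun -genv I_MPI_DEBUG 5 -n 2 -ppn 1 ./IMB-MPI1 Sendrecv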

I use Open MPI, so I cannot attach output with I_MPI_DEBUG=5. The fabric is a Mellanox ConnectX-3 FDR InfiniBand HCA (56 Gbps).
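For completeness, one way to check that the passthrough HCA negotiated the full FDR rate inside the guest is with the standard InfiniBand diagnostics (this assumes infiniband-diags and libibverbs-utils are installed in the VM):

    # should show the ConnectX-3 port with State: Active and Rate: 56
    ibstat
    # verbose output includes active_width (4X) and active_speed per port
    ibv_devinfo -v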
