Strange result on MPI::Gather benchmark

Hi,

I'm running some basic benchmarks on a small (educational) cluster. The nodes are P4 2.4 GHz machines with 512 MB RAM and National Semiconductor 83820 Gigabit Ethernet cards (MTU 1500 for this test).

I ran the benchmark on 2, 4, 6, 8, 10 and 12 nodes. With 2 nodes I get this strange result:
http://aspirine.li/mesure2.pdf
The x-axis is the packet size in bytes, the y-axis is time in seconds.

The MTU effects are clearly visible, but I can't explain the two curves.
This is what I have done:
-----------------------------------------
MPI::COMM_WORLD.Barrier();
starttimegather = MPI::Wtime();
MPI::COMM_WORLD.Gather(sendbuff,    buffsize, MPI::FLOAT,
                       recvbuff[0], buffsize, MPI::FLOAT,
                       master);
stoptimegather = MPI::Wtime() - starttimegather;
totaltimegather += stoptimegather;
-----------------------------------------
This is a part of the results, where you can see that every 10th measurement (i.e. every 40 bytes) the time is noticeably larger:

nodes ; time[s] ; size [bytes]
-------------------------------------------
2;0.000358077;16404<--*****
2;0.000343283;16408
2;0.000342555;16412
2;0.000342477;16416
2;0.000342885;16420
2;0.000342765;16424
2;0.00034233;16428
2;0.00034231;16432
2;0.000342906;16436
2;0.000342528;16440
2;0.000360112;16444<--*****
2;0.000343095;16448
2;0.000342613;16452
2;0.000342823;16456
2;0.000342695;16460
2;0.000342781;16464
2;0.000342082;16468
2;0.000342706;16472
2;0.000342361;16476
2;0.000342908;16480
2;0.000358126;16484<--*****
2;0.000342697;16488
2;0.000342495;16492
2;0.000342806;16496
2;0.000342006;16500
2;0.000342573;16504
2;0.000343114;16508
2;0.000343439;16512
2;0.000343044;16516
2;0.000343867;16520
2;0.000357508;16524<--*****
-----------------------------------------------
Can somebody explain this to me?

OS: Linux debian 2.6.5
MPI: lam-mpi 7.1.1

Thanks

David
