Poor NFS performance

This has been a topic of discussion previously [1][2], but I haven't seen any comment from anyone at Intel about it: is there anything that can be done about the poor performance of NFS on the MIC? I timed copying a 500 MB file from the host over NFS and got about 20 MB/s, which is far too slow to drive a native application's I/O. I was hoping for at least an order of magnitude more, even though the PCI Express bus should be able to sustain at least two orders of magnitude more than that. Can it be done?
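For anyone wanting to reproduce the measurement, a simple way to time a bulk read is with dd, which reports throughput itself. This is a sketch: the file path here is a local placeholder, so to measure NFS you would point it at a file on the NFS mount instead (and, as root, drop the page cache first with `echo 3 > /proc/sys/vm/drop_caches` so the read actually crosses the wire).

```shell
# Placeholder test file; substitute a file on your NFS mount to measure NFS.
SRC=/tmp/nfs_test_src
dd if=/dev/zero of=$SRC bs=1M count=50 2>/dev/null   # create a 50 MB test file

# Read it back; dd prints the elapsed time and MB/s on stderr.
dd if=$SRC of=/dev/null bs=1M

rm -f $SRC
```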

What is the recommended alternative for doing I/O from a native application? For example, should I use SCIF with a small helper application running on the host that performs the I/O on behalf of the native application? Should I use MPI? I was hoping that with NFS I could avoid using any cores on the host, but it appears that may not be possible.

[1]: https://software.intel.com/en-us/forums/topic/382695

[2]: https://software.intel.com/en-us/forums/topic/404743

I found the following post: https://software.intel.com/en-us/articles/building-a-native-application-... which says "A good method for handling input and output of large data sets is to mount a folder from the host file system to the coprocessor and access the data from there." The author uses the following options for mounting the NFS share:

host:/mydir /mydir nfs rsize=8192,wsize=8192,nolock,intr 0 0

I added these options and I do notice a bump in throughput (it is now around 40 MB/s), but that is still not enough to sustain the ~200 threads required on the MIC. I also notice that doubling the rsize and wsize given above performs slightly better on my machine.
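For the record, doubling those values corresponds to an fstab entry like the following (the export `host:/mydir` and the mount point are the placeholders from the article above; a remount is not enough for NFS, so the share has to be unmounted and mounted again for new rsize/wsize values to take effect):

```
host:/mydir /mydir nfs rsize=16384,wsize=16384,nolock,intr 0 0
```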

That read number (20 MB/s) seems too low. Is this on the MPSS 3.x software stack? If so, could you check whether tcp_sack is turned off on *both* the Phi and the host? If it is, turn it *on* and see whether it makes any difference.

  [root]# /sbin/sysctl net.ipv4.tcp_sack        # check its current value
  net.ipv4.tcp_sack = 0                         # here it is turned off
  [root]# /sbin/sysctl -w net.ipv4.tcp_sack=1   # turn it on
  net.ipv4.tcp_sack = 1
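Equivalently, the setting can be read directly from /proc, and made persistent across reboots via sysctl.conf. This is a sketch assuming an ordinary Linux setup; on the coprocessor side the root filesystem may be regenerated by MPSS, in which case the change would need to go into the MPSS configuration rather than the card's /etc/sysctl.conf.

```shell
# Read the current SACK setting directly from /proc (0 = off, 1 = on).
cat /proc/sys/net/ipv4/tcp_sack

# As root, enable it for the running kernel:
#   /sbin/sysctl -w net.ipv4.tcp_sack=1
# and persist it across reboots:
#   echo "net.ipv4.tcp_sack = 1" >> /etc/sysctl.conf
```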

I am using MPSS 3.2.1. tcp_sack = 1 on the Phi already. The host is Windows; is there a setting there that I should verify? What bandwidth do you see on your cards over NFS?

The registry key

is not present, and from the documentation I gather that the default is on.
