Avoiding Potential Problems - Memory Limits on the Intel(r) Xeon Phi(tm) Coprocessor


As with any Linux* system, the operating system on the coprocessor will sometimes allow you to allocate more memory space than is physically available. On many systems this is taken care of by swapping some pages of memory out to disk. On those systems, if the amount of physical memory plus the amount of swap space is exceeded, the operating system will begin killing off jobs. 

The situation on the Intel Xeon Phi coprocessor is complicated by the lack of directly attached disks. So:

  • there is, currently, at most 8GB of physical memory available (I say this, having come from a generation where my first personal computer had 4KB of memory)
  • by default, some of the memory is used for the coprocessor's file system
  • by default, there is no swap space available for the coprocessor
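The memory actually available on a card can be checked directly. A minimal sketch follows; the hostname mic0 is an assumption (run the cat over ssh when checking a coprocessor from the host), and the parsing works against any Linux /proc/meminfo:

```shell
# Sketch: report total and free memory. "mic0" is an assumed hostname;
# on the host you would run: meminfo=$(ssh mic0 cat /proc/meminfo)
meminfo=$(cat /proc/meminfo)
total_kb=$(echo "$meminfo" | awk '/^MemTotal:/ {print $2}')
free_kb=$(echo "$meminfo" | awk '/^MemFree:/ {print $2}')
echo "Total: $((total_kb / 1024)) MB, Free: $((free_kb / 1024)) MB"
```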


The Intel Xeon Phi coprocessor does not have any directly accessible disk drives. Because of this, the default root file system for the coprocessor is stored on a RAM disk. This not only affects file storage but reduces the memory available to running programs.

The size of the root file system is kept small by using BusyBox to replace many of the common Linux commands, such as sh, cp, and ls, and by limiting the number of shared libraries that are copied to the root file system. Even given these reductions, about 10 MBytes of memory is still consumed by file storage.

Most user programs, whether run in offload or native mode on the coprocessor, will require that additional libraries be copied to the root file system, consuming more memory and further limiting the space available to user programs. Additionally, programs running in native mode will require access to data files and temporary files, consuming still more space.

The df command will tell you how much space you are using for files:

[knightscorner5-mic0]$ df -h
Filesystem                Size      Used Available Use% Mounted on
none                      7.6G         0      7.6G   0% /dev
none                     12.9G     77.1M     12.8G   1% /
none                      7.6G         0      7.6G   0% /dev/shm
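To see just the RAM consumed by the root file system, the Used column can be pulled out of the df output. A small sketch, assuming the column layout shown above for df -k:

```shell
# Sketch: print the space used on the RAM-disk root ("/").
# NR==2 selects the data row; $3 is the "Used" column of df -k output.
root_used_kb=$(df -k / | awk 'NR==2 {print $3}')
echo "Root file system is using ${root_used_kb} KB of memory"
```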

The amount of memory consumed for file storage can be reduced by using network file storage. Besides NFS, which comes as part of the standard Linux releases, a number of different network file systems have been used successfully. For examples of using Lustre and Panasas, see Configuring Intel(r) Xeon Phi(tm) Coprocessors Inside a Cluster at http://software.intel.com/en-us/mic-developer under the Case Studies tab.

Good candidates for networked file systems are:

  • home directories
  • dedicated data storage
  • shared libraries associated with compilers and tools such as MPI

It is also possible to use an NFS mounted file system as the root partition. This requires you to set aside a directory on the host for each coprocessor card and populate those directories with the contents of the root directory for each individual coprocessor. 

When an NFS mounted root is used, the coprocessor will first boot with a very minimal initial root in RAM. This initial root mounts the NFS file system and uses the Linux switch_root command to make this file system the new root. The initial RAM device is then removed, freeing up the memory for use by programs.
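The host-side setup can be sketched roughly as below. The directory name, host name, and export options are assumptions for illustration only; the MPSS Boot Configuration Guide documents the exact, supported procedure.

```shell
# Rough sketch (paths and options are assumptions, not MPSS defaults).
# One root directory per card, populated with that card's root contents:
mkdir -p /opt/mic/mic0.rootfs
# Export it to the card over NFS:
echo "/opt/mic/mic0.rootfs mic0(rw,no_root_squash,no_subtree_check)" >> /etc/exports
exportfs -ra    # re-read /etc/exports and apply the new export
```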

A disadvantage to using a networked file system is the increased latency involved in reading from a file. For large data files, use of file systems optimized for large transfers, such as Lustre, can help. For a networked root file system, which requires NFS, keeping the physical disk space close to the coprocessor - in other words, on the host - can help cut down latency. However, any file which will be read from or written to frequently may be better off remaining in a RAM disk on the coprocessor.

Directions for setting up and using networked file systems with the Intel Xeon Phi coprocessor can be found in the Intel(r) Xeon Phi(tm) Coprocessor Intel(r) Manycore Platform Software Stack (Intel(r) MPSS) Boot Configuration Guide which comes with each MPSS release.


Because the coprocessor has no directly attached disks, there is no default disk space for swapping out pages of memory from a running process. It is possible, however, to add networked swap space by using a virtio block device on the host.

After creating the swap space on the host, the steps found in the MPSS readme file can be used to configure the coprocessor for swapping. The commands below show an example of setting up 4GB of swap space:

sudo service mpss start
# You will need a disk (/dev/<disk_name>), disk partition (/dev/<partition_name>),
# logical volume (/dev/mapper/<volume_name>) or a regular file (preferably
# preallocated, for example with 'dd bs=1G if=/dev/zero of=/srv/VirtblkSwap count=4')
# to use as swap space.
sudo bash
echo <full_path_name_to_swap_device> > /sys/class/mic/mic0/virtblk_file
ssh root@mic0 modprobe mic_virtblk
ssh root@mic0 mkswap /dev/vda
ssh root@mic0 swapon /dev/vda
ssh root@mic0 cat /proc/swaps

Although adding swap space will increase the size of the programs the coprocessor can run, it will also degrade performance each time the swap space is used. It should only be used when necessary.

When deciding whether to set up swap space, consider the data access patterns a program will be using. If the work can be partitioned so that the amount of data required in memory at any one time stays below the maximum available, that is preferable to using swap space. Likewise, if memory is being oversubscribed because of the number of processes running on the coprocessor at one time, it may be better to limit the number of concurrent processes than to use swap space.
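As a back-of-the-envelope sketch of that first option (all sizes below are invented for illustration), the number of passes needed to keep each chunk under a chosen memory budget is a simple ceiling division:

```shell
# Sketch: split a working set into chunks that fit a memory budget.
# The sizes are example assumptions, in megabytes.
data_mb=24576     # total working-set size (24 GB)
budget_mb=6144    # per-pass budget (6 GB, leaving room for the OS and file system)
chunks=$(( (data_mb + budget_mb - 1) / budget_mb ))   # ceiling division
echo "Process the data in $chunks chunks of at most $budget_mb MB each"
# -> Process the data in 4 chunks of at most 6144 MB each
```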

Finally, because of the mapping required between addresses on the host and coprocessor, swap space cannot be used by jobs using the offload model of programming.

For more complete information about compiler optimization options, see our Optimization Notice.