Configuring Intel® Cluster Checker's clean_ipc Test Module

This article describes how to configure Intel Cluster Checker's clean_ipc test module when running on Red Hat Enterprise Linux Server 5.6.
When Intel Cluster Checker runs on a system built with Red Hat Enterprise Linux Server 5.6 on hardware with SATA controllers, the clean_ipc test module must be configured explicitly, because the operating system leaves System V IPC facilities open by default.
$ /sbin/lspci | grep SATA
00:1f.2 SATA controller: Intel Corporation 82801JI (ICH10 Family) SATA AHCI Controller
$ lsb_release --all | grep Description
Description: Red Hat Enterprise Linux Server release 5.6 (Tikanga)
By default, the clean_ipc test module checks that no Inter-Process Communication (IPC) facilities are open, i.e., that the IPC subsystem is clean on every compute node in the cluster. The module runs the ipcs command to list Shared Memory Segments, Semaphore Arrays, and Message Queues. If any entries are present, it flags them and fails, unless it is explicitly configured to allow an exact number of active entries.
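The check can be approximated with a small shell sketch. This is an assumption about the module's logic, not its actual implementation: data rows in ipcs output begin with a hexadecimal key such as 0x000000a7, so counting those rows counts the active entries in each section.

```shell
#!/bin/sh
# Hedged sketch (assumption: the real clean_ipc module parses ipcs output
# similarly). Count active System V IPC entries per section by counting
# data rows, which start with a hexadecimal key like 0x000000a7.
shm=$(ipcs -m | grep -c '^0x')   # Shared Memory Segments
sem=$(ipcs -s | grep -c '^0x')   # Semaphore Arrays
msg=$(ipcs -q | grep -c '^0x')   # Message Queues
echo "shared memory segments: $shm"
echo "semaphore arrays:       $sem"
echo "message queues:         $msg"
total=$((shm + sem + msg))
if [ "$total" -eq 0 ]; then
    echo "IPC subsystem is clean"
else
    echo "found $total entries, target was 0"
fi
```

Running this on a compute node before invoking Intel Cluster Checker gives a quick preview of whether clean_ipc will pass with its default target of zero entries.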
The initial run of Intel Cluster Checker produced the following diagnostic output:
System V Interprocess Communication, (clean_ipc)....................................................................................................FAILED
subtest 'Shared Memory Segments' failed
- failing All hosts returned: 'found 3 entries, target was 0'

The test module manual and the associated debug file show that the command used to display IPC status information is the following:

[root@compute-0-0 ~]# ipcs -a
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
------ Semaphore Arrays --------
key        semid      owner      perms      nsems
0x000000a7 0          root      600        1
------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages
The ipcs manual page suggests the following approach for finding the process ID that owns the offending entry. The ps manual page then shows how to map that process ID back to an actual process name.
$ ipcs -s -i 0
Semaphore Array semid=0
uid=0    gid=0   cuid=0  cgid=0
mode=0600, access_perms=0600
nsems = 1
otime = Thu May  5 15:29:29 2011
ctime = Thu May  5 15:29:28 2011
semnum     value      ncount     zcount     pid
0          1          0          0          13499
$ ps -ef | grep 13499
root      5094  4938  0 18:36 pts/1    00:00:00 grep 13499
root     13499     1  0 15:29 ?        00:00:00 iscsid
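A tidier way to resolve the PID is to ask ps for that exact process with a custom output format, which avoids the grep process matching itself as seen above. The PID 13499 below is simply the one reported by ipcs in this example:

```shell
#!/bin/sh
# Resolve an IPC owner's PID to a process name directly with 'ps -p',
# avoiding the extra 'grep' line that matches itself in 'ps -ef | grep <pid>'.
pid=13499   # assumption: the PID taken from the 'ipcs -s -i 0' output above
ps -p "$pid" -o comm= 2>/dev/null || echo "process $pid is no longer running"
```

The `-o comm=` format prints only the command name, with the `=` suppressing the column header, so the output is suitable for use in scripts.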
The process name already indicates that the iSCSI subsystem is using that IPC item, but to be certain, the installed-package database can be queried to find which RPM package owns that daemon. It is also useful to know whether other packages depend on it.
$ rpm -qf /sbin/iscsid
$ rpm -e --test iscsi-initiator-utils-
error: Failed dependencies:
iscsi-initiator-utils is needed by (installed) mkinitrd-
iscsi-initiator-utils is needed by (installed) mkinitrd-
It is therefore safe to assume that the operating system requires this package, so the Intel Cluster Checker clean_ipc test module should be configured to allow those IPC items. More details on configuring the clean_ipc test module can be found in its manual page.
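The exact configuration syntax depends on the Intel Cluster Checker version and is documented in the clean_ipc manual page. The fragment below is only a hypothetical sketch of allowing a fixed number of IPC entries; every element name in it is an assumption and should be verified against that manual before use.

```xml
<!-- Hypothetical sketch only: these element names are assumptions and must
     be checked against the clean_ipc manual page for your version. -->
<clean_ipc>
  <!-- allow the semaphore array left open by iscsid -->
  <semaphores>1</semaphores>
  <!-- keep the default target of zero for the other facilities -->
  <shmem>0</shmem>
  <queues>0</queues>
</clean_ipc>
```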