Intel® Clusters and HPC Technology

weird Cluster OMP problem

We have two users on our cluster. One (user A) can compile and run Intel Cluster OpenMP code just fine; the other (user B) can't. Same source, same configuration file (kmp_cluster.ini). (See the note below: it's something in .bashrc or .bash_profile, not surprisingly.)

When user B compiles the source, everything seems fine (a simple "hello world" program), but when he tries to run the executable, he gets this:

cpufreq for Xeon 5500 on Linux

I have a Xeon 5500 with Linux installed on it, and I wish to use the ondemand cpufreq governor to reduce the server's power consumption.
I have some questions about the cpufreq software:
1. The cpufreq governor (ondemand) shows /sys entries, one of which is /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq. Does this file contain only the available frequencies, or also the voltages?
2. Does the cpufreq software allow modifying both voltage and frequency, or only frequency?
3. I could not locate the available voltages or how to change them.

Large Matrix inversion using parallel computing in ifort

I want to invert a large matrix using parallel computing. I am working with ifort (the Fortran 90 compiler) on a cluster with multiple nodes. There are 8 processors per node, and each node's memory is shared among its processors. I have a general idea of how the program should work, i.e., how to break the program into tasks and assign the tasks to the processors.

Unable to read binary file and giving error forrtl: severe (67): input statement requires too much data

I am a beginner in using clusters. We have a 24-node cluster of Intel Xeon x86 processors running Linux RHEL 5.2, which uses InfiniBand for applications and an Ethernet port for management. It is installed with mvapich-1.1_intel and the full package of the Intel compilers.

wrong job dispatching on cpu using IntelMPI2.0

When I run a job with Intel MPI 2.0 using a machine file that references two machines with 4 CPUs each, I see only two processes running on the first machine and six on the second, instead of four on each. Can you explain the reason for this behaviour?
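For comparison, this is the machine-file layout I would expect to give four processes per host with Intel MPI's mpd-based runtime (the host names are placeholders, and the exact `host:count` syntax and the alternative `-perhost` option are worth double-checking in the Intel MPI reference manual for version 2.0):

```text
# hypothetical machine file, e.g. ./machines
node01:4
node02:4
```

launched with something like `mpiexec -n 8 -machinefile ./machines ./app`. If the per-host counts are omitted, each line typically counts as a single slot, which can produce lopsided placement like the one described.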

Intel MPI 2.0 : unable to ping

My application (using Intel MPI 2.0) works fine on a Linux workstation but fails on the cluster with the following message:
"mpdboot_gtda127_0 (mpdboot 499): problem has been detected during mpd(boot) startup at 1 gtda128; output:
mpdboot_gtda128_1 (err_exit 526): mpd failed to start correctly on gtda128
reason: 1: unable to ping local mpd; invalid msg from mpd :{}:
mpdboot_gtda128_1 (err_exit 539): contents of mpd logfile in /tmp:
logfile for mpd with pid 23309
mpdboot_gtda127_0 (err_exit 526): mpd failed to start correctly on gtda127
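The "unable to ping local mpd" failure is usually an environment problem on the remote node rather than in the application. Two things worth checking (both from memory of the mpd-based runtime, so verify against the mpdboot man page): passwordless ssh/rsh from gtda127 to gtda128, and a private ~/.mpd.conf on every node. The latter must exist, hold the shared secret, and be readable only by its owner, e.g.:

```text
# ~/.mpd.conf on every node -- "mysecret" is a placeholder;
# the exact key name may differ by version (check `man mpdboot`)
secretword=mysecret
```

with permissions set via `chmod 600 ~/.mpd.conf`; mpd refuses to start if the file is group- or world-readable.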

Intel MPI with pthread

I am trying to run a program that uses pthreads with Intel MPI. The program compiled and linked successfully. I ran it on a dual-socket machine with two quad-core processors, but no threads seemed to be created. Below is the command I used:

mpirun -n 2 executable

The program is supposed to generate 8 threads in one of the 2 processes. Thanks.
