Does anybody use IFC under a Windows 2003 Server cluster system?
I want to know:
1. Can IFC take advantage of a clustering system?
2. Does a cluster system improve the performance of your program?
The compiler does not, by itself, take advantage of a cluster. You can write your applications to do so. A cluster does not improve performance for applications not explicitly written for distributed computation.
Argonne MPICH is the usual framework for supporting Windows cluster computing with Intel Fortran. With XP Pro 32-bit, cluster support up to 8 CPUs is good, and such installations are fairly common. Satisfactory performance up to 16 CPUs has been demonstrated with a single Windows Server node and the remainder running XP. Microsoft only recently began considering their strategy for supporting larger clusters and 64-bit Windows. There are existing large production Windows clusters which employ Samba servers, as both price and performance have favored that over 2003 Server. Clusters of 12 or more CPUs have so far performed better under Linux, as far as I am aware.
Systems which might employ OpenMP as a means for automatic implementation of MPI are still in a research phase. Until then, MPI requires a significant implementation effort. As you may be aware, most automotive design analysis and weather prediction is now done with MPI, to mention only two technical fields where it has proven successful.
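To give a feel for that "significant implementation effort": with MPI, the programmer, not the compiler, must decompose the problem, scatter the work, and gather the results. A minimal sketch of that pattern, using Python's multiprocessing module as a stand-in for MPI so it runs on a single machine (the function names and the toy workload are mine, not from any MPI binding):

```python
# Explicit work decomposition -- the same pattern an MPI program uses:
# the programmer splits the data across processes and combines results.
# multiprocessing stands in for MPI here so the example runs on one box.
from multiprocessing import Pool

def partial_sum(bounds):
    """Each 'rank' sums its own slice of the index range independently."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def distributed_sum(n, nprocs=4):
    # Decompose 0..n-1 into nprocs contiguous chunks. In a real FEM code
    # the hard part is partitioning a mesh or matrix, not a simple range.
    step = (n + nprocs - 1) // nprocs
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
    with Pool(nprocs) as pool:
        partials = pool.map(partial_sum, chunks)  # scatter + compute
    return sum(partials)                          # gather + reduce

if __name__ == "__main__":
    n = 100_000
    assert distributed_sum(n) == sum(i * i for i in range(n))
    print(distributed_sum(n))
```

The point is that the decomposition and the final reduction are hand-written; nothing in the toolchain does this for you automatically.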
In any case, you could run at least 8 nodes doing independent analyses under 2003 Server, if your application justifies the cost.
Thank you for your answers, guys.
What I really want to do is write an FEM program, and some of the calculations need a lot of memory (multi-dimensional arrays). I think a single computer has its limits.
That is why I tried to use a cluster system.
Also, I want to reduce the running time.
So, can a cluster system do that?
Yes, a cluster can do that. For years I have wanted to have a cluster where I can simply attach another system and have it automatically used in my calculations. After all, it is just a processor and some memory, right? But this seems to be an area where the fundamental complexity of the system has so far exceeded the limits of human software engineering capability. It has always taken significant work to run my codes on a cluster.

But before you spend the time, there is something you should think hard about. Commodity clusters (PCs with Ethernet interconnects) are very cost effective and efficient for solving what are termed "embarrassingly parallel" problems. These are problems where each processor can work on an independent part of the problem for a very long time compared to the time it takes to transmit the program and data to it over the interconnect. Monte Carlo simulations, global optimization problems and brute-force encryption attacks fall into this category. Problems such as solving field equations or linear algebra on VERY large systems probably don't.

If each CPU spends milliseconds solving part of a distributed matrix inversion (the core of FEM, right?) and then waits minutes for the cluster to pass data around for the next step, you won't be happy with the results. Also, to run VERY large problems, you will have to handle the distributed-memory issues explicitly, I think.

Good luck, Cliff
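To make the "embarrassingly parallel" case concrete: a Monte Carlo estimate of pi splits naturally across workers, because each worker only has to send back one number at the end, so communication is negligible compared with compute. A small sketch (multiprocessing on one box rather than a real cluster; the seeds and sample counts are arbitrary choices of mine):

```python
# Embarrassingly parallel Monte Carlo: each worker draws its own random
# points and reports a single hit count, so the data passed over the
# "interconnect" is tiny relative to the compute -- the ideal cluster case.
import random
from multiprocessing import Pool

def hits_in_circle(args):
    seed, samples = args
    rng = random.Random(seed)  # independent random stream per worker
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return inside

def estimate_pi(total_samples=400_000, nprocs=4):
    per_worker = total_samples // nprocs
    tasks = [(seed, per_worker) for seed in range(nprocs)]
    with Pool(nprocs) as pool:
        inside = sum(pool.map(hits_in_circle, tasks))
    return 4.0 * inside / (per_worker * nprocs)

if __name__ == "__main__":
    print(estimate_pi())  # roughly 3.14 for a few hundred thousand samples
```

Contrast this with the FEM case above: here each worker computes for a long time and communicates a few bytes once, whereas a distributed matrix solve must exchange boundary data at every step, which is exactly where a slow Ethernet interconnect kills you.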