virtualization understanding

Could you help me understand the scope of virtualization?
We have a software product (Windows, C++ and .NET) that can use all cores. For example, if a computer has 8 cores, the software can start 8 threads (or processes) and utilize all CPU resources.
Unfortunately, 8 cores is not enough, so we use HPC/grid systems. These systems add extra overhead, and we are looking for a way to avoid it.
So, can we take a few 8-core computers and build and configure a system so that our software uses all cores (8 × the number of computers), as if it were running on a single computer with many cores? If yes, could you send links to additional materials (samples, descriptions, products)?

Thank you for any help,


You seem to be raising a very broad topic. A lot of effort has gone into virtualization, in the sense of multiple operating systems coexisting on a multi-core platform, without fully solving the overhead problem, even before dealing with the built-in overhead of Windows managed code.

Intel Cluster OpenMP, supported by several Intel compilers, is one way, short of applying MPI, of compiling an application to distribute threads across multiple nodes, which seems to match one of your requests. You can find references using the search boxes on these sites.

First of all, thank you for your answer.

Let's divide the broad topic into a few smaller questions:

1. As I read, virtualization means "one physical machine functions as multiple 'virtual' machines." Does Intel have a technology that solves the opposite problem: "multiple physical machines function as one 'virtual' machine"? In other words, I want to start my executable (without software changes) on a "virtual" machine with, for example, 80 cores (10 physical computers, each with 8 cores). My executable can start a defined number of threads or processes.

2. To solve our problems we use cluster/grid solutions. Unfortunately, such solutions have considerable additional overhead (network communication, data transfer, data serialization, zip/unzip, etc.). For part of our jobs this overhead does not significantly affect total execution time, but for time-critical jobs we are looking for a solution with minimal overhead. What technology can Intel suggest (under Windows) to help us?


Hello Igor,

The problem of automatically parallelizing an application remains unsolved, whether on a single system or a virtual system composed of several physical computers. The state of the art in automatic parallelization is still limited to data-parallel loops running on shared-memory systems. The programmer must modify the software to take advantage of multiple cores and multiple distributed systems. Tim is right that Cluster OpenMP is the closest Intel has come to creating a virtual address space from distributed-memory systems, but the programmer still has to add the OpenMP directives. The compiler cannot do it automatically.

All cluster parallelization methods have some overhead. The degree of overhead depends on the application characteristics that you mention: I/O and data transfer to remote systems, network communication, synchronization requirements, granularity, and so on. I can only recommend the common parallelization methods: OpenMP, MPI, DCOM, Windows threads, Threading Building Blocks. Which one is best depends on your application, but only MPI and DCOM can take advantage of a cluster.

Best regards,

Henry Gabb

Intel Cluster Software and Technologies
