The Intel® Xeon Phi™ coprocessor platform has a software stack that enables new programming models. One such model is the offload of computation from a host processor to the coprocessor, which is itself a fully functional Intel® Architecture CPU. The purpose of the offload is to improve response time and/or throughput. The attached paper presents the compiler offload software runtime infrastructure for the Intel® Xeon Phi™ coprocessor, which includes a production C/C++ and Fortran compiler that enables offload to the coprocessor, and an underlying Intel
Intel® Advisor XE 2013
Intel® Advisor XE 2013 guides developers to add parallelism to their existing C/C++, Fortran, or C# programs.
New in Update 2!
· New Pause/Resume API and GUI functionality
How to Detect and Repair Correctness Issues in Code to Run on the Intel® Xeon Phi™ Coprocessor Architecture with Intel® Inspector XE
Intel® Xeon Phi™ coprocessors combine leading power-performance with the benefits of standard CPU programming models. Because the programming models are shared, developing and tuning for Intel® Xeon Phi™ coprocessors means you get both great coprocessor performance and improved performance on Intel® Xeon® processors.
With the added support for the Intel® Xeon Phi™ coprocessor in the Intel® MPI Library product, we've introduced a few new environment variables to help ease running MPI jobs on the new architecture. For full details on how to launch an Intel MPI Library job on a cluster containing Intel Xeon Phi coprocessor cards, check out this article.
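As an illustration, a launch on a Xeon Phi-enabled cluster might set two of these variables. This is a hedged sketch: `I_MPI_MIC` and `I_MPI_MIC_POSTFIX` are documented Intel MPI Library variables, but the card name and binary name below are placeholders.

```shell
# Sketch of a host-side launch; the card name "mic0" and the binary
# name are placeholders, not taken from the article.
export I_MPI_MIC=enable        # allow coprocessor ranks to join the MPI job
export I_MPI_MIC_POSTFIX=.mic  # suffix used to find the coprocessor-side binary
# With the postfix set, one command line serves both sides: host ranks
# run ./hello_mpi while ranks on mic0 run ./hello_mpi.mic, e.g.:
#   mpirun -n 4 -hosts localhost,mic0 ./hello_mpi
```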
If you're planning on running with the Intel® MPI Library in an environment that includes Intel® Xeon Phi™ coprocessor cards, you need to make sure all cards contain the appropriate libraries and scripts for the MPI job. This process is probably easier than you think, as shown below. Note that all steps shown here need to be repeated for all Xeon Phi coprocessor cards in your system.
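The steps amount to copying the coprocessor-side MPI executables and libraries onto each card. A minimal dry-run sketch follows; the install path, file names, and card names are assumptions, and the commands are printed with `echo` rather than executed so nothing is actually transferred.

```shell
# Dry-run sketch: print the per-card copy commands instead of running
# them (install path, file names, and card names are placeholders).
MPI_MIC_DIR=/opt/intel/impi/4.1.0/mic   # assumed coprocessor-side install tree
for card in mic0 mic1; do               # repeat for every card in the system
  echo scp "$MPI_MIC_DIR/bin/mpiexec.hydra" "$card:/bin/"
  echo scp "$MPI_MIC_DIR/lib/libmpi.so.4"   "$card:/lib64/"
done
```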
This blog contains additional content for the article "Advanced Vectorization" from Parallel Universe #12: