Intel® Developer Zone:
Performance

Highlights

Just published! Intel® Xeon Phi™ Coprocessor High Performance Programming 
Learn the essentials of programming for this new architecture and new products. New!
Intel® System Studio
Intel® System Studio is a comprehensive, integrated software development tool suite that can accelerate time to market, strengthen system reliability, and boost power efficiency and performance. New!
In case you missed it - 2-day Live Webinar Playback
Introduction to High Performance Application Development for Intel® Xeon & Intel® Xeon Phi™ Coprocessors.
Structured Parallel Programming
Authors Michael McCool, Arch D. Robison, and James Reinders use an approach based on structured patterns, which should make the subject accessible to every software developer.

Deliver your best application performance for your customers through parallel programming with the help of Intel’s innovative resources.

Development Resources


Development Tools

 

Intel® Parallel Studio XE ›

Bringing simplified, end-to-end parallelism to Microsoft Visual Studio* C/C++ developers, Intel® Parallel Studio XE provides advanced tools to optimize client applications for multicore and many-core processors.

Intel® Software Development Products

Explore all tools that help you optimize for Intel architecture. Select tools are available for a free 30-day evaluation period.

Tools Knowledge Base

Find guides and support information for Intel tools.

Intel® Xeon Phi™ Coprocessor code named “Knights Landing” - Application Readiness
By Indraneil Gokhale (Intel), posted 09/15/2014
As part of the application readiness efforts for future Intel® Xeon® processors and Intel® Xeon Phi™ coprocessors (code named Knights Landing), developers are interested in improving two key aspects of their workloads: vectorization/code generation and thread parallelism. This article mainly talks a...
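As a minimal illustration of those two aspects (a generic sketch, not code from the article), a single loop can be both threaded and vectorized with OpenMP* pragmas:

    // Generic sketch: combining thread parallelism and vectorization with OpenMP*.
    // Compile with an OpenMP-enabled compiler, e.g. icpc -qopenmp -O2 saxpy.cpp
    #include <cstdio>
    #include <vector>

    // SAXPY-style kernel: "parallel for" spreads iterations across threads,
    // "simd" asks the compiler to vectorize the loop body.
    void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
        const std::size_t n = x.size();
        #pragma omp parallel for simd
        for (std::size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    int main() {
        std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
        saxpy(3.0f, x, y);
        std::printf("y[0] = %f\n", y[0]);   // expected: 5.0
        return 0;
    }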
Using SPIR for fun and profit with Intel® OpenCL™ Code Builder
By Robert Ioffe (Intel), posted 02/23/2015
Contents: Introduction; What is SPIR?; How is SPIR binary different from Intermediate Binary?; How to produce SPIR binary with an Intel command line compiler?; How to produce SPIR binary with Intel® INDE's Kernel Builder?; How to consume SPIR binary in your OpenCL™ program?; Advantages of a SPIR Binary; Di...
Sharing Surfaces between OpenCL™ and DirectX* 11 on Intel® Processor Graphics
By Adam Lake (Intel), posted 02/23/2015
Download PDF. Download code sample. Contents: Introduction; Intel® Processor Graphics with Shared Physical Memory; Synchronization between OpenCL and DirectX 11; Overview of Surface Sharing between OpenCL and DirectX 11; Initialization; Writing to the shared surface; The Render Loop; Shutdo...
Bitonic Sorting
By Vadim Kartoshkin (Intel), posted 02/12/2015
Demonstrates how to implement an efficient sorting routine with the OpenCL™ technology that operates on an arbitrary input array of integer values. The sample uses the properties of bitonic sequences and the principles of sorting networks, and enables efficient SIMD-style parallelism through OpenCL vector dat...
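For readers unfamiliar with the technique, the compare-and-exchange stages of a bitonic sorting network can be sketched on the host in plain C++ (an illustration of the principle only, not the OpenCL kernel from the sample; the input size is assumed to be a power of two):

    // Host-side sketch of a bitonic sorting network (illustration only; the
    // sample implements these stages as OpenCL kernels using vector data types).
    #include <cstdio>
    #include <utility>
    #include <vector>

    void bitonic_sort(std::vector<int>& a) {
        const std::size_t n = a.size();                     // assumed: power of two
        for (std::size_t k = 2; k <= n; k <<= 1) {          // bitonic sequence size
            for (std::size_t j = k >> 1; j > 0; j >>= 1) {  // compare distance
                for (std::size_t i = 0; i < n; ++i) {       // this inner loop is what
                    std::size_t l = i ^ j;                  // the GPU parallelizes
                    if (l > i) {
                        bool ascending = ((i & k) == 0);
                        if ((a[i] > a[l]) == ascending)
                            std::swap(a[i], a[l]);
                    }
                }
            }
        }
    }

    int main() {
        std::vector<int> data = {7, 3, 8, 1, 6, 2, 5, 4};
        bitonic_sort(data);
        for (int v : data) std::printf("%d ", v);           // 1 2 3 4 5 6 7 8
        std::printf("\n");
        return 0;
    }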
Subscribe to Intel Developer Zone Articles
ADVANCED COMPUTER CONCEPTS FOR THE (NOT SO) COMMON CHEF: INTRODUCTION
By Taylor Kidd (Intel) Posted on 02/20/15 2
While talking to a very intelligent but non-engineer colleague, I found myself needing to explain the threading and other components of the Intel® Xeon Phi™ x100 and x200 architectures. The first topic that came up ...
Introduction to OpenMP* on YouTube
By Mike Pearce (Intel) Posted on 12/03/14 0
Tim Mattson (Intel) has authored an extensive series of excellent videos as an introduction to OpenMP*. Not only does he walk through a series of programming exercises in C, he also starts with a background introduction to parallel programming. Check out the series: https://www.youtube.com/watc...
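As a taste of how little code a first OpenMP* program needs (a generic example, not taken from the video series):

    // A first OpenMP* program: a parallel loop with a reduction.
    // Compile with e.g. g++ -fopenmp first_omp.cpp
    #include <cstdio>
    #include <omp.h>

    int main() {
        const int n = 1000000;
        double sum = 0.0;

        // Iterations are divided among threads; "reduction" safely combines
        // each thread's partial sum at the end of the loop.
        #pragma omp parallel for reduction(+ : sum)
        for (int i = 0; i < n; ++i)
            sum += 1.0 / (i + 1);

        std::printf("harmonic sum = %f (max threads = %d)\n",
                    sum, omp_get_max_threads());
        return 0;
    }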
Benefits of Intel® Cache Monitoring Technology in the Intel® Xeon® Processor E5 v3 Family
By Khang Nguyen (Intel) Posted on 09/08/14 0
Introduction: The number of cores is increasing with the introduction of new processors. As more cores are added, the number of diverse workloads that can potentially run simultaneously is also increasing. Workloads can be single-threaded or multi-threaded applications and they can run in nativ...
Web Resources about Intel® Transactional Synchronization Extensions
By Roman Dementiev (Intel) Posted on 07/28/14 3
Short URL for this page: www.intel.com/software/tsx. In this blog I list useful technical resources related to Intel® Transactional Synchronization Extensions (Intel TSX). I will try to keep the list up to date as new material becomes available (subscribe to this page below to get update notifica...
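For a first impression of the programming model, here is a minimal RTM (Restricted Transactional Memory) sketch using the intrinsics from <immintrin.h>; the simple fallback spinlock and the no-retry policy are simplifying assumptions, not a production recipe:

    // Minimal Intel TSX / RTM sketch (illustration only). Build with RTM support,
    // e.g. g++ -mrtm, and check CPUID for RTM before using this path in real code.
    #include <immintrin.h>
    #include <atomic>

    std::atomic<bool> fallback_lock{false};   // simple spinlock used on abort
    long shared_counter = 0;

    void increment_counter() {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            // Subscribe to the lock: abort if another thread holds it, so the
            // fallback critical section stays atomic with respect to transactions.
            if (fallback_lock.load(std::memory_order_relaxed))
                _xabort(0xff);
            ++shared_counter;                 // buffered until _xend() commits
            _xend();
        } else {
            // Fallback path: the transaction aborted; use ordinary locking.
            while (fallback_lock.exchange(true, std::memory_order_acquire))
                _mm_pause();
            ++shared_counter;
            fallback_lock.store(false, std::memory_order_release);
        }
    }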
Subscribe to Intel Developer Zone Blogs
Intel® Parallel Studio XE SP1 & Intel® Cluster Studio XE SP1
By kathy-farrel (Intel)
Intel® Parallel Studio XE SP1 & Intel® Cluster Studio XE SP1 - What's New - Webinar, Tuesday, September 17, 9am PDT. Please join us for a technical presentation on the new features found in the recently released Intel® Parallel Studio XE 2013 SP1 and Intel® Cluster Studio XE SP1. This release includes support for compilers and performance analysis on Intel® Xeon Phi™ coprocessors on Windows*. The technical presentation will briefly cover new features for both C++ and Fortran on the Linux*, Windows*, and OS X* operating systems, as well as the error checking and performance profiling tools. Learn how to efficiently boost your application performance! Not too late - Register Now. Learn about Upcoming Webinars.
linking with two versions of mkl (multi threaded and single threaded) in one application
By Michal K.
Hi, is it possible to use both the single-threaded and the multi-threaded version of the MKL library in one application? I need the single-threaded version to use with the PLASMA library, yet in another part of my code I need to use MKL PARDISO, for which I need the multi-threaded version. Any help will be greatly appreciated. Cheers, Michal
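One commonly suggested direction (a hedged sketch, not a confirmed answer to this thread) is to link the threaded MKL once and restrict it to a single thread locally around the calls that must behave sequentially, using mkl_set_num_threads_local:

    // Sketch of one possible approach: keep the threaded MKL linked, but force a
    // single MKL thread on the calling thread around the region used by PLASMA,
    // then restore the previous setting for the threaded PARDISO calls.
    // Assumption: mkl_set_num_threads_local returns the previous thread-local value.
    #include <mkl.h>

    void sequential_mkl_region() {
        int saved = mkl_set_num_threads_local(1);  // MKL runs sequentially here
        // ... calls that must use single-threaded MKL ...
        mkl_set_num_threads_local(saved);          // back to the previous setting
    }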
PCIe 3.0 reference clock jitter tool
By Sonal C.
Where can I access the Intel PCIe clock jitter tool?
Memory to CPU (mov) bandwidth limitations
By albus d.
(Sorry for my weak English, I am not a native speaker. I'm not sure if this is the right forum, it is my first time here. This is a general question about a hardware limit whose technical reason I do not understand and would very much like to know.) We now have parallelized SIMD arithmetic (e.g. 8 float multiplications or divisions in one step), so the theoretical (and also nearly practical) arithmetic bandwidth per core is about 4 GHz * 8 floats, roughly 30 GFLOPS per core. But AFAIK we still have quite low RAM-to-CPU bandwidth, at the level of reading or writing only 1 or 2 ints or floats per nanosecond; when I test such RAM-to-CPU bandwidth it comes out at only about 2 GFLOPS per core or so. (Both values are rough, but the difference seems to be physically real, at least in my experience.) I mean, arithmetic can be parallelized (e.g. 8-wide vectorized) but load/store movs are not, so SIMD parallelization delivers only a fraction of its potential power. It would be extremely crucial to increase this memory bandwidth (muc...
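To put rough numbers on this, a single-threaded streaming pass over an array much larger than the last-level cache gives a per-core bandwidth estimate to compare against the core's peak SIMD arithmetic rate (a generic sketch, not code from this thread; results vary with compiler flags and hardware):

    // Rough single-core bandwidth estimate: stream once over a 256 MB array and
    // convert the elapsed time to GB/s. Illustration only; real measurements
    // should repeat the pass several times and pin the thread to one core.
    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const std::size_t n = std::size_t(1) << 26;   // 64M floats = 256 MB
        std::vector<float> a(n, 1.0f);

        auto t0 = std::chrono::steady_clock::now();
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i)           // one 4-byte load per element
            sum += a[i];
        auto t1 = std::chrono::steady_clock::now();

        double seconds = std::chrono::duration<double>(t1 - t0).count();
        double gbytes  = double(n) * sizeof(float) / 1e9;
        std::printf("sum = %.0f, read bandwidth ~ %.2f GB/s (single core)\n",
                    sum, gbytes / seconds);
        return 0;
    }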
speedup problem using openMP in intel fortran
By bohluly
Dear all, I have developed a program and unfortunately it has a speedup problem. My program is very big, so I have tried to write a small sample similar to it; fortunately, this simple program shows the same problem as my full program. I need other people's experience and your help if possible. Thanks. I am using VS2010 and Intel Fortran XE 2011. Program:
      TYPE var
         REAL(8), POINTER :: A, B, C
      END TYPE var
      REAL(8), POINTER :: A(:), B(:), C(:)
      TYPE(var), POINTER :: vars(:)
      TYPE(var), POINTER :: varOMP
      REAL*8  t1, t2, ai, bi, ci, di, ei, fi
      INTEGER(4) c1, c2
      INTEGER N, CHUNKSIZE, I, id, f, l
      PARAMETER (N=200)
      PARAMETER (CHUNKSIZE=10)

      Allocate (A(N), B(N), C(N), vars(N))
!     initializations
      DO I = 1, N
          A(N)      =   I * 1.0
          B(N)      =   A(N)
          vars(I)%A =>  A(N)
          vars(I)%B =>  B(N)
          vars(I)%C =>  C(N)
          vars(I)%A = 0.51
          vars(I)%B...
How can I verify license key?
By Aleksandr S.
I have bought a few Xeon Phi units. The reseller provided keys for Intel Parallel Studio. I think they are 6-month demo licenses, but I'd like to know for sure. Is there a way I can check the terms of these keys without activating them, directly with Intel?
Doubts before buy Intel Studio
By Marcelo C.
Hi all, I have some questions regarding the Intel software studio for parallel architectures that the Brazilian reseller is not able to answer, and I need to resolve them before buying the Studio for my company. Can somebody help me? 1) We are currently using OpenMPI. What advantages does Intel MPI provide over OpenMPI? 2) OpenMPI error handling is not good. Is the Intel MPI library better at error handling and recovery? For example, if one rank in my MPI comm world dies, how can I handle this using the Intel library? 3) We currently use GCC. Is the Intel compiler better? We are running on a cluster with several nodes, with MPI doing the communication between the nodes. Any other recommendations? We host our application at Amazon. Thank you all in advance!
Openmp task and parallel construct
By Patrice l.
Hi, I am trying to understand the behavior of the OpenMP implementation when a parallel do is enclosed in a task. When nesting is enabled, the parallel do uses multiple threads. The first question is: is it possible to restrict the number of threads to the original thread pool (hardware threads), so that they work on the parallel construct as they become available after completing other tasks? (See the code below.) From reading the forum I suspect the answer will be no; in that case, what is the best way to combine task and parallel do, both inside a task and outside a task? Is it worth it to close the master or single region to open a parallel one, and reopen it right after? Last question: is there any benchmark of using tasks for a loop instead of a classic parallel do, for both a fixed workload and a variable workload per iteration? Thanks.
program omptest
use omp_lib
implicit none
integer :: i
!$omp parallel
!$omp master
print *,'omp get max threads',omp_get_max_thr...
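As one illustration of combining tasks with loop-level work without opening a nested parallel region, the taskloop construct (OpenMP 4.5 and later) turns loop iterations into tasks that the existing team picks up as threads become free; this is a generic sketch, not a rewrite of the Fortran program above:

    // Generic sketch: a task plus a taskloop inside one parallel/single region.
    // Requires an OpenMP 4.5 compiler; iterations are executed by whichever team
    // threads become available, with no nested parallel region.
    #include <cstdio>
    #include <omp.h>

    int main() {
        const int n = 1000;
        double a[1000];

        #pragma omp parallel
        #pragma omp single            // one thread creates the tasks...
        {
            #pragma omp task          // ...some independent piece of work...
            std::printf("side task ran on thread %d\n", omp_get_thread_num());

            #pragma omp taskloop      // ...and the loop iterations become tasks
            for (int i = 0; i < n; ++i)
                a[i] = 0.5 * i;
        }                             // all tasks finish before the region ends

        std::printf("a[%d] = %f\n", n - 1, a[n - 1]);
        return 0;
    }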
Subscribe to Forums
