Intel® Developer Zone:
Performance

Highlights

Just published! Intel® Xeon Phi™ Coprocessor High Performance Programming 
Learn the essentials of programming for this new architecture and new products. New!
Intel® System Studio
Intel® System Studio is a comprehensive, integrated software development tool suite that can accelerate time to market, strengthen system reliability, and boost power efficiency and performance. New!
In case you missed it - 2-day Live Webinar Playback
Introduction to High Performance Application Development for Intel® Xeon & Intel® Xeon Phi™ Coprocessors.
Structured Parallel Programming
Authors Michael McCool, Arch D. Robison, and James Reinders use an approach based on structured patterns, which should make the subject accessible to every software developer.

Deliver your best application performance for your customers through parallel programming with the help of Intel’s innovative resources.

Development Resources


Development Tools

 

Intel® Parallel Studio XE ›

Bringing simplified, end-to-end parallelism to Microsoft Visual Studio* C/C++ developers, Intel® Parallel Studio XE provides advanced tools to optimize client applications for multicore and many-core processors.

Intel® Software Development Products

Explore all the tools that help you optimize for Intel architecture. Select tools are available for a free 30-day evaluation period.

Tools Knowledge Base

Find guides and support information for Intel tools.

Transitioning from Valgrind* Tools to Intel® Inspector XE
By Holly Wilper (Intel) | Posted 08/19/2014 | 0 comments
The open source Valgrind* framework supports several tools for checking the memory and threading correctness of your code. Intel® Inspector XE has that same functionality but supports additional operating systems (Linux* and Microsoft Windows*), languages (C, C++, Microsoft .NET*, Fortran), and t...
Improving Performance with MPI-3 Non-Blocking Collectives
By James Tullos (Intel) | Posted 08/01/2014 | 0 comments
The new MPI-3 non-blocking collectives offer potential improvements to application performance.  These gains can be significant for the right application.  But for some applications, you could end up lowering your performance by adding non-blocking collectives.  I'm going to discuss what the non-...
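For a feel of the pattern ahead of the full discussion, here is a minimal sketch (mine, not from the article; buffer names and sizes are illustrative): start the collective, do independent work, and wait only when the result is needed.

// Sketch: overlap an allreduce with independent computation using the
// MPI-3 non-blocking collective MPI_Iallreduce.
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    std::vector<double> local(1024, 1.0), global(1024, 0.0);

    MPI_Request req;
    // Returns immediately; the reduction proceeds in the background.
    MPI_Iallreduce(local.data(), global.data(), 1024, MPI_DOUBLE,
                   MPI_SUM, MPI_COMM_WORLD, &req);

    // ... computation that does not depend on 'global' goes here ...

    // Block only when the reduced result is actually required.
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}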
What's new? - Intel® Inspector XE 2015
By Holly Wilper (Intel) | Posted 07/24/2014 | 0 comments
What's new in Intel® Inspector XE 2015: a new uninitialized memory error detection algorithm that uses a deeper analysis method to substantially reduce the number of false positives; independent control of uninitialized memory analysis, which is off by default; improved on-demand leak detection; and ...
Intel® Inspector XE 2015 Release Notes
By Holly Wilper (Intel) | Posted 07/24/2014 | 0 comments
This page provides the current Release Notes for the Intel® Inspector XE 2015 for Linux* and Windows* products. (Links: Intel® Inspector XE 2015 for Windows* | Intel® Inspector XE 2015 for Linux* | What's New: English ...)
Fun with Intel® Transactional Synchronization Extensions
By Wooyoung Kim (Intel) | Posted 07/25/13 | 0 comments
By now, many of you have heard of Intel® Transactional Synchronization Extensions (Intel® TSX). If you have not, I encourage you to check out this page (http://www.intel.com/software/tsx) before you read further. In a nutshell, Intel TSX provides transactional memory support in hardware, making t...
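As a taste of what the extensions look like in code, here is a minimal sketch (not from the post) of an RTM transaction guarding a shared counter, with a simple spinlock as the fallback path. It assumes a TSX-capable CPU and a compiler that supports the RTM intrinsics (e.g. -mrtm with GCC).

// Sketch: elide a spinlock with Intel TSX restricted transactional memory (RTM).
#include <immintrin.h>
#include <atomic>

std::atomic<bool> fallback_lock{false};   // plain spinlock used when a transaction aborts
long counter = 0;

void locked_increment() {
    while (fallback_lock.exchange(true, std::memory_order_acquire))
        _mm_pause();
    ++counter;
    fallback_lock.store(false, std::memory_order_release);
}

void increment() {
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        // Abort if another thread holds the fallback lock; reading the flag
        // also puts it in the read set, so a later lock acquisition aborts us.
        if (fallback_lock.load(std::memory_order_relaxed))
            _xabort(0xff);
        ++counter;                         // speculative, becomes visible at _xend()
        _xend();
    } else {
        locked_increment();                // transaction aborted: take the real lock
    }
}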
AVX-512 instructions
By James Reinders (Intel) | Posted 07/23/13 | 15 comments
Intel® Advanced Vector Extensions 512 (Intel® AVX-512): The latest Intel® Architecture Instruction Set Extensions Programming Reference includes the definition of the Intel® AVX-512 instructions. These instructions represent a significant leap to 512-bit SIMD s...
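To illustrate the width jump, a small hedged example (mine, not from the reference): adding two float arrays 16 elements at a time with AVX-512 foundation intrinsics. It assumes an AVX-512 capable target and a compiler flag such as -mavx512f.

// Sketch: 512-bit SIMD add of two float arrays using AVX-512F intrinsics.
#include <immintrin.h>

void add_arrays(const float* a, const float* b, float* c, int n) {
    int i = 0;
    for (; i + 16 <= n; i += 16) {                 // 16 floats per 512-bit vector
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        _mm512_storeu_ps(c + i, _mm512_add_ps(va, vb));
    }
    for (; i < n; ++i)                             // scalar remainder
        c[i] = a[i] + b[i];
}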
Figures/Tables for presentations from Xeon Phi Book
By James Reinders (Intel) | Posted 07/18/13 | 0 comments
The figures, tables, drawings, etc. used in our book can be downloaded from the book's website. We appreciate attribution, but there are no restrictions on use in educational material (presentations)! Suggested attribution: (c) 2013 Jim Jeffers and James Reinders, used with permission.
Go Parallel
By Dmitry Vyukov | Posted 06/18/13 | 20 comments
This is the first post in a series of posts about parallel programming with the Go language. What is Go, you may ask? Go is a language with the cutest mascot ever. As you may see, it also supports parallel programming as well as concurrent programming. I am sure you are already excited by the langu...
Locking CPU cache lines for a thread (L1)
By Younis A. | 14 replies
Hi, I'm working on securing access to the L1 cache by locking it line by line. Is there any way to do this? For example, with two threads accessing the L1, each cache line would be locked for a certain time to the thread that accessed it. Regards, Younis
Responsive OpenMP Threads in Hybrid Parallel Environment
By Don K. | 1 reply
I have a Fortran code that runs both MPI and OpenMP. I have done some profiling of the code on an 8-core Windows laptop, varying the number of MPI tasks vs. OpenMP threads, and have some understanding of where performance bottlenecks for each parallel method might surface. The problem I am having is when I port over to a Linux cluster with several 8-core nodes. Specifically, my OpenMP thread parallelism performance is very poor. Running 8 MPI tasks per node is significantly faster than 8 OpenMP threads per node (1 MPI task), but even a run with 2 OpenMP threads + 4 MPI tasks was very slow, more so than I could solely attribute to a thread starvation issue. I saw a few related posts in this area and am hoping for further insight and recommendations on this issue. What I have tried so far ... 1.  setenv OMP_WAIT_POLICY active      ## seems to make sense 2.  setenv KMP_BLOCKTIME 1          ## this is counter to what I have read, but when I set this to a large number (2500...
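One thing worth ruling out in cases like this is the MPI launcher pinning all of a rank's OpenMP threads to a single core. A minimal hybrid check (illustrative, not the poster's code) prints where each thread of each rank actually runs; with Intel MPI, settings such as I_MPI_PIN_DOMAIN=omp are commonly used so each rank gets a whole domain of cores for its threads.

// Sketch: verify MPI-rank/OpenMP-thread placement on a Linux cluster node.
#include <mpi.h>
#include <omp.h>
#include <sched.h>    // sched_getcpu (Linux-specific)
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        printf("rank %d: thread %d of %d on cpu %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads(), sched_getcpu());
    }

    MPI_Finalize();
    return 0;
}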
Optimizing cilk with ternary conditional
By Fabio G. | 3 replies
What is the best way to optimize the loop cilk_for(i=0;i<n;i++){ x[i]=x[i]<0?0:x[i]; } or something like that? Thanks, Fabio
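One possible rewrite (illustrative, assuming x is an array of float): the ternary is just a clamp to zero, so it can be expressed as a branch-free max inside cilk_for, or with Cilk Plus array notation and left to the vectorizer.

// Sketch: clamp negative elements to zero.
#include <cilk/cilk.h>
#include <algorithm>

void clamp_nonnegative(float* x, int n) {
    cilk_for (int i = 0; i < n; ++i)
        x[i] = std::max(x[i], 0.0f);     // same effect as x[i]<0 ? 0 : x[i]
}

// Equivalent Cilk Plus array-notation form (Intel compiler):
//   x[0:n] = x[0:n] < 0 ? 0 : x[0:n];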
Optimizing reduce_by_key implementation using TBB
By Shruti R. | 0 replies
Hello everyone, I'm quite new to TBB and have been trying to optimize a reduce_by_key implementation using TBB constructs. However, serial STL code is always outperforming the TBB code! It would be helpful to get an idea of how reduce_by_key can be implemented using tbb::parallel_scan. Any help would be much appreciated. Thanks.
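For reference, the Body interface that tbb::parallel_scan expects looks roughly like the sketch below (an inclusive prefix sum; the names are mine, not from the poster's code). reduce_by_key is essentially a segmented version of this: the running state would carry the current key alongside the sum and reset whenever the key changes.

// Sketch: inclusive prefix sum with tbb::parallel_scan (classic Body interface).
#include <tbb/blocked_range.h>
#include <tbb/parallel_scan.h>
#include <vector>

struct PrefixSum {
    const std::vector<int>& in;
    std::vector<int>& out;
    int sum;

    PrefixSum(const std::vector<int>& i, std::vector<int>& o) : in(i), out(o), sum(0) {}
    PrefixSum(PrefixSum& other, tbb::split) : in(other.in), out(other.out), sum(0) {}

    template <typename Tag>
    void operator()(const tbb::blocked_range<size_t>& r, Tag) {
        int s = sum;
        for (size_t i = r.begin(); i != r.end(); ++i) {
            s += in[i];
            if (Tag::is_final_scan())
                out[i] = s;              // results are written only on the final pass
        }
        sum = s;
    }
    void reverse_join(PrefixSum& left) { sum = left.sum + sum; }
    void assign(PrefixSum& other)      { sum = other.sum; }
};

void inclusive_scan(const std::vector<int>& in, std::vector<int>& out) {
    PrefixSum body(in, out);
    tbb::parallel_scan(tbb::blocked_range<size_t>(0, in.size()), body);
}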
reading a shared variable
By VIKRANT G. | 4 replies
Hello everyone, I am relatively new to parallel programming and have the following doubt: is reading a shared variable (that is not modified by any thread) without using locks a good practice? Thanks for the help in advance.
Weird OpenMP bug
By Cheng C. | 1 reply
Dear all, I want to combine OpenMP with the RSA_public_encrypt and RSA_private_decrypt routines. However, I have been confused by a weird bug for a few days. In the attached program, if I generate 2 threads for parallel encryption and decryption, everything works well. If I generate 3 or more threads, the RSA_public_encrypt routine works fine and all strings are successfully encrypted (encrypt_len=256). However, the RSA_private_decrypt routine goes wrong: only one thread works properly, and all the other threads fail to decrypt some of the strings (decrypt_len=-1, rsa_eay_private_decrypt padding check failed). With 1000 strings and 4 threads, the total number of strings that fail to decrypt is around 710 (sometimes as low as around 200). As expected, if I use 4 threads for parallel RSA_public_encrypt and one thread for RSA_private_decrypt, nothing goes wrong. It would be great if you could give some ideas. Thanks very much. #include <openssl/rsa.h> #include <...
performance loss
By Bo W. | 8 replies
Hi, some interesting performance loss showed up in my measurements. I have a system with two sockets; each socket holds an E5-2680 processor. Each processor has 8 cores with Hyper-Threading (Hyper-Threading was ignored). On this system, I started a program 16 times at the same time, each instance pinned to a different core. At first, I set all cores to 2.7 GHz and saw: Program 0 runtime 7.7 s; Program 8 runtime 7.63 s. Then I set the cores on the second socket to 1.2 GHz and saw: Program 0 runtime 12.18 s; Program 8 runtime 15.73 s. Program 8 ran slower, which is clear, because core 8 had a lower frequency. But why was program 0 also slower? Its frequency wasn't touched. Regards, Bo
