Intel® Developer Zone:
Performance

Highlights

Just published! Intel® Xeon Phi™ Coprocessor High Performance Programming 
Learn the essentials of programming for this new architecture and new products. New!
Intel® System Studio
Intel® System Studio is a comprehensive, integrated software development tool suite that can accelerate time to market, strengthen system reliability, and boost power efficiency and performance. New!
In case you missed it - 2-day Live Webinar Playback
Introduction to High Performance Application Development for Intel® Xeon & Intel® Xeon Phi™ Coprocessors.
Structured Parallel Programming
Authors Michael McCool, Arch D. Robison, and James Reinders use an approach based on structured patterns, which should make the subject accessible to every software developer.

Deliver your best application performance for your customers through parallel programming with the help of Intel’s innovative resources.

Development Resources


Development Tools

 

Intel® Parallel Studio XE ›

Bringing simplified, end-to-end parallelism to Microsoft Visual Studio* C/C++ developers, Intel® Parallel Studio XE provides advanced tools to optimize client applications for multicore and many-core processors.

Intel® Software Development Products

Explore all tools that help you optimize for Intel architecture. Select tools are available for a free 30-day evaluation period.

Tools Knowledge Base

Find guides and support information for Intel tools.

Abstraction
Posted 05/06/2008 | 0 comments
Abstraction can have several meanings depending on the context. In software, it often means combining a set of small operations or data items and giving them a name. For example, control abstraction takes a group of operations, combines them into a procedure, and gives the procedure a name. As ...
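As a quick illustration of the idea in this excerpt (a minimal sketch, not code from the article): control abstraction combines a few small operations into one named procedure, so callers see only the name. The function normalize below is a hypothetical example.

    // Control abstraction: three small steps combined under one name.
    #include <vector>
    #include <numeric>

    // Callers only see "normalize"; the individual steps stay hidden.
    void normalize(std::vector<double>& v) {
        double sum = std::accumulate(v.begin(), v.end(), 0.0); // step 1: total
        if (sum == 0.0) return;                                // step 2: guard
        for (double& x : v) x /= sum;                          // step 3: scale
    }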
Abstract data type
Posted 05/05/2008 | 0 comments
An abstract data type (ADT) is a data type defined by its set of allowed values and the available operations on those values. The values and operations are defined independently of a particular representation of the values or implementation of the operations. In a programming language that dire...
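A minimal C++ sketch of the idea (not code from the article): the hypothetical IntStack type below is defined only by its operations; the std::vector representation is hidden and could be replaced without affecting callers.

    // An abstract data type: values and operations, representation hidden.
    #include <vector>
    #include <stdexcept>

    class IntStack {
    public:
        void push(int value) { data_.push_back(value); }
        void pop() {
            if (data_.empty()) throw std::underflow_error("pop on empty stack");
            data_.pop_back();
        }
        int top() const {
            if (data_.empty()) throw std::underflow_error("top on empty stack");
            return data_.back();
        }
        bool empty() const { return data_.empty(); }
    private:
        std::vector<int> data_;    // hidden representation, free to change
    };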
SOA? ESB? What is all this?
By Mahesh Bhat (Intel) | Posted 05/02/2008 | 10 comments
Lots of nice articles have been published on the net on both Service Oriented Architecture (SOA) and Enterprise Service Bus (ESB). This topic has been discussed quite heavily for the last few years, but it started gaining weight as ESBs became more and more mature. To start this series, I am pl...
Intel® AVX: New Frontiers in Performance Improvements and Energy Efficiency
By Mark Buxton (Intel) | Posted 03/31/2008 | 1 comment
Download PDF: Intel® AVX: New Frontiers in Performance Improvements and Energy Efficiency [PDF 72KB]. Introduction: As the need for more computing performance continues to grow across industry segments, Intel continues to lead in innovation and the delivery of greater compute capacity to support the...
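The article's own examples are in the linked PDF; as general background only, here is a minimal Intel® AVX intrinsics sketch that adds two float arrays eight lanes at a time. It assumes n is a multiple of 8 and an AVX-enabled build (e.g. -mavx or /arch:AVX).

    // Minimal AVX sketch (not from the article): 256-bit adds via <immintrin.h>.
    #include <immintrin.h>

    void add_arrays(const float* a, const float* b, float* c, int n) {
        for (int i = 0; i < n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);   // load 8 floats (unaligned OK)
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(c + i, _mm256_add_ps(va, vb));  // c = a + b, 8 lanes
        }
    }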
Subscribe to Intel Developer Zone Articles
Something Serial This Way Comes
By Clay Breshears (Intel) | Posted on 07/17/08 | 2 comments
So you've slaved over the parallelization of your application and you've gotten some measure of speedup.  Is it enough?  If so, skip to the next blog post.  Or, maybe you can give some further SF reading suggestions for Aharon Robbins's daughter. For those of you left reading this paragraph, let ...
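For background on the "is it enough?" question (not taken from the post itself), Amdahl's law bounds the achievable speedup by the serial fraction s of the work: speedup <= 1 / (s + (1 - s) / p) on p processors. A tiny sketch:

    // Amdahl's law (general background, not code from the post).
    #include <cstdio>

    double amdahl_speedup(double serial_fraction, int processors) {
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors);
    }

    int main() {
        // Example: 10% serial work caps an 8-core speedup at about 4.7x.
        std::printf("%.2f\n", amdahl_speedup(0.10, 8));
    }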
How we get improved performance on a single core - Part 3
By Stefanus Du Toit (Intel) | Posted on 07/15/08 | 0 comments
This post was originally published on blogs.rapidmind.com. RapidMind was acquired by Intel Corporation in August of 2009, and the RapidMind Multi-core Platform will merge with Intel Ct technology. Before joining Intel as part of the acquisition, Stefanus was a co-founder of RapidMind. This is the...
Assessing the accelerator buzz: Vectorization of Monte Carlo algorithms
By Michael Stoner (Intel) | Posted on 07/15/08 | 0 comments
Now we’ll take a look at optimizing something more interesting and complex.  Since we can’t show much of the customer source we work on, we’ll look at some public domain code from the internet, specifically this Box Muller random number transformation from http://www.taygeta.com/random/gaussian.h...
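For readers who don't follow the link, here is a scalar reference version of the Box-Muller transform (a sketch, not the taygeta.com code the post optimizes); the vectorization discussed in the post would batch many such pairs per iteration.

    // Scalar Box-Muller sketch: two uniform samples in (0,1] become two
    // independent standard normal deviates.
    #include <cmath>

    void box_muller(double u1, double u2, double& z0, double& z1) {
        const double two_pi = 6.283185307179586;
        double r = std::sqrt(-2.0 * std::log(u1));   // radius from first uniform
        z0 = r * std::cos(two_pi * u2);              // first normal deviate
        z1 = r * std::sin(two_pi * u2);              // second normal deviate
    }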
Intel® Concurrent Collections Grain Size
By Steven Lang (Intel) | Posted on 07/14/08 | 4 comments
An important consideration when using CnC is grain size. You need to make sure you have enough serial code in each step to hide the scheduling overhead as well as the overhead of performing each get and put made by the step.  To illustrate this issue, consider the following simple algorithm for p...
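The post's CnC example is cut off above; as a rough analogy (TBB, not the CnC API), the same grain-size idea appears in the optional third argument of blocked_range, which sets the smallest chunk of iterations handed to a task so that useful work outweighs scheduling overhead.

    // Grain-size illustration (a TBB analogy, not the CnC code from the post).
    #include <tbb/parallel_for.h>
    #include <tbb/blocked_range.h>
    #include <vector>

    void scale(std::vector<float>& v, float factor) {
        const size_t grain = 1024;   // tune: big enough to amortize overhead
        tbb::parallel_for(tbb::blocked_range<size_t>(0, v.size(), grain),
            [&](const tbb::blocked_range<size_t>& r) {
                for (size_t i = r.begin(); i != r.end(); ++i)
                    v[i] *= factor;
            });
    }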
Subscribe to Intel Developer Zone Blogs
Locking CPU cache lines for a thread (L1)
By Younis A. | 14 replies
Hi, I'm working on securing access to the L1 cache by locking it line by line. Is there any way to do it? For example, when two threads access the L1 cache, the L1 lines they touch would be locked for a certain time to the thread that accessed them. Regards, Younis
Responsive OpenMP Theads in Hybrid Parallel Environment
By Don K. | 1 reply
I have a Fortran code that runs both MPI and OpenMP. I have done some profiling of the code on an 8-core Windows laptop, varying the number of MPI tasks vs. OpenMP threads, and have some understanding of where some performance bottlenecks for each parallel method might surface. The problem I am having is when I port over to a Linux cluster with several 8-core nodes. Specifically, my OpenMP thread parallelism performance is very poor. Running 8 MPI tasks per node is significantly faster than 8 OpenMP threads per node (1 MPI task), but even a run with 2 OpenMP threads + 4 MPI tasks was very slow, more so than I could solely attribute to a thread starvation issue. I saw a few related posts in this area and am hoping for further insight and recommendations on this issue. What I have tried so far ... 1. setenv OMP_WAIT_POLICY active ## seems to make sense 2. setenv KMP_BLOCKTIME 1 ## this is counter to what I have read but when I set this to a large number (2500...
Optimizing cilk with ternary conditional
By Fabio G. | 3 replies
What is the best way to optimize the loop cilk_for(i=0;i<n;i++){ x[i] = x[i]<0 ? 0 : x[i]; } or something like that? Thanks, Fabio
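One possible rewrite, offered as a sketch rather than a definitive answer to the thread: express the clamp with std::max so the loop body is branch-free in source form and easy for the compiler to vectorize. The function clamp_negatives is hypothetical and assumes a Cilk Plus-capable compiler.

    // Hypothetical rewrite of the loop from the question.
    #include <cilk/cilk.h>
    #include <algorithm>

    void clamp_negatives(int* x, int n) {
        cilk_for (int i = 0; i < n; ++i)
            x[i] = std::max(x[i], 0);   // same effect as x[i] < 0 ? 0 : x[i]
    }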
Optimizing reduce_by_key implementation using TBB
By Shruti R. | 0 replies
Hello everyone, I'm quite new to TBB and have been trying to optimize a reduce_by_key implementation using TBB constructs. However, the serial STL code is always outperforming the TBB code! It would be helpful to get an idea of how reduce_by_key can be improved using tbb::parallel_scan. Any help at the earliest would be much appreciated. Thanks.
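As general background (not the poster's code), reduce_by_key is commonly built on a prefix scan; the hypothetical PrefixSum body below shows the tbb::parallel_scan interface on a plain inclusive prefix sum, which could then be adapted to carry key-segment information.

    // Prefix-sum sketch with tbb::parallel_scan. The body may run twice over a
    // range: a pre-scan that only accumulates, then a final scan that writes.
    #include <tbb/parallel_scan.h>
    #include <tbb/blocked_range.h>
    #include <vector>

    class PrefixSum {
        const std::vector<int>& in_;
        std::vector<int>&       out_;
        int                     sum_;
    public:
        PrefixSum(const std::vector<int>& in, std::vector<int>& out)
            : in_(in), out_(out), sum_(0) {}
        PrefixSum(PrefixSum& other, tbb::split)
            : in_(other.in_), out_(other.out_), sum_(0) {}
        template <typename Tag>
        void operator()(const tbb::blocked_range<size_t>& r, Tag) {
            int running = sum_;
            for (size_t i = r.begin(); i != r.end(); ++i) {
                running += in_[i];
                if (Tag::is_final_scan())
                    out_[i] = running;               // inclusive prefix sum
            }
            sum_ = running;
        }
        void reverse_join(PrefixSum& left) { sum_ = left.sum_ + sum_; }
        void assign(PrefixSum& other)      { sum_ = other.sum_; }
    };

    // Usage: PrefixSum body(in, out);
    //        tbb::parallel_scan(tbb::blocked_range<size_t>(0, in.size()), body);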
reading a shared variable
By VIKRANT G. | 4 replies
Hello everyone, I am relatively new to parallel programming and have the following doubt: is reading a shared variable (one that is not modified by any thread) without using locks a good practice? Thanks for the help in advance.
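As general background rather than an authoritative answer to the thread: if the shared data is fully written before the threads are created and never modified afterwards, lock-free reads are safe, because thread creation establishes the required ordering. A minimal C++11 sketch (the names are illustrative):

    // Read-only shared data: written once before the threads start, then only read.
    #include <thread>
    #include <vector>
    #include <cstdio>

    int main() {
        const std::vector<int> table = {2, 3, 5, 7, 11};   // initialized up front
        std::vector<std::thread> workers;
        for (int t = 0; t < 4; ++t)
            workers.emplace_back([&table, t] {
                long sum = 0;
                for (int v : table) sum += v;              // concurrent reads, no lock
                std::printf("thread %d sum %ld\n", t, sum);
            });
        for (std::thread& w : workers) w.join();
    }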
Weird Openmp bug
By Cheng C. | 1 reply
Dear all, I want to combine OpenMP with the RSA_public_encrypt and RSA_private_decrypt routines. However, I have been confused by a weird bug for a few days. In the attached program, if I generate 2 threads for parallel encryption and decryption, everything works well. If I generate 3 or more threads, the RSA_public_encrypt routine works fine and all strings are successfully encrypted (encrypt_len=256). However, the RSA_private_decrypt routine goes wrong: only one thread works properly, and all the other threads fail to decrypt some of the strings (decrypt_len=-1, rsa_eay_private_decrypt padding check failed). With 1000 strings and 4 threads, the total number of strings that fail to decrypt is around 710 (sometimes as low as around 200). As expected, if I use 4 threads for parallel RSA_public_encrypt and one thread for RSA_private_decrypt, nothing goes wrong. It would be great if you could give some ideas. Thanks very much. #include <openssl/rsa.h> #include <...
performance loss
By Bo W. | 8 replies
Hi, some interesting performance loss happened with my measurements. I have a system with two sockets; each socket has an E5-2680 processor. Each processor has 8 cores with Hyper-Threading (Hyper-Threading was ignored). On this system, I started a program 16 times at the same time, each time pinning the program to different cores. At first, I set all cores to 2.7 GHz and saw: Program 0 runtime 7.7 s, Program 8 runtime 7.63 s. Then I set the cores on the second socket to 1.2 GHz and saw: Program 0 runtime 12.18 s, Program 8 runtime 15.73 s. Program 8 ran slower, which is clear, because core 8 had a lower frequency. But why was program 0 also slower? Its frequency wasn't touched. Regards, Bo
Subscribe to Forums
