Intel® Developer Zone:
Performance

Highlights

Just published! Intel® Xeon Phi™ Coprocessor High Performance Programming 
Learn the essentials of programming for this new architecture and these new products.
Intel® System Studio
Intel® System Studio is a comprehensive, integrated software development tool suite that can accelerate time to market, strengthen system reliability, and boost power efficiency and performance.
In case you missed it - 2-day Live Webinar Playback
Introduction to High Performance Application Development for Intel® Xeon & Intel® Xeon Phi™ Coprocessors.
Structured Parallel Programming
Authors Michael McCool, Arch D. Robison, and James Reinders use an approach based on structured patterns, which should make the subject accessible to every software developer.

Deliver your best application performance for your customers through parallel programming with the help of Intel’s innovative resources.

Development Resources


Development Tools

 

Intel® Parallel Studio XE

Bringing simplified, end-to-end parallelism to Microsoft Visual Studio* C/C++ developers, Intel® Parallel Studio XE provides advanced tools to optimize client applications for multi-core and many-core processors.

Intel® Software Development Products

Explore all tools that help you optimize for Intel architecture. Select tools are available for a free 30-day evaluation period.

Tools Knowledge Base

Find guides and support information for Intel tools.

Barrier
Posted 05/06/2008
A synchronization mechanism applied to groups of units of execution (UEs), with the property that no UE in the group may pass the barrier until all UEs in the group have reached the barrier. In other words, UEs arriving at the barrier suspend or block until all UEs have arrived. They may then a...
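A minimal illustration of the concept in C++ with OpenMP (not part of the original glossary entry; the directive itself is standard OpenMP):

    #include <cstdio>
    #include <omp.h>

    int main() {
        #pragma omp parallel num_threads(4)
        {
            int id = omp_get_thread_num();
            std::printf("thread %d before the barrier\n", id);
            // No thread passes this point until all four have arrived.
            #pragma omp barrier
            std::printf("thread %d after the barrier\n", id);
        }
        return 0;
    }

All of the "before" lines print, in some order, before any of the "after" lines can appear.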
Autoboxing
Posted 05/06/2008
A language feature introduced in Java 2 Platform 1.5 (Java 5) that provides automatic conversion of data of a primitive type to the corresponding wrapper type, for example from int to Integer.
Atomic
Posted 05/06/2008
Atomic has slightly different meanings in different contexts. An atomic operation at the hardware level is uninterruptible, for example load and store, or atomic test-and-set instructions. In the database world, an atomic operation (or transaction) is one that appears to execute completely or n...
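A small C++11 sketch of the hardware-level sense of the term (illustrative only): fetch_add is a single indivisible read-modify-write, so no increments are lost even with several threads updating the same counter.

    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        std::atomic<int> counter{0};
        std::vector<std::thread> workers;
        for (int t = 0; t < 4; ++t)
            workers.emplace_back([&counter] {
                for (int i = 0; i < 100000; ++i)
                    counter.fetch_add(1);   // one uninterruptible read-modify-write
            });
        for (auto& w : workers) w.join();
        // Always prints 400000; with a plain "int" counter, updates could be lost.
        std::printf("%d\n", counter.load());
        return 0;
    }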
AND parallelism
Posted 05/06/2008
This is one of the main techniques for introducing parallelism into a logic language. Consider the goal A :- B, C, D (read "A follows from B and C and D"), which means that goal A succeeds if and only if all three subgoals B, C, and D succeed. In AND parallelism, subgoals B, C, and D are evaluated...
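The same idea can be sketched outside a logic language. Here is a hypothetical C++ version in which the three subgoals are evaluated concurrently and the goal succeeds only if all of them do (subgoal_B/C/D are invented stand-ins):

    #include <cstdio>
    #include <future>

    // Hypothetical stand-ins for the subgoals B, C and D.
    bool subgoal_B() { return true; }
    bool subgoal_C() { return true; }
    bool subgoal_D() { return true; }

    int main() {
        // Evaluate B, C and D in parallel (AND parallelism).
        auto b = std::async(std::launch::async, subgoal_B);
        auto c = std::async(std::launch::async, subgoal_C);
        auto d = std::async(std::launch::async, subgoal_D);
        // Goal A succeeds if and only if every subgoal succeeds.
        bool A = b.get() && c.get() && d.get();
        std::printf("goal A %s\n", A ? "succeeds" : "fails");
        return 0;
    }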

GCDC'08: optimized games on Intel platform
By mike-huelskoetter | Posted 08/18/2008
Day one at GCDC '08 began with a bunch of interesting Intel demos, which we took a quick look at. One surprising thing we discovered is the ability of Intel Integrated Graphics to run complex games like Sacred 2 by Ascaron. OK, not in full detail, but at a lower resolution with at least 25 frames...
GCDC’08: Leipzig, we are coming!
By mike-huelskoetter | Posted 08/16/2008
This was really an exciting week: I blogged like hell about the different topics Intel will cover at Games Convention Developers Conference 2008: When will Intel presenters talk about their favourite subjects? How do you multi-thread your single-threaded games and 3D applications? What kind of Inte...
So how are P-states related to power management?
By Taylor Kidd (Intel) | Posted 08/15/2008
This relationship between P-states, voltage and frequency is all well and good, but how does it relate to power management? Power is, literally, energy usage per unit of time. To get the total energy usage, you integrate the instantaneous power over the interval you're interested in, i.e. get the ar...
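In symbols, the "area under the curve" the post is describing is just the integral of instantaneous power over the interval:

    E = \int_{t_0}^{t_1} P(t)\,\mathrm{d}t

where P(t) is the instantaneous power draw and [t_0, t_1] is the interval of interest.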
GCDC’08: Intel Tools – turbo for your gaming software
By mike-huelskoetter | Posted 08/15/2008
When Intel's Jérôme Muffat-Méridol and Basher Khan talk about "Intel Tools – Accelerate your PC Software Performance" during GCDC '08, they will present two things. First: which Intel tuning software you can use to optimize your gaming apps for Intel platforms. And second, Jérôme and Bashe...

algorithms
By lara h.
Hello, look at the following link... it's about parallel partition... http://www.redgenes.com/Lecture-Sorting.pdf I have tried to simulate this parallel partition method, but I don't think it will scale, because we have to do a merge, which is essentially an array-copy operation; but this array-copy operation will be expensive compared to the integer compare operation that you find inside the partition function, and it will still be expensive compared to a string compare operation that you find inside the partition function. So, since it's not scaling, I have abandoned the idea of implementing this parallel partition method in my parallel quicksort. I have also just read the following paper about parallel merging: http://www.economyinformatics.ase.ro/content/EN4/alecu.pdf And I have implemented this algorithm just to see what its performance is, and I have noticed that the serial algorithm is 8 times slower than the merge function that you find in the serial mergesort algorithm. So 8 times slow...
Complexity rank of cache locking
By Klara Z.
Welcome, I know the CPU cycles needed by locking vary, but I need some general picture of how heavy cache locking is. In particular, for a P6+ chip, what would the rank of the number of cycles consumed by LOCK BTS / INC / DEC be, if the operand is already-cached memory? By rank I mean: would it be more like 10, or rather 100?
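Exact numbers depend on the microarchitecture, but a rough measurement is easy to sketch. This is a hypothetical micro-benchmark, not authoritative data: __rdtsc is the GCC/Clang x86 intrinsic, and std::atomic's fetch_add compiles to a LOCK-prefixed read-modify-write.

    #include <atomic>
    #include <cstdio>
    #include <x86intrin.h>   // __rdtsc (GCC/Clang on x86)

    int main() {
        std::atomic<long> v{0};
        for (int i = 0; i < 1000; ++i) v.fetch_add(1);  // warm-up: line ends up cached
        const int N = 1000000;
        unsigned long long t0 = __rdtsc();
        for (int i = 0; i < N; ++i)
            v.fetch_add(1);            // LOCK-prefixed RMW on an already-cached line
        unsigned long long t1 = __rdtsc();
        std::printf("~%.1f cycles per locked operation\n", double(t1 - t0) / N);
        return 0;
    }

On an uncontended, already-cached line the answer is typically much closer to the "10" end than the "100" end; contention from other cores changes that dramatically.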
Why is Sequential Semantics on x86/x86_64 implemented through MOV [addr], reg + MFENCE rather than + SFENCE?
By AlexeyAB
Intel x86/x86_64 systems have 3 types of memory barriers: LFENCE, SFENCE and MFENCE. The question concerns their use. For Sequential Semantics (SC) it is sufficient to use MOV [addr], reg + MFENCE for all memory cells requiring SC semantics. However, you can also write code the other way around: MFENCE + MOV reg, [addr]. Apparently it was felt that, since the number of stores to memory is usually smaller than the number of loads from it, using the write barrier costs less in total. And on the basis that sequential stores to memory must be used, another optimization was made - [LOCK] XCHG, which is probably cheaper due to the fact that the "MFENCE inside XCHG" applies only to the cache line of the memory used in XCHG (a video where, at 0:28:20, it is said that MFENCE is more expensive than XCHG). GCC 4.8.2 uses this approach: LOAD (without fences) and STORE + MFENCE, as written here: http://www.cl.cam.ac.uk/~pes20/cpp/cpp0xmappings.html C/C++11 Operation x86 implementation Load Seq_Cst: MOV (from memory) ...
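A minimal C++11 illustration of the mapping the post describes (the generated instructions depend on the compiler; the cited table gives the usual x86-64 mapping):

    #include <atomic>

    std::atomic<int> x{0};

    void writer() {
        // Usually compiled on x86-64 as: MOV [x],1 ; MFENCE  (or a single XCHG).
        x.store(1, std::memory_order_seq_cst);
    }

    int reader() {
        // Usually a plain MOV from memory: loads need no fence under x86's TSO model.
        return x.load(std::memory_order_seq_cst);
    }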
OpenMP does not like fmax/fabs
By Jon U.
We have a code that is exhibiting greatly different runtimes between a Fortran and a C version. The problem has been isolated to one simple loop:

    #pragma omp parallel for reduction(max:dt)
    for (i = 1; i <= NR; i++) {
        for (j = 1; j <= NC; j++) {
            dt = fmax( fabs(t[i][j] - t_old[i][j]), dt );
            t_old[i][j] = t[i][j];
        }
    }

which runs about 12 times slower than the equivalent Fortran loop:

    !$omp parallel do reduction(max:dt)
    Do j=1,NC
        Do i=1,NR
            dt = max( abs(t(i,j) - told(i,j)), dt )
            Told(i,j) = T(i,j)
        Enddo
    Enddo
    !$omp end parallel do

Removing the dt assignment eliminates the disparity. Also, running these as serial codes shows no disparity, so the problem is not that the actual C implementation is just so bad. Also, eliminating just the reduction does not close the gap, so it is not the reduction operation itself. All of those tests lead us to the conclusion that there is some terrible interaction between OpenMP and fmax/fabs. Any...
Parallelizing my existing code in TBB please help me with this errors
By Girija B.
Hi, I am new to TBB and working on parallelizing my existing code. I could easily parallelize with OpenMP, but we need to compare the performance of our code in both TBB and OpenMP after parallelization, so I tried parallelizing the code, but I am getting errors which I am not able to resolve; please kindly help me with these errors. My code is as below, just using a parallel for loop and a lambda function. I have all the serial, OpenMP and TBB changes I have made; please do look at the code and tell me what else I should change for TBB to work.

    case openmp:
    {
        #pragma omp parallel for private (iter, currentDB, db)
        for (iter = 1; iter < numDB; iter++)
        {
            currentDB = this->associateDBs->GetAssociateDB(iter);
            db = this->dbGroup.getDatabase( currentDB );
            GeoRanking::GeoVerifierResultVector resLocal;
            db->recog( fg, InternalName, resLocal );
            LOG(info,omp_get_t...
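For reference, a self-contained sketch of the tbb::parallel_for + lambda pattern the poster is after; process_db is a hypothetical stand-in for fetching the database for an index and calling recog():

    #include <tbb/blocked_range.h>
    #include <tbb/parallel_for.h>
    #include <cstdio>

    // Hypothetical stand-in for "get the DB for this index and call recog()".
    static void process_db(int iter) { std::printf("db %d\n", iter); }

    int main() {
        const int numDB = 8;
        // TBB equivalent of the OpenMP loop above. Variables declared inside
        // the lambda body are per-iteration, so no "private" clause is needed.
        tbb::parallel_for(tbb::blocked_range<int>(1, numDB),
            [](const tbb::blocked_range<int>& r) {
                for (int iter = r.begin(); iter != r.end(); ++iter)
                    process_db(iter);
            });
        return 0;
    }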
Selecting custom victim in job scheduling on NUMA systems
By kadir.akbudak
I have a NUMA system. There is a thread for each core in the system. Threads that process similar data are assigned to the same node, to reuse the data in the node's large L3 cache. I want threads that are assigned to the same node to steal each other's jobs. If all jobs on a node have finished, those threads should steal jobs assigned to threads on other nodes. How can I implement this via OpenMP?
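OpenMP has no built-in NUMA-aware work stealing, so one common approach is to build it by hand: one job queue per node, drained locally first, then stolen from. A minimal sketch under assumed names (NODES and node_of are placeholders for real topology queries, e.g. via libnuma):

    #include <cstdio>
    #include <deque>
    #include <mutex>
    #include <omp.h>

    const int NODES = 2;                                 // assumed two-node system
    static int node_of(int tid) { return tid % NODES; }  // placeholder thread->node map

    struct Queue { std::mutex m; std::deque<int> jobs; };
    static Queue q[NODES];

    static bool pop(int node, int& job) {
        std::lock_guard<std::mutex> g(q[node].m);
        if (q[node].jobs.empty()) return false;
        job = q[node].jobs.front();
        q[node].jobs.pop_front();
        return true;
    }

    int main() {
        for (int j = 0; j < 100; ++j) q[j % NODES].jobs.push_back(j);
        #pragma omp parallel
        {
            int tid = omp_get_thread_num(), job;
            // Drain the home node's queue first, then steal from other nodes.
            for (int n = 0; n < NODES; ++n) {
                int node = (node_of(tid) + n) % NODES;
                while (pop(node, job))
                    std::printf("thread %d (node %d) ran job %d\n",
                                tid, node_of(tid), job);
            }
        }
        return 0;
    }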
cache topology
By Ilya Z.
Hi, I'm writing a CPUID program. I need help with getting the number of caches of each type - not the size, but the count. For example, I need to get info such as: L1 data cache = 2 x 64KB. CPUID will give me the size of each sort of cache, but not how many there are. On MSDN I've found that the GetLogicalProcessorInformationEx function might be helpful for getting that number, but I'm not sure I understood it right. I guess that a member of the CACHE_RELATIONSHIP structure, the GROUP_AFFINITY, is related to the quantity. Could someone give me some hints, or explain what this function exactly does, or tell me where else to find such info? Thanks in advance.
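On GCC/Clang you can also get this directly from CPUID leaf 4 (deterministic cache parameters, Intel-specific; field layout per the Intel SDM) instead of the Windows API. A sketch:

    #include <cpuid.h>    // __cpuid_count (GCC/Clang)
    #include <cstdio>

    int main() {
        // Walk CPUID leaf 4 subleaves until the cache-type field reads 0.
        for (unsigned sub = 0; ; ++sub) {
            unsigned eax, ebx, ecx, edx;
            __cpuid_count(4, sub, eax, ebx, ecx, edx);
            unsigned type = eax & 0x1F;                    // 1=data, 2=instr, 3=unified, 0=done
            if (type == 0) break;
            unsigned level   = (eax >> 5) & 0x7;
            unsigned sharing = ((eax >> 14) & 0xFFF) + 1;  // logical CPUs sharing this cache
            unsigned line    = (ebx & 0xFFF) + 1;
            unsigned parts   = ((ebx >> 12) & 0x3FF) + 1;
            unsigned ways    = ((ebx >> 22) & 0x3FF) + 1;
            unsigned sets    = ecx + 1;
            std::printf("L%u %s: %u KB, shared by %u logical CPU(s)\n",
                        level,
                        type == 1 ? "data" : type == 2 ? "instruction" : "unified",
                        ways * parts * line * sets / 1024, sharing);
        }
        return 0;
    }

Dividing the package's total logical CPU count by the "shared by" figure gives the number of instances of each cache, which is the count the post asks for.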
Poor openmp performance
By Ronglin J.
We have E5-2670 * 2, 16 cores in total. We get the following OpenMP performance (the code is also attached below):

    NUM THREADS:  1   Time: 1.53331303596497
    NUM THREADS:  2   Time: 0.793078899383545
    NUM THREADS:  4   Time: 0.475617885589600
    NUM THREADS:  8   Time: 0.478277921676636
    NUM THREADS: 14   Time: 0.479882955551147
    NUM THREADS: 16   Time: 0.499575138092041

OK, this scaling is very poor when the thread number is larger than 4. But if I uncomment lines 17 and 24, so that the initialization is also done by OpenMP, the results are different:

    NUM THREADS:  1   Time: 1.41038393974304
    NUM THREADS:  2   Time: 0.723496913909912
    NUM THREADS:  4   Time: 0.386450052261353
    NUM THREADS:  8   Time: 0.211269855499268
    NUM THREADS: 14   Time: 0.185739994049072
    NUM THREADS: 16   Time: 0.214301824569702

Why are the performances so different? Some information: ifort v...
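The usual explanation for exactly this pattern on a two-socket machine is first-touch page placement: a page is physically allocated on the NUMA node of the thread that first touches it, so serial initialization leaves all data on one socket and the other socket's cores stall on remote accesses. A hypothetical demonstration (array names and sizes invented for illustration):

    #include <cstdio>
    #include <omp.h>

    int main() {
        const long N = 1L << 26;
        double* a = new double[N];   // allocation alone does not touch the pages
        double* b = new double[N];

        // First touch in parallel, with the same static schedule as the compute
        // loop, so each page lands on the socket of the thread that will use it.
        #pragma omp parallel for schedule(static)
        for (long i = 0; i < N; ++i) { a[i] = 0.0; b[i] = double(i); }

        double t0 = omp_get_wtime(), sum = 0.0;
        #pragma omp parallel for schedule(static) reduction(+:sum)
        for (long i = 0; i < N; ++i) { a[i] = 2.0 * b[i]; sum += a[i]; }
        std::printf("time %.3f s (checksum %g)\n", omp_get_wtime() - t0, sum);

        delete[] a;
        delete[] b;
        return 0;
    }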

