Intel® Developer Zone:
Performance

Highlights

Just published! Intel® Xeon Phi™ Coprocessor High Performance Programming 
Learn the essentials of programming for this new architecture and new products. New!
Intel® System Studio
Intel® System Studio is a comprehensive, integrated software development tool suite that can accelerate time to market, strengthen system reliability, and boost power efficiency and performance. New!
In case you missed it - 2-day Live Webinar Playback
Introduction to High Performance Application Development for Intel® Xeon & Intel® Xeon Phi™ Coprocessors.
Structured Parallel Programming
Authors Michael McCool, Arch D. Robison, and James Reinders use an approach based on structured patterns that should make the subject accessible to every software developer.

Deliver your best application performance for your customers through parallel programming with the help of Intel’s innovative resources.

Development Resources


Development Tools

 

Intel® Parallel Studio XE ›

Bringing simplified, end-to-end parallelism to Microsoft Visual Studio* C/C++ developers, Intel® Parallel Studio XE provides advanced tools to optimize client applications for multi-core and many-core architectures.

Intel® Software Development Products

Explore all tools that help you optimize for Intel architecture. Select tools are available for a free 30-day evaluation period.

Tools Knowledge Base

Find guides and support information for Intel tools.

AND parallelism
Posted 05/06/2008
This is one of the main techniques for introducing parallelism into a logic language. Consider the goal A: B,C,D (read "A follows from B and C and D"), which means that goal A succeeds if and only if all three subgoals B and C and D succeed. In AND parallelism, subgoals B, C, and D are evaluated...
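As a rough illustration of the idea outside a logic language, a minimal C++ sketch (the subgoal functions are hypothetical stand-ins) that evaluates three independent subgoals concurrently and succeeds only if all of them do:

    #include <future>

    // Stand-in subgoals; in a real logic program these would be resolution calls.
    bool subgoal_B() { return true; }
    bool subgoal_C() { return true; }
    bool subgoal_D() { return true; }

    // Goal A :- B, C, D: A succeeds iff every subgoal succeeds.
    // AND parallelism evaluates the independent subgoals concurrently.
    bool goal_A() {
        auto b = std::async(std::launch::async, subgoal_B);
        auto c = std::async(std::launch::async, subgoal_C);
        auto d = std::async(std::launch::async, subgoal_D);
        return b.get() && c.get() && d.get();
    }

    int main() { return goal_A() ? 0 : 1; }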
Amdahl's law
Posted 05/06/2008
A law stating that (under certain assumptions) the maximum speedup that can be obtained by running an algorithm on a system of P processors is S(P) = T(1)/T(P) ≤ 1 / (γ + (1 − γ)/P), where γ is the serial fraction of the program, and T(n) is the total execution time running on n processors. See speedup and serial fraction.
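For a quick feel for the bound, a minimal sketch (function name and numbers are illustrative):

    #include <cstdio>

    // Amdahl bound: maximum speedup on P processors when a fraction
    // gamma of the work is inherently serial.
    double amdahl_speedup(double gamma, int P) {
        return 1.0 / (gamma + (1.0 - gamma) / P);
    }

    int main() {
        // Example: with a 5% serial fraction, 16 processors give at most ~9.1x,
        // and the speedup can never exceed 1/0.05 = 20x however many processors
        // are added.
        std::printf("P=16:  %.2f\n", amdahl_speedup(0.05, 16));   // ~9.14
        std::printf("P=inf: %.2f\n", 1.0 / 0.05);                 // 20.00
        return 0;
    }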
Address space
Posted 05/06/2008
The range of memory locations that a process or processor can access. Depending on context, this can refer to either physical or virtual memory.
Abstraction
Posted 05/06/2008
Abstraction can have several meanings depending on the context. In software, it often means combining a set of small operations or data items and giving them a name. For example, control abstraction takes a group of operations, combines them into a procedure, and gives the procedure a name. As ...
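A minimal illustration of control abstraction (the procedure and its steps are hypothetical): a few small operations are combined and given a single name:

    #include <numeric>
    #include <vector>

    // Control abstraction: several small operations (sum, guard, scale) are
    // combined into one procedure and referred to by its name.
    void normalize(std::vector<double>& v) {
        double sum = std::accumulate(v.begin(), v.end(), 0.0);
        if (sum == 0.0) return;
        for (double& x : v) x /= sum;
    }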

GCDC’08: Intel gets ready for Nehalem
By mike-huelskoetter, posted 08/14/2008
Three days ago Intel heralded a new processor era when it officially announced its next microarchitecture, known under the codename "Nehalem". The brand will be named Intel Core processor, with its first product being the Intel Core i7. Besides many new features like seven additional S...
Software Concurrency for undergrads? Panel discussion at IDF
By Michael Wrinn (Intel), posted 08/13/2008
The Intel Developer Forum, in San Francisco August 19-21, brings this year a series of talks and workshops of particular interest to the academic community: a chalk talk on research collaboration in parallel computing, a technical session on expressing parallelism, a threading self-paced lab - de...
Parallel Programming on Multi-Core News - Parallel Radio, Larrabee, Emulator & New Bloggers
By aaron-tersteeg (Intel), posted 08/13/2008
Parallel Programming Talk Radio: Clay Breshears and I did our second 15-minute Parallel Programming Talk radio program this morning at 8:00 AM PST. We had David Rich of Interactive Supercomputing on as a guest to talk about their parallel programming tools for engineers. I feel really good about how it...
Intel Software Guest Blogger Asaf Shelly - About Me
By Asaf Shelly, posted 08/13/2008
It is so easy to tell the difference between the office of a hardware developer and that of a software developer. When you visit a hardware engineer it takes forever to find a place for the coffee cup, because even the chairs have some box on them with wires coming out. When you visit a programmer you ca...


Subscribe to Intel Developer Zone Blogs
algorithms
By lara h.
Hello, look at the following link... it's about parallel partition... http://www.redgenes.com/Lecture-Sorting.pdf I have tried to simulate this parallel partition method, but I don't think it will scale, because we have to do a merge, which is essentially an array-copy operation. This array-copy operation will be expensive compared to the integer compare operation that you find inside the partition function, and it will still be expensive compared to a string compare operation inside the partition function. So, since it's not scaling, I have abandoned the idea of implementing this parallel partition method in my parallel quicksort. I have also just read the following paper about parallel merging: http://www.economyinformatics.ase.ro/content/EN4/alecu.pdf I have implemented this algorithm just to see what its performance is, and I have noticed that the serial algorithm is 8 times slower than the merge function that you find in the serial mergesort algorithm. So 8 times slow...
Complexity rank of cache locking
By Klara Z.
Welcome, I know the CPU cycles needed by locking vary, but I need some general picture of how heavy cache locking is. In particular, for a P6+ chip, what rank would the number of cycles consumed by LOCK BTS / INC / DEC be, if the operand is already cached memory? By rank I mean: would it be more like 10, or rather 100?
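One rough way to put a number on this is a minimal timing sketch, assuming GCC/Clang on x86/x86_64 with an uncontended, already-cached counter (real costs depend heavily on contention and the microarchitecture):

    #include <x86intrin.h>   // __rdtsc (GCC/Clang; MSVC uses <intrin.h>)
    #include <atomic>
    #include <cstdio>

    int main() {
        std::atomic<long> counter{0};   // fetch_add compiles to a LOCK-prefixed RMW
        const int iters = 1000000;

        unsigned long long start = __rdtsc();
        for (int i = 0; i < iters; ++i)
            counter.fetch_add(1, std::memory_order_relaxed);
        unsigned long long end = __rdtsc();

        // Very rough average: ignores loop overhead and frequency scaling, and a
        // contended cache line would cost far more than this uncontended case.
        std::printf("~%.1f cycles per locked increment\n",
                    double(end - start) / iters);
        return 0;
    }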
Why is Sequential Semantics on x86/x86_64 implemented with MOV [addr], reg + MFENCE instead of + SFENCE?
By AlexeyAB
Intel x86/x86_64 systems have three types of memory barriers: lfence, sfence and mfence. The question is about their use. For Sequential Consistency (SC) it is sufficient to use MOV [addr], reg + MFENCE for all memory cells requiring SC semantics. However, you can also write the code the other way around: MFENCE + MOV reg, [addr]. Apparently the reasoning is that if the number of stores to memory is usually smaller than the number of loads, then using the write barrier costs less in total. On the basis that stores to memory must be made sequential, another optimization is used - [LOCK] XCHG, which is probably cheaper because the "MFENCE inside XCHG" applies only to the cache line used by XCHG (in a video, at 0:28:20, it is said that MFENCE is more expensive than XCHG). GCC 4.8.2 uses this approach: LOAD (without fences) and STORE + MFENCE, as written here: http://www.cl.cam.ac.uk/~pes20/cpp/cpp0xmappings.html C/C++11 Operation / x86 implementation: Load Seq_Cst: MOV (from memory) ...
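For reference, a minimal sketch of the two store mappings discussed above, written against std::atomic (whether a given compiler emits XCHG or MOV + MFENCE for the seq_cst store depends on the compiler and version):

    #include <atomic>

    std::atomic<int> flag{0};

    // Mapping 1: let the compiler handle it. A seq_cst store is typically
    // emitted as XCHG (or as MOV + MFENCE, depending on compiler and version).
    void store_seq_cst(int v) {
        flag.store(v, std::memory_order_seq_cst);
    }

    // Mapping 2: spell out MOV [addr],reg + MFENCE - a relaxed store (plain MOV)
    // followed by a full fence (std::atomic_thread_fence(seq_cst) is MFENCE on x86).
    void store_mov_mfence(int v) {
        flag.store(v, std::memory_order_relaxed);
        std::atomic_thread_fence(std::memory_order_seq_cst);
    }

    // Under either mapping, a seq_cst load on x86 is just a plain MOV from memory.
    int load_seq_cst() {
        return flag.load(std::memory_order_seq_cst);
    }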
OpenMP does not like fmax/fabs
By Jon U.
We have a code that is exhibiting greatly different runtimes between a Fortran and C version. The problem has been isolated to one simple loop:

    #pragma omp parallel for reduction(max:dt)
    for(i = 1; i <= NR; i++){
        for(j = 1; j <= NC; j++){
            dt = fmax( fabs(t[i][j]-t_old[i][j]), dt);
            t_old[i][j] = t[i][j];
        }
    }

which runs about 12 times slower than the equivalent Fortran loop:

    !$omp parallel do reduction(max:dt)
    Do j=1,NC
        Do i=1,NR
            dt = max( abs(t(i,j) - told(i,j)), dt )
            Told(i,j) = T(i,j)
        Enddo
    Enddo
    !$omp end parallel do

Removing the dt assignment eliminates the disparity. Also, running these as serial codes shows no disparity, so the problem is not that the actual C implementation is just so bad. Also, eliminating just the reduction does not close the gap, so it is not the reduction operation itself. All of those tests lead us to the conclusion that there is some terrible interaction between OpenMP and fmax/fabs. Any...
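A minimal sketch of the kind of experiment that isolates the fmax/fabs calls, with array bounds and names mirroring the post (replace the library calls with inlined equivalents and compare timings; everything here is illustrative, not the poster's code):

    #define NR 1000
    #define NC 1000
    static double t[NR + 1][NC + 1], t_old[NR + 1][NC + 1];

    /* Same reduction as in the post, but with fabs/fmax replaced by inlined
       equivalents; if the gap closes, the call overhead is the suspect. */
    double max_delta_inlined(void) {
        double dt = 0.0;
        int i, j;
        #pragma omp parallel for private(j) reduction(max:dt)
        for (i = 1; i <= NR; i++) {
            for (j = 1; j <= NC; j++) {
                double d = t[i][j] - t_old[i][j];
                if (d < 0.0) d = -d;     /* inlined fabs */
                if (d > dt) dt = d;      /* inlined fmax */
                t_old[i][j] = t[i][j];
            }
        }
        return dt;
    }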
Parallelizing my existing code in TBB - please help me with these errors
By Girija B.
Hi, I am new to TBB and working on parallelizing my existing code. I could easily parallelize it with OpenMP, but we need to compare the performance of our code in both TBB and OpenMP after parallelization, so I tried parallelizing the code with TBB and I am getting errors that I am not able to resolve. Please help me with these errors. My code is below, just using a parallel for loop and a lambda function; I have all the serial, OpenMP and TBB changes I have made. Please look at the code and tell me what else I should change for TBB to work.

    case openmp:
    {
        #pragma omp parallel for private (iter, currentDB, db)
        for (iter = 1; iter < numDB; iter++)
        {
            currentDB = this->associateDBs->GetAssociateDB(iter);
            db = this->dbGroup.getDatabase( currentDB );
            GeoRanking::GeoVerifierResultVector  resLocal;
            db->recog( fg, InternalName, resLocal );
            LOG(info,omp_get_t...
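A minimal sketch of the general shape such a conversion takes with tbb::parallel_for and a lambda; the Database type and the work inside it are illustrative stand-ins, not the poster's code:

    #include <tbb/parallel_for.h>
    #include <vector>

    struct Database {
        void recog(int query, std::vector<int>& results) {
            results.push_back(query);   // stand-in for the real recognition work
        }
    };

    void process_all(std::vector<Database>& dbs, int query) {
        // Range [1, dbs.size()) mirrors "for (iter = 1; iter < numDB; ...)" in the
        // post; variables declared inside the lambda body are automatically
        // private to each iteration, replacing the OpenMP "private" clause.
        tbb::parallel_for(std::size_t(1), dbs.size(), [&](std::size_t iter) {
            std::vector<int> resLocal;          // per-iteration result buffer
            dbs[iter].recog(query, resLocal);
        });
    }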
Selecting custom victim in job scheduling on NUMA systems
By kadir.akbudak
I have a NUMA system. There is a thread for each core in the system. Threads that process similar data are assigned to the same node to reuse the data in the node's large L3 cache. I want threads that are assigned to the same node to steal each other's jobs first. If all jobs on a node have finished, these threads should steal jobs assigned to threads on other nodes. How can I implement this via OpenMP?
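OpenMP has no built-in victim selection, so one common shape is explicit per-node queues with node-local pops first and cross-node steals as a fallback. A minimal sketch, where the node count, the thread-to-node mapping, and the Job type are illustrative assumptions:

    #include <omp.h>
    #include <atomic>
    #include <cstdio>
    #include <deque>
    #include <vector>

    struct Job { int id; };

    const int NODES = 2;                               // assumed NUMA node count
    std::vector<std::deque<Job>> queues(NODES);        // one job queue per node
    std::vector<omp_lock_t> locks(NODES);
    std::atomic<int> done{0};

    bool try_pop(int node, Job& out) {
        bool ok = false;
        omp_set_lock(&locks[node]);
        if (!queues[node].empty()) {
            out = queues[node].front();
            queues[node].pop_front();
            ok = true;
        }
        omp_unset_lock(&locks[node]);
        return ok;
    }

    void worker(int my_node) {
        Job j;
        for (;;) {
            if (try_pop(my_node, j)) { ++done; continue; }   // own node first
            bool stolen = false;
            for (int n = 0; n < NODES && !stolen; ++n)        // then steal elsewhere
                if (n != my_node) stolen = try_pop(n, j);
            if (!stolen) break;                               // all queues empty
            ++done;
        }
    }

    int main() {
        for (auto& l : locks) omp_init_lock(&l);
        for (int i = 0; i < 100; ++i) queues[i % NODES].push_back({i});
        #pragma omp parallel
        worker(omp_get_thread_num() % NODES);                 // crude thread-to-node map
        for (auto& l : locks) omp_destroy_lock(&l);
        std::printf("jobs run: %d\n", done.load());
        return 0;
    }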
cache topology
By Ilya Z.
hi, I'm writing a CPUID program. I need help with getting the number of caches of each type - not the size, but the count. For example, I need to get info such as: L1 data cache = 2 x 64KB. CPUID will give me the size of each sort of cache, but not how many there are. On MSDN I've found that the GetLogicalProcessorInformationEx procedure might help me get that number, but I'm not sure I understood it right. I guess that the GROUP_AFFINITY member of the CACHE_RELATIONSHIP structure is related to the quantity. Could someone give me some hints, explain what this procedure does exactly, or tell me where else to find such info? Thanks in advance.
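A minimal sketch of counting caches per level on Windows with GetLogicalProcessorInformationEx, assuming each RelationCache entry describes one distinct cache (which holds on typical systems; everything here is illustrative):

    #include <windows.h>
    #include <cstdio>
    #include <vector>

    int main() {
        // First call with a null buffer just reports the required size.
        DWORD len = 0;
        GetLogicalProcessorInformationEx(RelationCache, nullptr, &len);
        std::vector<char> buf(len);
        auto* info = reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data());
        if (!GetLogicalProcessorInformationEx(RelationCache, info, &len))
            return 1;

        int count[4][4] = {};   // [level 1..3][type: Unified/Instruction/Data/Trace]
        for (char* p = buf.data(); p < buf.data() + len; ) {
            auto* e = reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(p);
            if (e->Relationship == RelationCache)
                ++count[e->Cache.Level][e->Cache.Type];
            p += e->Size;       // entries are variable-sized; advance by Size
        }
        for (int lvl = 1; lvl <= 3; ++lvl)
            std::printf("L%d: unified=%d  instr=%d  data=%d\n", lvl,
                        count[lvl][CacheUnified], count[lvl][CacheInstruction],
                        count[lvl][CacheData]);
        return 0;
    }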
Poor openmp performance
By Ronglin J.
We have two E5-2670s, 16 cores in total. We get the following OpenMP performance (the code is also attached below):

    NUM THREADS:  1   Time: 1.53331303596497
    NUM THREADS:  2   Time: 0.793078899383545
    NUM THREADS:  4   Time: 0.475617885589600
    NUM THREADS:  8   Time: 0.478277921676636
    NUM THREADS: 14   Time: 0.479882955551147
    NUM THREADS: 16   Time: 0.499575138092041

OK, this scaling is very poor when the thread number is larger than 4. But if I uncomment lines 17 and 24, so that the initialization is also done by OpenMP, the results are different:

    NUM THREADS:  1   Time: 1.41038393974304
    NUM THREADS:  2   Time: 0.723496913909912
    NUM THREADS:  4   Time: 0.386450052261353
    NUM THREADS:  8   Time: 0.211269855499268
    NUM THREADS: 14   Time: 0.185739994049072
    NUM THREADS: 16   Time: 0.214301824569702

Why are the performances so different? Some information: ifort v...
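The pattern described - scaling that improves dramatically once initialization is also parallel - is consistent with NUMA first-touch page placement: pages are physically allocated on the node of the thread that first writes them. A minimal sketch of parallel first-touch initialization (sizes and loop bodies are illustrative, not the poster's code):

    #include <omp.h>

    int main() {
        const long long n = 1LL << 26;
        // Plain new[] so the pages are not touched until the parallel loop below
        // (a zero-filling constructor would touch them serially on one node and
        // defeat first-touch placement).
        double* a = new double[n];
        double* b = new double[n];

        // Parallel first-touch initialization: each thread writes (and thereby
        // places on its own node) the same static chunk it computes on later.
        #pragma omp parallel for schedule(static)
        for (long long i = 0; i < n; ++i) { a[i] = 1.0; b[i] = 2.0; }

        // Compute loop with the same schedule, so each thread's data stays local.
        #pragma omp parallel for schedule(static)
        for (long long i = 0; i < n; ++i) a[i] += 0.5 * b[i];

        delete[] a; delete[] b;
        return 0;
    }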


Subscribe to Forums
