Intel® Developer Zone:


Hot off the press! Intel® Xeon Phi™ Coprocessor High Performance Programming
Learn the fundamentals of programming for this new architecture and these new products. New!
Intel® System Studio
Intel® System Studio is a comprehensive, integrated tool-suite solution for software development. It helps you shorten lead time and make systems more reliable, energy-efficient, and performant.
In case you missed it - a recording of the two-day live webinar
An introduction to developing high-performance applications for Intel® Xeon and Intel® Xeon Phi™ coprocessors.
Structured Parallel Programming
Authors Michael McCool, Arch D. Robison, and James Reinders make the subject accessible to every software developer through structured patterns.

Offer your customers the best possible applications through parallel programming, using Intel's innovative resources.




Intel® Parallel Studio

Intel® Parallel Studio delivers simplified, end-to-end parallelism for Microsoft Visual Studio* C/C++ developers, with sophisticated tools for optimizing client applications for multicore and manycore.

Intel® Software Development Products

Explore all the tools that can help you optimize for Intel architecture. Selected tools can be tried free of charge for 45 days.


Guides and support information for Intel tools.

Android* Application Optimization on Intel® Architecture Multi-core Processors
By YUMING L. (Intel), posted 12/12/2013
1. Introduction Android* 4.1 introduces an improvement that optimizes multi-threaded applications running on multi-core processors. The Android operating system can schedule threads to run on each CPU core. In addition, on Intel architecture (IA)-based devices, you have another way to imp…
Precision Memory Leak Detection Using the New On-Demand Leak Detection in Intel® Inspector XE
By christina-king-wojcik (Intel), posted 12/06/2013
Intel® Inspector XE now gives you the ability to set and reset memory baselines and ask for memory leak information from your program whenever you want it. You will learn how to skip analysis of sections of the code you are not interested in, how to choose whether memory growth or on-demand leak de…
Porting and Tuning of Lattice QCD & MPI-HMMER for Intel® Xeon® Processors & Intel® Xeon Phi™ Coprocessors
By Frances Roth (Intel), posted 12/06/2013
The Intel® Xeon Phi architecture from Intel Corporation features parallelism at the level of many x86-based cores, multiple threads per core, and vector processing units. Lattice Quantum Chromodynamics (LQCD) is of importance in studies of nuclear and high energy physics and MPI-HMMER is an open so…
Advanced Optimizations for Intel® MIC Architecture
By AmandaS (Intel), posted 11/25/2013
Compiler Methodology for Intel® MIC Architecture Advanced Optimizations Overview This chapter details some of the advanced compiler optimizations for performance on Intel® MIC Architecture, and most of these optimizations are also applicable to host applications. This chapter includes topics such as…


Subscribe to Intel Developer Zone blogs
Optimistic synchronization
By aminer100
Hello, I have come to an interesting subject... I was thinking more about my ParallelVarFiler and about database engines. I have told you before that since database engines are disk-bound and/or memory-bound, they are not scalable on multicore systems. But beyond that, I was asking myself: why use an optimistic synchronization mechanism like transactional memory? I think an optimistic synchronization mechanism like transactional memory can give better throughput and better speed. Proof of that? Take a look at my ParallelVarFiler, for example: since it can be disk-bound when you are automatically saving the data to a disk file, there is no need to use an RWLock, because the hard disk is not truly parallel. So a plain Lock will have the same performance as an RWLock in a disk-bound scenario, which is why I have used a FIFO fair Lock inside my ParallelVarFiler. But if the hard disk or memory can be truly parallel, an RWLock will be better,…
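The "FIFO fair Lock" the post relies on can be sketched in Python (the original is FreePascal/Delphi; this is a hypothetical re-implementation of the idea, not the author's code): each contending thread queues an event object and is granted the lock strictly in arrival order, so no thread can be starved.

```python
import threading
from collections import deque

class FifoLock:
    """A mutual-exclusion lock that grants ownership in strict FIFO order."""
    def __init__(self):
        self._mutex = threading.Lock()   # protects the internal state below
        self._waiters = deque()          # one event per queued thread, oldest first
        self._held = False

    def acquire(self):
        self._mutex.acquire()
        if not self._held and not self._waiters:
            self._held = True            # fast path: the lock was free
            self._mutex.release()
            return
        ev = threading.Event()
        self._waiters.append(ev)         # join the back of the queue
        self._mutex.release()
        ev.wait()                        # sleep until release() hands us the lock

    def release(self):
        with self._mutex:
            if self._waiters:
                self._waiters.popleft().set()  # hand off to the oldest waiter
            else:
                self._held = False
```

The design choice matches the post's argument: when every critical section ends in serialized disk I/O anyway, a plain FIFO lock costs no more than a reader-writer lock while still guaranteeing fairness.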
SemaCondvar version 1.15
By aminer100
Hello, SemaCondvar version 1.15. Author: Amine Moulay Ramdane. Click here to download the zip file: Zip (for FreePascal and Lazarus and Delphi 7 to 2010) Click here to download the zip file: Zip (for Delphi XE1 to XE4) Description: SemaCondvar and SemaMonitor are new and portable synchronization objects. SemaCondvar combines all the characteristics of a semaphore and a condition variable, and SemaMonitor combines all the characteristics of a semaphore and an eventcount. They use only an event object and a very fast, very efficient, and portable FIFO fair Lock, so they are fast and FIFO fair. You can now pass the SemaCondvar's InitialCount and MaximumCount to the constructor, like the Semaphore's InitialCount and MaximumCount. Like this: t:=TSemaMonitor.create(true,0,4); When you pass True as the first parameter of the constructor, the signal(s) will not be lost even if there are no waiting threads; if it's False, the signal(s) will …
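The post describes SemaCondvar only in prose. As a rough Python sketch of the same idea (the class name, parameters, and behavior are my guesses from the description, not the actual Delphi API): a monitor whose constructor decides whether signals sent while no thread is waiting are banked, up to a maximum count, instead of being lost.

```python
import threading

class SemaMonitorSketch:
    """Hybrid of a semaphore and a condition variable: wait() blocks for a
    signal; with keep_signals=True, signals sent while no thread is waiting
    are banked (up to maximum_count) instead of being lost."""
    def __init__(self, keep_signals, initial_count=0, maximum_count=1):
        self._cond = threading.Condition()
        self._keep = keep_signals
        self._count = initial_count   # banked signals, like a semaphore count
        self._max = maximum_count
        self._waiters = 0

    def wait(self, timeout=None):
        with self._cond:
            self._waiters += 1
            try:
                ok = self._cond.wait_for(lambda: self._count > 0, timeout)
                if ok:
                    self._count -= 1  # consume one signal
                return ok
            finally:
                self._waiters -= 1

    def signal(self):
        with self._cond:
            # deliver to a waiter, or bank the signal if allowed
            if self._waiters > 0 or (self._keep and self._count < self._max):
                self._count += 1
                self._cond.notify()
```

For example, `SemaMonitorSketch(True, 0, 4)` mirrors the post's `TSemaMonitor.create(true,0,4)`: a `signal()` sent before any `wait()` is remembered, so a later `wait()` returns immediately.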
Scalable Distributed Fair Lock 1.03
By aminer100
Hello, Scalable Distributed Fair Lock 1.03. Author: Amine Moulay Ramdane. Click here to download the zip file: Zip (for FreePascal and Lazarus and Delphi 7 to 2007) Description: A scalable distributed Lock that is as fast as the Windows critical section and is FIFO fair under contention, so I think it avoids starvation and reduces cache-coherence traffic. My scalable distributed Lock now uses a ticket mechanism, so it's FIFO fair under contention, but it is more efficient than the Ticket Spinlock because it is distributed: each thread spins locally on variables in the local cache of its own core, which efficiently minimizes cache-coherence traffic. I have not used a wait-free queue to implement my Scalable Distributed Lock, but you can use one so that it will be even more efficient. If you want to add a …
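The ticket mechanism the post refers to can be illustrated in Python (a generic ticket lock, not the author's distributed variant; a real hardware implementation spins on the counter, and the distributed MCS/CLH style the post describes has each thread spin on its own cache line instead):

```python
import threading

class TicketLock:
    """FIFO lock: each thread takes a ticket and waits until the
    'now serving' counter reaches its number."""
    def __init__(self):
        self._cond = threading.Condition()
        self._next_ticket = 0   # next ticket number to hand out
        self._serving = 0       # ticket currently allowed to hold the lock

    def acquire(self):
        with self._cond:
            my_ticket = self._next_ticket
            self._next_ticket += 1
            # a hardware version would spin here; we sleep to stay GIL-friendly
            self._cond.wait_for(lambda: self._serving == my_ticket)

    def release(self):
        with self._cond:
            self._serving += 1          # admit the next ticket holder
            self._cond.notify_all()     # wake waiters; only one predicate is true
```

Because tickets are issued in order, the lock is FIFO fair by construction; the coherence cost the post mentions comes from every waiter watching the same `_serving` counter, which the distributed design avoids.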
Scalable RWLock 1.14
By aminer100
Hello, Scalable RWLock 1.14. Author: Amine Moulay Ramdane. Click here to download the zip file: Zip (for FreePascal and Lazarus and Delphi 7 to 2010) Click here to download the zip file: Zip (for Delphi XE1 to XE4) Description: A fast, scalable, and lightweight Multiple-Readers-Exclusive-Writer Lock called LW_RWLock that is portable and works across processes and threads, and a fast and scalable Multiple-Readers-Exclusive-Writer Lock called RWLock that is also portable, does not spin-wait but uses an event object and my SemaMonitor, consumes less CPU than the lightweight version, now processes writers in FIFO order (which is also important), and works across threads. A Read/Write Lock is a performance improvement over a standard mutex for cases where reads outnumber writes. With a Read/Write Lock, multiple simultaneous read locks may be held, but write locks are exclusively held. The exclusive writing lock ensures t…
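As a minimal illustration of the Multiple-Readers-Exclusive-Writer idea (a generic sketch, not the LW_RWLock or RWLock implementation): readers share the lock, a writer holds it alone, and a waiting writer blocks new readers so writes are not starved.

```python
import threading

class RWLockSketch:
    """Many concurrent readers, one exclusive writer; waiting writers
    block new readers so writes cannot be starved by a stream of reads."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0          # readers currently inside
        self._writer = False       # a writer currently holds the lock
        self._writers_waiting = 0  # writers queued: blocks new readers

    def acquire_read(self):
        with self._cond:
            self._cond.wait_for(
                lambda: not self._writer and self._writers_waiting == 0)
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            self._writers_waiting += 1
            self._cond.wait_for(
                lambda: not self._writer and self._readers == 0)
            self._writers_waiting -= 1
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

This sketch gives writer preference only; the strict FIFO ordering among writers that the post advertises would additionally need a ticket queue like the one in the previous post.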
Parallel archiver 1.93
By aminer100
Hello, PArchiver 1.93 (stable version). Author: Amine Moulay Ramdane. Click here to download the zip file: Zip (for FreePascal and Lazarus and Delphi 7 to 2007) Parallel archiver was updated to version 1.93. It was working perfectly with FreePascal and Lazarus, but parallel LZMA was incompatible with Delphi, so I have corrected this; Parallel archiver is now compatible with FreePascal, Lazarus, and Delphi 7 to 2007, and it is now a stable version. Description: Parallel archiver using my Parallel LZO, Parallel LZ4, Parallel Zlib, Parallel Bzip, and Parallel LZMA compression algorithms. Supported features: - Opens and creates archives using my Parallel LZ4, Parallel LZO, Parallel Zlib, Parallel Bzip, or Parallel LZMA compression algorithms. - Wide range of parallel compression algorithms: Parallel LZ4, Parallel LZO, Parallel Zlib, Parallel Bzip, and Parallel LZMA with different compression levels. - Compiles into an exe - no dll/ocx required. - 64-bit support - let…
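The "Parallel Zlib" idea can be approximated in a few lines of Python (an illustration of chunk-parallel compression in general, not the archiver's actual format or code): split the input into fixed-size chunks and compress them concurrently, since each chunk is an independent stream.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 16  # 64 KiB blocks, compressed independently of each other

def parallel_compress(data: bytes, workers: int = 4) -> list[bytes]:
    """Compress fixed-size chunks of data concurrently.
    Each chunk is a standalone zlib stream, so chunks can also be
    decompressed (and hence extracted) independently."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.compress, chunks))

def parallel_decompress(blocks: list[bytes]) -> bytes:
    return b"".join(zlib.decompress(b) for b in blocks)
```

Threads work here because `zlib.compress` releases the GIL while compressing; the trade-off versus a single stream is a slightly worse compression ratio, since each chunk starts with an empty dictionary.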
Databases engines
By aminer100
Hello, I was thinking more about databases, and I have thought to implement a database engine that supports INSERTs, DELETEs, and SEARCHes, that supports indexes using the AND and OR operators, and that supports replication across computers - not master-to-slave replication but enterprise master-to-master replication. What will change in my database engine is that the indexes of the database will stay in memory; this will speed things up more, but it will use more memory. In terms of how much memory it takes, this kind of implementation will sit between the MySQL engine, which uses the hard disk much more, and an in-memory database. In my database engine all the tables and indexes will be stored in the same hard-disk file, and I will use the same method as my ParallelVarFiler, which means all the in-memory indexes will be saved automatically to the hard disk... and the enterprise master-to-master replication will use TCP/IP, and the database will …
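The "in-memory indexes saved automatically to disk" layout the post describes can be sketched as follows (a toy illustration under my own assumptions, not the described engine): lookups hit only RAM, while every mutation rewrites the on-disk copy atomically so a crash never leaves a half-written index.

```python
import json
import os
import tempfile
import threading

class PersistentIndex:
    """A dict-based index kept in memory and mirrored to a single file,
    so searches need no disk I/O but the index survives a restart."""
    def __init__(self, path):
        self._path = path
        self._lock = threading.Lock()
        self._index = {}
        if os.path.exists(path):
            with open(path) as f:
                self._index = json.load(f)  # rebuild the in-memory index

    def insert(self, key, offset):
        with self._lock:
            self._index[key] = offset
            self._flush()                   # keep the disk copy current

    def delete(self, key):
        with self._lock:
            self._index.pop(key, None)
            self._flush()

    def search(self, key):
        with self._lock:                    # pure in-memory lookup
            return self._index.get(key)

    def _flush(self):
        # write-then-rename: the old file stays intact until the new
        # one is complete, so a crash cannot corrupt the index
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self._path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(self._index, f)
        os.replace(tmp, self._path)
```

Flushing the whole index on every mutation is the simplest policy; a real engine would batch flushes or append to a log, which is presumably closer to what ParallelVarFiler's automatic saving does.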
ParallelVarFiler 1.25
By aminer100
Hello, I have updated ParallelVarFiler to version 1.25. I have carefully added {Try Except} in all the methods to catch exceptions and release memory, etc., so the user doesn't need to manage the exceptions from outside ParallelVarFiler, because it's inefficient to do it like that. ParallelVarFiler has thus become more professional and more stable, so if you need to use ParallelVarFiler I advise you to upgrade to version 1.25, which is more stable now. As you have noticed, you can save the parallel hashtable to the hard disk manually by using the SaveToStream(), SaveToString(), or SaveToFile() methods. But to call those methods when you are running multiple threads, it is mandatory that you pass a file name to the constructor; if you don't pass a file name to the constructor, that means you are using an in-memory parallel hashtable, and so you have to stop your threads before calling the SaveToStream(), SaveToString(), or SaveToFile() methods,…
About scalability...
By aminer100
Hello, I come to an interesting subject: I want to talk about the scalability on multicores of parallel hashtables and database engines. I am wondering how you can say that a parallel hashtable scales on multicores if the random key accesses are cache-unfriendly and the parallel hashtable is memory-bound. I think that since the parallel hashtable is memory-bound and the random accesses are cache-unfriendly, the parallel hashtable will not scale on multicores. When you are testing your parallel hashtable, you also have to test the cache misses on the data, because I think the data is part of the hashtable application. I mean, the parallel hashtable concept is highly interconnected with the way that the data is accessed randomly in a cache-unfriendly manner, and the hashtable concept is highly interconnected with the way the hashtable data is accessed in a memory-bound manner. This is why I say that parallel hashtables are not scalable on multicores; it's the …


Subscribe to forums