parallel computing

Abaqus/Standard Performance Case Study on Intel® Xeon® E5-2600 v3 Product Family


The whole point of simulation is to model the behavior of a design, and of potential design changes, under various conditions to determine whether we get the expected response. Simulating in software is far cheaper than building hardware, running a physical test, and modifying the hardware model for each iteration.

    Scope Oriented Programming

    There is a long-running discussion about the advantages of procedural programming versus those of object-oriented programming. In previous posts, Flaws of Object Oriented Modeling and Flaws of Object Oriented Modeling Continue, I tried to show that although OOP is newer, it is not superior.

    Intel® Xeon Phi optimizations in Intel MKL

    A number of components of Intel® MKL 11.0.1 and higher are tuned for the Intel® Xeon Phi architecture.
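
    To give a concrete feel for how that tuning is reached from user code, here is a minimal C sketch calling MKL's cblas_dgemm, the level-3 BLAS routine that benefits most from this kind of work. With MKL's Automatic Offload mode (enabled by setting MKL_MIC_ENABLE=1 in the environment), a sufficiently large call like this can run on a Xeon Phi coprocessor with no source changes; the matrix size below is an arbitrary illustrative value.

        /* dgemm_offload.c: C = alpha*A*B + beta*C via Intel MKL.
         * Build (illustrative): icc dgemm_offload.c -mkl
         * Set MKL_MIC_ENABLE=1 before running to allow MKL's Automatic
         * Offload to move large enough DGEMMs to a Xeon Phi coprocessor. */
        #include <stdio.h>
        #include <mkl.h>

        int main(void) {
            const MKL_INT n = 2048;  /* illustrative size; offload pays off for large n */
            double *A = (double *)mkl_malloc((size_t)n * n * sizeof(double), 64);
            double *B = (double *)mkl_malloc((size_t)n * n * sizeof(double), 64);
            double *C = (double *)mkl_malloc((size_t)n * n * sizeof(double), 64);
            if (!A || !B || !C) { fprintf(stderr, "allocation failed\n"); return 1; }

            for (MKL_INT i = 0; i < n * n; ++i) { A[i] = 1.0; B[i] = 2.0; C[i] = 0.0; }

            /* Row-major, no transposition: C = 1.0*A*B + 0.0*C */
            cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                        n, n, n, 1.0, A, n, B, n, 0.0, C, n);

            printf("C[0] = %.1f (expected %.1f)\n", C[0], 2.0 * n);
            mkl_free(A); mkl_free(B); mkl_free(C);
            return 0;
        }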

    From Serial Code to Parallel in Five Steps with Intel® Advisor XE

    If you have been developing multithreaded applications for a while, you have probably had to parallelize existing serial code. Or, conversely, you are new to parallel programming and face the tasks of optimizing a project and improving its scalability, problems that can likewise be solved by parallelizing individual parts of the program.

    The new Intel® Advisor XE tool will help you parallelize your application with a minimum of time and effort.
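
    The heart of that workflow is marking candidate parallel sites and tasks with Advisor's source annotations so the tool can model the proposed parallelism before you commit to it. A minimal C sketch, assuming the advisor-annotate.h header and the ANNOTATE_SITE_BEGIN / ANNOTATE_ITERATION_TASK / ANNOTATE_SITE_END macros shipped with Advisor XE; process_element() is a hypothetical stand-in for real per-element work:

        /* advisor_sketch.c: annotating a candidate parallel loop for Advisor XE.
         * The annotations do not change program behavior; Advisor's Suitability
         * and Correctness analyses use them to model the proposed design. */
        #include <stdio.h>
        #include <advisor-annotate.h>

        #define N 1000000
        static double a[N], b[N];

        /* Hypothetical per-element computation. */
        static double process_element(double x) { return x * x + 1.0; }

        int main(void) {
            ANNOTATE_SITE_BEGIN(candidate_site);       /* proposed parallel region */
            for (int i = 0; i < N; ++i) {
                ANNOTATE_ITERATION_TASK(per_element);  /* each iteration = one task */
                b[i] = process_element(a[i]);
            }
            ANNOTATE_SITE_END();
            printf("b[1] = %f\n", b[1]);
            return 0;
        }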

    Introduction to Parallel Programming with Java

    Develop programs that take advantage of multi-core platforms by applying fundamental concepts of parallel programming.

    After completing this course, you will be able to:

    • Recognize opportunities for parallel computing
    • Use basic implementations for domain and task parallelism
    • Ensure correctness by identifying and resolving race conditions and deadlocks (see the sketch after this list)
    • Improve performance through selective code modification and load balancing
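
    The course itself uses Java, but the correctness hazard named in the third bullet is language-independent. As a rough C/OpenMP sketch (chosen to match the other examples on this page), here is a data race on a shared counter and the reduction that fixes it:

        /* race_fix.c: a data race and one standard fix.
         * Build (illustrative): cc -fopenmp race_fix.c */
        #include <stdio.h>

        #define N 1000000

        int main(void) {
            long racy = 0, safe = 0;

            /* BUG: unsynchronized read-modify-write on 'racy'; the final
             * value is nondeterministic because thread updates interleave. */
            #pragma omp parallel for
            for (int i = 0; i < N; ++i)
                racy++;

            /* FIX: reduction gives each thread a private copy, combined
             * once at the end, so no thread touches shared state mid-loop. */
            #pragma omp parallel for reduction(+:safe)
            for (int i = 0; i < N; ++i)
                safe++;

            printf("racy = %ld (often < %d), safe = %ld\n", racy, N, safe);
            return 0;
        }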

    Introduction to Parallel Programming video lecture series – Part 09 “Implementing a Task Decomposition”

    The lecture given here is the ninth part in the “Introduction to Parallel Programming” video series. This part describes how to design and implement a task decomposition solution, using the 8 Queens problem as an illustrative example. Multiple approaches are presented, with the pros and cons of each described. After an approach is chosen, code modifications using OpenMP are presented. Potential data races on a shared stack data structure holding board configurations (the tasks to be processed) are identified, and a solution is found and implemented.
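
    The lecture's own code is not reproduced here, but the shape of the solution can be sketched. In the hedged C/OpenMP example below, each partial board becomes an OpenMP task, and pushes onto a shared stack of board configurations are guarded by a named critical section, the kind of fix the lecture arrives at for the stack's data race. The Board struct and stack bookkeeping are simplified stand-ins, not the lecture's actual data structures.

        /* queens_tasks.c: task decomposition for the 8 Queens problem.
         * Build (illustrative): cc -fopenmp queens_tasks.c */
        #include <stdio.h>
        #include <stdlib.h>

        #define N 8
        typedef struct { int col[N]; int placed; } Board; /* col[r] = queen's column in row r */

        static Board stack_[4096];   /* shared record of generated partial boards */
        static int top_ = 0;
        static int solutions = 0;

        static void push(const Board *b) {
            #pragma omp critical(stack_lock)   /* serialize shared-stack updates */
            if (top_ < (int)(sizeof stack_ / sizeof stack_[0]))
                stack_[top_++] = *b;
        }

        static int safe_square(const Board *b, int row, int c) {
            for (int r = 0; r < row; ++r)      /* same column or diagonal? */
                if (b->col[r] == c || abs(b->col[r] - c) == row - r)
                    return 0;
            return 1;
        }

        static void expand(Board b) {
            if (b.placed == N) {               /* all 8 queens placed */
                #pragma omp atomic
                solutions++;
                return;
            }
            for (int c = 0; c < N; ++c) {
                if (safe_square(&b, b.placed, c)) {
                    Board child = b;
                    child.col[child.placed++] = c;
                    push(&child);              /* record the new partial board */
                    #pragma omp task firstprivate(child)
                    expand(child);             /* explore it as an independent task */
                }
            }
        }

        int main(void) {
            Board empty = { {0}, 0 };
            #pragma omp parallel
            {
                #pragma omp single             /* one thread seeds the task tree */
                expand(empty);
            }                                  /* implicit barrier drains all tasks */
            printf("solutions = %d (8 Queens has 92)\n", solutions);
            return 0;
        }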

    Introduction to Parallel Programming video lecture series – Part 08 “OpenMP for Task Decomposition”

    The lecture given here is the eighth part in the “Introduction to Parallel Programming” video series. This part describes how the OpenMP task pragma works and how it differs from the worksharing pragmas covered earlier. A small linked-list processing example illustrates how independent operations within a while-loop can be parallelized. Because recursive calls that are independent of one another can execute in parallel, the OpenMP task construct is then used to parallelize the computation of a desired member of the Fibonacci sequence.
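
    The Fibonacci example described above is the classic use of the task construct; a minimal C sketch follows. A production version would add a cutoff below which fib() recurses serially, since spawning a task per call quickly costs more than the work itself:

        /* fib_tasks.c: parallel Fibonacci with the OpenMP task construct.
         * The two recursive calls are independent, so each becomes a task.
         * Build (illustrative): cc -fopenmp fib_tasks.c */
        #include <stdio.h>

        static long fib(int n) {
            long x, y;
            if (n < 2) return n;           /* base case runs serially */
            #pragma omp task shared(x)     /* child task computes fib(n-1) */
            x = fib(n - 1);
            #pragma omp task shared(y)     /* child task computes fib(n-2) */
            y = fib(n - 2);
            #pragma omp taskwait           /* join both children before combining */
            return x + y;
        }

        int main(void) {
            long result = 0;
            #pragma omp parallel
            {
                #pragma omp single         /* one thread spawns the root tasks */
                result = fib(30);
            }
            printf("fib(30) = %ld\n", result);
            return 0;
        }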
