Article

Combining Linux* Message Passing and Threading in High Performance Computing

An article addressing thread and task parallelism. It can be used to guide framework and methodology choices. Written by Andrew Binstock, Principal Analyst at Pacific Data Works LLC and lead author of "Practical Algorithms for Programmers."
Last updated on 06/07/2019 - 16:22
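
The article's topic, combining message passing across nodes with threading within a node, is commonly illustrated by a hybrid MPI plus OpenMP program. The sketch below is not taken from the article; it is a minimal, hypothetical example assuming MPI and an OpenMP-capable C compiler are available (file names and compile lines are illustrative). Each rank sums its own chunk of a dot product with a threaded loop, then the per-rank results are combined with MPI_Reduce.

    /* Hypothetical sketch (not from the article): hybrid message passing +
     * threading. Compile e.g.: mpicc -fopenmp hybrid_dot.c -o hybrid_dot */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;
        const int chunk = 1 << 20;          /* elements owned by each rank */
        double local = 0.0, global = 0.0;

        /* Ask for FUNNELED: only the main thread of each rank calls MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        double *a = malloc(chunk * sizeof *a);
        double *b = malloc(chunk * sizeof *b);
        for (int i = 0; i < chunk; i++) { a[i] = 1.0; b[i] = 2.0; }

        /* Thread parallelism inside the rank (shared memory). */
        #pragma omp parallel for reduction(+:local)
        for (int i = 0; i < chunk; i++)
            local += a[i] * b[i];

        /* Message passing between ranks (distributed memory). */
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("dot product = %f (%d ranks)\n", global, nranks);

        free(a); free(b);
        MPI_Finalize();
        return 0;
    }
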
Article

Threading Models for High-Performance Computing: Pthreads or OpenMP?

In recent years, Linux* has bolstered its presence on the server, due to improved kernel support for threads. Along the way, Linux abandoned its original threading API (called LinuxThreads) and adopted Pthreads as its native threading interface, joining most of the UNIX variants available today. Linux developers, just like programmers working on UNIX and Windows*, can avail themselves of a second...
Last updated on 06/07/2019 - 16:40
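
To make the Pthreads-versus-OpenMP comparison concrete, here is a short, hypothetical C sketch (not taken from the article) that sums the same array both ways, assuming a compiler with Pthreads and OpenMP support; file names and thread counts are illustrative. Pthreads requires explicit thread creation, work splitting, and joining, while OpenMP expresses the same parallelism with a single directive.

    /* Hypothetical sketch: the same array sum with Pthreads and with OpenMP.
     * Compile e.g.: gcc -fopenmp -pthread sum_models.c -o sum_models */
    #include <pthread.h>
    #include <stdio.h>

    #define N        (1 << 20)
    #define NTHREADS 4

    static double data[N];

    /* --- Pthreads: explicit threads, manual work splitting and joining --- */
    struct slice { int begin, end; double partial; };

    static void *sum_slice(void *arg)
    {
        struct slice *s = arg;
        s->partial = 0.0;
        for (int i = s->begin; i < s->end; i++)
            s->partial += data[i];
        return NULL;
    }

    static double sum_pthreads(void)
    {
        pthread_t tid[NTHREADS];
        struct slice s[NTHREADS];
        double total = 0.0;

        for (int t = 0; t < NTHREADS; t++) {
            s[t].begin = t * (N / NTHREADS);
            s[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * (N / NTHREADS);
            pthread_create(&tid[t], NULL, sum_slice, &s[t]);
        }
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += s[t].partial;
        }
        return total;
    }

    /* --- OpenMP: one directive; the runtime splits and joins the work --- */
    static double sum_openmp(void)
    {
        double total = 0.0;
        #pragma omp parallel for reduction(+:total)
        for (int i = 0; i < N; i++)
            total += data[i];
        return total;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = 1.0;
        printf("pthreads: %f\n", sum_pthreads());
        printf("openmp:   %f\n", sum_openmp());
        return 0;
    }
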
Article

Caffe* Training on Multi-node Distributed-memory Systems Based on Intel® Xeon® Processor E5 Family

Caffe is a deep learning framework developed by the Berkeley Vision and Learning Center (BVLC) and one of the most popular community frameworks for image recognition. Caffe is often used as a benchmark together with AlexNet*, a neural network topology for image recognition, and ImageNet*, a database of labeled images.
Created by Gennady F. (Blackbelt). Last updated on 05/07/2019 - 14:54
Article

Talk: How to Optimize Your Code Without Being a Parallel Computing "Ninja"

Don't miss Intel's talk "How to Optimize Your Code Without Being a Parallel Computing 'Ninja'", to be given during the Semana sobre Programação Massivamente Paralela (Week on Massively Parallel Programming) in Petrópolis, RJ, at the Laboratório Nacional de Computação Científica. Date: 02/02/2016 - 11:30 a.m. Location: LNCC - Av. Getúlio Vargas, 333 - Quitandinha - Petrópolis/RJ
Created by Igor F. (Intel). Last updated on 06/07/2019 - 16:40
Article

Classical Molecular Dynamics Simulations with LAMMPS Optimized for Knights Landing

LAMMPS is an open-source software package that simulates classical molecular dynamics. Because it supports many energy models and simulation options, its versatility has made it a popular choice. It was originally developed at Sandia National Laboratories to take advantage of large-scale parallel computation.
Created by WILLIAM B. (Intel). Last updated on 21/03/2019 - 12:00
Article

Caffe* Optimized for Intel® Architecture: Applying Modern Code Techniques

This paper demonstrates a special version of Caffe* — a deep learning framework originally developed by the Berkeley Vision and Learning Center (BVLC) — that is optimized for Intel® architecture.
Last updated on 06/07/2019 - 16:40
Article

Introducing DNN Primitives in Intel® Math Kernel Library

Please note: the Deep Neural Network (DNN) component of Intel® MKL has been deprecated since Intel® MKL 2019 and will be removed in the next Intel® MKL release.

Created by Vadim Pirogov (Intel). Last updated on 21/03/2019 - 12:00
Article

Introducing DNN Primitives in Intel® Math Kernel Library

Deep neural networks (DNNs) are at the leading edge of machine learning. These algorithms saw broad industry adoption in the late 1990s, initially applied to tasks such as recognizing handwriting on bank checks. DNNs have since been widely deployed for this class of task, matching and even surpassing human ability. Today, DNNs are used for image recognition, video and natural language processing, and for solving complex visual-understanding problems such as autonomous driving. DNNs are extremely demanding, both in compute resources and in the volume of data they must process.

Created by Vadim Pirogov (Intel). Last updated on 21/03/2019 - 12:08