
Article

Performance Benefits of Half Precision Floats

Half precision floats are 16-bit floating-point numbers, half the size of traditional 32-bit single-precision floats, with lower precision and a smaller range.
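Not from the article, but as a minimal sketch of the storage trade-off it describes (assuming an F16C-capable CPU and a compiler flag such as -mf16c): the conversion intrinsics below pack four 32-bit floats into half the storage and expand them back for computation.

    #include <immintrin.h>
    #include <cstdio>

    int main() {
        // Four 32-bit floats: 16 bytes of storage.
        __m128 singles = _mm_set_ps(3.5f, 2.25f, 1.0f, 0.5f);

        // Pack to four 16-bit halves (8 bytes), rounding to nearest even.
        __m128i halves = _mm_cvtps_ph(singles, _MM_FROUND_TO_NEAREST_INT);

        // Expand back to single precision before doing arithmetic.
        __m128 restored = _mm_cvtph_ps(halves);

        float out[4];
        _mm_storeu_ps(out, restored);
        std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
        return 0;
    }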

Author: Patrick Konsor (Intel) Last updated: 10.07.2019 - 17:05
Article

Vectorizing Loops with Calls to User-Defined External Functions

Introduction
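The article body is not quoted above; the sketch below shows one common way to vectorize a loop that calls a user-defined function, the OpenMP declare simd directive (the article's own mechanism may differ), using the purely illustrative function name scale_and_shift.

    #include <cstdio>

    // For a function defined in another translation unit, the same directive
    // would also go on its declaration in the header so callers know SIMD
    // variants exist; it is defined locally here to keep the sketch self-contained.
    #pragma omp declare simd
    float scale_and_shift(float x, float a, float b) {
        return a * x + b;
    }

    int main() {
        float in[1024], out[1024];
        for (int i = 0; i < 1024; ++i) in[i] = static_cast<float>(i);

        // With OpenMP SIMD enabled (-qopenmp-simd / -fopenmp-simd, spelling
        // varies by compiler) this loop can vectorize despite the call.
        #pragma omp simd
        for (int i = 0; i < 1024; ++i)
            out[i] = scale_and_shift(in[i], 2.0f, 1.0f);

        std::printf("out[1023] = %f\n", out[1023]);
        return 0;
    }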

Author: Anoop M. (Intel) Last updated: 12.12.2018 - 18:00
Article

Improve Intel® MKL Performance for Small Problems: The Use of MKL_DIRECT_CALL

One of the big new features introduced in the Intel® Math Kernel Library (Intel® MKL) 11.2 is the greatly improved performance for small problem sizes.
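A minimal sketch of how the feature is switched on (compile and link lines depend on your MKL setup; the tiny 4x4 DGEMM is only illustrative): defining MKL_DIRECT_CALL before including mkl.h, or passing -DMKL_DIRECT_CALL to the compiler, enables the small-size fast path.

    // e.g. icpc -DMKL_DIRECT_CALL small_gemm.cpp -mkl   (illustrative build line)
    #define MKL_DIRECT_CALL
    #include <mkl.h>
    #include <cstdio>

    int main() {
        const MKL_INT n = 4;
        double A[16], B[16], C[16] = {0.0};
        for (int i = 0; i < 16; ++i) { A[i] = i + 1.0; B[i] = 1.0; }

        // C = 1.0 * A * B + 0.0 * C for row-major 4x4 matrices; with
        // MKL_DIRECT_CALL defined, calls this small skip some dispatch overhead.
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A, n, B, n, 0.0, C, n);

        std::printf("C[0][0] = %f\n", C[0]);
        return 0;
    }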

Author: Zhang, Zhang (Intel) Last updated: 07.07.2019 - 10:35
Article

Sierpiński Carpet in OpenCL* 2.0

We demonstrate how to create a Sierpinski Carpet in OpenCL* 2.0.
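The article's own OpenCL 2.0 construction is not reproduced here; as a rough sketch of the underlying test only, a pixel falls in a hole of the carpet when its x and y coordinates share a base-3 digit equal to 1. The kernel below (names are illustrative) applies that per-pixel test and would be passed to clCreateProgramWithSource by the usual host code.

    // OpenCL C kernel source kept as a C++ raw string.
    static const char* kCarpetKernelSrc = R"CLC(
    __kernel void carpet(__write_only image2d_t out) {
        int x = get_global_id(0);
        int y = get_global_id(1);

        // A pixel is "empty" when x and y have a 1 in the same base-3 digit.
        int gx = x, gy = y;
        float v = 1.0f;
        while (gx > 0 || gy > 0) {
            if ((gx % 3 == 1) && (gy % 3 == 1)) { v = 0.0f; break; }
            gx /= 3;
            gy /= 3;
        }
        write_imagef(out, (int2)(x, y), (float4)(v, v, v, 1.0f));
    }
    )CLC";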

Author: Robert I. (Intel) Last updated: 31.05.2019 - 14:20
Article

The Generic Address Space in OpenCL™ 2.0

Introduction: What is the Generic Address Space?
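For orientation only (a minimal sketch, not code from the article, built with -cl-std=CL2.0): in OpenCL 2.0 an unqualified pointer inside a non-kernel function refers to the generic address space, so one helper can serve both __global and __local data.

    // OpenCL 2.0 kernel source as a C++ raw string; names are illustrative.
    static const char* kGenericKernelSrc = R"CLC(
    // The unqualified pointer parameter is generic, so this helper accepts
    // __global, __local, or __private pointers alike.
    float first_plus_last(float *p, int n) {
        return p[0] + p[n - 1];
    }

    __kernel void demo(__global float *in,
                       __global float *out,
                       __local  float *scratch,
                       int n) {
        int lid = get_local_id(0);
        scratch[lid] = in[get_global_id(0)];
        barrier(CLK_LOCAL_MEM_FENCE);

        if (lid == 0) {
            int lsz = (int)get_local_size(0);
            // Same helper, called with a __global and then a __local pointer.
            out[get_group_id(0)] = first_plus_last(in, n) + first_plus_last(scratch, lsz);
        }
    }
    )CLC";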
Author: Adam Lake (Intel) Last updated: 03.07.2019 - 10:34
Article

Using OpenCL™ 2.0 Read-Write Images

While image convolution is not as effective with the new read-write images functionality, any image processing technique that needs to be done in place may benefit from read-write images. One example of a process that could be used effectively is image composition. In OpenCL 1.2 and earlier, images were qualified with the “__read_only” and “__write_only” qualifiers. In OpenCL 2.0, images can...
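The article's full sample is not reproduced here; as a minimal sketch of the new qualifier (OpenCL C 2.0, built with -cl-std=CL2.0, names illustrative), the kernel below blends a source image into a destination image in place, which in OpenCL 1.2 would have required separate read-only and write-only image objects.

    // OpenCL 2.0 kernel source as a C++ raw string for clCreateProgramWithSource().
    static const char* kCompositeKernelSrc = R"CLC(
    __kernel void composite(read_write image2d_t dst,   // read AND written in one kernel
                            read_only  image2d_t src,
                            float alpha) {
        int2 pos = (int2)(get_global_id(0), get_global_id(1));

        float4 base  = read_imagef(dst, pos);   // in-place read of the destination
        float4 layer = read_imagef(src, pos);

        // Simple alpha blend; each work-item touches only its own pixel, so no
        // extra synchronization is needed for this read-modify-write pattern.
        write_imagef(dst, pos, mix(base, layer, alpha));
    }
    )CLC";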
Author: Last updated: 31.05.2019 - 14:20
Article

Using Intel® MPI Library 5.1 on Microsoft* Windows* with Microsoft* MPI based applications

Why is it needed?
Author: Dmitry S. (Intel) Last updated: 12.12.2018 - 20:11
Blogs

Reduce Boilerplate Code in Parallelized Loops with C++11 Lambda Expressions

Parallelize loops with Intel® Threading Building Blocks using the Intel® C++ Compiler's support for lambda expressions.
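A minimal sketch of the idea, not the post's exact code: a C++11 lambda passed directly to tbb::parallel_for replaces the hand-written function-object class that older TBB code needed for the loop body.

    #include <tbb/parallel_for.h>
    #include <vector>
    #include <cstdio>

    int main() {
        std::vector<float> data(1 << 20, 1.0f);

        // The lambda is the loop body; no separate functor class ("boilerplate")
        // has to be declared, as pre-C++11 code required.
        tbb::parallel_for(std::size_t(0), data.size(), [&](std::size_t i) {
            data[i] = data[i] * 2.0f + 1.0f;
        });

        std::printf("data[0] = %f\n", data[0]);
        return 0;
    }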
Author: gaston-hillar (Blackbelt) Last updated: 12.12.2018 - 18:00
Article

Implementing a Masked SVML-like Function Explicitly in User-Defined Way

The Intel® Compiler provides SIMD intrinsics APIs for the short vector math library (SVML), and starting with Intel® Advanced Vector Extensions...
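Not the article's code: the sketch below mimics the masked calling convention of SVML-style functions with plain AVX intrinsics (the helper name masked_sqrt_ps is illustrative). Lanes whose mask is set get sqrt(x); the others pass the src values through unchanged, and inactive lanes are neutralized before the operation so they cannot produce NaNs.

    #include <immintrin.h>
    #include <cstdio>

    // Hand-rolled "masked" vector sqrt in the spirit of SVML's masked APIs.
    static inline __m256 masked_sqrt_ps(__m256 src, __m256 mask, __m256 x) {
        // Replace inactive lanes with 1.0f so they cannot generate NaNs.
        __m256 safe = _mm256_blendv_ps(_mm256_set1_ps(1.0f), x, mask);
        __m256 r    = _mm256_sqrt_ps(safe);
        // Keep r where the mask is set, src elsewhere.
        return _mm256_blendv_ps(src, r, mask);
    }

    int main() {
        __m256 x    = _mm256_set_ps(64, 49, 36, 25, 16, 9, 4, -1);       // -1 is "invalid"
        __m256 mask = _mm256_cmp_ps(x, _mm256_setzero_ps(), _CMP_GE_OQ); // only x >= 0
        __m256 out  = masked_sqrt_ps(x, mask, x);

        float r[8];
        _mm256_storeu_ps(r, out);
        for (int i = 0; i < 8; ++i) std::printf("%g ", r[i]);
        std::printf("\n");
        return 0;
    }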

Author: Last updated: 16.07.2019 - 08:37