Article

Recipe: Building and running NEMO* on Intel® Xeon Phi™ Processors

The NEMO* (Nucleus for European Modelling of the Ocean) numerical framework encompasses models of ocean, sea ice, tracers, and biochemistry equations and their related physics. This recipe shows the performance advantages of using the Intel® Xeon Phi™ processor 7250.
Authored by Dmitry K. (Intel) Last updated on 03/27/2017 - 16:04
Responsive Landing Page

High Performance Computing (HPC) Webinars

Authored by admin Last updated on 03/27/2017 - 15:49
Forum topic

build with Scons

Hi,

I am trying to build C++ code that includes daal.h using SCons. I use the following SConstruct script:

env = Environment(

Authored by Farzaneh Taslimi Last updated on 03/27/2017 - 15:36
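
The SConstruct in the forum post above is cut off in this listing. As a point of reference only, a minimal sketch of an SConstruct that compiles a C++ source file including daal.h might look like the following; the DAAL install root, the lib subdirectory layout, the target and source names, and the library list (daal_core, daal_thread, tbb) are assumptions that depend on the installed DAAL version and the threading layer you link against.

import os

# Assumed DAAL install root; adjust to your environment, e.g. the directory
# exported as DAALROOT by the DAAL environment-setup script.
daal_root = os.environ.get('DAALROOT', '/opt/intel/daal')

env = Environment()
env.Append(CPPPATH=[os.path.join(daal_root, 'include')])
# The 'lib/intel64_lin' layout is an assumption for a Linux install.
env.Append(LIBPATH=[os.path.join(daal_root, 'lib', 'intel64_lin')])
# Library names assume the threaded DAAL layer backed by TBB; adjust as needed.
env.Append(LIBS=['daal_core', 'daal_thread', 'tbb', 'pthread', 'dl'])

# Hypothetical target and source names for illustration.
env.Program(target='my_daal_app', source=['main.cpp'])

Running scons in the directory containing this SConstruct and main.cpp would then build my_daal_app, provided the include and library paths match your DAAL installation.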
Article

Intel C and C++ Compilers: Features and Supported Platforms

Intel® C++ Compiler Features Supported in Different Products
Authored by Jennifer J. (Intel) Last updated on 03/27/2017 - 15:20
Article

Intel® XDK FAQs - General

Provides frequently asked questions (FAQs) related to developing apps with the Intel® XDK, such as getting started as a new user, installing more than one version of the Intel XDK, signing an app, and updating or uninstalling the Intel XDK. It also covers questions related to the built-in Brackets editor, differences between mobile platforms, specifying app settings, and other topics.
Authored by Anusha M. (Intel) Last updated on 03/27/2017 - 14:38
Responsive Landing Page

Intel® Distribution for Python* | Overview

Authored by admin Last updated on 03/27/2017 - 14:16
Responsive Landing Page

Intel® Math Kernel Library (Intel® MKL)

Intel® Math Kernel Library (Intel® MKL) accelerates math processing routines that increase application performance and reduce development time.
Authored by Martin, Kay Last updated on 03/27/2017 - 14:15
Article

Introducing DNN primitives in Intel® Math Kernel Library

Deep Neural Networks (DNNs) are on the cutting edge of the Machine Learning domain.

Authored by Vadim Pirogov (Intel) Last updated on 03/27/2017 - 14:14
Article

Training and Deploying Deep Learning Networks with Caffe* Optimized for Intel® Architecture

Caffe* is a deep learning framework developed by the Berkeley Vision and Learning Center (BVLC). Caffe optimized for Intel architecture is currently integrated with the latest release of Intel® Math Kernel Library (Intel® MKL) 2017 optimized for Advanced Vector Extensions (AVX)-2 and AVX-512 instructions which are supported in Intel® Xeon® and Intel® Xeon Phi™ processors (among others). This...
Authored by Andres R. (Intel) Last updated on 03/27/2017 - 14:11
Video

What is Intel® Optimized Caffe*

Caffe* is a deep learning framework that is useful for convolutional and fully connected networks; support for recurrent neural networks was added recently.

Authored by Gerald M. (Intel) Last updated on 03/27/2017 - 14:09