Article

Benefits of Intel® Optimized Caffe* in comparison with BVLC Caffe*

Overview
Authored by JON J K. (Intel) Last updated on 05/30/2018 - 07:00
Blog post

Two more dates published for Intel Software Innovator Tour Italy: L'Aquila and Pisa

Today we can share two new dates for our Italian Tour hosted by the Italian Innovators.

See the full list and details for each event in the original post.

Authored by Marco D.P. (Blackbelt) Last updated on 07/13/2018 - 14:32
Article

Introducing a New Packed API for GEMM

1. Introducing a New Packed API for GEMM
Authored by Gennady F. (Blackbelt) Last updated on 07/05/2019 - 19:03
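
The listing gives only the article's title, so as a rough illustration of the idea: Intel® MKL's packed API lets you convert a matrix operand once into an internal, cache-friendly layout and then reuse it across many multiplications. Below is a minimal C sketch, assuming row-major data and illustrative sizes; cblas_sgemm_pack_get_size, cblas_sgemm_pack, and cblas_sgemm_compute are real MKL entry points, while the dimensions and data are made up for the example.

    #include <stdio.h>
    #include <stdlib.h>
    #include <mkl.h>

    int main(void) {
        /* Illustrative sizes: C(m x n) = A(m x k) * B(k x n), row-major. */
        const MKL_INT m = 64, n = 64, k = 64;
        float *A = malloc(sizeof(float) * m * k);
        float *B = malloc(sizeof(float) * k * n);
        float *C = calloc((size_t)(m * n), sizeof(float));
        for (MKL_INT i = 0; i < m * k; ++i) A[i] = 1.0f;
        for (MKL_INT i = 0; i < k * n; ++i) B[i] = 1.0f;

        /* Pack A once into MKL's internal layout (alpha is applied here)... */
        size_t bytes = cblas_sgemm_pack_get_size(CblasAMatrix, m, n, k);
        float *Ap = mkl_malloc(bytes, 64);
        cblas_sgemm_pack(CblasRowMajor, CblasAMatrix, CblasNoTrans,
                         m, n, k, 1.0f, A, k, Ap);

        /* ...then reuse the packed A across any number of multiplies. */
        cblas_sgemm_compute(CblasRowMajor, CblasPacked, CblasNoTrans,
                            m, n, k, Ap, k, B, n, 0.0f, C, n);

        printf("C[0] = %.1f\n", C[0]);  /* all-ones inputs, k = 64 -> 64.0 */
        mkl_free(Ap);
        free(A); free(B); free(C);
        return 0;
    }

Packing pays off when the same operand participates in many GEMM calls, since the one-time layout-conversion cost is amortized over all of them.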
Article

Profiling TensorFlow* workloads with Intel® VTune™ Amplifier

This tutorial shows how to combine the data provided by the TensorFlow* timeline with the options available in one of the most powerful performance profilers for Intel® Architecture, Intel® VTune™ Amplifier.
Authored by Alexandr Kurylev (Intel) Last updated on 03/21/2019 - 09:54
Article

ASTRO - The Robot solution for a Safer and more Productive Workplace

1. Robots and ASTRO
Authored by Silviu-Tudor Serban Last updated on 06/23/2019 - 18:50
Article

Intel® Media SDK & Intel® Media Server Studio Historical Release Notes

The Intel® Media SDK release notes include important information, such as system requirements, what's new, the feature table, and known issues since the previous release.

Authored by Liu, Mark (Intel) Last updated on 07/03/2019 - 20:07
Blog post

How to port your application from Intel® Computer Vision SDK 2017 R3 Beta to OpenVINO™ Toolkit.

The Open Visual Inference & Neural Network Optimization (OpenVINO™) toolkit (formerly the Intel® Computer Vision SDK) is a set of tools and libraries that help developers accelerate their computer vision applications.

Authored by Anna B. (Intel) Last updated on 05/31/2018 - 03:37
Blog post

Accelerate Computer Vision & Deep Learning with OpenVINO™ toolkit

Authored by admin Last updated on 03/21/2019 - 14:55
Article

Intel® Math Kernel Library Improved Small Matrix Performance Using Just-in-Time (JIT) Code Generation for Matrix Multiplication (GEMM)

The most commonly used and performance-critical Intel® Math Kernel Library (Intel® MKL) functions are the general matrix multiply (GEMM) functions.

Authored by Gennady F. (Blackbelt) Last updated on 03/21/2019 - 03:01
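
Since this entry describes MKL's JIT GEMM feature, a minimal C sketch of the workflow may help: a kernel is generated once for fixed, small matrix dimensions and then called repeatedly. mkl_jit_create_sgemm, mkl_jit_get_sgemm_ptr, and mkl_jit_destroy are the MKL 2019 JIT API; the sizes and data below are illustrative assumptions, not taken from the article.

    #include <stdio.h>
    #include <mkl.h>

    int main(void) {
        /* Small fixed-size problem: C(4x4) = A(4x4) * B(4x4), column-major. */
        const MKL_INT m = 4, n = 4, k = 4;
        float A[16], B[16], C[16] = {0};
        for (int i = 0; i < 16; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

        /* Generate a GEMM kernel specialized for these exact sizes. */
        void *jitter;
        mkl_jit_status_t status = mkl_jit_create_sgemm(
            &jitter, MKL_COL_MAJOR, MKL_NOTRANS, MKL_NOTRANS,
            m, n, k, 1.0f, m, k, 0.0f, m);
        if (status == MKL_JIT_ERROR) return 1;
        /* MKL_NO_JIT still returns a usable standard kernel, so only
           MKL_JIT_ERROR is fatal. */

        /* Retrieve the generated kernel and call it (reusable many times). */
        sgemm_jit_kernel_t sgemm = mkl_jit_get_sgemm_ptr(jitter);
        sgemm(jitter, A, B, C);

        printf("C[0] = %.1f\n", C[0]);  /* 1.0 * 2.0 summed over k = 4 -> 8.0 */
        mkl_jit_destroy(jitter);
        return 0;
    }

The generated kernel avoids the argument checking and dispatch overhead of a general sgemm call, which is where the small-matrix speedup described by the article's title comes from.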