Library

This is a Wide & Deep Large Dataset FP32 training model package optimized with TensorFlow* for bare metal.

This is a Wide & Deep Large Dataset FP32 training container optimized with TensorFlow*.

This is a WaveNet FP32 inference model package optimized with TensorFlow* for bare metal.

This is a WaveNet FP32 inference container optimized with TensorFlow*.

This is a MobileNet V1 FP32 inference model package optimized with TensorFlow* for bare metal.

This is a MobileNet V1 FP32 inference container optimized with TensorFlow*.

This is a ResNet50 v1.5 FP32 inference model package optimized with TensorFlow*, including the artifacts needed to run on bare metal.

This is a ResNet50 v1.5 FP32 inference container optimized with TensorFlow*.
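The FP32 inference entries above all follow the same pattern: a pretrained model run in single-precision floating point with TensorFlow*. As a rough illustration of what FP32 inference with a ResNet50-family model looks like, the sketch below uses the stock Keras `ResNet50` from standard TensorFlow (an assumption for demonstration only, not the Intel-optimized v1.5 package itself), with random weights to avoid downloading a checkpoint:

```python
import numpy as np
import tensorflow as tf

# Illustrative sketch only: stock Keras ResNet50, not the Intel-optimized
# ResNet50 v1.5 package described above. weights=None keeps the weights
# random so no checkpoint download is needed.
model = tf.keras.applications.ResNet50(weights=None)

# One FP32 input batch in the 224x224x3 format the model expects.
batch = np.random.rand(1, 224, 224, 3).astype(np.float32)

# Run inference; the output is one 1000-class logit/probability vector.
preds = model(batch)
print(tuple(preds.shape))
```

The packaged and containerized versions in this catalog wire the same kind of inference loop to Intel-optimized TensorFlow builds and real pretrained weights.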

This article describes how to compile Intel® oneAPI DPC++ FPGA Designs on Red Hat Enterprise Linux (RHEL)* 7.4 OS.

This document describes the architecture of the Intel® Data Streaming Accelerator (Intel® DSA).

Get software and hardware requirements for the Intel® oneAPI IoT Toolkit. Also ensure you have all the prerequisite third-party software builds.

This document contains instructions about Intel® FPGA workflows on Eclipse* and Microsoft Visual Studio*.