Library


Select your operating system, distribution channel and then download your customized installation of Intel® oneAPI.

Profile and optimize the Reverse Time Migration application with Intel® Advisor.

This is an SSD-ResNet34 Int8 inference model package optimized with TensorFlow* for bare metal.

This is an SSD-ResNet34 Int8 inference container optimized with TensorFlow*.

This is a ResNet50 FP32 inference container optimized with TensorFlow*.

This is a ResNet50 FP32 inference model package optimized with TensorFlow* for bare metal.

This is an NCF FP32 inference model package optimized with TensorFlow* for bare metal.

This is an NCF FP32 inference container optimized with TensorFlow*.

This is a Wide & Deep FP32 inference container optimized with TensorFlow*.

This is a Wide & Deep FP32 inference model package optimized with TensorFlow* for bare metal.

This is a Wide & Deep Large Dataset FP32 training model package optimized with TensorFlow* for bare metal.

This is a Wide & Deep Large Dataset FP32 training container optimized with TensorFlow*.