Intel® MPI Library

Docker*-ized Distributed Deep Learning with Intel® Nervana™ Technology, neon*, and Pachyderm*

Recent advances in machine learning and artificial intelligence are astounding! New breakthroughs appear almost daily, from self-driving cars to AI systems that learn to play complex games. To deliver real value to a company, data scientists must deploy their models on the company's data pipelines and infrastructure rather than merely showcasing them on their own machines.

Beyond that, data scientists should be able to spend their effort improving machine learning applications. They should not have to spend large amounts of time manually updating applications to keep up with ever-changing production data, nor waste time tracing and tracking down anomalous past behavior.

Docker* and Pachyderm* help data scientists create, deploy, and update machine learning applications on production clusters, distribute processing across large data sets, and track input and output data throughout a data pipeline. This article shows how to set up a production-ready machine learning workflow with Intel® Nervana™ technology, neon*, and Pachyderm*.
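As a concrete illustration, the training step of such a workflow can be packaged as a script that a Pachyderm pipeline runs inside a Docker container. The following is a minimal sketch rather than code from the article: it assumes neon's standard Python training API, Pachyderm's convention of mounting input repos under /pfs/<repo> and collecting whatever is written to /pfs/out, and a hypothetical input repo named training-data containing a train.npz file.

```python
# Hedged sketch of a neon training script intended to run inside a Pachyderm
# pipeline container. The repo name "training-data" and the file names below
# are hypothetical placeholders.
import os
import numpy as np

from neon.backends import gen_backend
from neon.callbacks.callbacks import Callbacks
from neon.data import ArrayIterator
from neon.initializers import Gaussian
from neon.layers import Affine, GeneralizedCost
from neon.models import Model
from neon.optimizers import GradientDescentMomentum
from neon.transforms import Rectlin, Softmax, CrossEntropyMulti

# CPU backend for simplicity; swap for 'mkl' or 'gpu' depending on the nodes.
be = gen_backend(backend='cpu', batch_size=128)

# Read versioned training data from the Pachyderm input repo mount.
data = np.load(os.path.join('/pfs/training-data', 'train.npz'))
X, y = data['features'], data['labels']
train_set = ArrayIterator(X=X, y=y, nclass=10)

# A simple multilayer perceptron, following neon's standard examples.
init = Gaussian(loc=0.0, scale=0.01)
layers = [Affine(nout=100, init=init, activation=Rectlin()),
          Affine(nout=10, init=init, activation=Softmax())]
model = Model(layers=layers)
cost = GeneralizedCost(costfunc=CrossEntropyMulti())
optimizer = GradientDescentMomentum(0.1, momentum_coef=0.9)
callbacks = Callbacks(model)

model.fit(train_set, optimizer=optimizer, num_epochs=10, cost=cost,
          callbacks=callbacks)

# Anything written to /pfs/out becomes the pipeline's output commit,
# so the trained model is versioned alongside the data that produced it.
model.save_params('/pfs/out/model.prm')
```

Because Pachyderm versions both the input repo and the output commit, rerunning the pipeline against updated production data produces a new, traceable model artifact without manual bookkeeping.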

  • Intel® MPI Library 2018 Beta - Documentation

    The section below provides links to the Intel® MPI Library 2018 Beta documentation. You can find other documentation, including user guides and reference manuals for current and earlier Intel software product releases, in the Intel® Software Documentation Library.

    Visit this page for documentation pertaining to the latest stable Intel MPI Library release.

  • Tracing and Correctness Checking

    Intel® MPI Library provides tight integration with the Intel® Trace Analyzer and Collector, which lets you analyze MPI applications and find errors in them. The library offers several compile-time and runtime options that simplify application analysis, as sketched below. Beyond the Intel Trace Analyzer and Collector, the Application Performance Snapshot tool provides a higher-level view of MPI performance.
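To make that concrete, here is a hedged sketch of a small MPI program together with the launcher options commonly used with Intel MPI Library for tracing and correctness checking. The program uses mpi4py purely for illustration (mpi4py is not mentioned on this page); the -trace and -check_mpi options and the script name ring.py are stated as assumptions to be verified against the documentation for your release.

```python
# Hedged sketch: a tiny MPI program whose communication can be inspected with
# the Intel Trace Analyzer and Collector. Typical (assumed) invocations:
#   mpirun -trace     -n 4 python ring.py   # collect a trace for analysis
#   mpirun -check_mpi -n 4 python ring.py   # run with MPI correctness checking
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Pass a token around a ring of ranks so the trace shows point-to-point traffic.
dest = (rank + 1) % size
source = (rank - 1) % size
received = comm.sendrecv(rank, dest=dest, source=source)
print("rank %d received token from rank %d" % (rank, received))
```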
