Article

Multi-threading Line-of-Sight Calculations to Improve Sensory System Performance in Game AI

In this article, Alex Champandard describes how to accelerate line-of-sight calculations and improve AI sensory system performance using a centralized sensory system, demonstrated in a mini-game prototype in AI Sandbox.
Author admin Last updated 24/01/2018 - 15:35
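The centralized-sensory-system idea can be sketched as a batch of line-of-sight queries gathered over a frame and resolved together by a worker pool. This is a minimal illustration, not Champandard's implementation: the grid layout and the `line_of_sight` / `batch_line_of_sight` names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical tile map: 0 = open, 1 = wall.
GRID = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

def line_of_sight(a, b):
    """Walk the cells between a and b (Bresenham); blocked if an
    intermediate cell is a wall (endpoints are not tested)."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx - dy
    while (x0, y0) != (x1, y1):
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
        if (x0, y0) != (x1, y1) and GRID[y0][x0] == 1:
            return False
    return True

def batch_line_of_sight(queries, workers=4):
    """Centralized sensory system: resolve all LOS queries queued for
    this frame in parallel instead of per-agent on the main thread."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda q: line_of_sight(*q), queries))
```

Batching the queries is what makes the threading pay off: one synchronization point per frame rather than one per agent.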
Article

The Secrets of Parallel Pathfinding on Modern Computer Hardware

One of the first things game AI developers parallelize is pathfinding, as it is an expensive operation. The most common approach is to fire off the pathfinder in a separate thread. This article examines a multi-threaded pathfinding implementation.
Author Last updated 31/12/2018 - 15:00
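The "fire off the pathfinder in a separate thread" approach can be sketched with a background worker that returns a future the game loop polls while the frame continues. A minimal sketch, assuming a 4-connected grid and breadth-first search; `request_path` and the grid are illustrative, not the article's code.

```python
import collections
from concurrent.futures import ThreadPoolExecutor

# Hypothetical tile map: 0 = open, 1 = wall.
GRID = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

def find_path(start, goal):
    """Breadth-first search over the 4-connected grid; returns the list
    of cells from start to goal, or None if unreachable."""
    frontier = collections.deque([start])
    came_from = {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= ny < len(GRID) and 0 <= nx < len(GRID[0])
                    and GRID[ny][nx] == 0 and nxt not in came_from):
                came_from[nxt] = cur
                frontier.append(nxt)
    return None

# Dedicated pathfinding thread: the game loop submits a request and
# checks future.done() each frame instead of blocking on the search.
PATHFINDER = ThreadPoolExecutor(max_workers=1)

def request_path(start, goal):
    return PATHFINDER.submit(find_path, start, goal)
```

A real engine would also need to handle the map changing while a search is in flight (e.g. by validating the returned path before use).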
Article

Building a Personality-Driven Poker AI for Lords of New York*

Writing artificial intelligence (AI) might be the best job in games. It's creative, challenging, and blurs the line between game design and programming. AI is used for a variety of tasks, ranging from the mechanical (such as auto-attacking enemies) and bot AI to flocking group intelligence, and even to deep-thinking military generals. Games that emphasize story and character-based immersion such as...
Author Coppock, Michael J (Intel) Last updated 30/05/2018 - 07:00
Blog post

Celebrating a Decade of Parallel Programming with Intel® Threading Building Blocks (Intel® TBB)

This year marks the tenth anniversary of Intel® Threading Building Blocks (Intel® TBB).

Author Sharmila C. (Intel) Last updated 01/08/2019 - 09:30
Article

Using Intel® Data Analytics Acceleration Library to Improve the Performance of Naïve Bayes Algorithm in Python*

This article discusses machine learning and describes a machine learning method/algorithm called Naïve Bayes (NB) [2]. It also describes how to use Intel® Data Analytics Acceleration Library (Intel® DAAL) [3] to improve the performance of an NB algorithm.
Author Nguyen, Khang T (Intel) Last updated 06/07/2019 - 16:40
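To make the method concrete, here is a minimal pure-Python Gaussian Naive Bayes showing only the underlying math; Intel® DAAL provides the optimized implementation the article covers, and the class and method names below are illustrative.

```python
import math
from collections import defaultdict

class TinyGaussianNB:
    """Minimal Gaussian Naive Bayes: fit per-class feature means and
    variances, then predict by maximizing log prior plus the sum of
    log Gaussian likelihoods (features assumed independent)."""

    def fit(self, X, y):
        by_class = defaultdict(list)
        for row, label in zip(X, y):
            by_class[label].append(row)
        self.stats, n = {}, len(X)
        for label, rows in by_class.items():
            cols = list(zip(*rows))
            means = [sum(c) / len(c) for c in cols]
            # Small epsilon keeps the variance nonzero.
            variances = [sum((v - m) ** 2 for v in c) / len(c) + 1e-9
                         for c, m in zip(cols, means)]
            self.stats[label] = (math.log(len(rows) / n), means, variances)
        return self

    def predict(self, X):
        out = []
        for row in X:
            best = max(
                self.stats.items(),
                key=lambda kv: kv[1][0] + sum(
                    -0.5 * math.log(2 * math.pi * var)
                    - (x - m) ** 2 / (2 * var)
                    for x, m, var in zip(row, kv[1][1], kv[1][2])))
            out.append(best[0])
        return out
```

The training pass is a handful of reductions over the data, which is exactly the kind of work a vectorized library like Intel® DAAL accelerates on large datasets.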
Article

Introducing DNN primitives in Intel® Math Kernel Library

Please note: the Deep Neural Network (DNN) component of Intel® MKL is deprecated as of Intel® MKL 2019 and will be removed in the next Intel® MKL release.

Author Vadim Pirogov (Intel) Last updated 21/03/2019 - 12:00
Article

Benefits of Intel® Optimized Caffe* in comparison with BVLC Caffe*

Overview
Author JON J K. (Intel) Last updated 30/05/2018 - 07:00
Article

Intel® Media SDK & Intel® Media Server Studio Historical Release Notes

The Intel® Media SDK release notes include important information such as system requirements, what's new, the feature table, and known issues since the previous release.

Author Liu, Mark (Intel) Last updated 03/07/2019 - 20:07
Article

Intel® Math Kernel Library Improved Small Matrix Performance Using Just-in-Time (JIT) Code Generation for Matrix Multiplication (GEMM)

The most commonly used and performance-critical Intel® Math Kernel Library (Intel® MKL) functions are the general matrix multiply (GEMM) functions.

Author Gennady F. (Blackbelt) Last updated 21/03/2019 - 03:01
Article

Maximize TensorFlow* Performance on CPU: Considerations and Recommendations for Inference Workloads

This article describes performance considerations for CPU inference using Intel® Optimization for TensorFlow*.
Author Nathan Greeneltch (Intel) Last updated 31/07/2019 - 12:11
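Tuning of this kind typically sets Intel OpenMP environment variables before TensorFlow is imported, plus TensorFlow's own thread-pool sizes. The values below are placeholders to be tuned per machine (the assumed core count is illustrative), and the commented `tf.config.threading` calls show the TensorFlow 2 API without requiring TensorFlow to run this sketch.

```python
import os

# Set Intel OpenMP knobs before TensorFlow is imported; values here are
# illustrative and should be tuned to the machine and workload.
PHYSICAL_CORES = 4  # assumed physical core count for this sketch

os.environ["OMP_NUM_THREADS"] = str(PHYSICAL_CORES)
os.environ["KMP_BLOCKTIME"] = "1"  # ms a thread spins before sleeping
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"

# TensorFlow's thread pools are then sized after import, e.g.:
# import tensorflow as tf
# tf.config.threading.set_intra_op_parallelism_threads(PHYSICAL_CORES)
# tf.config.threading.set_inter_op_parallelism_threads(2)
```

Intra-op threads parallelize a single op (e.g. one matmul); inter-op threads run independent ops concurrently, so the right split depends on the model's graph.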