Success Story: Using Artificial Intelligence (AI) to Detect Lung Cancer Nodules


To help meet the rising challenge of lung cancer in China, a competition was launched in April 2017 to develop algorithms and apply AI to the analysis of computerized tomography images for detecting pulmonary nodules.

"Cancer incidence and mortality have been increasing in China, making cancer the leading cause of death since 2010 and a major public health problem in the country."1

    - Dr. Wanqing Chen, National Cancer Center in Beijing

 

Challenge

The increasing incidence of lung cancer is putting tremendous strain on the country's healthcare system. A way to assist in diagnosing the disease at an early stage is essential to improving treatment outcomes and saving lives.

 

Solution

The Tianchi Healthcare AI Competition—cosponsored by Alibaba Cloud, Intel, and LinkDoc—focused on developing effective algorithms to detect lung cancer in its earliest stages. Computed tomography (CT) scans and clinical records were used as input and training data for the diagnostic algorithms, and machine learning and artificial intelligence (AI) were used for detection.

 

Background and History

In April 2017, three prominent organizations in the high-tech world—Intel, Alibaba Cloud, and LinkDoc—brought top researchers and developers together in a competition to screen for lung cancer using algorithms and AI. Intel, as a cosponsor of the competition, contributed the Tianchi hardware platform, powered by Intel® Xeon® processors and Intel® Xeon Phi™ processors, as well as libraries and development tools. Fellow cosponsor Alibaba Cloud, based in Hangzhou, China, provided the hosting infrastructure for the cloud-based operations. This infrastructure was built upon the deep learning hardware and software stack from Intel.

"Through this deep collaboration with Tianchi participants across the globe, the engineering teams obtained rich insights into the medical imaging analytic domain and better understood the commonality among different solutions. Through research and experimentation, we concluded that the traditional graphics processing unit (GPU) has limitations in this type of analysis while the central processing unit (CPU) has advantages."2

     – Albert Hu, Senior AI and Big Data Solution Architect, Intel Corporation

 

In addition, LinkDoc, located in Beijing, China, provided a large volume of high-resolution CT scans and clinical data from doctors who had evaluated the scans. Over the course of the competition, participants built and trained AI models to learn the identifying characteristics of cancerous nodules from the CT image data and the accompanying evaluations of the imagery.

Training AI models to deliver a high degree of confidence in their results requires large quantities of high-quality images and data. This input was obtained from 16 of the leading cancer hospitals in China, which supplied lung CT scans from some 3,000 patients along with supporting clinical data. The size and scope of the Tianchi Healthcare AI Competition were essential to training the models accurately and to validating the indicators that point toward the presence of lung nodules.

The team from Peking University won the event, besting the other 2,886 teams involved. However, the overall insights gained from this seven-month online project will benefit the entire global medical community. The Intel team reported its key findings in Using the CPU for Effective and Efficient Medical Image Analysis, written by staff member Albert Hu and colleagues. A primary observation was that by using a CPU for training the model, rather than a general-purpose GPU, which is hindered by its limited memory capacity, more detailed image resolutions could be employed. This makes it possible to identify smaller pulmonary nodules, the ones that radiologists are more likely to miss.

Figure 1 shows the typical CT equipment used to capture detailed, 3D X-ray images.

A patient inside a CT scanner
Figure 1. Computed tomography images and artificial intelligence are used to detect pulmonary nodules.

Data gathering and research carried out in China in 2015, which indicated that about 4.2 million invasive cancer cases were diagnosed during that year, have spurred efforts to reduce environmental pollution and increase the effectiveness of care and treatment. Much of the effort is being directed toward disadvantaged citizens and those living in rural areas, where early detection and treatment could save many lives.

"Outdoor air pollution, considered to be among the worst in the world, indoor air pollution through heating and cooking using coal and other biomass fuels, and the contamination of soil and drinking water mean that the Chinese population is exposed to many environmental carcinogens."2

     – Dr. Wanqing Chen, National Cancer Center in Beijing

 

Key Findings of the Experimentation

The Intel engineering team gained some important insights into the use of a 3D convolutional neural network (CNN) model to perform diagnostic analysis of CT images, which consist of multiple layers of X-ray images—cross-sectional slices of a scanned object—taken around a single axis of rotation. Using digital geometry processing, these two-dimensional images are combined to obtain a 3D representation of the organ being examined.
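For readers who want to experiment, the minimal sketch below (an illustration only, not the competition pipeline) shows how ordered 2D slices can be stacked into the kind of 3D volume just described, using NumPy. The slice count and the placeholder pixel values are assumptions.

```python
import numpy as np

# Assume each CT slice has been loaded as a 512 x 512 pixel array
# (e.g., from DICOM files); random values stand in for real scans here.
num_slices, height, width = 140, 512, 512
slices = [np.random.randint(-1000, 400, size=(height, width), dtype=np.int16)
          for _ in range(num_slices)]  # Hounsfield-unit-like placeholder values

# Stack the ordered 2D cross-sections along a new depth axis to form the
# 3D volume that a 3D CNN ingests: shape (depth, height, width).
volume = np.stack(slices, axis=0)
print(volume.shape)  # (140, 512, 512)
```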

The team consolidated the key findings of their experimentation, summarizing the three stages:

  • Establish a 3D convolutional neural network model.
  • Train the model for a high degree of accuracy.
  • Compare the effectiveness of processor architectures for medical AI.

Because regular neural networks do not scale well to 3D images, most of the Tianchi participants relied on 3D CNNs to analyze the CT images. The layers of a 3D CNN contain neurons that ingest and process feature maps with three dimensions: width, height, and depth (depth, in this case, refers to the third dimension of an activation volume). To successfully detect very small nodules that could be missed in human evaluations, the 3D CNN model must be capable of ingesting and processing high-resolution images.
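To make the width, height, and depth dimensions concrete, the following minimal sketch defines a toy 3D CNN in PyTorch. It is illustrative only: the competition entries were built on a customized Caffe* framework, and the layer sizes and patch size here are arbitrary assumptions rather than any team's architecture.

```python
import torch
import torch.nn as nn

# Toy 3D CNN: a small stack of 3D convolution and pooling layers that
# ingests a (batch, channels, depth, height, width) volume and emits a
# per-patch nodule probability. Sizes are illustrative choices only.
class TinyNoduleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),   # convolves width, height, and depth
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, 1)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(x))

# A 64^3 patch cropped around a candidate location (random data here).
patch = torch.randn(1, 1, 64, 64, 64)
print(TinyNoduleNet()(patch))  # probability that the patch contains a nodule
```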

 

Establishing a 3D Convolutional Neural Network Model

The team implemented a 3D CNN model that reflects state-of-the-art lung nodule detection, following the common philosophy among all the Tianchi participants. Using the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN), the team devised an approach by which the 3D data and kernels were organized into a group of 2D slices (shown as the same color in Figure 2). A series of summing operations progressively moves through intermediate steps, convolving the corresponding 2D slices, until an output is produced.

Data blocks visualized in various colors
Figure 2. Highly efficient 3D convolution (leveraging the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) 2D convolution).
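The decomposition illustrated in Figure 2 can be paraphrased in a few lines of Python. The function below is a conceptual sketch of a "valid" 3D convolution built from per-slice 2D convolutions; it is not the Intel MKL-DNN implementation, which is heavily vectorized and threaded.

```python
import numpy as np
from scipy.signal import correlate2d

def conv3d_via_2d(volume, kernel):
    """Conceptual 'valid' 3D convolution assembled from 2D slice operations:
    each output depth slice is the sum of 2D convolutions between the
    corresponding input slices and kernel slices (cf. Figure 2)."""
    D, H, W = volume.shape
    kD, kH, kW = kernel.shape
    out = np.zeros((D - kD + 1, H - kH + 1, W - kW + 1))
    for z in range(out.shape[0]):
        for dz in range(kD):  # accumulate contributions across the kernel depth
            out[z] += correlate2d(volume[z + dz], kernel[dz], mode="valid")
    return out

vol = np.random.rand(16, 32, 32)
ker = np.random.rand(3, 3, 3)
print(conv3d_via_2d(vol, ker).shape)  # (14, 30, 30)
```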

 

"We found that the CPU platform, compared to the traditional GPGPU platform, can more effectively support 3D CNNs for medical image analysis due to the CPU's advantage of large memory capacity, while keeping high computing efficiency through delicate algorithm implementations for 3D CNN primitives."3

     – Albert Hu, Senior AI and Big Data Solution Architect, Intel Corporation

 

Training the Model

By training the model with captured data of varying resolutions, the team proved—quantitatively—that a model based on higher-resolution data achieves higher detection performance, increasing the accuracy of identifying smaller nodules. The higher-resolution model, however, consumed far more memory than the lower-resolution models and thus required a platform with more available memory to operate effectively.

The effectiveness of a model's detection capability is evaluated by calculating a FROC (free-response receiver operating characteristic) score. The results, shown in Figure 3, indicate the sensitivity of the model versus the average number of false positives.

The FROC scores obtained with the 3D CNN model compare the results from the different resolutions used during training. Models trained with higher resolutions achieved higher FROC scores, demonstrating quantitatively that higher resolutions can improve the effectiveness of the model's detection capabilities.
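As a rough illustration of how such a score can be computed, the sketch below averages sensitivity over fixed false-positive-per-scan operating points. The operating points and the simple pooling of candidates across scans are assumptions for illustration (a common LUNA16-style convention); the exact Tianchi scoring protocol may differ.

```python
import numpy as np

def froc_score(candidates, num_nodules, num_scans,
               fp_rates=(0.125, 0.25, 0.5, 1, 2, 4, 8)):
    """Average sensitivity at fixed false-positives-per-scan rates.
    `candidates` is a list of (confidence, is_true_nodule) pairs pooled
    over all scans. The operating points are an assumption, not
    necessarily the exact competition protocol."""
    candidates = sorted(candidates, key=lambda c: -c[0])        # highest confidence first
    tps = np.cumsum([hit for _, hit in candidates])             # true positives so far
    fps = np.cumsum([not hit for _, hit in candidates])         # false positives so far
    sensitivities = []
    for rate in fp_rates:
        allowed_fp = rate * num_scans
        idx = np.searchsorted(fps, allowed_fp, side="right") - 1
        sensitivities.append(tps[idx] / num_nodules if idx >= 0 else 0.0)
    return float(np.mean(sensitivities))

# Toy example: 3 scans, 4 annotated nodules, a handful of detections.
dets = [(0.9, True), (0.8, False), (0.7, True), (0.6, True), (0.2, False)]
print(froc_score(dets, num_nodules=4, num_scans=3))
```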

Additional data captured during model training supported the assertion that the AI solution showed improved accuracy at detecting small nodules, a capability of particular benefit to radiologists, who have more difficulty identifying smaller nodules. A graph (Figure 8) in the full project report, Using the CPU for Effective and Efficient Medical Image Analysis,4 presents these results visually, along with much more information about the model design, memory consumption analysis, and processes employed by the team.

FROC scores to resolutions graph
Figure 3. Higher resolution leads to higher FROC scores.

 

The use of Intel MKL-DNN significantly enhanced the efficiency of the 3D primitives used in the CNN model, such as 3D convolution, 3D pooling, 3D cross-entropy loss, and 3D batch normalization, resulting in faster training and inference, as shown in Figure 4. Caffe*, a community-based framework developed by Berkeley AI Research, was optimized to deliver better performance when running on Intel Xeon processors.

 

Enabling Technologies

The Alibaba Cloud infrastructure that served as the foundation for the Tianchi challenge was optimized for running AI workloads, capitalizing on the high performance, efficiency, and scalability of Intel Xeon processors and Intel Xeon Phi processors.

The primary enabling technologies that were used specifically by the Intel development team during this project include:

  • Intel® Xeon® Scalable processor with the Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instruction set
  • Intel® Software Optimization for Deep Learning Technologies: software libraries including the Intel® Math Kernel Library (Intel® MKL), Intel® MKL-DNN, and the Intel® Distribution for Python* mathematical libraries (a quick environment check is sketched below)
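As a quick sanity check (an illustrative step, not part of the competition workflow), the following snippet prints NumPy's build configuration; in an Intel Distribution for Python environment, the BLAS/LAPACK sections typically report an MKL-backed build. The exact output format varies between NumPy versions.

```python
import numpy as np

# Print NumPy's build configuration; an Intel® Distribution for Python*
# environment typically reports MKL-backed BLAS/LAPACK libraries here.
np.show_config()
```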

Comparison graph of optimized and non-optimized training and inference rates
Figure 4. Comparison of optimized and non-optimized training and inference rates.

 

Comparing Processor Architectures for Medical AI Applications

Comparing the behavior of general-purpose computing on graphics processing units (GPGPU) with that of the CPU revealed that the CPU architecture's support for larger memory capacities lets AI developers implement designs based on higher-resolution images, yielding improved detection results over earlier GPGPU-based models.

By analyzing memory consumption during the training of the model, the team compared the image-handling capacities of the CPU and GPGPU platforms. The results indicated that for a batch size of one, a current-generation GPGPU with 12 GB of memory can support 128 × 128 × 128 image resolutions. In comparison, a CPU platform equipped with 384 GB of memory supports image resolutions up to 448 × 448 × 448. For batch sizes of four, the difference is more striking: the GPGPU can only handle resolutions up to 96 × 96 × 96, compared to 256 × 256 × 256 for the CPU.
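The gap follows from simple arithmetic. The sketch below estimates only the raw float32 input volume per channel (an assumption for illustration; real training also stores many intermediate feature maps, gradients, and framework overhead), which is enough to show why memory requirements grow with the cube of the resolution.

```python
# Illustrative arithmetic only: per-channel activation memory for a single
# float32 input volume grows with the cube of the resolution. Channel counts,
# intermediate feature maps, and gradients (ignored here) multiply these
# figures many times over.
BYTES_PER_VOXEL = 4  # float32

for batch, res in [(1, 128), (1, 448), (4, 96), (4, 256)]:
    gib = batch * res**3 * BYTES_PER_VOXEL / 2**30
    print(f"batch {batch}, {res}^3 input: {gib:.3f} GiB per channel")

# Going from 128^3 to 448^3 multiplies per-volume memory by (448/128)^3 ~ 43x,
# which is why the 384 GB CPU platform could train at resolutions that the
# 12 GB GPGPU could not hold.
```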

 

Customized Deep Learning Framework

Extended Caffe, a customized deep learning framework developed specifically for the Tianchi competition, was used at the core of the software stack and demonstrates that the CPU architecture can support efficient 3D CNN computations. This makes it possible for researchers, data scientists, and developers to effectively implement projects that use the CPU for 3D CNN model development.

Although Intel® Optimization for Caffe*, available through the Intel® AI Academy, does not yet support 3D CNNs, it does contain many optimization features tuned for CPU-based models, and Intel has contributed a number of these optimizations to the framework. Intel MKL-DNN accelerates deep learning frameworks on Intel® architecture (see Figure 5), using highly vectorized and threaded building blocks. These building blocks streamline the development of CNNs through C and C++ interfaces.

The Intel® MKL-DNN recognizes three primary object types:

  • Primitive: An operation to be performed, such as convolution.
  • Engine: An execution device, such as a CPU, to which each primitive is mapped.
  • Stream: An execution context to which primitives are submitted for execution.

Deep learning framework (Caffe*, Theano*)
Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN)
Intel® architecture (Intel® Xeon® Scalable processors with Intel® Advanced Vector Extensions 512 (Intel® AVX-512))
Figure 5. Intel® MKL-DNN accelerates deep learning frameworks on Intel® architecture.

 

 

Typically, particularly in CNN projects, a developer creates a set of primitives, directs them to a stream using a specified engine, and then awaits completion of the operations.
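That workflow can be pictured with the toy model below. These classes are deliberately hypothetical; they are not the Intel MKL-DNN API and exist only to mirror the primitive, engine, and stream relationship described above.

```python
from dataclasses import dataclass, field

# Conceptual model only: NOT the Intel® MKL-DNN API. It simply mirrors the
# workflow of creating primitives, submitting them to a stream on an engine,
# and waiting for completion.

@dataclass
class Engine:                 # an execution device, e.g. a CPU
    kind: str = "cpu"

@dataclass
class Primitive:              # an operation to perform, e.g. convolution
    name: str
    engine: Engine

@dataclass
class Stream:                 # an execution context for submitted primitives
    engine: Engine
    submitted: list = field(default_factory=list)

    def submit(self, primitives):
        self.submitted.extend(primitives)

    def wait(self):           # block until all submitted work completes
        done, self.submitted = self.submitted, []
        return [p.name for p in done]

cpu = Engine("cpu")
net = [Primitive("conv3d", cpu), Primitive("relu", cpu), Primitive("pool3d", cpu)]
stream = Stream(cpu)
stream.submit(net)
print(stream.wait())          # ['conv3d', 'relu', 'pool3d']
```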

AI is expanding the boundaries of medicine. Through the design and development of specialized chips, sponsored research, educational outreach, and industry partnerships, Intel is firmly committed to advancing the state of AI to solve difficult challenges in medicine, manufacturing, agriculture, scientific research, and other industry sectors. Intel works closely with government organizations, non-governmental organizations, educational institutions, and corporations to uncover and advance solutions that address major challenges in the sciences. For example, working with the Montefiore Health System, Intel helped deploy a data analytics platform, powered by Intel Xeon processors, that provides near real-time analysis of raw data so that clinicians can implement the most effective treatments for the patients in their care.

"One of the projects that we have been doing is using the hospital clinical data of patients to help us identify patients who are at high risk for developing respiratory failure or of dying of a sudden event in the hospital. This kind of predictive analytics harnesses the data that is already being collected by the hospital, but what we are doing is using this data in a way with technology to identify patients who are at a higher risk of having an event in the hospital so that we can respond to them earlier."5

     – Dr. Michelle NG Gong, Director of Critical Care Research, Montefiore Health System and Albert Einstein College of Medicine

 

The Intel® AI portfolio includes:

  • Intel® Xeon® Scalable processor: Tackle AI challenges with a compute architecture optimized for a broad range of AI workloads, including deep learning.
  • Framework optimization: Achieve faster training of deep neural networks on a robust, scalable infrastructure.
  • Intel® Movidius™ Myriad™ Vision Processing Unit (VPU): Create and deploy on-device neural networks and computer vision applications.

 

For more information, visit this portfolio page: https://ai.intel.com/technology

"Understanding the importance of integrating hardware and software advances to create AI experiences, we have invested in acquiring companies that are innovating around the hardware and the software driving intelligent applications."6

     – Andy Bartley, Health and Sciences Solution Architect, Intel Corporation

 

Resources

Using the CPU for Effective and Efficient Medical Image Analysis

Inside Artificial Intelligence - Next-level computing powered by Intel AI

Intel® Optimization for Caffe*

Intel® Math Kernel Library

IBM*, Intel, Stanford Bet on AI to Speed Up Disease Diagnosis and Drug Discovery

One Simple Truth about Artificial Intelligence in Healthcare - It's Already Here

Getting the Most out of AI with Caffe* Deep Learning Framework

Media Alert: Shaping the Future of Healthcare through Artificial Intelligence (Video Replay)

 

References

  1. Chen W, Zheng R, Zhang S, et al. "Cancer Statistics in China, 2015." CA: A Cancer Journal for Clinicians, 2016.
  2. Ibid.
  3. Hu A, Hui W, et al. "Using the CPU for Effective and Efficient Medical Image Analysis." Intel, 2017.
  4. Ibid.
  5. "Montefiore and Intel Work Together to Personalize Medicine." Intel (video), 2017.
  6. "Leverage AI to Revolutionize and Advance Healthcare." Healthcare IT News, 2017.

 

For more complete information about compiler optimizations, see our Optimization Notice.