Visionary AI Explorations


“[Scientists’] gathering of data far outpaces their ability to make sense of it. The data NASA collects far exceeds its ability to understand. The research world usually has less access to the latest and greatest compute tools than a lot of the companies out there. But as a scientist, I fundamentally believe that we need to make sure we support those efforts.”1

— Naveen Rao, general manager, Artificial Intelligence Products Group, Intel

Research insights gained through artificial intelligence (AI) techniques deepen our understanding of the world around us and deliver discoveries about off-world environments. For example, NASA’s Frontier Development Lab (FDL)—hosted by the SETI Institute in partnership with the NASA Ames Research Center—provides a platform for applying AI solutions to the challenges of space exploration. A recent project sponsored by Intel focused on using AI to identify useful resources on the moon. Ongoing research through FDL is revealing new ways in which AI can be used in space exploration, as well as charting paths for future scientific research across diverse fields of inquiry.


Applications grounded in AI technologies continue to gain traction and demonstrate efficacy in science, medicine, finance, agriculture, and other sectors. At the same time, prospective early adopters seek tangible examples to guide project development and serve as proofs of concept for AI techniques.  


Practical examples of the ways in which AI can address real-world challenges are appearing with increasing frequency. This, in turn, is encouraging wider acceptance of AI technology, with successful projects ranging from space exploration breakthroughs for NASA to 3D-printed orthopedic braces that add intelligence and personalization to medical devices. These achievements are helping demonstrate applied innovation techniques and establishing a foundation for new use cases.  

Discoveries Based on Landmark AI Projects

Pioneering projects in AI are reshaping the nature of scientific inquiry and providing a richer, full-spectrum view of our surroundings, our bodies, and our human potential. Innovation is fueled by new technologies, with intelligent agents springing up everywhere from the core of data centers to the furthest reaches of the network edge. Advances driving these capabilities include improvements in processor performance, specialized integrated circuits (ICs) optimized for AI operations, advances in computer vision, and software enhancements tailored to deep learning and machine learning. Innovators are imaginatively applying AI tools and techniques to further our knowledge of the natural world, extend research into space, and solve problems that benefit individuals at a one-to-one level.

To illustrate the ways in which AI is being practically applied, consider the work of Shashi Jain, Innovation Manager in the Developer Relations Division of the Intel Visual Computing Group, who has led several projects in applied innovation, combining diverse technologies into fresh solutions that encompass pathfinding in the Internet of Things (IoT), machine learning, virtual reality (VR), and 3D-printing technology.

“We do experiments to find out what problems we can solve with our technology,” Shashi said. “As an example, a few years ago we developed an industrial wearable device built around the Intel® Edison module to help reduce back injuries in the workplace.”

“We also put microcontrollers on wine bottles,” Shashi continued, “and did some really interesting things identifying the right wine pairing for a meal. Or, finding all the bottles in your collection that meet certain criteria. Using this technique, you could track wine from the moment of bottling at the winery through distribution to a retailer to an individual wine cellar.”

“Another thing that we did,” Shashi said, “was to put a microcontroller with a sensor on a scoliosis brace. We captured pressure data to determine how well it was fitting and how long it was worn and presented this data to the user in an app, hoping it would improve compliance. We achieved that, but the real magic is what the designer of the brace did with the sensor data. The brace is fully 3D printed and it starts with a body scan. The designer used the sensor to incrementally improve the design of the brace, based on the individual patient’s own sensor data.”

“We are looking for all of these interesting use cases and fits for our technology and generating the insights that we can’t get any other way,” Shashi commented.

The following sections offer more insights into applied innovation techniques.  

Identifying Moon Resources

The NASA FDL is ushering in a new age of discovery by hosting collaborative interdisciplinary projects that address specific scientific challenges using AI-based research. Intel, along with other key private-sector partners, contributes hardware and technology to advance this endeavor.

Shashi recently led Intel’s sponsorship of the FDL and brought together Intel engineers to collaborate with researchers, applying AI to identify potential resources on the moon. The research relied on a massive dataset of images from the lunar polar regions and AI-guided crater detection. “FDL is part of a public-private partnership between the government, the SETI Institute, and industry partners to apply commercial AI to space exploration challenges,” Shashi explained. “The program focuses on accelerating the pace of modern machine-learning and AI techniques at NASA. NASA has some 50 years’ worth of data on lunar missions and space missions out to the planets—and everything in between—and now we have a chance to do space exploration using that data without leaving Earth.”

Shashi continued, “We bring together teams of experts in their areas: machine learning, generally for a post-doctorate program, a doctorate program, or anything in-between. They can either be university researchers, industry researchers, or people who are published in the background. We bring them together for eight weeks to focus on the challenges of space exploration that are relevant to NASA or to the commercial space industry and spend a good amount of time defining the problems in advance.”

“Beyond Space, AI is proving a vital tool in identifying gene activations, diagnosing tumors, managing power and heat, developing new molecules and even teaching robots to walk in a constantly moving environment.”2

— James Parr, Director, NASA Frontier Development Lab

Lunar Water and Volatiles Project

In 2017, Intel sponsored the Lunar Water and Volatiles challenge, assembling a team to focus on recovering water and volatile chemicals from the moon. As this is an applied research accelerator, Intel guided the team to identify and define challenges relevant to actual users, who turned out to be engineers at the NASA Jet Propulsion Laboratory (JPL) and NASA’s Ames Research Center, as well as companies focused on lunar missions. What’s remarkable is that missions to recover moon and planetary resources are being planned and may be launched within five years.

“There are 10,000 decisions that need to be made for any of these missions to happen,” Shashi said. “And we are right at the front of that process. So, the engineers we talked with articulated that their missions included topographical maps of the moon. We said, ‘OK, maybe we can help you with that.’ The objective was to identify craters using the imagery these agencies had obtained from the Lunar Reconnaissance Orbiter and LCROSS missions. If you can identify craters, you can identify orientation and shadowing and create a better topographical map of the moon by ordering and combining images. The output is very precise. Right now, there are only a few areas around the equator that are very well mapped for upcoming lunar missions. The machine-learning operations will open up the other regions of the moon for a deeper analysis, including the permanently shadowed regions, which are at the poles. This is where NASA believes most of the water is.”

Accelerated Identification with Machine Learning

The team developed a methodology that combines the optical imagery with an overlay of depth-sensor imagery. “We can’t fully analyze a flat image to identify a potential water source,” Shashi explained. “However, once we overlay the depth sensor data with the optical imagery and run the computer vision algorithms, we can get a positive identification of those craters likely to have water. Using this approach, we can look all over the moon for the right kind of craters even if they are in shadowed regions.”

As shown in Figure 1, two different datasets were combined, relying on the craters themselves to register the optical images from the Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) with elevation data captured using the Lunar Orbiter Laser Altimeter Digital Elevation Model (LOLA DEM). The computer vision algorithm developed by the team relied on a convolutional neural network (CNN) to analyze the optical images and elevation data using an adaptive convolution filter.
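The registration-and-classification approach described above can be illustrated with a minimal sketch. The team’s actual network architecture and filter weights are not public, so everything below is an assumption for illustration: it pairs a registered optical tile with its matching elevation tile and applies one edge-sensitive filter per channel, the basic operation a CNN layer repeats at scale.

```python
import numpy as np

def convolve2d(channel, kernel):
    """Naive 'valid' 2-D cross-correlation of one channel with one kernel."""
    kh, kw = kernel.shape
    h, w = channel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(channel[i:i + kh, j:j + kw] * kernel)
    return out

def crater_feature_map(optical, elevation, kernels):
    """Filter a registered optical tile and its elevation tile with
    per-channel kernels and sum the results -- the core operation of
    one multi-channel CNN layer."""
    channels = [optical, elevation]
    return sum(convolve2d(ch, k) for ch, k in zip(channels, kernels))

# Toy 6x6 tiles: a dark, depressed "crater" in the center of each.
optical = np.ones((6, 6))
optical[2:4, 2:4] = 0.2          # shadowed bowl in the optical image
elevation = np.zeros((6, 6))
elevation[2:4, 2:4] = -1.0       # depression in the elevation data

# A Laplacian-style kernel per channel responds to the crater rim.
edge = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
fmap = crater_feature_map(optical, elevation, [edge, edge])
print(fmap.shape)  # (4, 4)
```

In a real CNN the kernels are learned from labeled crater examples rather than hand-chosen, and many such layers are stacked with nonlinearities between them.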

Lunar Reconnaissance Orbiter
Figure 1. Elevation measures (left) were aligned with optical images (right) to create precise lunar maps.

Using technology provided by Intel, the team sped up the crater-identification process, requiring only 1 minute to classify 1,000 images. Results from this project showed that AI-based crater detection identified craters roughly 100 times faster than human analysis, with a success rate of 98.4 percent.3 The Intel research team plans to improve the algorithms so that NASA can use the technology in a potential future moon mission to harvest resources. Complete validation of the machine-learning techniques could follow when crewed missions to the moon resume. At that time, maps could be adjusted and refined, and resource accessibility could be reassessed.

“We have 50 years’ worth of NASA imagery from all sides of the moon. We’ve only recently begun to combine them and make one big, awesome map.”4

— Shashi Jain, innovation manager, Intel Visual Computing Group

NASA lunar vehicle
Figure 2. Eugene Cernan at the controls of the Apollo 17 lunar rover (courtesy of NASA).  

Converging Technologies Lead to a Personalized 3D-Printed Back Brace

Another area rich with possibilities is using data and generative design processes to inform the customization of personalized medical devices. Intel contributed to a project that used data captured by a microcontroller to refine a 3D-printed orthopedic brace for people with scoliosis. Scoliosis—an abnormal curvature of the spine—affects millions of people and is common in children. Past-generation scoliosis braces have typically been heavy, uncomfortable, and burdensome, often causing wearers to remove them to gain some relief.

In comparison, a 3D-printed brace, designed by Studio Bitonti and commercialized by UNYQ, incorporates built-in sensors to monitor the wearer’s personal data. The sensor data contributes to the generative design process by using AI techniques to introduce incremental improvements, an area in which Intel provided expertise and design assistance. As a result, the customized, lightweight, comfortable braces can be worn for many hours during the day and are stylish enough to be worn as a fashion item outside of clothing.

The design prototype incorporated a compact Intel microcontroller that included an accelerometer and gyroscope, pressure sensors, and Bluetooth® technology capabilities. An app developed by Intel monitored and logged activity, pressure points, temperature, and other data. The designer, Francis Bitonti, recognized the potential in linking data to design and optimized the design to remove plastic that wasn’t therapeutic. Using a generative design technique, through multiple iterations, he enhanced the structure and design of the brace (Figure 3). Feeding data into a machine-learning or deep-learning system provides a mechanism for shaping a design for aesthetics, functionality, and materiality.
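The data-driven refinement loop described here can be sketched as a toy example. Nothing below reflects the actual UNYQ or Studio Bitonti pipeline; the grid, pressure log, and threshold are all invented to show the idea: keep material where logged sensor pressure indicates therapeutic load, and remove it where it does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a 10x10 grid of brace material plus one week of hourly
# pressure readings for every cell (all values invented).
material = np.ones((10, 10), dtype=bool)
pressure = rng.random((168, 10, 10))
pressure[:, :3, :] *= 0.05     # the top rows carry almost no load

def refine(material, pressure, threshold=0.1):
    """One generative-design step: drop cells whose mean logged pressure
    is below `threshold` (material doing no therapeutic work) while
    keeping every load-bearing cell."""
    mean_load = pressure.mean(axis=0)
    return material & (mean_load >= threshold)

lighter = refine(material, pressure)
print(material.sum(), "->", lighter.sum())  # prints "100 -> 70"
```

A production generative-design system would also re-check structural integrity after each removal step; this sketch omits that for brevity.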

UNYQ Align design
Figure 3. The UNYQ Align* brace detects stress and weight points during a generative design process.

The capabilities of generative design extend to fashion as well, and collaborations with Bitonti Technology, Chromat*, and Intel have produced landmark designs such as the Adrenaline Dress (see Figure 4), which employs fabrics that respond to changes in the wearer’s breathing, temperature, and heart rate, expanding and contracting dynamically. As Bitonti states on their website, “Our design process is a collaboration with artificial intelligence.”

In a presentation given at a TCT Inside 3D Printing conference, Shashi talked about 3D printing, smart devices, and new manufacturing methods: “Where I think it gets much more interesting is when we start taking third-party datasets: electronic medical records, sports data, FitBit* data. Every one of us has a phone, every one of us has step counting and tracking sensors. We need to take this data and funnel it into these systems to generate these same design hints. We need to apply those third-party datasets to find optimizations that make a medical product fit a user’s lifestyle better.”5

Intel’s Adrenaline dress
Figure 4. Intel’s Adrenaline dress uses smart fabric to map to the wearer’s emotions.

“We believe the next generation of material innovation will be both digital and physical. In other words, designers can work with a synthesis of information and design parameters and turn it into design.”6

— Francis Bitonti, Studio Bitonti  

Conducting Scientific Research with Drones and AI

Tracking whales and identifying individuals over hundreds of miles of ocean is a significant challenge that is made easier through a combination of Intel machine-learning technology and unmanned aerial drones. A collaborative effort (dubbed Project Parley SnotBot) involving Parley for the Oceans, the Ocean Alliance, and Intel used drones to harvest whale spout water, emitted through the whale’s blowhole, and evaluate the biological data contained within it. Machine-learning algorithms devised by Intel can identify individual whales and perform real-time assessment of a number of factors, including the overall health of the whale. Despite limited visibility in the ocean and the unpredictable movements of the whales, drone tracking and data analysis give researchers a means to make decisions in the field and rapidly gain access to indicators such as DNA readings, the presence of harmful viruses and bacteria, exposure to toxins, hormones associated with stress or pregnancy, and other conditions.
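Intel has not published the details of its whale-identification algorithms, but individual-animal re-identification is commonly framed as nearest-neighbor matching of feature embeddings (for example, descriptors of fluke markings). The sketch below assumes such embeddings already exist; the catalog names, vectors, and similarity threshold are illustrative only.

```python
import numpy as np

def identify(sighting, catalog, min_similarity=0.9):
    """Match one sighting embedding against a catalog of known whales by
    cosine similarity; return the best match, or None if nothing in the
    catalog is similar enough."""
    best_name, best_sim = None, min_similarity
    s = sighting / np.linalg.norm(sighting)
    for name, embedding in catalog.items():
        sim = float(s @ (embedding / np.linalg.norm(embedding)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

# Illustrative 4-D "fluke descriptor" catalog (invented values).
catalog = {
    "whale_A": np.array([0.9, 0.1, 0.0, 0.4]),
    "whale_B": np.array([0.1, 0.8, 0.5, 0.0]),
}
new_sighting = np.array([0.85, 0.15, 0.05, 0.42])  # noisy view of whale_A
print(identify(new_sighting, catalog))  # prints "whale_A"
```

A field system would add many more dimensions and a confidence report, but the matching step reduces to this comparison.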

The founder of Parley for the Oceans, Cyrill Gutsch, commented, “Our vision is to create a global network of digital exploration tools that generate the big data we need to identify threats with new speed and precision, so we can act on them instantly.”7

Novel forms of data collection are one of the hallmarks of AI-based solutions. Drones can collect data in difficult environments under challenging conditions. As in the previous example of tracking whales, AI techniques can be employed to use thermal image data collected by drones to automate the identification of individual polar bears in different environments.

The polar bear is one of the most elusive, wide-ranging animals on the planet. Polar bears are especially difficult to observe and track because their white fur provides little contrast against the snowpack. With their habitat threatened by the impact of global climate disruption, polar bears are struggling to adapt and survive. As part of a research project to learn more about polar bear migration and behavior in the Arctic, Intel teamed with Parley for the Oceans and noted wildlife photographer Ole Jørgen Liodden. Using an Intel® Falcon™ 8+ drone equipped with a thermal camera, the team was able to get close to the bears (within 50 to 100 meters) without disturbing them and collect data to better understand the bears’ habits and health status. The data helps wildlife researchers and climate scientists determine the effects of changing weather patterns on the animals living in this region, as well as the broader environmental impacts.
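Although the project’s actual detection pipeline is not described here, the physics in this paragraph suggests a simple baseline: in thermal imagery a warm body stands out sharply against cold snow even when white fur offers no visual contrast. The sketch below implements only that baseline thresholding idea, with made-up temperatures and an assumed threshold.

```python
import numpy as np

def warm_regions(frame_c, threshold_c=-5.0):
    """Flag pixels warmer than `threshold_c` (degrees C) and return the
    bounding box (row_min, col_min, row_max, col_max) of the warm area,
    or None if nothing stands out against the cold background."""
    warm = frame_c > threshold_c
    if not warm.any():
        return None
    rows, cols = np.nonzero(warm)
    return (int(rows.min()), int(cols.min()),
            int(rows.max()), int(cols.max()))

# Synthetic 8x8 thermal frame: -25 C snowpack with one bear-warm patch.
frame = np.full((8, 8), -25.0)
frame[3:5, 4:7] = 8.0          # body heat leaking through the fur
print(warm_regions(frame))     # prints "(3, 4, 4, 6)"
```

A fielded detector would follow this step with a learned classifier to reject other warm objects, but thresholding is what makes the bears visible in the first place.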

Data tracking whales
Figure 5. Data captured tracking whales can help ensure their survival.

Traditional methods of observing polar bears include helicopter exploration, which is invasive and dangerous, and observation from a vessel, which is typically difficult because of the harsh Arctic conditions. These methods could give way to aerial drones equipped with cameras (Figure 7). Research projects such as this one provide opportunities to take advantage of both unmanned aerial drones and AI-based data collection.

drone camera sleeping bear
Figure 6. Sleeping bear observed by the drone camera.


aerial drones unlock
Figure 7. Unmanned aerial drones unlock opportunities for new and exciting scientific research.

“Polar bears are a symbol of the Arctic. They are strong, intelligent animals. If they become extinct, there will be challenges with our entire ecosystem. Drone technology can hopefully help us get ahead of these challenges to better understand our world and preserve the Earth’s environment.”8

— Ole J. Liodden  

Enabling Technologies from Intel

Hardware compute resources

Intel® AI DevCloud, powered by Intel® Xeon® Scalable processors, provides an ideal platform for machine-learning and deep-learning training and inference computing. Developers in the Intel® AI Developer Program value the easy access and pre-configured environment of the Intel AI DevCloud. Portions of the projects discussed in this success story were hosted at various stages on the Intel® Deep Learning Cloud (Intel® DL Cloud) & System, tailored for enterprise developers.

Optimized frameworks

The Intel® Optimization for Caffe* framework, available through the Intel® AI Developer Program, contains many optimization features tuned for CPU-based models. Intel’s contributions to Caffe*, a community-based framework developed by Berkeley AI Research, improved performance when running algorithms on Intel® Xeon® processors.

Additionally, a customized deep-learning framework, Extended-Caffe*, provided an addition to the software stack so that CPU architectures can efficiently support 3D CNN computations. This makes it possible for researchers, data scientists, and developers to effectively implement projects using the CPU for 3D CNN model development, similar to the CNN techniques that proved successful for the Intel team working on the NASA FDL project.
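At its core, the 3D CNN computation that such framework extensions accelerate is a convolution over a volume (for example, a stack of image slices) instead of a plane. A naive reference version, useful only for understanding the arithmetic that optimized CPU kernels vectorize and block, might look like this:

```python
import numpy as np

def conv3d(volume, kernel):
    """Naive 'valid' 3-D cross-correlation: slide the kernel through
    depth, height, and width. Optimized libraries perform exactly this
    arithmetic with blocking and vectorization."""
    kd, kh, kw = kernel.shape
    d, h, w = volume.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(
                    volume[z:z + kd, y:y + kh, x:x + kw] * kernel)
    return out

volume = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
kernel = np.ones((2, 2, 2)) / 8.0      # 3-D box (averaging) filter
out = conv3d(volume, kernel)
print(out.shape)   # (3, 3, 3)
```

The triple loop makes clear why CPU-specific optimization matters: every output voxel touches a full kernel-sized neighborhood of the input volume.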

“[People] think we are recreating a brain. But we want to go beyond that, we want to create a new kind of AI that can understand the statistics of data used in business, in medicine, in all sorts of areas, and that data is very different in nature than the actual world.”9

— Amir Khosrowshahi, chief technology officer, Artificial Intelligence Products Group, Intel Corporation  

AI is Expanding the Boundaries of Scientific Exploration

Through the design and development of specialized chips, sponsored research, educational outreach, and industry partnerships, Intel is firmly committed to advancing the state of artificial intelligence (AI) to solve difficult challenges in medicine, manufacturing, agriculture, scientific research, and other industry sectors. Intel works closely with government organizations, non-government organizations, educational institutions, and corporations to discover and advance solutions that address major challenges across diverse sectors.

To bring a new generation of AI-savvy developers into the fold, Intel sponsors challenges and events designed to encourage imaginative solutions to difficult problems. For example, the Intel® AI Interplanetary Challenge, launched on May 21, 2018, brings together the Planetary Society and Intel AI experts with others interested in crafting solutions to real-world space exploration challenges.

“Intel’s AI portfolio of products, tools, and optimized frameworks is uniquely designed to enable researchers and data scientists to use AI to solve some of the world’s biggest challenges, and it’s ideal for a problem such as accelerating space travel. From the moment we heard about this challenge, we were committed to applying our expertise and technology solutions to the groundbreaking work being done on applications of AI for space research. Congratulations to the research teams, and to the Intel mentors, who are advancing technology that could take us to Mars and beyond.”10

— Naveen Rao, corporate vice president and general manager, Artificial Intelligence Products Group, Intel

The Intel® AI technologies used in this implementation included:

Intel® Xeon® Scalable processors: Tackle AI challenges with a compute architecture optimized for a broad range of AI workloads, including deep learning.

Framework optimization: Achieves faster training of deep neural networks on a robust, scalable infrastructure.

Intel® AI DevCloud: Offers a free cloud compute platform for machine-learning and deep-learning training and inference.


“Scientists need to partner with AI. They can greatly benefit from mastering the tools of AI, such as deep learning and others, in order to explore phenomena that are less defined, or when they need faster performance by orders of magnitude to address a large space. Scientists can partner with machine learning to explore and investigate which new possibilities have the best likelihood of breakthroughs and new solutions.”11

— Gadi Singer, vice president and architecture general manager, Artificial Intelligence Products Group, Intel



  1. Heater, Brian. “NASA is using Intel’s deep learning to build better moon maps.” Techcrunch 2017.
  2. Dietmar Backes, Bohacek, E., Dobrovolskis, A., Seabrook, T. Automated Crater Detection Using Deep Learning. NASA FDL 2017.
  3. Gilbert, Elissa. “Using AI to Discover the Moon’s Hidden Treasures.” iq@Intel 2018.
  4. Jain, Shashi. “Robotic Design: How to Achieve Customisation at Scale.” YouTube 2017.
  5. Bonime, Western. “Get Personal, The Future of Artificial Intelligence Design.” Forbes 2017.
  6. “From Polar Bears to Whales: Intel Pushes the Boundaries of Wildlife Research with Drone and Artificial Intelligence.” Intel Newsroom 2018.
  7. Miller Landau, Deb. “Researchers Deploy Test Drones to Track Arctic Polar Bears.” IQ by Intel October 2018.
  8. “The Many Ways to Define Artificial Intelligence.” Intel Newsroom 2018.
  9. “Intel Showcases Application of AI for Space Research at NASA FDL Event.”
  10. “How is Artificial Intelligence Changing Science?” Intel Newsroom 2018.
For more complete information about compiler optimizations, see our Optimization Notice.