The Convergence of Cybersecurity and Artificial Intelligence

Hardware Security for Black Hat Conference
Cybersecurity threats have been with us since the dawn of the Internet. According to a 2015 article in The Washington Post, Robert Metcalfe, who would later found 3Com, warned the ARPANET Working Group in 1973 that it was far too easy to gain access to the network—he described intrusions that were apparently the work of high school students. By 1996, new animation tools such as Flash revolutionized Web sites, but hackers found the tools also let them take remote control of computers on the Internet.
Now we face the opportunities and challenges presented by the widespread emergence of artificial intelligence (AI) systems and algorithms. According to a July 26, 2018 Dark Reading article, “AI is revolutionizing cybersecurity for both defenders and attackers as hackers, armed with the same weaponized technology, create a seemingly never-ending arms race.”
Protecting data is critical to all of us who connect to the digital world, and Intel leads that effort across the developer ecosystem with innovative technologies and solutions based on a silicon root of trust.

Everything Changes & Everything Stays the Same

Intel research and technology enable advanced, next-generation usages with high-performance AI while applying best practices in security and privacy to ensure trustworthiness.
There is a lot of talk about Zero Trust Networks as enterprise IT evolves to a cloud-first model. While some of these concepts are decades old, we have learned a lot over the years, and we are now building an ecosystem that allows newer models to be deployed.
Big changes are unlikely in the operational sense, as enterprise IT changes incrementally. But attackers will stay ahead of us unless we step up and protect against future attacks with the strongest arsenal we have.

AI for Client Platform Security

IT departments are inundated with data, and they face huge challenges in keeping their businesses running while maximizing user experience and minimizing costs.  By leveraging AI models, IT departments can intelligently recognize anomalies and proactively respond to them.  Without such capabilities, tens or even hundreds of hours of valuable productivity would be lost.
Traditional models supporting device health dictated that data be extracted from client devices, shipped to and analyzed in the cloud, and then reacted to through traditional patching tools. This approach poses several challenges. First, it is privacy-challenged, as potentially sensitive end-user data is shipped to the cloud. Second, it creates a large attack surface for malicious actors. Third, the volume of data is so large that only a fraction can be forwarded without significantly degrading device usability. Fourth, these efforts are labor-intensive, as experienced engineers must pore over troves of data to find the signal that lets them identify and address the root cause of a problem. Last, remediation in this paradigm takes days, if not weeks.
By enabling intelligence on edge devices, Intel allows problems to be handled privately and securely on the device itself, with little to no impact on user productivity.
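As a minimal illustration of the idea (a toy sketch, not Intel's actual tooling), on-device anomaly detection can be as simple as comparing each new telemetry sample against a rolling baseline, so raw data never leaves the device:

```python
import statistics

def detect_anomalies(samples, window=20, threshold=3.0):
    """Flag telemetry samples that deviate sharply from the recent baseline.

    A simple rolling z-score: each sample is compared against the mean and
    standard deviation of the preceding `window` samples. Everything runs
    locally, so raw telemetry never has to be shipped to the cloud.
    """
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev > 0 and abs(samples[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical, steady CPU-utilization telemetry with one injected spike.
telemetry = [10.0 + 0.1 * (i % 5) for i in range(40)]
telemetry[30] = 95.0
print(detect_anomalies(telemetry))  # → [30]
```

Production systems would use learned models rather than a fixed threshold, but the privacy property is the same: analysis happens where the data is generated.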
As compute becomes more ubiquitous and diverse, cyber threats proliferate and grow increasingly sophisticated. The security industry is increasingly relying on AI-based solutions to solve challenging security problems against which traditional signature- or rule-based approaches are ineffective.
Besides AI algorithms, AI solutions also depend on the quality of input data and the availability of compute power. In addition to offering hardware-based accelerators that help AI run more efficiently, Intel is working with security industry leaders to innovate AI-based solutions that use hardware telemetry data to detect advanced threats that are undetectable by software telemetry alone. Intel® Accelerated Memory Scanning (AMS) technology is a notable example.

Adversarial Machine Learning

AI has demonstrated its ability to enhance security, along with many other applications. But the value of AI in enhancing security is juxtaposed with the danger that AI systems themselves can be exploited. That is the main focus of an emerging research area called adversarial machine learning. Carefully crafted adversarial examples, inputs with minor perturbations, can cause a classification system to misclassify. Often, such misclassification comes with high prediction confidence from the AI algorithm. These findings raise a worrisome question: how much should people embed AI in their everyday lives?
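The mechanics of such perturbations can be sketched with a toy example (hypothetical random weights, illustration only). For a linear classifier, the gradient of the score margin with respect to the input is just the difference of two weight rows, so a small, bounded nudge to every feature in the sign of that gradient can flip the prediction, in the spirit of fast-gradient-sign attacks:

```python
import numpy as np

# Toy linear classifier with hypothetical random weights, for illustration only.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 10))            # 2 classes, 10 input features
x = rng.normal(size=10)                 # a benign input vector

def predict(v):
    scores = W @ v
    return int(scores.argmax()), scores

orig_class, scores = predict(x)
other = 1 - orig_class

# For a linear model, the gradient of the margin (other-class score minus
# predicted-class score) with respect to the input is a difference of rows.
grad = W[other] - W[orig_class]

# Pick a perturbation budget just large enough to flip the decision:
# stepping by epsilon * sign(grad) raises the margin by epsilon * ||grad||_1.
margin = scores[orig_class] - scores[other]
epsilon = 1.1 * margin / np.abs(grad).sum()

x_adv = x + epsilon * np.sign(grad)     # small, bounded change to each feature
adv_class, _ = predict(x_adv)           # now differs from orig_class
```

Real attacks target deep networks and use backpropagated gradients, but the principle is the same: a perturbation imperceptibly small per feature can still move an input across a decision boundary.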
Understanding the vulnerabilities and resiliency of machine learning—particularly deep learning algorithms—helps identify the AI attack surfaces and promotes new waves of analytically secured AI frameworks. Intel is a pioneer in recognizing the importance of adversarial machine learning. In 2016, Intel launched a three-year collaboration project with a $1.5 million grant through the Intel Science and Technology Center (ISTC) to the Georgia Institute of Technology, focusing on adversarial-resilient security analytics (ARSA).
ISTC-ARSA has expanded adversarial machine learning research into domains such as malware analysis, computer vision, and audio recognition. The cutting-edge technologies and methodologies developed through the Intel and Georgia Tech collaboration have resulted in multiple publications at top-tier security and machine learning conferences and summits.
Besides funding fundamental security research, Intel is working with leading security vendors to protect the integrity of their AI models through hardware-enhanced security capabilities.  These include Intel® Software Guard Extensions (Intel® SGX) in our latest processors, and high-quality hardware telemetry data for improving the robustness of AI classifiers. These and other Intel hardware-enhanced security efforts will help to make AI more resilient and make it more difficult for adversarial attacks to succeed.

Lessons Learned

The attack surfaces identified on AI algorithms indicate a need for better-designed AI frameworks and continuing hardware innovation to accelerate them. The industry needs more researchers and engineers with in-depth knowledge of both AI and security to provide the next generation of innovation. Such expertise is already in great demand.
Intel acknowledges its responsibility to build up this talent pool. Through its university collaboration investments, Intel supports students continuing research in this domain and promotes the adoption of academically developed frameworks as training tools.
