Intel® Software Innovator David Ojika: Bringing AI to More People

In just a few short years, David Ojika has earned a remarkable share of recognition from industry leaders in artificial intelligence. A recent Ph.D. graduate in Computer Engineering from the University of Florida, he is a recipient of the Intel Code Modernization Fellowship award and a veteran of several internships at Intel, where he researched near-memory accelerators and heterogeneous computing platforms. As an intern at Microsoft Research, he worked on field-programmable gate arrays (FPGAs) alongside the founding team of Project Catapult (Brainwave). Most recently, he was a senior consultant for Dell EMC’s advanced server infrastructure and AI group.

Ojika’s awards are too numerous to mention, and his wide-ranging research and development interests center on systems research: deep learning platforms and architectures for large-scale, distributed data analytics, and making AI more accessible.

As a freshly minted Ph.D., what would you like to do?
I would like to establish strong partnerships between industry and academia, particularly in applied AI. My aspiration is for AI to become more democratized across the science disciplines and more accessible to faculty and students from varied academic backgrounds, not just engineering and computer science.

What are you doing as a research associate at University of Florida?
I direct a team of Ph.D. and graduate students on the research and performance benchmarking of end-to-end deep learning algorithms and applications on heterogeneous platforms (CPU+GPU+FPGA). Traditional CPU-based sequential processing no longer meets the requirements of compute-intensive, low-latency AI workloads. Our research aims to understand the compute, I/O, and memory bottlenecks of these workloads across a set of disparate hardware architectures. I helped formulate an industry-university collaboration involving members from Intel, Dell, Berkeley National Lab, CERN OpenLab, and the NSF Center for Space, High-Performance, and Resilient Computing (SHREC). This partnership has resulted in published papers, as well as internship placements for minority students.

What are your other interests right now?
I’m currently focusing on systems and infrastructure for distributed AI (DAI). I think DAI is a challenging, but really interesting, topic because it uniquely places AI in more practical contexts with respect to systems and applications. I am most passionate about seeing AI run fast and efficiently on purpose-built AI hardware such as FPGAs, while making AI accessible at scale to non-AI developers. Another area of interest is edge computing. Last year, one of my projects involved an edge-to-cloud architecture for accelerated convolutional neural networks (CNNs) at the edge, which allowed real-time stream processing directly at the data source and mitigated the processing latency of a cloud-only approach. I believe that in years to come, as edge computing and IoT become mainstream, more organizations will embrace this sort of edge-to-cloud, DAI computing style.
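To illustrate the edge-to-cloud pattern described above, here is a short, hypothetical sketch: a CNN accelerated at the data source scores each frame, and only frames that warrant heavier, cloud-side processing are forwarded. The function names (edge_infer, send_to_cloud) and the threshold are placeholders for illustration, not APIs from the project.

```python
# Hypothetical sketch of the edge-to-cloud pattern: low-latency inference at the
# edge filters the stream so only selected frames make the trip to the cloud.
import numpy as np

def edge_infer(frame: np.ndarray) -> float:
    """Stand-in for CNN inference on an edge accelerator (e.g., FPGA or VPU)."""
    return float(frame.mean()) / 255.0  # placeholder confidence score

def send_to_cloud(frame: np.ndarray) -> None:
    """Stand-in for uploading a frame for deeper, cloud-side analysis."""
    print("forwarding frame of shape", frame.shape)

CONFIDENCE_THRESHOLD = 0.5  # assumed cutoff for frames worth forwarding

def process_stream(frames) -> None:
    for frame in frames:
        score = edge_infer(frame)           # inference directly at the data source
        if score >= CONFIDENCE_THRESHOLD:   # avoid a cloud round trip per frame
            send_to_cloud(frame)

# Example: a short synthetic "stream" of frames
process_stream(np.random.randint(0, 256, size=(5, 224, 224, 3), dtype=np.uint8))
```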

Tell us about your involvement with CERN.
During my time as a doctoral student, I had the opportunity to serve as an affiliate at CERN, the European Organization for Nuclear Research, from 2014 to 2018. As deep learning gained prominence in data analysis, in the fall of 2017 I received a joint UF-CERN sponsorship to spend a month in Switzerland at CERN, working with physicists and other scientists to investigate ways of applying deep learning algorithms to the data analysis pipeline of the Compact Muon Solenoid (CMS) experiment. Through this effort, I was awarded a one-year fellowship by Intel to support my Ph.D. studies and develop deep learning applications optimized for high-performance processors, specifically the Intel® Xeon Phi™ processors code-named Knights Landing. This work led to a paper published at PEARC (formerly the Extreme Science and Engineering Discovery Environment, or XSEDE, conference), research grants from Amazon AWS and Microsoft Azure, and acceptance to the 2018 KNL Hackathon at Brookhaven National Laboratory.

You’re also a mentor as well as a researcher. Tell us more about that.
I have a passion for mentoring. In the past few years, I have been fortunate to mentor over 20 students on a one-on-one basis, many of them from very diverse backgrounds in terms of ethnicity, gender, and age. Through close mentorship and technical advising on class projects, several of these students and I have co-authored publications and participated in student competitions. One of these was the highly popular ImageNet* image recognition competition: my students, competing under the name GatorVision, were invited to the ImageNet 2015 challenge in Chile after finishing in the top 10 of the Places2 subcategory. Prior to this, I had the great opportunity of serving as the University of Florida’s Electrical and Computer Engineering (ECE) Ambassador, an Intel® Student Ambassador for AI, and an XSEDE Student Champion. In these roles, I was responsible for being the voice of the student community in various academic and professional activities, as well as an advocate for high-performance computing (HPC) and AI-related activities.

One of my personal goals is to present STEM-focused opportunities to those who are underrepresented in the academy. I was super excited to learn at the recently concluded Supercomputing conference that my mentees had won the first-ever Dell EMC AI challenge after initially emerging as finalists. In December, I will lead a team of three graduate students to Berkeley National Lab as part of the Sustainable Research Pathways (SHI) scholarship that was recently awarded to our team at the University of Florida. This scholarship will provide further opportunities to collaborate with Berkeley’s Computing Sciences group, as well as a 10-week summer internship at Berkeley Lab for these graduate students.

How have you benefited from your relationship with Intel?
My Intel internship exposed me to a broad range of hardware and software systems, and that exposure enabled me to advance my Ph.D. studies. It prompted me to continue collaborating as an Intel® Student Ambassador, helping build an AI community at the University of Florida. I’ve also been able to share my work through articles and Intel-sponsored events. Since completing my Ph.D. work, I’ve moved from the Ambassador program to the Intel® Software Innovator program.

At Intel® AI DevCon (Intel® AIDC) in May, I gave a presentation on Fast Convolutional Neural Network (CNN) Inference in Resource-Constrained Environments. I presented the comparative performance of image classification algorithms using Intel FPGAs to accelerate existing and emerging CNN architectures.

At Spark Summit 2017, I gave a talk, Speeding Up Spark* with Data Compression on Intel® Xeon® Processors and FPGAs. I explained how overall processing time improves by orders of magnitude, especially for large-scale systems, when FPGAs take on the compute-heavy compression tasks and free up the CPU for other work.
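As a rough illustration of the offload idea (not code from the talk), the sketch below compresses each Spark partition's records inside mapPartitions. Here zlib stands in for the codec; in the accelerated setup, that call would be handled by an FPGA-backed codec, freeing the CPU for the rest of the job.

```python
# Minimal PySpark sketch of per-partition compression; zlib is a stand-in for
# the codec that an FPGA would accelerate in the setup described in the talk.
import zlib
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("compression-offload-sketch").getOrCreate()
sc = spark.sparkContext

records = sc.parallelize([b"example payload %d" % i for i in range(1000)], numSlices=8)

def compress_partition(rows):
    # In a CPU-only pipeline this call competes with the rest of the job for
    # cores; handing it to an FPGA codec removes that contention.
    for row in rows:
        yield zlib.compress(row)

compressed = records.mapPartitions(compress_partition)
print(compressed.count())  # forces evaluation of the compressed RDD
spark.stop()
```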

In the article Solving Latency Challenge in End-to-End Deep Learning Applications, I explain how I use Intel® Movidius™ Myriad™ 2 technology for specialized vision processing at the edge. Training CNNs can be greatly enhanced in the cloud, but at the expense of introducing latency, which can lead to lagging inference performance on edge devices. My research shows that leveraging specialized, low-power VPUs at the edge simplifies the CNN/end-application integration.
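For readers who want to try a similar setup, here is a minimal sketch of targeting a Myriad VPU through OpenVINO's Python inference engine API. The model files, input shape, and preprocessing are assumptions for illustration, not the exact code from the article.

```python
# Minimal sketch: run a CNN on an Intel Movidius Myriad VPU via the OpenVINO
# inference engine. model.xml/model.bin are placeholder IR files; preprocessing
# is simplified for illustration.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # assumed IR files
input_name = next(iter(net.input_info))
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # target the VPU

# Placeholder frame shaped to a typical NCHW image input (assumption)
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = exec_net.infer(inputs={input_name: frame})
print({name: out.shape for name, out in result.items()})
```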

In my latest and ongoing work, I presented SCAIGATE: Science Gateway for Scientific Computing with Artificial Intelligence and Reconfigurable Architectures at Gateways 2018. The goal of SCAIGATE is to democratize access to FPGAs and AI in scientific computing and related applications, reducing developer effort and customization while increasing performance efficiency and ease of use in scientific applications.

Want to learn more about the Intel® Software Innovator Program?
You can read about our innovator updates, get the full program overview, meet the innovators, and learn more about innovator benefits. We also encourage you to check out Developer Mesh to learn more about the various projects our community of innovators is working on.

Interested in more information? Contact Wendy Boswell.

For more complete information about compiler optimizations, see our Optimization Notice.