Intel® Parallel Computing Center at LAMDA Group, Nanjing University


Principal Investigators:

Prof. Zhi-Hua Zhou is currently Professor and Standing Deputy Director of the National Key Laboratory for Novel Software Technology; he is also the Founding Director of the LAMDA group. His research interests are mainly in artificial intelligence, machine learning, and data mining. He has authored the books Ensemble Methods: Foundations and Algorithms and Machine Learning (in Chinese), and published more than 150 papers in top-tier international journals and conference proceedings.

He has received various awards/honors including the National Natural Science Award of China, the PAKDD Distinguished Contribution Award, the IEEE ICDM Outstanding Service Award, the Microsoft Professorship Award, etc. He also holds 22 patents.

He is an Executive Editor-in-Chief of the Frontiers of Computer Science, Associate Editor-in-Chief of the Science China Information Sciences, and Action or Associate Editor of Machine Learning, IEEE Transactions on Pattern Analysis and Machine Intelligence, ACM Transactions on Knowledge Discovery from Data, etc. He served as Associate Editor-in-Chief for Chinese Science Bulletin (2008-2014), and Associate Editor for IEEE Transactions on Knowledge and Data Engineering (2008-2012), IEEE Transactions on Neural Networks and Learning Systems (2014-2017), ACM Transactions on Intelligent Systems and Technology (2009-2017), Neural Networks (2014-2016), Knowledge and Information Systems (2003-2008), etc.

He founded ACML (Asian Conference on Machine Learning), served as Advisory Committee member for IJCAI (2015-2016), Steering Committee member for ICDM, PAKDD and PRICAI, and chair of various conferences, e.g., General co-chair of PAKDD 2014 and ICDM 2016, Program co-chair of SDM 2013 and the IJCAI 2015 Machine Learning Track, and Area chair of NIPS, ICML, AAAI, IJCAI, KDD, etc. He is/was the Chair of the IEEE CIS Data Mining Technical Committee (2015-2016), the Chair of the CCF-AI (2012- ), and the Chair of the Machine Learning Technical Committee of CAAI (2006-2015). He is a foreign member of the Academy of Europe, and a Fellow of the ACM, AAAI, AAAS, IEEE, IAPR, IET/IEE, CCF, and CAAI.

Description:

The major goal of this Intel® Parallel Computing Center (Intel® PCC) is to implement a deep forest framework as an alternative to neural networks on KNL and all IA architectures. The deep forest model is built from non-differentiable units (i.e., trees/tree ensembles) rather than neural units, and stacks them into a multi-layered structure whose performance is highly competitive with current deep models, without requiring GPUs. Owing to the properties of tree-ensemble units, such approaches are naturally suited to IA architectures rather than GPU architectures, and handle discrete or tabular data better than perceptron-based neural networks. There is significant potential for optimization on IA, especially in exploiting many-core devices such as Intel® Xeon® and Intel® Xeon Phi™. By doing so, we believe a CPU-centered deep learning system can be achieved using decision trees as building blocks instead of neurons.
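To make the layered structure concrete, the following is a minimal sketch of one cascade layer in the gcForest style, written with scikit-learn. The estimator choices, ensemble sizes, and the `cascade_layer` helper are illustrative assumptions, not the project's actual implementation: each layer trains several tree ensembles, collects their out-of-fold class-probability vectors, and concatenates them with the original features as input to the next layer.

```python
# Illustrative sketch of one deep-forest cascade layer (not the official code).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import cross_val_predict

def cascade_layer(X, y, random_state=0):
    """Train one layer of tree ensembles; return the augmented feature matrix."""
    forests = [
        RandomForestClassifier(n_estimators=50, random_state=random_state),
        ExtraTreesClassifier(n_estimators=50, random_state=random_state),
    ]
    # Each forest contributes a per-sample class-probability vector,
    # estimated out-of-fold so the next layer does not overfit.
    probas = [cross_val_predict(f, X, y, cv=3, method="predict_proba")
              for f in forests]
    # Concatenate the class vectors with the original features: this
    # augmented representation is what the next cascade layer consumes.
    return np.hstack([X] + probas)

X, y = load_iris(return_X_y=True)
X1 = cascade_layer(X, y)
print(X.shape, "->", X1.shape)  # 4 features -> 4 + 2 forests * 3 classes = 10
```

Stacking further layers simply repeats this step on the augmented matrix; because every unit is a tree ensemble, each layer parallelizes across CPU cores with no gradient computation involved.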

Specifically, after profiling the current deep forest code, the implementation will be optimized and modified for Intel Xeon devices accordingly. Other variants of the deep forest model for specific tasks will also be designed and implemented, with the help of the Intel® Many Integrated Core Architecture (Intel® MIC Architecture) and the Intel® AI platform.
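One reason tree ensembles map well onto many-core Xeon devices is that the individual trees are independent and can be trained in parallel. The sketch below, an assumption-laden illustration rather than the project's tuned code, shows the standard scikit-learn mechanism (`n_jobs`, backed by joblib) for spreading forest training across all available cores:

```python
# Illustrative: tree ensembles parallelize across CPU cores via n_jobs.
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=40, random_state=0)

for n_jobs in (1, -1):  # one core vs. all available cores
    clf = RandomForestClassifier(n_estimators=200, n_jobs=n_jobs,
                                 random_state=0)
    t0 = time.time()
    clf.fit(X, y)  # each of the 200 trees is an independent training task
    print(f"n_jobs={n_jobs}: fit took {time.time() - t0:.2f}s")
```

On a many-core machine the `n_jobs=-1` run should finish substantially faster, which is the embarrassingly parallel structure the Intel PCC work aims to exploit and optimize further on IA.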
 
This Intel Parallel Computing Center will also give students hands-on experience applying AI technology to real-world problems using Intel's AI platforms, both hardware and software. First, hardware-oriented AI training: the success of AI applications depends on designing efficient platforms, and knowledge of the hardware is a critical step; students will have access to the latest models for learning and development purposes. Second, software-oriented AI training: writing efficient AI programs also requires experience with well-maintained IA libraries, e.g., building AI systems with Intel's AI tool integrations, including Intel® Parallel Studio, Intel® Data Analytics Acceleration Library (Intel® DAAL), Intel® Math Kernel Library (Intel® MKL), etc.

Publications:

Zhi-Hua Zhou's Publications

Related Website:

http://lamda.nju.edu.cn
