European Innovators: Showcasing the Forefront of Technology at AI Meetups

Intel® Software Innovators have been presenting their work on artificial intelligence at meetups across Europe, giving presentations on everything from the basics of machine learning and deep learning to fine-tuning techniques for specific use cases. We encourage our Innovators to share their knowledge and expertise throughout the AI community. Below, we’ve highlighted some of our European Innovators and the presentations they’ve recently given on a variety of machine learning and deep learning topics.

Justin Shenk

Justin Shenk develops innovative AI software and organizes projects that bring people together to find elegant solutions with modern technology. Before focusing on AI and deep learning, Justin worked as a neuroscience researcher in the US and founded and organized two award-winning projects: the San Antonio Science Café, a public science initiative, and the Open History Project, a collaborative oral history and translation website. He is currently a Master’s student in Cognitive Science at the University of Osnabrück in Germany and is working on his thesis, “Breaking the Black Box of Deep Learning,” with the AI software company Peltarion, based in Stockholm.

Justin gave presentations at both the Tel Aviv Deep Learning meetup in September and the PyData Warsaw 2017 conference in October, where he spoke on visualizing neural network activity and parameters, a topic related to his master’s thesis. Justin walked the audience through open source tools for visualizing unit activation, feature detection, and network accuracy while training convolutional and recurrent neural networks. His presentation explored deep neural network features and activity using open source tools, in particular Python; discovering latent encodings and bases in models; creating tools to guide data scientists to the right model; and the social impact of scientific research. Watch a recording of Justin’s presentation.
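The core idea behind visualizing unit activation is simple: run an input through a filter and look at where the response is strong. The talk covered dedicated open source tools; as a rough illustration only (not Justin's code), here is a minimal numpy sketch where a hand-made edge-detection filter "lights up" on the part of a toy image it responds to, which is the same principle activation maps in CNN visualizers rely on:

```python
import numpy as np

# A toy 8x8 "image" with a vertical edge down the middle (illustrative data).
img = np.zeros((8, 8))
img[:, 4:] = 1.0

# A 3x3 vertical-edge detector (Sobel-like kernel), standing in for a learned filter.
kern = np.array([[-1., 0., 1.],
                 [-2., 0., 2.],
                 [-1., 0., 1.]])

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D convolution (no padding), returning the activation map."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

act = conv2d_valid(img, kern)
# The activation map is strong exactly at the columns covering the edge
# and zero everywhere the image is flat.
print(act)
```

Visualization tools do the same thing at scale: they extract these per-filter activation maps from every layer of a trained network and render them as images.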

Gokula Krishnan

Gokula Krishnan is a Master’s student at ETH Zurich and a deep learning researcher working on both the fundamentals and the applications of deep learning technologies. Originally from Chennai, India, and now living and working in Switzerland, he likes to work on problems that have a huge impact on people’s lives and that push the boundaries of human knowledge.

At the PyData Warsaw Meetup in October, Gokula gave a talk on the basics of machine learning and deep learning. He covered the basics of supervised and unsupervised learning with examples of the different models used for each. He also showed how to apply these models with scikit-learn and pointed out cases where they won’t work. Gokula then went into how neural networks work and the basics of the Convolutional Neural Networks (CNNs) used in image classification. By the end of the presentation, audience members could explain how different models work and choose the right one for their use case. Watch a recording of Gokula’s presentation.
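The supervised-versus-unsupervised distinction Gokula covered comes down to whether labels are available. A minimal scikit-learn sketch of that workflow (the toy data set is an assumption for illustration; the talk used its own examples):

```python
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic labeled data (illustrative only).
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised learning: labels are available, so fit a classifier
# and measure accuracy on held-out data.
clf = LogisticRegression().fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: ignore the labels and group the same
# points into two clusters instead.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster sizes:", sorted(Counter(km.labels_).values()))
```

The shared `fit`/`predict` interface is what makes it easy to swap one scikit-learn model for another when the first choice doesn’t fit the use case.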

Vu Pham

Vu Pham has a strong background in numerical optimization, graphical models, deep learning, and “traditional” machine learning, with experience researching and building machine learning solutions in both academic labs and companies. At startups and research labs, Vu trained machine learning models and built frameworks on top of Spark and CUDA, back in the days before TensorFlow. He has spoken at Spark Summit and Strata + Hadoop World on several aspects of machine learning and data analytics systems. He is currently a Research Engineer at DeepMind.

Vu gave a presentation on Bayesian optimization for hyper-parameter tuning and beyond at the Berlin ML Meetup. Most machine learning practitioners spend considerable effort on feature engineering and hyper-parameter tuning for their predictive models, yet the process is manual, time-consuming, and tedious. Vu’s talk presented an overview of these problems and of the different approaches to tackling them. In particular, he took a closer look at Bayesian optimization, covering both its theoretical background and its usage in popular toolboxes. The results of using BayesOpt to train predictive models on several real-world datasets were shown, and the code was shared. The talk also gave a glimpse of an integrated framework for ML practitioners that can automate the boring parts of their job, helping them be more productive and creative. View the slides from Vu’s presentation.
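The loop at the heart of Bayesian optimization is: fit a surrogate model (typically a Gaussian process) to the hyper-parameter evaluations so far, then evaluate the point an acquisition function such as expected improvement deems most promising. This is a toy numpy sketch of that loop, not Vu's code or any particular toolbox; the one-dimensional `objective` stands in for "validation loss as a function of one hyper-parameter":

```python
import math

import numpy as np

def rbf_kernel(a, b, length=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and std at query points Xs given observations (X, y)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks)
    var = 1.0 - np.einsum("ij,ij->j", Ks, v)   # diag of Ks.T K^-1 Ks
    return mu, np.sqrt(np.maximum(var, 1e-12))

_norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def expected_improvement(mu, sigma, best):
    """EI acquisition for minimization: expected improvement over `best`."""
    z = (best - mu) / sigma
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return (best - mu) * _norm_cdf(z) + sigma * pdf

def objective(x):
    """Stand-in for an expensive hyper-parameter -> validation-loss evaluation."""
    return (x - 0.3) ** 2 + 0.05 * np.sin(15 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 3)            # a few random initial evaluations
y = objective(X)
grid = np.linspace(0, 1, 200)       # candidate hyper-parameter values

for _ in range(10):                 # each step: fit GP, evaluate the best candidate
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

print("best hyper-parameter:", X[y.argmin()], "with loss:", y.min())
```

Production toolboxes add the refinements this sketch omits (kernel hyper-parameter fitting, multi-dimensional search spaces, noisy observations), but the fit-then-acquire loop is the same.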

Gregory Chatel

Gregory Chatel has a PhD halfway between mathematics and computer science. After his studies, he became interested in machine learning, and deep learning in particular. He followed massive open online courses (MOOCs) and, from there, implemented many deep learning algorithms from research papers.

Gregory gave presentations at both the Deep Learning Paris Meetup in June and the Brussels Meetup in October on the concept of adversarial samples in the world of deep learning. This topic falls midway between computer security and artificial intelligence. The main idea of adversarial samples is to modify input images very slightly in order to fool a neural network that tries to classify them. It has a huge number of implications; for example, one could attack a self-driving car by making road signs “invisible” to it using these kinds of techniques. Gregory spoke on this subject as part of his initiative to make these problems more widely known among AI practitioners. He has also written a Medium blog post on the subject with the same content as the talk. The slides of his talks and the LaTeX sources are available in this GitHub repository. The source code of his proof of concept for this talk is available in this GitHub repository.
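The classic construction behind "modify very slightly, fool completely" is the fast gradient sign method (FGSM): nudge every pixel by a tiny amount in the direction that increases the classifier's loss. Those tiny per-pixel nudges sum across all pixels, so in high dimensions the effect on the output is large. A minimal numpy sketch of that idea on a linear model (not Gregory's proof of concept; the weights and input are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000                         # number of "pixels"
w = rng.normal(size=d)           # illustrative linear-classifier weights

def logit(x):
    return w @ x

def prob(x):
    """P(class 1) under a logistic model."""
    return 1.0 / (1.0 + np.exp(-logit(x)))

# A clean input with a healthy positive margin (classified as class 1).
x = 3.0 * w / (w @ w)            # constructed so logit(x) == 3.0 exactly

# FGSM step: each pixel moves by at most epsilon, but the logit shifts
# by -epsilon * sum(|w_i|), which is large when d is large.
epsilon = 0.01
x_adv = x - epsilon * np.sign(w)

print(f"clean: P = {prob(x):.3f}   adversarial: P = {prob(x_adv):.3f}   "
      f"max per-pixel change = {np.max(np.abs(x_adv - x)):.3f}")
```

With a perturbation of at most 0.01 per pixel, the confident class-1 prediction flips, which is the effect defenses against adversarial samples have to contend with.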

Want to learn more about the Intel® Software Innovator Program?

You can read about our innovator updates, get the full program overview, meet the innovators, and learn more about innovator benefits. We also encourage you to check out Developer Mesh to learn more about the various projects that our community of innovators is working on.

Interested in more information? Contact Wendy Boswell.

For more complete information about compiler optimizations, see our Optimization Notice.