The Modern Code Developer Challenge

By Michael A. Pearce

Published: 07/28/2017 | Last Updated: 10/24/2017

As part of its ongoing support of the worldwide student developer community and the advancement of science, Intel has partnered with CERN, through CERN openlab, to sponsor the Intel® Modern Code Developer Challenge. The goal for Intel is to give budding developers the opportunity to use modern programming methods to improve code that helps move science forward.

The Challenge will take place from July to October 2017, with the winners announced on November 15, 2017, in the Intel booth at SC17.

1) Smash-simulation software

Teaching algorithms to be faster at simulating particle-collision events

Physicists widely use a software toolkit called GEANT4 to simulate what will happen when a particular kind of particle hits a particular kind of material in a particle detector. In fact, this toolkit is so popular that it is also used by researchers in other fields who want to predict how particles will interact with other matter: it’s used to assess radiation hazards in space, for commercial air travel, in medical imaging, and even to optimise scanning systems for cargo security.

An international team, led by researchers at CERN, is now working to develop a new version of this simulation toolkit, called GeantV. This work is supported by a CERN openlab project with Intel on code modernisation. GeantV will improve physics accuracy and boost performance on modern computing architectures.

The team behind GeantV is currently implementing a ‘deep-learning’ tool that will be used to make simulation faster. The goal of this project is to write a flexible mini-application that can be used to support the efforts to train the deep neural network on distributed computing systems.
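As a rough illustration of the distributed-training idea mentioned above (not the project's actual code), the sketch below simulates synchronous data-parallel training with NumPy: each of four simulated workers computes the gradient of a simple least-squares model on its own data shard, and the averaged gradient drives a single shared update. The model, data, and function names are all invented for illustration.

```python
import numpy as np

def local_gradient(w, X, y):
    """Gradient of the mean-squared-error loss on one worker's data shard."""
    n = len(y)
    return X.T @ (X @ w - y) / n

def distributed_step(w, shards, lr=0.1):
    """Average per-shard gradients (an all-reduce, in a real system), then update."""
    grads = [local_gradient(w, X, y) for X, y in shards]
    return w - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(200, 2))
y = X @ true_w
# Split the dataset across four simulated workers.
shards = [(X[i::4], y[i::4]) for i in range(4)]

w = np.zeros(2)
for _ in range(200):
    w = distributed_step(w, shards)
```

In a real multi-node setting the gradient averaging would be a network all-reduce rather than a Python list comprehension, but the control flow is the same.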

Elena Orlova: I am a third-year student in applied mathematics at the Higher School of Economics in Moscow, Russia, where I was also born. My fields of interest are maths, practical computer science, machine learning, deep learning, quantum mechanics, and painting (especially the Impressionists).

Read more in my blogs on the Smash-Simulation Software project:

Deep learning for fast simulation: Introduction, Mode collapse in GANs


2) Connecting the dots

Using machine learning to better identify the particles produced by collision events

The particle detectors at CERN are like cathedral-sized 3D digital cameras, capable of recording hundreds of millions of collision events per second. The detectors consist of multiple ‘layers’ of detecting equipment, designed to recognise different types of charged particles produced by the collisions at the heart of the detector. As the charged particles fly outwards through the various layers of the detector, they leave traces, or ‘hits’.

Tracking is the art of connecting the hits to recreate trajectories, thus helping researchers to identify the particles and understand more about them. The algorithms used to reconstruct the collision events by identifying which dots belong to which charged particles can be very computationally expensive. And, with the rate of particle collisions in the LHC set to be further increased over the coming decade, it’s important to be able to identify particle tracks as efficiently as possible.

Many track-finding algorithms start by building ‘track seeds’: groups of two or three hits that are potentially compatible with one another. Compatibility between hits can also be inferred from what are known as ‘hit shapes’. These are akin to footprints; the shape of a hit depends on the energy released in the layer, the crossing angle of the hit at the detector, and on the type of particle.
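A minimal sketch of the seeding step described above, with invented numbers: hits on two adjacent detector layers are paired into two-hit seeds when their azimuthal angles are close enough to be compatible with the same outgoing track. The 0.1 rad window and the hit coordinates are illustrative only, not any experiment's actual cuts.

```python
import math

def seed_doublets(inner_hits, outer_hits, max_dphi=0.1):
    """Pair hits on two layers whose azimuthal angles are compatible."""
    seeds = []
    for i, (_, phi_in) in enumerate(inner_hits):
        for j, (_, phi_out) in enumerate(outer_hits):
            # Wrap the angle difference into [-pi, pi) before applying the cut.
            dphi = (phi_out - phi_in + math.pi) % (2 * math.pi) - math.pi
            if abs(dphi) < max_dphi:
                seeds.append((i, j))
    return seeds

inner = [(3.0, 0.00), (3.0, 1.50)]   # (layer radius, phi) hits on the inner layer
outer = [(6.0, 0.05), (6.0, 2.80)]   # hits on the outer layer
print(seed_doublets(inner, outer))   # only the (0, 0) pair is compatible
```

Real seeding also uses the hit shapes and the detector's magnetic field; a machine-learning classifier would replace the simple angular cut with a learned compatibility score.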

This project investigates the use of machine-learning techniques to help recognise these hit shapes more efficiently. The project will explore the use of state-of-the-art many-core architectures, such as the Intel® Xeon Phi™ processor, for this work.

Antonio Carta: I am from Sardinia and am currently finishing my master's degree in Computer Science at the University of Pisa. My main area of interest is artificial intelligence, in particular machine learning. During my studies I developed a couple of projects in that area, for example community question answering with recurrent and recursive neural networks, and music generation with an RNN-RBM.

Read more in my blogs on the Connecting the Dots project:

Track Reconstruction with Deep Learning at the CERN CMS Experiment, Part 2 - Track Reconstruction with Deep Learning at the CERN CMS Experiment


3) Cells in the cloud

Running biological simulations more efficiently with cloud computing

BioDynaMo is one of CERN openlab’s knowledge-sharing projects. It is part of CERN openlab’s collaboration with Intel on code modernisation, working on methods to ensure that scientific software makes full use of the computing potential offered by today’s cutting-edge hardware technologies.

It is a joint effort between CERN, Newcastle University, Innopolis University, and Kazan Federal University to design and build a scalable and flexible platform for rapid simulation of biological tissue development.

The project focuses initially on the area of brain tissue simulation, drawing inspiration from existing, but low-performance software frameworks. By using the code to simulate the development of the normal and diseased brain, neuroscientists hope to be able to learn more about the causes of — and identify potential treatments for — disorders such as epilepsy and schizophrenia.

In late 2015 and early 2016, algorithms originally written in Java* were ported to C++. Once porting was complete, work was carried out to optimise the code for modern computer processors and co-processors. To address ambitious research questions, however, more computational power will be needed. Work will therefore be undertaken to adapt the code to run on high-performance computing resources over the cloud. This project focuses on adding network support to the single-node simulator and prototyping computation management across many nodes.
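One common ingredient of running such a simulation across many nodes is a spatial domain decomposition. The toy sketch below (not BioDynaMo's actual design; all names and the 1-D slicing scheme are invented) assigns cells to nodes by slicing space along one axis, and mirrors cells near a boundary into the neighbouring node's 'halo' so that local neighbour interactions can be computed without extra communication each step.

```python
def partition(cells, n_nodes, x_max, halo=1.0):
    """Assign (x, payload) cells to nodes by 1-D slabs, with halo copies."""
    width = x_max / n_nodes
    owned = [[] for _ in range(n_nodes)]
    halos = [[] for _ in range(n_nodes)]
    for cell in cells:
        x = cell[0]
        node = min(int(x / width), n_nodes - 1)
        owned[node].append(cell)
        # A cell within 'halo' of a slab boundary is copied to the neighbour.
        if node > 0 and x - node * width < halo:
            halos[node - 1].append(cell)
        if node < n_nodes - 1 and (node + 1) * width - x < halo:
            halos[node + 1].append(cell)
    return owned, halos

cells = [(0.5, 0.0), (4.9, 1.0), (5.2, 2.0), (9.8, 3.0)]
owned, halos = partition(cells, n_nodes=2, x_max=10.0)
```

In a cluster, each node would hold only its own `owned` list, and the halo copies would be exchanged over the network after every simulation step.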

Konstantinos Kanellis is a final-year undergraduate at the Department of Electrical and Computer Engineering, University of Thessaly, Greece. His interests lie in the fields of distributed and parallel systems, high-performance computing, and computer networks. As a CERN openlab Summer Student, he works on the BioDynaMo project, focusing on implementing execution support in high-performance clusters and distributed environments.

Read more in my blogs on the Cells in the Cloud project:

Cells in the Cloud: Scaling a Biological Simulator to the Cloud, Cells in the Cloud: Thoughts on the Distributed Architecture, Cells in the Cloud: Distributed Runtime Prototype Implementation



4) Disaster relief

Helping computers to get better at recognising objects in satellite maps created by a UN agency

UNOSAT is part of the United Nations Institute for Training and Research (UNITAR). It provides a rapid front-line service to turn satellite imagery into information that can aid disaster-response teams. By delivering imagery analysis and satellite solutions to relief and development organizations — both within and outside the UN system — UNOSAT helps to make a difference in critical areas such as humanitarian relief, human security, and development planning.

Since 2001, UNOSAT has been based at CERN and is supported by CERN's IT Department in the work it does. This partnership means UNOSAT can benefit from CERN's IT infrastructure whenever the situation requires, enabling the UN to be at the forefront of satellite-analysis technology. Specialists in geographic information systems and in the analysis of satellite data, supported by IT engineers and policy experts, ensure a dedicated service to the international humanitarian and development communities 24 hours a day, seven days a week.

CERN openlab and UNOSAT are currently exploring new approaches to image analysis and automated feature recognition to ease the task of identifying different classes of objects from satellite maps. This project evaluates available machine-learning-based feature-extraction algorithms. It also investigates the potential for optimising these algorithms for running on state-of-the-art many-core architectures, such as the Intel® Xeon Phi™ processor.
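To make the extraction step concrete, here is a deliberately simple sketch of one classical approach: threshold a raster and group bright pixels into connected components, each component being a candidate object. Real satellite pipelines such as the ones this project evaluates use learned segmentation models; the grid and threshold below are invented purely for illustration.

```python
from collections import deque

def extract_objects(grid, threshold):
    """Return the 4-connected components of above-threshold pixels."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    objects = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] >= threshold and (r, c) not in seen:
                # Flood-fill one connected component.
                comp, queue = [], deque([(r, c)])
                seen.add((r, c))
                while queue:
                    cr, cc = queue.popleft()
                    comp.append((cr, cc))
                    for nr, nc in ((cr+1, cc), (cr-1, cc), (cr, cc+1), (cr, cc-1)):
                        if (0 <= nr < rows and 0 <= nc < cols
                                and grid[nr][nc] >= threshold
                                and (nr, nc) not in seen):
                            seen.add((nr, nc))
                            queue.append((nr, nc))
                objects.append(comp)
    return objects

image = [
    [0, 9, 9, 0],
    [0, 9, 0, 0],
    [0, 0, 0, 8],
]
print(len(extract_objects(image, threshold=5)))  # two separate objects
```

A learned model replaces the fixed threshold with a per-pixel class prediction, but the grouping of pixels into discrete objects works much the same way.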

Muhammad Abu Bakr: I am a fresh graduate of the COMSATS Institute of Information Technology, Islamabad. My final-year project was on region-of-interest-based, quality-selective High Efficiency Video Coding (HEVC) for telehealth applications. Currently, I am part of the CERN openlab and UNOSAT team that is exploring new approaches to image analysis and automated feature recognition for identifying different objects.

Read more in my blogs on the Disaster Relief project:

Disaster Relief using Satellite Imagery and Machine Learning, DeepMask Installation and Annotation Format for Satellite Imagery Project, DeepMask Installation Problems and Solutions, Pre-Processing GeoTIFF files and training DeepMask/SharpMask model


5) IoT at the LHC

Integrating ‘internet-of-things’ devices into the control systems for the Large Hadron Collider

The Large Hadron Collider (LHC) accelerates particles to over 99.9999% of the speed of light. It is the most complex machine ever built, relying on a wide range of industrial control systems for proper functioning.

This project will focus on integrating modern ‘systems-on-a-chip’ devices into the LHC control systems. The new, embedded ‘systems-on-a-chip’ available on the market are sufficiently powerful to run fully-fledged operating systems and complex algorithms. Such devices can also be easily enriched with a wide range of different sensors and communication controllers.

The ‘systems-on-a-chip’ devices will be integrated into the LHC control systems in line with the ‘internet of things’ (IoT) paradigm, meaning they will be able to communicate via an overlaying cloud-computing service. It should also be possible to perform simple analyses on the devices themselves, such as filtering, pre-processing, conditioning, monitoring, etc. By exploiting the IoT devices’ processing power in this manner, the goal is to reduce the network load within the entire control infrastructure and ensure that applications are not disrupted in case of limited or intermittent network connectivity.
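As one small example of the kind of on-device filtering described above, the sketch below implements a 'deadband' filter: a reading is forwarded to the cloud only when it differs from the last transmitted value by more than a set threshold, so steady sensor values generate no network traffic. The class name, threshold, and readings are invented for illustration, not part of the LHC control systems.

```python
class DeadbandFilter:
    """Suppress sensor readings that barely differ from the last one sent."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.last_sent = None

    def process(self, reading):
        """Return the reading if it should be transmitted, else None."""
        if self.last_sent is None or abs(reading - self.last_sent) > self.threshold:
            self.last_sent = reading
            return reading
        return None  # suppressed: not worth a network message

f = DeadbandFilter(threshold=0.5)
readings = [20.0, 20.1, 20.2, 21.0, 21.3, 19.9]
sent = [r for r in readings if f.process(r) is not None]
print(sent)  # [20.0, 21.0, 19.9]
```

Here only three of six readings would cross the network, which is the load reduction the paragraph above is after; buffering during a network outage would follow the same pattern, queuing the filtered readings locally instead of dropping them.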

Lamija Tupo: I'm a student from Bosnia and Herzegovina. I'm currently pursuing my master's degree in Computer Science at International Burch University in Sarajevo, more specifically in the field of the Internet of Things.

Read more in my blogs on the IoT at the LHC project:

IoT in LHC: Introduction, IoT in LHC: A Deeper Look into the Frameworks




Product and Performance Information


Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804