
Supercomputer Frontiers 2020 will be the sixth edition of the annual conference, which ran in Singapore from 2015 to 2017 and subsequently in Warsaw, Poland, in 2018-2019. Tentatively, the main topics of this edition will be quantum computing, the connectome, optical computing, computational studies of the brain, neuromorphic computing, and the microbiome.

SCFE is a platform for thought leaders from both academia and industry to interact and discuss visionary ideas, the most important global trends, and substantial innovations in supercomputing. Each year we focus on somewhat different topics, but we always concentrate on the ideas that are most innovative and ingenious and that have the potential to change the course of supercomputing. We also highlight research domains with the greatest potential to become the leading applications of supercomputers in the future.

SODALITE will participate with a presentation of the paper "Optimising AI training deployments using Graph compilers and containers", written by Karthee Sivalingam, Alfio Lazzaro, and Nina Mujkanovic from the HPE HPC/AI EMEA Research Lab, on March 23rd between 13:10 and 13:45.

Abstract: AI applications based on Deep Neural Networks (DNN) have become popular in solving nontrivial problems like image analysis and speech recognition. An AI workload usually incorporates an Extract, Transform and Load (ETL) pipeline, data movement, and execution of DNN graphs. Deep learning (DL) models are usually represented as computational graphs, with nodes representing tensor operators, and edges the data dependencies between them. AI training deployments can be optimised with target-specific libraries, Graph compilers, and by improving data movement. Graph compilers aim to optimise the execution of a DNN graph by generating an optimised code for a target hardware/backend, thus accelerating the training and deployment of DNN models. Heterogeneous Cloud and HPC infrastructures further increase the complexity of deploying and optimising AI training workloads.
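The graph representation mentioned above, with nodes as tensor operators and edges as data dependencies, can be illustrated with a minimal sketch. The `Node` class and `execute` function below are hypothetical illustrations, not code from any of the frameworks or graph compilers the paper discusses; real frameworks trace such graphs automatically and compilers like XLA optimise them before execution.

```python
# A minimal sketch of a DNN-style computational graph: nodes are
# operators, edges are the data dependencies between them.
# Names and structure are illustrative only.

class Node:
    def __init__(self, name, op, inputs=()):
        self.name = name      # operator name
        self.op = op          # callable implementing the operator
        self.inputs = inputs  # edges: the nodes this node depends on

def execute(node, cache=None):
    """Evaluate a graph by resolving data dependencies depth-first,
    caching each node's result so shared subgraphs run only once."""
    if cache is None:
        cache = {}
    if node.name not in cache:
        args = [execute(i, cache) for i in node.inputs]
        cache[node.name] = node.op(*args)
    return cache[node.name]

# y = relu(x * w + b), expressed as a graph of operator nodes
x = Node("x", lambda: 2.0)
w = Node("w", lambda: -3.0)
b = Node("b", lambda: 1.0)
mul = Node("mul", lambda a, c: a * c, (x, w))
add = Node("add", lambda a, c: a + c, (mul, b))
relu = Node("relu", lambda a: max(a, 0.0), (add,))

print(execute(relu))  # 2 * -3 + 1 = -5, then relu -> 0.0
```

A graph compiler works on exactly this kind of structure: because the full dependency graph is known ahead of execution, it can fuse operators, reorder computation, and generate code specialised for the target hardware.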


In SODALITE, we address this problem by providing tools that enable simpler and faster development, deployment, operation, and execution of applications in heterogeneous HPC and cloud computing environments. As part of this project, we are developing an Application Optimiser component that uses a performance model of the infrastructure and applications (based on benchmarks) to optimise their deployment and runtime for heterogeneous infrastructure and hardware. Using input from a data scientist that defines the configurations and optimisations to be enabled, the Application Optimiser selects the framework, graph compiler, and target-specific libraries before building an optimised container. In this talk, we present a review of different AI frameworks and the graph compilers they support. We also compare the performance of different frameworks and graph compilers on standard benchmarks when deployed using containers, and we describe how the Application Optimiser will optimise AI training deployments on heterogeneous targets.
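The selection step described above can be sketched as follows. This is a hypothetical illustration, not the SODALITE implementation: the `PERF_MODEL` scores, the `optimise` function, and the container-spec fields are all invented for the example, though XLA, nGraph, and Glow are real graph compilers.

```python
# Hypothetical sketch of an optimiser that picks a (framework, graph
# compiler) pair from a benchmark-derived performance model, then emits
# a container build specification. All names and numbers are invented.

# Illustrative performance model: relative training throughput measured
# for each (framework, graph compiler) pair on each hardware target.
PERF_MODEL = {
    "gpu": {("tensorflow", "xla"): 1.8, ("pytorch", "none"): 1.0},
    "cpu": {("tensorflow", "ngraph"): 1.5, ("pytorch", "glow"): 1.3},
}

def optimise(target, allowed_frameworks):
    """Select the best-scoring pair the data scientist allows,
    and describe the optimised container to build for it."""
    candidates = {
        pair: score
        for pair, score in PERF_MODEL[target].items()
        if pair[0] in allowed_frameworks
    }
    framework, compiler = max(candidates, key=candidates.get)
    return {
        "base_image": f"{framework}-{target}",  # illustrative image name
        "graph_compiler": compiler,
        "target": target,
    }

spec = optimise("gpu", {"tensorflow", "pytorch"})
print(spec)
```

In this toy model, a GPU target with both frameworks permitted selects TensorFlow with XLA, since that pair has the highest benchmark score; the real component would draw on far richer infrastructure and application models.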