GRSS ESI HDCRS End-to-End Machine Learning with High Performance and Cloud Computing
The participants will work through an end-to-end machine learning project for a remote sensing application by exploiting modern distributed systems (i.e., HPC and cloud computing systems) and state-of-the-art distributed deep learning frameworks.
This tutorial provides a complete overview of the latest developments in High Performance Computing (HPC) systems and cloud computing services. First, it shows how the parallelization and scalability potential of HPC systems is fertile ground for the development and enhancement of Deep Learning (DL) methods. It then demonstrates how High-Throughput Computing (HTC) systems make computing resources accessible and affordable via the Internet. The focus is on cloud computing, a scalable and efficient alternative to HPC systems for particular Machine Learning (ML) tasks.
The tutorial includes practical sessions for an end-to-end ML project in which a DL model is trained and optimized for a remote sensing application. The exercises include speeding up the training phase with state-of-the-art HPC distributed DL frameworks and using cloud computing resources to push the model into a production environment and evaluate it against new and real-time data.
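To illustrate the core idea behind the distributed DL frameworks used in the hands-on sessions, the sketch below (not part of the tutorial materials; all names are illustrative) simulates data-parallel training: each worker computes gradients on its own shard of the data, and the gradients are averaged (the role of an MPI/NCCL allreduce in frameworks such as Horovod or PyTorch DDP) before every update.

```python
import numpy as np

def compute_gradient(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def allreduce_mean(grads):
    """Stand-in for an allreduce: average gradients across workers."""
    return np.mean(grads, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w  # noiseless synthetic targets

# Shard the dataset evenly across "workers", as a data-parallel framework would.
n_workers = 4
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))

w = np.zeros(4)
for step in range(200):
    # On a real system each worker computes its gradient in parallel.
    grads = [compute_gradient(w, Xs, ys) for Xs, ys in shards]
    w -= 0.1 * allreduce_mean(grads)
```

With equally sized shards, the averaged gradient equals the full-batch gradient, so the result matches single-node training; the speed-up comes from computing the per-shard gradients concurrently.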
Morning Session: 9:30-13:00
- 9:30 – 10:00 Lecture 1: Work and Activities of the GRSS ESI HDCRS Working Group
- 10:00 – 10:30 Lecture 2: Introduction and Motivations
- 10:30 – 11:15 Lecture 3: Levels of Parallelism and High Performance Computing
- 11:15 – 11:45 Coffee Break
- 11:45 – 13:00 Lecture 4.1: Distributed Deep Learning with High Performance Computing
Afternoon Session: 14:00-17:30
- 14:00 – 15:00 Lecture 4.2: Distributed Deep Learning with High Performance Computing
- 15:00 – 16:30 Lecture 5: Deep Learning with Cloud computing
- 16:30 – 17:00 Coffee Break
- 17:00 – 17:30 More time for hands-on, Q&A and wrap-up
- Presentation slides are available online
Lecture 1: Work and Activities of the GRSS ESI HDCRS Working Group
Lecture 2: Introduction and Motivations
Lecture 3: Levels of Parallelism and High Performance Computing
Lecture 4.1: Distributed Deep Learning with High Performance Computing
- Repository with code and Jupyter notebooks