University of Bonn

Parallel and Scalable Machine Learning for Remote Sensing Big Data

WS 2020-2021

Abstract

Recent advances in remote sensors with higher spectral, spatial, and temporal resolutions have significantly increased data volumes, making it a challenge to process and analyse the resulting massive data in a timely fashion to support practical applications. Meanwhile, the development of computationally demanding Machine Learning (ML) and Deep Learning (DL) techniques (e.g., deep neural networks with massive amounts of tunable parameters) demands parallel algorithms with high scalability. Therefore, data-intensive computing approaches have become indispensable tools for dealing with the challenges posed by applications from geoscience and remote sensing. In recent years, high-performance and distributed computing have advanced rapidly in terms of both hardware architectures and software. For instance, the popular graphics processing unit (GPU) has evolved into a highly parallel many-core processor with tremendous computing power and high memory bandwidth. Moreover, recent High Performance Computing (HPC) architectures and parallel programming models have been shaped by the rapid advancement of DL and of hardware accelerators such as modern GPUs. All this and more in my course on Scalable Machine Learning for Remote Sensing Big Data.

Lecture 0: Prologue

🛰️ 💻🛠

Introduction to the topics of the course and an overview of modern High Performance Computing (HPC) systems and their capabilities for extracting interpretable information from big remote sensing data with deep learning.

Get lecture

Lecture 1: Remote Sensing Systems and Data

🚀The Remote Sensing Process

    - Platforms and Sensors

🌏Earth Observation Missions

🍂Land Use and Land Cover (LULC) Classes

    - USGS and CORINE

📈Spectral Reflectance

    - Informative and Spectral Classes

    - Spectral Response Patterns (see the sketch below)

Get lecture
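
As a concrete taste of how spectral response patterns are used in practice, below is a minimal sketch of a spectral index computation in NumPy. The NDVI formula is standard; the red/NIR reflectance values are hypothetical placeholders for what a real sensor would measure.

```python
# Minimal NDVI sketch: vegetation reflects strongly in the near-infrared
# (NIR) and absorbs in the red, so (NIR - RED) / (NIR + RED) separates
# vegetated from non-vegetated pixels. Band values below are hypothetical.
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized Difference Vegetation Index."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps guards against 0/0

red = np.array([0.05, 0.10, 0.30])  # assumed red-band reflectances
nir = np.array([0.50, 0.40, 0.32])  # assumed NIR-band reflectances
print(ndvi(nir, red))               # high values suggest dense vegetation
```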

Lecture 2: Machine Learning for Classification

🙊Recognition with Photointerpretation

🔦Pattern Recognition Systems

🎊Feature Space

    - Feature Extraction and Selection

    - Standardization of the Features (see the sketch below)

💻Machine Learning

    - Forms of Learning

🔑Components of Learning

    - Discriminant Functions

    - Supervised Learning

Get lecture
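
To make the standardization step concrete, here is a minimal sketch of z-score standardization of a feature matrix; the two synthetic features with very different scales are an assumption for illustration.

```python
# Z-score standardization: rescale each feature to zero mean and unit
# variance so that no feature dominates a classifier simply by its scale.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=[10.0, 200.0], scale=[2.0, 30.0], size=(100, 2))

mean = X.mean(axis=0)      # per-feature mean
std = X.std(axis=0)        # per-feature standard deviation
X_std = (X - mean) / std   # standardized feature matrix

print(X_std.mean(axis=0))  # approximately [0, 0]
print(X_std.std(axis=0))   # approximately [1, 1]
```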

Laboratory 1: Introduction to Python and Jupyter Notebook

⌨️Introduction to the basic concepts of programming in Python and the basic usage of Google Colab

Get Jupyter notebook
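
For orientation, a minimal sketch of the kind of Python basics the notebook starts from (variables, lists, loops, functions); the band names are an illustrative assumption.

```python
# Core Python constructs: a list, a function, and a loop with enumerate.
bands = ["blue", "green", "red", "nir"]  # hypothetical band names

def describe(band_names):
    """Print each band together with its index."""
    for i, name in enumerate(band_names):
        print(f"band {i}: {name}")

describe(bands)
```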

Laboratory 2: Introduction to ANNs using Keras and TensorFlow

🕹Hands-On Neural Networks with Keras and TensorFlow

Get Jupyter notebook
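
In the spirit of the notebook, a minimal sketch of a fully connected network in Keras; the 200-dimensional input (e.g., spectral features) and the 10 classes are placeholder assumptions.

```python
# A small dense network for multi-class classification in Keras.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(200,)),              # assumed feature size
    tf.keras.layers.Dense(64, activation="relu"),     # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # assumed 10 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # integer labels
              metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, epochs=10) would train it on real data.
```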

Lecture 3: Deep Learning for Classification

🧠Biological Inspiration for Computation

    - Linear Perceptron Model

✖️Multilayer Perceptron

    - Artificial Neural Networks (ANNs)

🏋️Deep Learning

    - Convolutional Neural Networks (CNNs)

👁CNNs with Hyperspectral Images

🌄Training with Gradient Descent (see the sketch below)

    - Loss Function

    - Learning Rate and Optimizers

✍️Evaluation of a Model

    - Confusion Matrix and F1 Score

Get lecture
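
To ground the training topics above, a minimal sketch of gradient descent on a single linear neuron with a mean-squared-error loss; the data are synthetic and the learning rate is chosen for illustration.

```python
# Gradient descent on a linear model: repeatedly step against the gradient
# of the MSE loss until the weights approach the data-generating ones.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                # 100 samples, 3 features
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)  # noisy linear targets

w = np.zeros(3)
lr = 0.1                                     # learning rate (assumed)
for step in range(200):
    grad = 2.0 / len(y) * X.T @ (X @ w - y)  # gradient of the MSE loss
    w -= lr * grad                           # descent update

print(w)  # close to true_w; too large an lr would diverge instead
```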

Laboratory 3: Introduction to CNNs

🕹Hands-On Convolutional Neural Networks

Get Jupyter notebook
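
Along the lines of the notebook, a minimal sketch of a small convolutional network in Keras; the 32x32 RGB input and 10 classes are placeholder assumptions.

```python
# A compact CNN: convolution + pooling blocks extract spatial features,
# and a dense softmax head classifies them.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),        # assumed input shape
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),  # assumed 10 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```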

Laboratory 4: Regularization Techniques and Transfer Learning

📌Brief introduction to regularization techniques and transfer learning approaches

Get lecture

Get Jupyter notebook
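
A minimal sketch of two of the techniques named above, combined: dropout as a regularizer on top of a frozen pretrained backbone for transfer learning. The choice of MobileNetV2, the 96x96 input, and the 10-class head are illustrative assumptions, not the notebook's exact setup.

```python
# Transfer learning: reuse ImageNet features, train only a small new head;
# dropout regularizes that head against overfitting.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(input_shape=(96, 96, 3),
                                         include_top=False,
                                         weights="imagenet")
base.trainable = False                        # freeze pretrained weights

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),             # regularization
    tf.keras.layers.Dense(10, activation="softmax"),  # assumed classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```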

Lecture 4: Levels of Parallelism and High Performance Computing

💡High Performance Computing (HPC) for Deep Learning

🍽The Free Lunch is Over

    - Moore’s Law

    - Work Harder and Work Smarter

    - Get Help: Many-Core Era

🥓Hardware Levels of Parallelism

    - In-Core, In-Processor, Single and Multiple Computers

    - Graphics Processing Units (GPUs) (see the sketch below)

⛏High Performance Computing (HPC)

    - TOP500

    - Architectures of HPC Systems

🎲Modular Supercomputing Architecture

    - The DEEP Projects

Get lecture
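
As a small practical counterpart to the GPU discussion, a minimal sketch of querying the available devices from TensorFlow and placing an operation on a GPU when one exists; on a CPU-only machine the placement block is simply skipped.

```python
# Enumerate the hardware TensorFlow can see, then pin a computation
# to the first GPU if one is available.
import tensorflow as tf

print("CPUs:", tf.config.list_physical_devices("CPU"))
print("GPUs:", tf.config.list_physical_devices("GPU"))

if tf.config.list_physical_devices("GPU"):
    with tf.device("/GPU:0"):            # explicit device placement
        a = tf.random.normal((1024, 1024))
        b = tf.linalg.matmul(a, a)       # runs on the GPU
    print(b.device)
```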

Lecture 5: Distributed Deep Learning

📖Theory on distributed deep learning

🔧Distributed deep learning frameworks (see the sketch below)

Get lecture
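
As one concrete framework-level example, a minimal sketch of synchronous data-parallel training with tf.distribute.MirroredStrategy; the model and its shapes are placeholders, and on a single device the strategy simply runs with one replica.

```python
# MirroredStrategy replicates the model across local GPUs and averages
# gradients across replicas after every step (synchronous data parallelism).
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():                        # variables created here are
    model = tf.keras.Sequential([             # mirrored on every replica
        tf.keras.layers.Input(shape=(200,)),  # assumed feature size
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
# model.fit(...) then splits each global batch across the replicas.
```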


Previous teaching