Masters programme | E-portfolio
Semester independent

EO Summer School 2022

Summary of the contents of the one-week summer school on state-of-the-art use of various remote sensing data
© contains modified Copernicus Sentinel data (2022), processed by ESA

In accordance with its title, this year's ZGIS Summer School covered a wide range of applications and underlying data and software infrastructures in the field of remote sensing. Opening with a discussion on the importance of Earth observation data for addressing key environmental issues, each of the following days looked at a different topic through a mixture of theory and practice sessions. Starting from basic theoretical explanations, related state-of-the-art research questions and corresponding processing workflows were presented and addressed in hands-on sessions. The main contents are summarised below and were also presented at this year's GI conference Salzburg 2022.

A. Data cubes for time series analyses

“Bring the user to the data, not the data to the user” is one of the key phrases of the big Earth observation era. Making use of the tremendous amount of remote sensing data currently available requires corresponding processing infrastructures and intelligent solutions that allow analyses to be conducted efficiently. Storing data in data cubes, i.e. three-dimensional structures with two spatial and one temporal dimension, represents a recent and increasingly popular way of handling satellite imagery in this context. One advantage of such a data cube, possibly storing multiple terabytes of data in a cloud storage accessible via a web interface, is that users can utilise the full information contained in the scenes. No compression of information (e.g. via temporal or spatial aggregation) is applied before the data is presented to the user, who can therefore slice and aggregate the data tailored to their needs as given by the actual application purpose. Another benefit of a data cube connected to a multi-server infrastructure lies in the utilisation of cloud processing capabilities, with performance optimisations implemented via chunking and cloud-optimised storage formats. These enhancements happen in the background, and the complexity is abstracted away from the user, who can directly query data for an area of interest and time span without having to deal with storage formats, the naming of scenes, or the parallelisation of operations.
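The slice-and-aggregate pattern described above can be sketched with a toy array. This is a minimal illustration only: real data cube systems expose the same operations over chunked cloud storage, while the random array here is a stand-in for actual satellite scenes.

```python
import numpy as np

# Toy "data cube": one temporal and two spatial dimensions.
# Real systems serve such cubes from cloud storage; the random
# values here merely stand in for reflectance data.
rng = np.random.default_rng(42)
cube = rng.random((12, 100, 100))  # 12 time steps, 100 x 100 pixels

# User-defined query: spatial subset for an area of interest ...
aoi = cube[:, 20:40, 50:70]

# ... followed by a temporal aggregation tailored to the purpose,
# e.g. a per-pixel median composite over the whole time span.
median_composite = np.median(aoi, axis=0)

print(median_composite.shape)  # (20, 20)
```

The point is that the aggregation choice (median, maximum, a single date) stays with the user and is applied on the fly, rather than being baked into a pre-aggregated product.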

Within the context of the summer school, the ZGIS in-house data cube solution named sen2cube was presented. It differs from other data cubes in the semantic enrichment applied to all Sentinel-2 data stored. Semantic enrichment refers to a pre-classification of the scenes into output layers such as “deep water or shadow”. This pre-classification can then be utilised in knowledge-based models built by the user to derive inferences based on a convergence-of-evidence approach. Semantic layers are user-friendly, as they are much easier to interpret than raw spectral intensity values. They incorporate established expert knowledge on the interpretation of spectral intensities and ease the step towards high-level semantic models. Last but not least, as categorical layers they reduce the amount of data to be stored, which in turn improves performance.
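A minimal sketch of the two stages, semantic pre-classification and knowledge-based inference, is given below. The class name, thresholds, and the two-observation rule are illustrative assumptions, not the actual sen2cube rule set.

```python
import numpy as np

# Stand-in reflectance bands for two acquisition dates.
rng = np.random.default_rng(0)
nir1, red1 = rng.random((50, 50)), rng.random((50, 50))
nir2, red2 = rng.random((50, 50)), rng.random((50, 50))

# Semantic enrichment: spectral values -> categorical layer.
# Illustrative rule: "deep water or shadow" = low reflectance
# in both bands (the real pre-classifier is far more elaborate).
water_or_shadow_1 = (nir1 < 0.2) & (red1 < 0.2)
water_or_shadow_2 = (nir2 < 0.2) & (red2 < 0.2)

# Convergence of evidence: a pixel is only inferred to be water
# if the semantic evidence agrees across both observations.
persistent_water = water_or_shadow_1 & water_or_shadow_2

print(persistent_water.sum() <= water_or_shadow_1.sum())  # True
```

Combining several independent pieces of categorical evidence in this way is what makes the inference robust against single-scene misclassifications.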

In a hands-on session several applications of the sen2cube system for detection and delineation of gravitational mass movements such as landslides were presented. Other examples from recent research projects dealt with the identification of areas characterised by soil sealing. A different case study using the semantic data cube to retrieve time series information on crop dynamics can be found here.

Main concepts implemented in the semantic data cube sen2cube

B. Glacier mapping via computer vision & OBIA

Monitoring the cryosphere is becoming an increasingly significant part of Earth observation in the context of climate change. Whereas sea ice and clean glaciers have been included in climate research since its early stages, current analyses also acknowledge the significance of permafrost and rock glaciers. They contribute to freshwater supply and play an important role in storing and releasing organic matter. Mapping the extent of glaciers thus forms the foundation for a variety of subsequent modelling approaches.

Prof. Benjamin Robson, invited as a guest lecturer from the University of Bergen, provided a comprehensive overview of techniques and corresponding difficulties in the field of glacier mapping. Whereas the recognition of clean glaciers as large blocks of ice is rather simple (e.g. via the NIR/SWIR ratio), the delineation of debris-covered glaciers and rock glaciers requires more elaborate approaches that go beyond simple spectral indices. Debris-covered ice as well as rock-ice mixtures may be identified by spatial structures indicating movement, such as fringe or ridge structures. Initial efforts towards integrating this knowledge of spatio-temporal patterns into processing pipelines may build on textures in optical images and SAR-based coherence metrics. Relying on recent progress in computer vision, one may then combine convolutional neural networks (CNNs) and object-based image analysis (OBIA) across a variety of data sources to arrive at delineation results with higher accuracies. The idea is to exploit the strengths of CNNs in automated low-level feature extraction in an initial step. The obtained result, a probability heatmap, can then be further refined using OBIA techniques to reduce salt-and-pepper effects and include contextual information (neighbouring segments). The overall workflow, underlying this publication by Robson et al. (2020) and implemented in a hands-on session in the eCognition software environment, is depicted below.
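The refinement idea can be illustrated in a strongly simplified form: a CNN probability heatmap is thresholded and then cleaned with a 3x3 majority vote to suppress salt-and-pepper noise. The actual workflow of Robson et al. (2020) uses OBIA segmentation in eCognition; the neighbourhood vote below is only a stand-in for that contextual step, and the random heatmap is dummy data.

```python
import numpy as np

# Stand-in for a CNN output: per-pixel glacier probabilities in [0, 1].
rng = np.random.default_rng(1)
heatmap = rng.random((64, 64))

# Initial glacier / non-glacier map from a simple threshold.
binary = (heatmap > 0.5).astype(int)

# Contextual refinement: count positive pixels in each 3x3
# neighbourhood (edge pixels handled by replicate padding) and
# keep a pixel only if the majority of its neighbourhood agrees.
padded = np.pad(binary, 1, mode="edge")
votes = sum(
    padded[dy:dy + 64, dx:dx + 64]
    for dy in range(3) for dx in range(3)
)
refined = (votes >= 5).astype(int)  # majority of the 9 votes

print(binary.shape == refined.shape)  # True
```

Isolated single-pixel detections are removed by the vote, which is exactly the salt-and-pepper effect the OBIA stage targets, while coherent glacier patches survive.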

Convolution neural network for image classification
(Robson et al. 2020)
Glacier mapping workflow with synergistic use of deep learning (B) and OBIA (C)
(Robson et al. 2020)

C. Sentinel 1 for DEM generation

Radar interferometry refers to a set of techniques utilising the phase information of at least two SAR images. The phase information originating from the coherent imaging employed by SAR enables analyses of surface deformation as well as of topography. Surface deformation studies are based on differential interferometry (DInSAR); popular approaches in this field include Small BAseline Subset (SBAS) and Persistent Scatterer Interferometry (PSI). They are based on multiple SAR images and aim to measure ground surface displacements in the line-of-sight direction down to sub-centimetre magnitudes. Elevation model generation in the field of topographic analyses is less complex and can already be performed with two SAR images. Contrary to DInSAR, elevation model creation requires large perpendicular baselines to obtain accurate results. Thus, the small orbital tube of the Sentinel-1 satellites makes the creation of elevation models more challenging, and applications focusing on surface deformation prevail.
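The baseline dependence can be made concrete via the height of ambiguity, i.e. the elevation difference that produces one full interferometric fringe: h_a = λ · R · sin(θ) / (2 · B⊥) for the repeat-pass case. The geometry values below are typical order-of-magnitude assumptions for Sentinel-1, not mission constants.

```python
import math

# Height of ambiguity: elevation change per interferometric fringe.
# All geometry values are illustrative assumptions for Sentinel-1.
wavelength = 0.0556           # C-band radar wavelength in metres
slant_range = 850_000.0       # approx. slant range in metres
incidence = math.radians(39)  # approx. incidence angle
b_perp = 150.0                # perpendicular baseline in metres

h_a = wavelength * slant_range * math.sin(incidence) / (2 * b_perp)
print(round(h_a, 1))  # roughly 99 m for these assumed values
```

A large height of ambiguity means one fringe spans a lot of topography, so small height errors are hard to resolve; this is why the short baselines of Sentinel-1's narrow orbital tube favour deformation studies over DEM generation.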

During the summer school, the workflow for DEM generation was discussed in detail and a step-by-step implementation using ESA's SNAP toolbox was practiced. The main steps, consisting of coregistration, interferogram formation (image c below), filtering, phase unwrapping and conversion to elevation heights (image e), are transferable to any SAR data. A package automating this processing pipeline tailored to Sentinel-1 was subsequently introduced and tested. A case study using S1-generated DEMs to quantify the impact of a dam failure is presented as a separate blog post here.
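The final conversion step can be sketched numerically: for the repeat-pass case, an unwrapped phase value φ maps to a height via h = λ · R · sin(θ) · φ / (4π · B⊥). The geometry values below are illustrative assumptions, not parameters of a specific acquisition.

```python
import math

# Illustrative acquisition geometry (assumed, not a real scene pair).
wavelength = 0.0556           # C-band radar wavelength in metres
slant_range = 850_000.0       # approx. slant range in metres
incidence = math.radians(39)  # approx. incidence angle
b_perp = 150.0                # perpendicular baseline in metres

def phase_to_height(phi_unwrapped: float) -> float:
    """Height (m) corresponding to an unwrapped phase value (rad),
    repeat-pass topographic InSAR."""
    return (wavelength * slant_range * math.sin(incidence)
            * phi_unwrapped / (4 * math.pi * b_perp))

# One full fringe (2*pi of unwrapped phase) corresponds to the
# height of ambiguity for this geometry.
print(round(phase_to_height(2 * math.pi), 1))
```

This is the relation SNAP's phase-to-elevation step applies per pixel after unwrapping, here reduced to a single scalar for clarity.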

SAR-based DEM generation - input and output products
(adapted from Braun 2021)

D. UAV data processing for DEM generation

As opposed to lidar and radar data, which are especially suited to national and supra-regional scales, drone data often underpins high-resolution elevation models of small areas. Unmanned Aerial Vehicles (UAVs) enable flexible data acquisition over small areas and can be used to create a variety of products, including orthophotos as well as digital surface and digital terrain models (DSM & DTM). In contrast to radar and lidar, the construction of 3D information is based neither on coherent phases nor on the signal's propagation time, but on the correspondence between overlapping images. The set of techniques employed, known as photogrammetry, comprises structure from motion (SfM), which uses the motion parallax effect to estimate 3D coordinates. Motion parallax refers to the fact that objects shift by different amounts between two images depending on their distance to the observer. The creation of elevation models from UAV images thus starts with the identification of tie points across images. Subsequently, a depth map can be built as a prerequisite for the creation of a dense point cloud. Interpolating the points finally results in a continuous elevation model.
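Motion parallax can be illustrated in its simplest stereo form: the shift (disparity) of a tie point between two overlapping images is inversely proportional to its depth, Z = f · B / d. Real SfM estimates camera poses and thousands of points jointly via bundle adjustment; the camera parameters below are purely illustrative.

```python
# Simple stereo case of motion parallax: depth from disparity.
# Both camera parameters are illustrative assumptions.
focal_length = 0.0088  # focal length in metres
baseline = 20.0        # distance between the two camera positions (m)

def depth_from_disparity(disparity_m: float) -> float:
    """Depth (m) of a tie point from its disparity on the sensor (m)."""
    return focal_length * baseline / disparity_m

near = depth_from_disparity(0.002)    # large shift -> close object
far = depth_from_disparity(0.0005)    # small shift -> distant object
print(near < far)  # True
```

The relation captures the core intuition from the text: points close to the camera move a lot between images, distant points barely at all, and this difference is what encodes the third dimension.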

Within the framework of the summer school, Prof. Benjamin Robson elaborated on the practical considerations when planning a drone data acquisition flight. The desired resolution must be traded off against the area that can be covered, terrain-following modes instead of constant flight altitudes are recommended for complex terrain, and certain environmental conditions (constant sunlight, high sun angle, no snow) should be met in order to obtain good results. Depending on the requirements of the application purpose, ground control points with associated differential GPS measurements may be needed so that not only the relative orientation but also the absolute (exterior) orientation can be evaluated. For comparisons with reference data, this correspondence to an external geographical reference system is of particular relevance. Planning and conducting a drone flight with the aforementioned points in mind was finally practiced on a field trip.
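The resolution-versus-coverage trade-off can be quantified via the ground sampling distance (GSD), which grows linearly with flight altitude: GSD = pixel pitch · altitude / focal length. The sensor parameters below are illustrative assumptions, not the specs of a particular drone.

```python
# Flight planning: ground sampling distance as a function of altitude.
# Sensor parameters are illustrative assumptions.
pixel_size = 2.4e-6    # sensor pixel pitch in metres
focal_length = 0.0088  # focal length in metres

def gsd_cm(altitude_m: float) -> float:
    """Ground sampling distance in centimetres at a given flight altitude."""
    return pixel_size * altitude_m / focal_length * 100

print(round(gsd_cm(100), 2))  # GSD at 100 m altitude, in cm
```

Doubling the altitude doubles the GSD (coarser pixels) but also doubles the ground footprint per image, which is precisely the trade-off a flight plan has to resolve for a given application.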

A case study using UAV imagery for the creation of a digital surface model and orthophoto can be found here.


UAV data products - from sparse point clouds to dense point clouds
(Robson 2022)