Masters programme | E-portfolio
Semester I & II
Spatial Analyses

Sen2Cube – Vegetation index time series

Using a semantic earth observation data cube containing Sentinel-2 scenes to calculate a vegetation index time series for agricultural fields

Sen2Cube is a semantic earth observation data cube for Austria. Following the idea of earth observation data cubes, the Sen2Cube infrastructure makes it possible to query and analyse remote sensing imagery in both the temporal and the spatial dimension without downloading any data. The data storage, consisting of temporally stacked, analysis-ready Sentinel-2 images (i.e. Level-2A BOA-corrected reflectance products), is abstracted from the user. This replaces the traditional file-based access to imagery, which requires downloading data bound by arbitrary, technically defined scene boundaries. Instead, a logical view of the data is provided, with the possibility to execute queries for a specific temporal and spatial extent. All analyses can be performed in a cloud-based environment by applying a range of selection and transformation operations to the underlying multidimensional data array.
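
To illustrate what such a logical view means in practice, the following minimal sketch mimics a spatio-temporal query on a multidimensional array using Python and xarray. It is not the Sen2Cube API itself – the array, its dimensions and its values are made up – but it shows the kind of selection operations the data cube exposes instead of file downloads:

import numpy as np
import pandas as pd
import xarray as xr

# Stand-in for a temporally stacked reflectance cube (time, band, y, x); in Sen2Cube
# the underlying storage is hidden behind the query interface.
times = pd.date_range("2021-01-01", "2021-08-31", freq="5D")
cube = xr.DataArray(
    np.random.rand(len(times), 3, 50, 50),
    dims=("time", "band", "y", "x"),
    coords={"time": times, "band": ["red", "nir", "mir1"]},
    name="reflectance",
)

# A query is then simply a selection along the temporal and spatial dimensions,
# without ever handling individual scene files.
subset = cube.sel(time=slice("2021-03-01", "2021-08-31"), band="nir").isel(
    y=slice(10, 30), x=slice(10, 30)
)
print(subset.sizes)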

Compared to other data cubes, Sen2Cube has at least two unique characteristics. First, it is semantically enriched, meaning that at least one categorical interpretation is available for each observation. The corresponding set of classification layers is derived automatically and in a data-driven way using the Satellite Image Automatic Mapper (SIAM) algorithm. Second, apart from the common programmatic interface (in this case via Jupyter notebooks), a graphical user interface is provided, as shown in the header. The ability to create models block by block gives access to the data cube without the need to program anything. This is demonstrated below by the application example of calculating a vegetation index time series. For further general information on Sen2Cube, the reader is referred to this publication as well as the Sen2Cube manual.

All analyses were performed on four agricultural fields south of the city of Salzburg. They were selected from an annual inventory of fields provided by the Austrian authorities. Out of this reference data set, four fields were chosen such that two crop types – winter wheat and maize, each planted on two fields – were represented. Thus, their temporal evolution within a specified period of interest (January – August 2021) could be compared.

For the purpose of calculating vegetation indices for the selected fields, the model shown to the right was created. First, an entity comprising mainly clouds, shadows and snow is defined in order to exclude all pixels categorised as such from further calculations. Each category corresponds to one of the automatically derived interpretation layers. By adding and removing single categories and re-running the subsequent analyses, it was determined that this selection of categories, and the resulting definition of the mask entity, led to the least biased result.
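
Conceptually, the mask entity is a boolean layer obtained by combining several categorical interpretation layers with a logical OR. The sketch below illustrates this on a purely hypothetical label array; the category names are placeholders and do not correspond to the actual SIAM category codes used in the model:

import numpy as np
import xarray as xr

# Hypothetical categorical interpretation layer for one timestamp.
labels = xr.DataArray(
    np.array([["vegetation", "cloud", "snow"],
              ["vegetation", "vegetation", "shadow"],
              ["cloud", "vegetation", "vegetation"]]),
    dims=("y", "x"),
    name="interpretation",
)

# Categories to be excluded from further calculations (placeholder names).
EXCLUDED = ["cloud", "shadow", "snow"]

# The mask entity is True wherever a pixel falls into one of the excluded categories.
mask_entity = labels.isin(EXCLUDED)
print(mask_entity.values)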


Model - Part I

First, for each point in time, the percentage of pixels belonging to the excluded classes was calculated to get a general overview of the availability of useful scenes within the period of interest. Then, the median greenness index per field was calculated and averaged per month. Like the mask categories, the greenness index is one of the automatically generated interpretation layers. It is a multispectral vegetation index slightly more complex than the NDVI, resulting in values > 0:

$$greenness = (NIR / R) + (NIR / MIR1) - (Vis / MIR1)$$
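
As a rough illustration of the greenness calculation and its aggregation to a per-field median and monthly mean, the following Python sketch uses hypothetical band arrays; it is not the model executed in Sen2Cube, where the greenness layer is provided directly as an interpretation layer:

import numpy as np
import pandas as pd
import xarray as xr

def greenness(nir, red, mir1, vis):
    """Greenness index as defined above: (NIR / R) + (NIR / MIR1) - (Vis / MIR1)."""
    return nir / red + nir / mir1 - vis / mir1

# Hypothetical band stacks for one field (dims: time, y, x).
times = pd.date_range("2021-01-01", "2021-08-31", freq="10D")
rng = np.random.default_rng(0)
shape = (len(times), 20, 20)
nir, red, mir1, vis = (
    xr.DataArray(rng.uniform(0.05, 0.6, shape), dims=("time", "y", "x"),
                 coords={"time": times})
    for _ in range(4)
)

g = greenness(nir, red, mir1, vis)

# Spatial median per field and timestamp, then monthly average – mirroring the model logic.
g_median = g.median(dim=("y", "x"))
g_monthly = g_median.groupby("time.month").mean()
print(g_monthly.values)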

Prior to the greenness index aggregation, observations were filtered such that only those with at least 1 ha of non-masked pixels per field and timestamp were included. This filtering, together with the subsequent masking of all pixels belonging to the mask entity, reduced the noise in the greenness index time series.
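
A minimal sketch of this filter, assuming a 10 m pixel size (so that 100 non-masked Sentinel-2 pixels correspond to roughly 1 ha, matching the threshold in the model) and hypothetical greenness and mask arrays for a single field:

import numpy as np
import xarray as xr

rng = np.random.default_rng(1)
n_t, n_y, n_x = 10, 30, 30

# Hypothetical greenness cube and boolean mask entity for one field (True = excluded pixel).
greenness_cube = xr.DataArray(rng.uniform(0.5, 3.0, (n_t, n_y, n_x)), dims=("time", "y", "x"))
mask_entity = xr.DataArray(rng.random((n_t, n_y, n_x)) < 0.6, dims=("time", "y", "x"))

# Usable (non-masked) pixels per timestamp; at 10 m resolution, 100 pixels ~ 1 ha.
valid_count = (~mask_entity).sum(dim=("y", "x"))
keep = np.flatnonzero((valid_count > 100).values)

# Drop timestamps with less than 1 ha of usable pixels and mask the excluded pixels
# before the spatial aggregation.
filtered = greenness_cube.where(~mask_entity).isel(time=keep)
field_median = filtered.median(dim=("y", "x"))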

Model - Part IIa

In the second part of the analyses, only a temporal reduction, without an additional spatial reduction, was performed in order to map the monthly averaged greenness values calculated above. Note that it was not possible to implement the filter for at least 1 ha of non-masked pixels per field here, due to the restriction that spatial grouping cannot be applied without a subsequent spatial reduction operation.
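
A sketch of this per-pixel temporal reduction, again on a hypothetical masked greenness cube and restricted to July 2021, analogous to model part IIb:

import numpy as np
import pandas as pd
import xarray as xr

rng = np.random.default_rng(2)
times = pd.date_range("2021-01-01", "2021-08-31", freq="10D")

# Hypothetical masked greenness cube covering the whole area of interest.
greenness_cube = xr.DataArray(
    rng.uniform(0.5, 3.0, (len(times), 40, 40)),
    dims=("time", "y", "x"),
    coords={"time": times},
)

# Temporal reduction only: restrict to one month and average over time per pixel,
# keeping the spatial dimensions so the result can be mapped.
greenness_map = greenness_cube.sel(time=slice("2021-07-01", "2021-07-31")).mean(dim="time")
print(greenness_map.sizes)  # {'y': 40, 'x': 40}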

Model - Part IIb

In the following, the XML versions of the models, which can be imported into the knowledge base via the Sen2Cube web interface to reproduce them, are summarised:

<!--- Model part I & IIa: fields_greenness_timeseries --->
<!--- Mask entity combining the SIAM colour-type categories representing mainly clouds, shadows and snow; outputs per timestamp: mask_percentage and mask_count over space, and greenness_timeseries (greenness masked by the mask entity, restricted to timestamps with more than 100 non-masked pixels per field, reduced to the spatial median and averaged per month). --->

<!--- Model part I & IIb: fields_greenness_map --->
<!--- Same mask entity; output: greenness_map (masked greenness during 1–31 July 2021, reduced to the monthly mean over time per pixel). --->

As shown in the temporal plot below, only a small proportion of the 49 scenes available for the period of interest exhibits nearly completely cloud-, shadow- and snow-free conditions. Especially at the beginning of the year, no scenes with less than 95% masked pixels are available. On the other hand, at least one valid observation exists for each of the subsequent months, enabling the calculation of the monthly greenness index. Note that for some points in time the share of masked pixels varies greatly even within the small area of interest, as shown by the animation cycling through all available scenes. This again highlights the advantage of using a data cube with space-specific query options. With the traditional file-based approach, this filtering of suitable observations could only have been achieved by downloading all scenes available in the study period, as e.g. a pre-filtering based on scene-wide cloudiness levels would have been imprecise and would have led to the exclusion of possibly suitable scenes. Filtering by snow cover would not have been possible at all beforehand and, due to the lack of semantic enrichment, would have required additional effort even after downloading the scenes.

Timeseries of cloud-, shadow- and snow-masked pixels
Animation of S2-Scenes for period & area of interest

Looking at the resulting monthly averaged greenness index values, a clear separation by crop type is obvious. While winter wheat is already in its growth stages in spring and increasing in biomass, the late sowing of maize is reflected in the delayed increase of its vegetation index. At that time, the vegetation index of the wheat fields is already decreasing again due to the ripening of the grain and the final harvest. It is notable that the relatively simple model leads to quite robust results: only minor fluctuations are visible between fields of the same crop type. This indicates the further potential of the analyses, e.g. for land use classifications.

Timeseries of greenness indices for all fields

Finally, the described differences between crop types can be displayed spatially by mapping the monthly averaged greenness per pixel. Explicitly visualising the spatial dimension furthermore allows larger-scale analyses of the intra-field homogeneity of greenness values.

Greenness values per pixel for two different months during the vegetation period