Masters programme | E-portfolio
Semester I & II
Spatial Analyses

Interpolation – Inverse Distance Weighting

Analysing the effect of different parameter settings on the results of IDW interpolation of temperature values. An interactive dashboard with the interpolation results, descriptive statistics and validation plots is provided.

Table of Contents

  • Introduction
  • Data selection & preparation
  • Interpolation, Evaluation & Validation
  • Results & Discussion
  • Conclusion & Outlook
  • Appendix - Scripts

Introduction

Interpolation aims at estimating the values of a usually continuous variable at specific locations based on a limited number of known values at other points. The variety of interpolation techniques can be classified into deterministic and geostatistical techniques, the latter assuming that the data observed at discrete points are sampled from an underlying continuous spatial phenomenon forming the population of all values. Considering the particular structure of the input data (for example in terms of variogram-based autocorrelation descriptors), geostatistical approaches determine parameter values to establish a model used to predict the unknown values at different locations. In contrast, deterministic approaches utilise models with parameters directly set by the user. In general, geostatistical approaches such as kriging are more sophisticated than deterministic techniques such as inverse distance weighting (IDW), natural neighbour interpolation and spline interpolation. On the other hand, deterministic techniques are easier to understand and apply. For a concise summary of different interpolation techniques, the reader may be referred to GIS Resources. A more comprehensive overview, including elaborations on the methodological basis of interpolation techniques, is provided by Smith et al. 2021 (chapters 6.5 & 6.6).

In the following, IDW will be explored in more detail by testing a set of different model parameter values for interpolating temperatures. The interpolation results are not only described in the form of maps and descriptive statistics, but also evaluated using cross-validation procedures. Testing different parameter settings in such a systematic manner helps to understand the influence of the model parameter choice and allows the best-suited parameter setting to be selected for a given problem. All analyses are mainly Python-based, utilising the GRASS GIS Python interface for the interpolation part. For the dashboard visualisation of the results, the streamlit framework was used. All scripts are provided below, but the most recent versions may be retrieved from GitHub.

Data selection & preparation

Hourly temperature values from more than 450 measuring stations collected by the German Weather Service (DWD) form the basis for the following analyses. Since the data is provided in a non-standardised format, some preparation effort merging geolocation and attribute data was needed to obtain a GIS-ready vector data set of point measurements. As depicted in Appx. A, data for multiple points in time are acquired, which is considered necessary to obtain meaningful results regarding the influence of the parameter choice on the interpolation of temperature values in general. Taking data from only one point in time ignores the temporal variability of the spatial patterns of temperature, and thus the results may not be transferable to the interpolation of temperature values on a different date. Aiming at results valid for the interpolation of hourly temperatures in general, it is considered desirable to apply the analyses to the hourly temperatures of at least one day of each month in a year. The script (Appx. A) is therefore designed to retrieve 288 (= 24 hours per day * 12 days) datasets, each containing national point data on temperatures for a specific date and hour. Note that for all further analyses, two of these datasets are selected randomly in order to minimise the computational effort while still allowing some first preliminary conclusions regarding the generalisability and transferability of the results. The two selected datasets contain temperature values for 01/07/2020 13:00 and 01/02/2021 20:00. Nevertheless, the presented framework easily allows the analyses to be extended to the remaining 286 dates.

Interpolation, Evaluation & Validation

The IDW formula to calculate the unknown value $z$ at a point is based on the distances $d_{i}$ to known points and their values $z_{i}$: $$\large{}z_{n,p} = \frac{\sum_{i=1}^{n}\frac{z_{i}}{d_{i}^{p}}}{\sum_{i=1}^{n}\frac{1}{d_{i}^{p}}}$$ Two parameters influence the interpolation results: the power $p$ and the number of points $n$ included in the estimation of the unknown value. In the following analyses, 10 discrete values for both the power ($0.5 \leq p \leq 2.75$) and the number of points ($1 \leq n \leq 19$) were tested. Note that choosing $n = 1$ equals a nearest neighbour interpolation representing simple Thiessen polygons. Combining every $n$ and $p$ value, 100 interpolations were calculated for each date. Considering the average distance between the station measurements as well as the expected spatial variability of the temperature values, a cell size of 2 x 2 km was regarded as adequate for all subsequent calculations. The small improvements provided by finer grid structures do not seem to justify the associated additional computational effort.
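To make the estimator concrete, the following minimal NumPy sketch implements the formula above. It is independent of the GRASS GIS module v.surf.idw used in the scripts below; the function name, default parameters and toy station data are illustrative only.

import numpy as np

def idw_estimate(xy_known, z_known, xy_target, n=12, p=2.0):
    # xy_known: (k, 2) station coordinates, z_known: (k,) measured values
    # xy_target: (2,) coordinates of the location to estimate, n/p: IDW parameters
    d = np.linalg.norm(xy_known - xy_target, axis=1)
    if np.any(d == 0):                       # target coincides with a station
        return z_known[np.argmin(d)]
    nearest = np.argsort(d)[:n]              # restrict to the n closest stations
    w = 1.0 / d[nearest] ** p                # inverse distance weights
    return np.sum(w * z_known[nearest]) / np.sum(w)

# toy example: three stations, one target location
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
temps = np.array([18.0, 22.0, 20.0])
print(idw_estimate(stations, temps, np.array([2.0, 2.0]), n=2, p=1.5))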

To evaluate the results, different descriptive statistics including measures of variability (range & standard deviation) were calculated. In addition to these universal measures, texture and autocorrelation measures specifically tailored to analysing spatial patterns were used. Representing the most popular autocorrelation measures, Moran’s I and Geary’s C were calculated. Moran’s I is expressed mathematically as follows: $$I = \frac{N}{W} \frac{\sum_{i}\sum_{j}w_{ij}\left ( z_{i} - \bar{z}\right )\left ( z_{j} - \bar{z}\right )}{\sum_{i} \left (z_{i} - \bar{z} \right )^{2}}$$ where the $N$ variable values $z$ are indexed by $i$ and $j$, a matrix of spatial weights $w_{ij}$ is employed and $W$ denotes the sum of all weights. Put simply, the general formula of a correlation coefficient is extended by including the contiguity- or neighbourhood-defining weights matrix $w_{ij}$. Moran’s I tends to take values close to 0 in case of no spatial autocorrelation (= random patterns) and shows values close to -1 or +1 in case of strong negative or positive autocorrelation, respectively. Geary’s C, defined as $$C = \frac{N-1}{2W} \frac{\sum_{i}\sum_{j}w_{ij}\left ( z_{i} - z_{j}\right )^{2}}{\sum_{i} \left (z_{i} - \bar{z} \right )^{2}}$$ is similar to Moran’s I but differs in the way the numerator is calculated. Instead of relating the values of neighbouring spatial units to their global mean, Geary’s C directly compares the values of the juxtaposed units in a paired manner. Thus, Geary’s C is considered to be a measure of local autocorrelation, whereas Moran’s I is more sensitive to global autocorrelation. The values of Geary’s C range from 0, representing perfect positive autocorrelation, to values greater than 1 indicating negative autocorrelation. Thus, Moran’s I and Geary’s C are inversely related to each other. Complementary to these statistics, the entropy based on a 3 x 3 moving window is calculated and averaged over all cells. Entropy refers to the number of microstates representing a system’s macrostate and can be understood, in simplified terms, as a measure of the lack of order. Low entropy values close to zero express uniformity within the moving window and, similar to Geary’s C, no upper bound of the entropy can be specified. Different to the autocorrelation measures, the average entropy is not normalised by dividing through the squared sum of differences.
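As a plain-NumPy illustration of the two autocorrelation measures (not the r.object.spatialautocor implementation used in the scripts below), the following sketch computes Moran’s I and Geary’s C for a raster using binary rook (4-neighbour) weights; the grid values and function names are made up for the example.

import numpy as np

def rook_pairs(shape):
    # index pairs (i, j) of horizontally/vertically adjacent cells, each unordered pair listed once
    rows, cols = shape
    idx = np.arange(rows * cols).reshape(shape)
    right = np.stack([idx[:, :-1].ravel(), idx[:, 1:].ravel()], axis=1)
    down = np.stack([idx[:-1, :].ravel(), idx[1:, :].ravel()], axis=1)
    return np.vstack([right, down])

def morans_i(grid):
    z = grid.ravel() - grid.mean()
    pairs = rook_pairs(grid.shape)
    W = 2 * len(pairs)                                   # sum of binary weights (both directions)
    cross = 2 * np.sum(z[pairs[:, 0]] * z[pairs[:, 1]])  # sum_ij w_ij (z_i - mean)(z_j - mean)
    return (grid.size / W) * cross / np.sum(z ** 2)

def gearys_c(grid):
    z = grid.ravel()
    zc = z - z.mean()
    pairs = rook_pairs(grid.shape)
    W = 2 * len(pairs)
    sq_diff = 2 * np.sum((z[pairs[:, 0]] - z[pairs[:, 1]]) ** 2)  # sum_ij w_ij (z_i - z_j)^2
    return ((grid.size - 1) / (2 * W)) * sq_diff / np.sum(zc ** 2)

gradient = np.tile(np.linspace(10, 20, 50), (50, 1))          # smooth west-east temperature gradient
noise = np.random.default_rng(0).normal(15, 2, (50, 50))      # spatially random pattern
print(morans_i(gradient), gearys_c(gradient))  # I close to +1, C close to 0
print(morans_i(noise), gearys_c(noise))        # I close to 0, C close to 1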

To validate the results, cross-validations were carried out. In contrast to the common leave-one-out approach, a leave-one-group-out technique was applied in order to reduce the computational burden. Hence, in each iteration cycle of a validation, not only one but up to ten points (= max. 2% of all stations) were left out during the interpolation and afterwards compared to their actually measured values. It is postulated that omitting a group of up to 10 points yields results very similar to the usual leave-one-out approach, since it is highly unlikely that the randomly excluded points are spatially clustered, which would otherwise influence the validation result.
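The sketch below illustrates this idea with plain NumPy, independent of the GRASS-based implementation in Appx. B (which splits the stations into 50 groups via v.extract); the helper name and the generic predict callable, for instance the idw_estimate() sketch above, are illustrative assumptions.

import numpy as np

def logo_rmse(xy, z, predict, group_size=10, seed=123):
    # xy: (k, 2) station coordinates, z: (k,) measured values
    # predict: callable(xy_train, z_train, xy_target) -> estimated value at xy_target
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(z))                                 # shuffle station indices
    groups = np.array_split(order, int(np.ceil(len(z) / group_size)))
    sq_errors = []
    for group in groups:
        train = np.setdiff1d(order, group)                          # stations kept for interpolation
        for i in group:                                             # withheld stations, re-estimated
            sq_errors.append((predict(xy[train], z[train], xy[i]) - z[i]) ** 2)
    return np.sqrt(np.mean(sq_errors))

With the idw_estimate() sketch from above, a single parameter combination could then be scored as logo_rmse(stations, temps, lambda a, b, c: idw_estimate(a, b, c, n=7, p=1.0)).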

Results & Discussion

The dashboard visualising the results is accessible via the following button. The main observations and results are summarised below and supported by excerpts from the dashboard representing the extremes of parameter choice.

Regarding the influence of the number of neighbours, the following figure demonstrates that including more neighbours leads to smoother interpolation results on a local level. Smaller numbers of points result in pronounced discrete boundaries between fragmented, regionalised temperature patterns. This can be explained by the significance of sudden shifts in the group of neighbours taken into account for interpolating the value at a specific point. These shifts are less severe if one out of many points is omitted and a different one close by is taken into account. In the latter case, smooth patterns dominate and only the areas immediately surrounding the known temperature points show a peak-like structure. Therefore, the variability measures such as the standard deviation also decrease with rising values of $n$. In line with this, the autocorrelation statistic $I$ tends to increase while $C$ decreases, both indicating a stronger autocorrelation for interpolated maps based on greater $n$. Nevertheless, it should be emphasised that the evolution of $I$ and $C$ with rising $n$ is not strictly monotonic. Moran’s I, for example, reaches its maximum for $n$-values between 13 and 17 (but not at n=19). The general level of both autocorrelation measures proves that positive autocorrelation patterns exist regardless of the chosen parameter setting. This makes sense since the overall pattern of temperature values does not change strongly with the parameter choice and visually appears to be clustered, with higher values in southern Germany and lower values in the northern part. What is somewhat suspicious are the almost zero values for $C$, indicating a nearly perfect positive autocorrelation on a local level. This may be explained by the formula presented above, which shows that the usually small paired differences are normalised by dividing through the huge squared sum of differences calculated globally. The average entropy decreasing with increasing $n$ seems to support the previous inferences.

With regard to the power values $p$, the following figure supports the inference that higher values of $p$ also lead to some kind of smoothing, but on a global rather than a local level. With low $p$, small-scale patterns become clearly visible. The extreme case of $p=0.5$ leads to results which resemble regression-based trend surfaces. In contrast, high values of $p$ pronounce local differences and the aforementioned peaks become apparent. Unlike changing the number of points $n$, changing $p$ may imply severe changes in the range of interpolated values (see min & max below). As expected, the standard deviation increases with increasing $p$. One may also assume that the stronger local variations given higher $p$ further imply declining values for Moran’s $I$ and rising values for Geary’s $C$. However, this is not evident. Once again, when changing the parameter values incrementally, the $I$ and $C$ values respond in a highly non-linear fashion. This raises doubts on whether the calculation of $I$ and $C$ is carried out properly. In contrast, the average entropy meets the expectation in that rising values of $p$ result in higher entropy values indicating stronger heterogeneity.

Note that the described patterns are also observed when analysing the interpolation results for the other date under consideration. Due to the different range and spatial patterns of the input temperature values, non-normalised measures such as the entropy differ in their level, but their changes with respect to changes in the parameters $p$ and $n$ are still comparable to what has been described for the depicted date.

The general overview of root mean square error (rmse) values for all parameter settings and both dates provides the following insights:

  • Rmse values range from 1.77 to 2.39°C for the interpolation on 01/07/2020 and from 1.03 to 1.39°C for 01/02/2021. The lower values for the interpolation results at the beginning of February are a consequence of the overall smaller temperature range compared to the situation in July.
  • Both parameters $p$ and $n$ influence the rmse values. For $n=1$ a significant reduction in the goodness of fit is evident, which is due to the simplistic approach of transferring the value of the nearest neighbour to the location at which the interpolation should be performed. Without any kind of weighting between surrounding stations, the error is relatively high.
  • Rather than a single parameter combination, there seems to be a band of parameter settings whose rmse values are close to the minimum. This band, as well as the parameter combination leading to the absolute minimum rmse value, differs for the two dates. In both cases, a setting with low $p$ values and $n$ values of either 5 or 7 leads to a low error. However, the optimal interpolation (in terms of minimal rmse) for 01/07/2020 is obtained by employing a parameter setting with $p=1.5$ and $n=17$ – an area of parameter settings that allows only sub-optimal results for 01/02/2021. Whereas the structure of the band of low rmse values for 01/07/2020 follows a rising (exponential) function, this does not hold true for 01/02/2021. The former indicates that $n$ and $p$ values can, to some degree, balance each other in obtaining similarly low rmse values, whereas the latter shows that if too many neighbours are included no optimal solution can be achieved regardless of the choice of $p$.
Rmse values for the different parameter settings and dates 01/07/2020 (top) & 01/02/2021 (bottom)

A more specific view of the leave-one-group-out cross-validation for a single parameter combination and date furthermore allows the following observations to be made and the associated conclusions to be drawn:

  • The trendline of interpolated values always shows a smaller slope than the reference 1:1 line. This reflects the general range-preserving or range-minimising property of IDW interpolation techniques. Unlike some spline approaches, for example, the min/max values of the measured temperatures are never exceeded with IDW. Depending on the parameter setting, stronger smoothing may also lead to a reduction of the value range and, thus, a reduction of the trendline slope.
  • Comparing the relative position of the dots to the 1:1 line, no systematic bias in terms of over- or underestimation of values can be detected.
  • Deviations are correlated with height above sea level. The strongest deviations exist for stations in areas with strong relief, such as the mountainous region around the Zugspitze. There is at least a twofold problem here: First, the station is surrounded by other stations at very different heights, making it error-prone to estimate temperature values without an explicit inclusion of height as an explanatory variable. Second, the temperature is estimated solely based on the temperature values of the lowland stations north of the Alps, since other mountainous stations are located south of the German border. This boundary effect is also clearly visible in the maps provided above. Any further optimisation of the rmse values therefore has to account for height in some way (a simple sketch of this idea is given below the following figure caption).
Cross-validation results of the lowest-error interpolation for 01/07/2020 (p=1.5 & n=17)
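As a rough illustration of how height could be accounted for (not part of the presented analyses), one simple option would be to reduce the station temperatures to a common reference level with an assumed lapse rate, interpolate the detrended values as before and re-apply the trend using a digital elevation model. The constant and function names below are assumptions for this sketch.

LAPSE_RATE = 0.0065   # assumed mean environmental lapse rate in °C per metre (~6.5 °C/km)

def to_reference_level(temp, station_height):
    # reduce measured station temperatures to a common reference level (here: sea level)
    return temp + LAPSE_RATE * station_height

def from_reference_level(temp_ref, target_height):
    # re-apply the height trend at the interpolation targets, e.g. using DEM cell heights
    return temp_ref - LAPSE_RATE * target_height

# workflow sketch:
# 1. detrended = to_reference_level(station_temps, station_heights)
# 2. interpolate the detrended values with IDW exactly as before
# 3. final map = from_reference_level(interpolated_detrended, dem_heights)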

Conclusion & Outlook

Testing a set of different parameter settings for IDW interpolations of hourly temperature values, a range of insights regarding the spatial patterns and errors of the interpolated values has been gained. Low power values tend to create trend surfaces with a global smoothing of the data, whereas a small number of neighbours tends to increase local variability and causes greater fragmentation. Describing these trends in terms of autocorrelation measures was only partly successful, and any ideas regarding improvements of the analyses in this regard are highly appreciated. Regarding the interpolation errors, it is apparent that typical default settings in GIS software environments (such as $p=2$ and $n=12$) may lead to acceptable but not optimal results. The best-suited parameter combination differs according to the specific spatial structure of the input data. Thus, the optimal choice may also vary between different dates for the same spatial phenomenon.

Further analyses may therefore aim at a systematic analysis of interpolation results not only for the two selected dates but for a greater variety of dates, as mentioned at the beginning. An in-depth analysis of the relationship between the characteristics of a spatial phenomenon and the best-suited parameter choice could then be carried out. Of course, it might also be interesting to extend the presented analyses by comparing the IDW results to other interpolation techniques. The utilised GRASS GIS environment, for example, includes implementations of natural neighbour, spline and kriging methods. These may be applied with relatively little additional effort given the already existing scripts.

Appendix - Scripts

part A - data preparation

### --- To run on shell --- ###
# working_dir='/home/felix/Desktop/assigment_interpolation'
# export working_dir

# Setup/Imports
import glob
import numpy as np
import pandas as pd
import os
import requests
import wget
from bs4 import BeautifulSoup
from datetime import datetime, timedelta
from pyproj import Transformer

pd.set_option('display.float_format', lambda x: '%.2f' % x)

# Download temperature data (station/point measurements, hourly)
# For each station a txt file is available storing data for a certain station & period mid-2020 up to yesterday
baseurl = 'https://opendata.dwd.de/climate_environment/CDC/observations_germany/climate/hourly/air_temperature/recent/'
download_dir = os.path.join(os.environ['working_dir'], 'temp_raw_data')

page_temperature = requests.get(baseurl)
page_temperature = BeautifulSoup(page_temperature.content, "html.parser")

import zipfile
for href in page_temperature.find_all('a', string=lambda text: 'stundenwerte' in text.split('_')):
    wget.download(os.path.join(baseurl, href.text.strip()), download_dir)
    zip = zipfile.ZipFile(os.path.join(download_dir, href.text.strip()), 'r')
    zip.extractall(download_dir)
    zip.close()
    os.remove(os.path.join(download_dir, href.text.strip()))

wget.download(os.path.join(baseurl, 'DESCRIPTION_obsgermany_climate_hourly_tu_recent_en.pdf'), download_dir)
wget.download(os.path.join(baseurl, 'TU_Stundenwerte_Beschreibung_Stationen.txt'), download_dir)

# Parse station geodata (station_id, lat/lon, name,...)
stations = pd.read_fwf(os.path.join(download_dir, 'TU_Stundenwerte_Beschreibung_Stationen.txt'), encoding='latin1', header=0, widths=[5,9,9,15,12,10,41,25]).iloc[1:,:]
stations_columnnames = pd.read_csv(os.path.join(download_dir,'TU_Stundenwerte_Beschreibung_Stationen.txt'), delim_whitespace=True, skipinitialspace=True, encoding='latin1', nrows=0).columns.to_list()
stations.columns = stations_columnnames
stations_columnnames = stations.columns.to_list()
dtypes = ['int64','int64','int64','int64','float64','float64','object', 'object']
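# note: `zip` was rebound to a ZipFile handle in the download loop above;
# deleting it restores the builtin zip() needed for the astype() call below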
del zip
stations = stations.astype(dict(zip(stations_columnnames, dtypes)))
stations.describe(include='all')

# Get temperature data for all dates of interest
# Define daterange of interest and select all hours on the first day of each selected month
daterange = pd.date_range(start='2020-06-01', end='2021-05-01', freq='MS')
dates = [pd.date_range(start=day, end=day+pd.Timedelta("1 day"), freq='H', closed='left') for day in daterange]
dates = [date_hour for date in dates for date_hour in date]
dates = [int(datetime.strftime(day_hour, "%Y%m%d%H")) for day_hour in dates]

# Filter each station data file for selected dates and store data for all stations together in dict
station_data_all = {}
for i, station_file in enumerate(glob.glob(os.path.join(download_dir,'produkt_tu_stunde*.txt'))):
    station_data = pd.read_csv(station_file, sep=';', usecols=[0,1,3], names=['station_id', 'time_hour', 'temp'], header=0)
    station_data_all[f'station_data_{i}'] = station_data[station_data['time_hour'].isin(dates)]

# Merge with station data to get georeferenced temperature data
station_data_all = pd.concat(station_data_all).merge(stations.iloc[:,[0,3,4,5,6,7]], left_on='station_id', right_on='Stations_id')
del station_data_all['Stations_id']
station_data_all.replace(-999, np.nan).describe(include='all')

# Reproject data to UTM32N
transformer = Transformer.from_crs('epsg:4326', 'epsg:32632')
station_data_all['coord_x'] = list(map(lambda x,y: transformer.transform(x, y)[0], station_data_all.geoBreite, station_data_all.geoLaenge))
station_data_all['coord_y'] = list(map(lambda x,y: transformer.transform(x, y)[1], station_data_all.geoBreite, station_data_all.geoLaenge))

# Export data for each timestamp as csv for further use
export_dir = os.path.join(os.environ['working_dir'], 'temp_prep_data')
for name, group in station_data_all.groupby('time_hour'):
    group.to_csv(export_dir+os.path.sep +'temp'+str(name)+'.csv', sep = '|', na_rep='NULL')

part B - interpolation

# start grass project
grass78 -c epsg:32632 "grassdata/interpol_temp" -e
grass78 "grassdata/interpol_temp/PERMANENT"

working_dir='/home/felix/Desktop/assigment_interpolation'
export working_dir

python3
import copy
import os
import grass.script as gs
import numpy as np
import pandas as pd
import random
import wget
from grass.pygrass.modules import Module, MultiModule, ParallelModuleQueue
from tqdm import tqdm

# Select 2 out of all 288 files for further analyses
temp_files = os.listdir(os.path.join(os.environ['working_dir'], 'temp_prep_data'))
random.seed(234)
selected_files = random.sample(temp_files, 2)

# Importing them as vector & raster layer
temp_layers = []
for temp_file in selected_files:
    layer_name = temp_file.split('.')[0]
    gs.run_command('v.in.ascii', input=os.path.join(os.environ['working_dir'], 'temp_prep_data', temp_file),
                   output=layer_name, x=10, y=11, skip=1, cat=1, overwrite=True, columns="""cat integer, 
                   station_id integer, time varchar(25), temp double precision, height integer, 
                   latitude double precision, longitude double precision, station_name varchar(50), 
                   federal_state varchar(50), x_coord double precision, y_coord double precision""")
    gs.run_command('g.region', vector=layer_name, res=2000, flags='a')
    gs.run_command('v.extract', input=layer_name, output="{}_filtered".format(layer_name), 
                   where="temp > -999", overwrite=True)
    gs.run_command('g.rename', vector="{0}_filtered,{0}".format(layer_name), overwrite=True)
    gs.run_command('v.to.rast', input=layer_name, output=layer_name, use='attr', 
                   attribute_column='temp', type='point', overwrite=True)
    temp_layers.append(layer_name)

# Importing boundary mask
download_dir = os.path.join(os.environ['working_dir'], 'boundaries')
import zipfile
wget.download('https://daten.gdz.bkg.bund.de/produkte/vg/vg1000_ebenen_0101/aktuell/vg1000_01-01.utm32s.shape.ebenen.zip', download_dir)
zip = zipfile.ZipFile(os.path.join(download_dir, 'vg1000_01-01.utm32s.shape.ebenen.zip'), 'r')
zip.extractall(download_dir)
os.remove(os.path.join(download_dir, 'vg1000_01-01.utm32s.shape.ebenen.zip'))

gs.run_command('v.import', input='{}/vg1000_01-01.utm32s.shape.ebenen/vg1000_ebenen_0101/VG1000_LAN.shp'.format(download_dir), 
               output='borders_germany_', flags='o', overwrite=True)
gs.run_command('v.extract', input='borders_germany_', output='borders_germany', where="GF = 4", overwrite=True)
gs.run_command('g.region', vector='borders_germany', res=2000, flags='ap')
gs.run_command('v.to.rast', input='borders_germany', output='borders_germany', use='cat', overwrite=True)

# Defining parameter set for idw
mapset = gs.gisenv()['MAPSET']
power = np.arange(0.5, 3, 0.25)
npoints = np.arange(1, 20, 2)

# Calculating idw interpolation
gs.run_command('r.mask', raster='borders_germany')

v_surf_idw = Module('v.surf.idw', run_=False)
r_colors = Module('r.colors', run_=False)
idw_layers = []

nprocs = min(os.cpu_count(), len(npoints))
queue = ParallelModuleQueue(nprocs)

for temp_layer in temp_layers:
    for pow in power:
        for npoi in npoints:
            idw_name = "{}_idw_pow{}_npoi{}".format(temp_layer, pow, npoi).replace(".", "")
            idw_layers.append(idw_name)
            # deep-copy the module templates so every queued job keeps its own parameter set
            idw = copy.deepcopy(v_surf_idw)(input=temp_layer, column='temp', npoints=npoi, power=pow, output=idw_name)
            colorise = copy.deepcopy(r_colors)(map=idw_name, rules=os.path.join(os.environ['working_dir'], 'celsius_colorramp.txt'))
            process_chain = MultiModule([idw, colorise])
            queue.put(process_chain)
queue.wait()


# Export interpolation maps
gs.run_command('g.region', vector='borders_germany', res=200, flags='ap')
for idw_layer in tqdm(idw_layers):
    r_info = gs.parse_command('r.info', map=idw_layer.split("_")[0], flags='rg')
    range_min = 5 * np.floor(float(r_info['min'])/5)
    range_max = 5 * np.ceil(float(r_info['max'])/5)
    outdir_layer = os.path.join(os.environ['working_dir'], 'results', idw_layer)
    outdir_legend = os.path.join(os.environ['working_dir'], 'results', "{}_legend.png".format(idw_layer))
    gs.run_command('r.out.png', input=idw_layer, compression=9, output=outdir_layer)
    gs.run_command('r.out.legend', raster=idw_layer, filetype='cairo', dimension='1.25,12.5', fontsize=12, 
                   font='Arial:Bold', flags='d', range=[range_min, range_max], label_step=5, file=outdir_legend)
    os.system("convert +append {0}.png {1} {0}.png".format(outdir_layer, outdir_legend))
    os.system("rm {}".format(outdir_legend))


# Calculating stats for interpolated rasters
idw_stats = []

for idw_layer in idw_layers:
    single_layer_stats = {}
    single_layer_stats['name'] = idw_layer
    # simple univariate statistics
    simple_stats = gs.parse_command('r.univar', map=idw_layer, flags='t')
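    # parse_command returns the '|'-separated table rows of r.univar -t as dictionary keys;
    # splitting the header row and the value row on '|' below recovers the individual columns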
    idx_min = [i.split('|') for i in simple_stats.keys()][0].index('min')
    idx_max = [i.split('|') for i in simple_stats.keys()][0].index('max')
    idx_mean = [i.split('|') for i in simple_stats.keys()][0].index('mean')
    idx_sd = [i.split('|') for i in simple_stats.keys()][0].index('stddev')
    single_layer_stats['min'] = [i.split('|')[idx_min] for i in simple_stats.keys()][1]
    single_layer_stats['max'] = [i.split('|')[idx_max] for i in simple_stats.keys()][1]
    single_layer_stats['mean'] = [i.split('|')[idx_mean] for i in simple_stats.keys()][1]
    single_layer_stats['sd'] = [i.split('|')[idx_sd] for i in simple_stats.keys()][1]
    # autocorrelation measures
    gs.mapcalc("{0}_int = int({0})".format(idw_layer), overwrite=True)
    autocor_stats = gs.parse_command('r.object.spatialautocor', method = 'moran',
                                     object_map = "{}_int".format(idw_layer), 
                                     variable_map = "{}_int".format(idw_layer))
    single_layer_stats['moran'] = float(list(autocor_stats.keys())[0])
    autocor_stats = gs.parse_command('r.object.spatialautocor', method = 'geary',
                                     object_map = "{}_int".format(idw_layer), 
                                     variable_map = "{}_int".format(idw_layer))
    single_layer_stats['geary'] = float(list(autocor_stats.keys())[0])
    # texture/entropy measures
    gs.run_command('r.texture', input=idw_layer, output=idw_layer, size=3, method='entr', overwrite=True)
    entropy_stats = gs.parse_command('r.univar', map="{}_Entr".format(idw_layer), flags='t')
    idx_mean = [i.split('|') for i in entropy_stats.keys()][0].index('mean')
    single_layer_stats['entropy_mean'] = [i.split('|')[idx_mean] for i in entropy_stats.keys()][1]
    # cleanup & writing to idw_stats
    gs.run_command('g.remove', type='raster', name="{}_int".format(idw_layer), flags='f')
    gs.run_command('g.remove', type='raster', name="{}_Entr".format(idw_layer), flags='f')
    idw_stats.append(single_layer_stats)

# write to data frame and export
pd.DataFrame(idw_stats).to_csv(os.path.join(os.environ['working_dir'], 'results', 'interpolation_stats.csv'))


# Performing cross-validation
# Create leave-one-out layers

loo_layers = {}
for temp_layer in temp_layers:
    loo_layers[temp_layer] = {}
    loo_layers[temp_layer]['pos'] = []
    loo_layers[temp_layer]['neg'] = []
    layer_attr = gs.parse_command('db.select', sql="select * from {}".format(temp_layer))
    idx_stationid = [i.split('|') for i in layer_attr.keys()][0].index('station_id')
    station_ids = [i.split('|')[idx_stationid] for i in layer_attr.keys()][1:]
    station_ids = [int(x) for x in station_ids]
    np.random.seed(123)
    np.random.shuffle(station_ids)
    splits = np.array_split(station_ids, 50)
    for i, split in enumerate(splits):
        gs.run_command('v.extract', input=temp_layer, output="{}_pos_{}".format(temp_layer, i),
                        where="station_id not in {}".format(str(list(split)).replace("[", "(").replace("]", ")")))
        gs.run_command('v.extract', input=temp_layer, output="{}_neg_{}".format(temp_layer, i),
                        where="station_id in {}".format(str(list(split)).replace("[", "(").replace("]", ")")))
        loo_layers[temp_layer]['pos'].append("{}_pos_{}".format(temp_layer, i))
        loo_layers[temp_layer]['neg'].append("{}_neg_{}".format(temp_layer, i))

# Perform interpolation for each parameter combination for all loo_layers
# Compare results to measured values

r_mapcalc = Module("r.mapcalc", run_=False)
g_remove = Module('g.remove', run_=False)

# Create diff layers
nprocs = min(os.cpu_count(), len(npoints))
queue = ParallelModuleQueue(nprocs)

for temp_layer in temp_layers:
    for pow in power:
        for npoi in npoints:
            for i, loo_layer in enumerate(loo_layers[temp_layer]['pos']):
                # unique temporary name per job to avoid collisions between queued chains
                idw_name = "{}_looidw_{}_{}_{}".format(temp_layer, pow, npoi, i).replace(".", "")
                idw = copy.deepcopy(v_surf_idw)(input=loo_layer, column='temp', npoints=npoi, power=pow, output=idw_name, overwrite=True)
                diff_name = "{}_val_pow{}_npoi{}_{}".format(temp_layer, pow, npoi, i).replace(".", "")
                diff_calc = copy.deepcopy(r_mapcalc)(expression="{} = {} - {}".format(diff_name, idw_name, temp_layer), overwrite=True)
                cleanup = copy.deepcopy(g_remove)(type='raster', name=idw_name, flags='f')
                process_chain = MultiModule([idw, diff_calc, cleanup])
                queue.put(process_chain)
queue.wait()
            

# Populate diff results in df
def extract_diffs(idw_layer):
    val_layers = gs.list_grouped('raster', pattern="{}_*".format(idw_layer.replace("idw", "val")))[mapset]
    temp_layers = []
    for val_layer in val_layers:
        temp_prefix = val_layer.split("_")[0]
        val_suffix = val_layer.rsplit("_",1)[1]
        temp_layer = "{}_neg_{}".format(temp_prefix, val_suffix)
        temp_layers.append(temp_layer)
        gs.run_command('v.what.rast', map=temp_layer, raster=val_layer, column='diff_meas_interpol')
    gs.run_command('v.patch', input=temp_layers, output="{}_val".format(idw_layer), flags='e', overwrite=True)
    tmp_file = os.path.join(gs.gisenv()['GISDBASE'], gs.gisenv()['LOCATION_NAME'], gs.gisenv()['MAPSET'], '.tmp', gs.tempname(length=10)) + '.csv'
    gs.run_command('db.out.ogr', input="{}_val".format(idw_layer), output=tmp_file, overwrite=True)
    val_idw_layer = pd.read_csv(tmp_file)
    val_idw_layer = val_idw_layer.loc[:,['station_name', 'federal_state', 'height', 'temp', 'diff_meas_interpol']]
    val_idw_layer.insert(0, 'idw_layer', idw_layer)
    return val_idw_layer


idw_validation = []
for idw_layer in tqdm(idw_layers):
    idw_validation.append(extract_diffs(idw_layer))

pd.concat(idw_validation).to_csv(os.path.join(os.environ['working_dir'], 'results', 'interpolation_validation.csv'))


part C - app development

import streamlit as st
import io
import math
import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
import requests
import zipfile
from datetime import datetime
from PIL import Image

# Define app layout
st.set_page_config(layout="wide")
st.title('IDW Interpolation - Hourly Temperature (Germany)')
st.text('')
col1,_,col2,_,col3 = st.columns([4,1,1,0.5,4])

# Load data
@st.experimental_singleton
def load_image_data():
    idw_img_url = "https://dl.dropboxusercontent.com/s/10v4yoaig1ed7u0/interpol_results.zip?dl=0"
    resp = requests.get(idw_img_url, stream=True)
    zip_file = zipfile.ZipFile(io.BytesIO(resp.content))
    unzipped_files = [zip_file.open(f) for f in zip_file.infolist()]
    layers_out = []
    for f in unzipped_files:
        layer = Image.open(f)
        layer.filename = f.name
        layers_out.append(layer) 
    return layers_out

@st.experimental_singleton
def load_stat_data():
    idw_stat_url = "https://dl.dropboxusercontent.com/s/77r3hwmlipak5l0/interpolation_stats.csv?dl=0"
    return pd.read_csv(idw_stat_url)

@st.experimental_singleton
def load_val_data():
    idw_val_url = "https://dl.dropboxusercontent.com/s/bm26d038xr925sh/interpolation_validation.csv?dl=0"
    return pd.read_csv(idw_val_url)

with st.spinner('Application is loading...'):
    idw_imgs = load_image_data()
    idw_stats = load_stat_data()
    idw_val = load_val_data()

# Display interpolated map
with col1:
    date_time = st.selectbox('Date & Time', ('01/07/2020 13:00', '01/02/2021 20:00'))
    date_time = datetime.strptime(date_time, "%d/%m/%Y %H:%M").strftime("%Y%m%d%H")
    power = st.slider('Exponential Distance Weight', min_value=0.5, max_value=2.75, value=2.0, step=0.25)
    power = str(power).replace(".","")
    npoints = st.slider('Number of Points', min_value=1, max_value=19, value=15, step=2)

    selected_img_name = "temp{}_idw_pow{}_npoi{}".format(date_time, power, npoints)
    selected_img = [img for img in idw_imgs if img.filename == "{}.png".format(selected_img_name)][0]
    st.image(selected_img)

# Display stats panel & validation results
idw_stats_selected = idw_stats[idw_stats['name']==selected_img_name]
with col2:
    st.subheader('Basic stats')
    st.metric('min', round(idw_stats_selected['min'].values[0],2),delta=None)
    st.metric('max', round(idw_stats_selected['max'].values[0],2),delta=None)
    st.metric('sd', round(idw_stats_selected['sd'].values[0],2),delta=None)
    st.metric("Moran's I", round(idw_stats_selected['moran'].values[0],2),delta=None)
    st.metric("Geary's C", round(idw_stats_selected['geary'].values[0],5),delta=None)
    st.metric("Entropy", round(idw_stats_selected['entropy_mean'].values[0],2),delta=None)

# Display validation results
with col3:
    st.subheader('Validation results')
    st.text('''The following figure displays the root mean square error
for all possible combinations of the parameters weight and number of points
for the selected date. The black dot marks the current parameter selection.''')

    with st.container():
        def rmse(single_diffs):
            return math.sqrt((single_diffs**2).sum()/single_diffs.count())

        def get_pow(layer_name):
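            # layer names encode the power without the decimal point (e.g. "pow05" -> 0.5, "pow275" -> 2.75)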
            str_pow = [name.rsplit("_",2)[1].replace("pow","") for name in layer_name]
            float_pow = [float(pow[:1]+'.'+pow[1:]) if len(pow)>1 else float(pow) for pow in str_pow]
            return float_pow

        def get_npoi(layer_name):
            int_npoi = [int(name.rsplit("_",1)[1].replace("npoi","")) for name in layer_name]
            return int_npoi


        val_overview = (idw_val.groupby(['idw_layer'])
                            .agg(rmse=('diff_meas_interpol', rmse))
                            .reset_index()
                            .assign(temptime = lambda x: [name.split("_")[0] for name in x['idw_layer']])
                            .assign(npoi=lambda x: get_npoi(x['idw_layer']))
                            .assign(pow=lambda x: get_pow(x['idw_layer'])))

        fig_prep = (val_overview[val_overview['temptime']==selected_img_name.split("_")[0]]
                            .sort_values(['npoi','pow'])
                            .groupby(['npoi'])
                            .agg({'pow': lambda x: list(x),
                                    'rmse': lambda x: list(x)})
                            .reset_index())

        fig = go.Figure(data=go.Heatmap(
            x = fig_prep['pow'][0],
            y = list(fig_prep['npoi']),
            z = list(fig_prep['rmse']),
            zsmooth='best',
            colorscale='RdBu_r',
            hoverinfo=None,
            name=''))
        fig.add_scatter(x=val_overview[val_overview['idw_layer']==selected_img_name]['pow'].tolist(),
                        y=val_overview[val_overview['idw_layer']==selected_img_name]['npoi'].tolist(),
                        mode='markers',
                        name='current parameter selection',
                        marker=dict(size=12, color='black'))
        fig.update_xaxes(title=dict(text='power'))
        fig.update_yaxes(title=dict(text='number of points'))
        st.plotly_chart(fig)


        st.text('')
        st.text('''Below, the leave-one-group-out cross-validation results for the chosen parameter
combination are shown. Points are colorised according to their height above sea level
as this variable explains the strongest deviations.''')
        val_specific = idw_val[idw_val['idw_layer']==selected_img_name].copy()
        val_specific['temp_interpolated'] = val_specific['temp'] + val_specific['diff_meas_interpol']
        fig = px.scatter(val_specific, 
                        x='temp', 
                        y='temp_interpolated', 
                        color='height', 
                        color_continuous_scale=["#1A9850", "#91CF60", "#F1A340", "#8C510A"],
                        trendline='ols',
                        trendline_color_override='green',
                        hover_data=['station_name', 'federal_state'],
                        labels=dict(temp='measured temperature(°C)', temp_interpolated='interpolated temperature(°C)'))
        fig.add_trace(go.Scatter(x=np.arange(min(val_specific['temp']), max(val_specific['temp']),0.01),
                                y=np.arange(min(val_specific['temp']), max(val_specific['temp']),0.01),
                                text='1:1 line',
                                name='',
                                mode='lines',
                                line=dict(color='grey'),
                                showlegend=False))
        st.plotly_chart(fig)