The ESA Earth Observation Φ-week

EO Open Science and FutureEO

12–16 November 2018 | ESA-ESRIN | Frascati (Rome), Italy


Final Programme


Space for Earth - Opening Session on Space4.0 and EO
09:00 - 10:30
Chair: Simonetta Cheli - ESA


Transformative Technologies for Space
11:00 - 12:30
Chair: Bianca Hoersch - ESA


Entrepreneurship & New Business Models
14:00 - 15:30
Chair: Elia Montanari - ESA


Innovation for Space
16:00 - 17:40
Chair: Pierluigi Silvestrin - ESTEC


17:40 - 17:50
Summary of the Day

Chair: Iarla Kilbane-Dawe - ESA

Bootcamp
09:00 - 10:30
Chair: Ryan Laird - Design & Data GmbH

Φ-week Startup Bootcamp (ID: 381)
Presenting: Laird, Ryan

The Φ-week Startup Bootcamp (11-13 Nov 2018) [http://phiweekbootcamp.space] brings together EO experts, non-space corporates and tech leaders with aspiring entrepreneurs for an intensive three-day event in which participants use design thinking methods to develop ideas for their own start-up. On 13 Nov, an expert jury will decide which start-up ideas, products and business models are the most promising.

Authors: Laird, Ryan
Organisations: Design & Data GmbH, Germany

EO Open Science

AI4EO (Part1)
09:00 - 10:30
Chair: Diego Fernandez Prieto - ESA-ESRIN

09:00 - 09:20
EO in Society: Open Science and Innovation (ID: 342)
Keynote: Brovelli, Maria Antonia

Keynote Talk for Open Science

Authors: Brovelli, Maria Antonia
Organisations: Politecnico di Milano, Italy
09:20 - 09:40
Opportunities and Future Directions in Land Use and Land Cover Classification with Sentinel-2 (ID: 341)
Keynote: Helber, Patrick

Recent advances in Earth observation open up an exciting new area for the exploration of satellite image data with AI. Focusing on Sentinel-2 satellite images, we will show how to analyze this new Earth observation data source with deep neural networks. Automated satellite image understanding is of high interest for various research fields and industry sectors such as agriculture, urban planning and insurance. We apply deep neural networks to patch-based land use and land cover classification. We will specifically focus on the challenges and future opportunities of an AI-based classification system built on Sentinel-2 satellite imagery. This includes the transition from single-patch predictions towards a multi-temporal, multi-class and pixelwise classification approach.
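
For readers who want to experiment, a minimal sketch of a patch-based classifier in this spirit, assuming EuroSAT-style 64x64 Sentinel-2 patches with 13 bands and 10 land cover classes (the architecture and sizes are illustrative, not the authors' model):

```python
# Minimal patch-based land-cover classifier in the spirit of the talk.
# Assumes EuroSAT-style input: 64x64 Sentinel-2 patches, 13 bands, 10 classes.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self, in_bands=13, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # global average pooling
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                        # x: (batch, bands, 64, 64)
        return self.head(self.features(x).flatten(1))

model = PatchClassifier()
logits = model(torch.randn(8, 13, 64, 64))       # one batch of dummy patches
print(logits.shape)                              # torch.Size([8, 10])
```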

Authors: Helber, Patrick; Bischke, Benjamin
Organisations: German Research Center for Artificial Intelligence (DFKI), Germany
09:40 - 09:55
AgriGEO: Geoinformation solutions for agriculture based on Big Data analytics (ID: 156)
Presenting: Volpe, Fabio

With the fast growth and availability of time series from constellations delivering free-of-charge (Sentinel-1, Sentinel-2, Sentinel-3, Landsat) or commercial (DigitalGlobe, Planet) high and very-high resolution satellite data, the usage of Earth Observation information has progressively shifted from simple mapping to complex monitoring applications, with advanced feature extraction and change detection capabilities and fusion with non-EO data. The integration of these satellite time series with scalable ICT resources and the adoption of big data analysis techniques have opened new frontiers in the usage of geoinformation for the provision of services in many application domains. One application domain for which this new scenario is providing very interesting results is agriculture, for which e-GEOS has developed a dedicated service platform: AgriGEO. Through this platform, geoinformation-based services in agriculture are provided to a wide range of users, such as public administrations, farmers, professionals, insurers, investors and service integrators. Services provided by AgriGEO rely strongly on satellite time series, organised in multi-source data cubes managed in a fully scalable environment and allowing fast and efficient extraction of information for feeding vertical workflow pipelines, which often require near-real-time delivery performance. One main driver for the provision of these services is the availability of big data analysis techniques enabling the extraction of information and analytics from the huge amounts of data provided by the satellite time series. In this scenario, several services provided by the application platform are based on the analysis of satellite data using techniques operating on pure numbers, with a loose link to the physical meaning of satellite measurements over vegetated areas. For example, a specific processing chain, based on large multi-source high-resolution satellite time series, has been tailored to the generation of crop maps over very wide areas, leveraging the scalability of ICT resources and the performance of big data analysis techniques. The predictive analysis of visible and infrared spectral reflectance values is particularly promising, as it provides reliable insights on crop type from the early stages of growth. Innovative approaches combining machine learning algorithms with big data management techniques represent an opportunity to reach high levels of precision in the classification of crops from satellite feeds, delivering, for example, early predictions of acreage and yield. AgriGEO embeds an ad hoc machine learning algorithm that exploits both time series of observations and the probability distribution of visible and infrared reflectance values to 1) build a training set for the current year and 2) model the spectral signatures of different crop types, so as to exploit the stream of satellite observations and obtain early predictions. The algorithm has been used to perform early classification of corn and soy crops over very wide areas based on Sentinel-2 and Landsat 8 time series.
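
As an illustration of the early-prediction idea, a toy sketch that trains a classifier on reflectance time series truncated to early-season observations (synthetic data and dummy labels; this is not the e-GEOS AgriGEO algorithm):

```python
# Illustrative early-season crop classification on truncated reflectance
# time series (synthetic data; not e-GEOS's AgriGEO algorithm).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_parcels, n_dates, n_bands = 2000, 20, 4      # e.g. red/NIR/SWIR composites
X = rng.normal(size=(n_parcels, n_dates, n_bands))
y = rng.integers(0, 2, n_parcels)              # 0 = corn, 1 = soy (dummy labels)

early_dates = 6                                 # only observations up to early season
X_early = X[:, :early_dates, :].reshape(n_parcels, -1)

X_tr, X_te, y_tr, y_te = train_test_split(X_early, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("early-season accuracy:", clf.score(X_te, y_te))  # ~0.5 on random data
```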

Authors: Volpe, Fabio (1); Pistillo, Pasquale (1); Grandoni, Domenico (1); Corsi, Marco (1); Biscardi, Mariano (1); Tricomi, Alessia (2); Francalanci, Chiara (3); Geronazzo, Angela (3); Giacomazzi, Paolo (3); Ravanelli, Paolo (3)
Organisations: 1: e-geos, Italy; 2: Tor Vergata University, Italy; 3: Politecnico di Milano, Italy
09:55 - 10:10
Artificial Intelligence For Earth Observation Data Production And Analytics: Result And Perspectives Of Current French Spatial Agency Earth Observation Department Researches (ID: 246)
Presenting: Masse, Antoine

Earth Observation (EO) is changing: database sizes and variability have exploded in the last few years, leading to new research opportunities and applications in the domain of large/big EO data production and exploitation, thanks to Machine Learning (ML) techniques and powerful computational architectures such as High Performance Computing (HPC) and clouds. In this EO revolution, the French space agency (CNES) is leading major research efforts in Artificial Intelligence (AI) for EO data production and exploitation, together with various French research institutes, for the benefit of a large community: scientific and institutional partners, leading space industries, downstream service companies, etc. The ESA Φ-week is the opportunity to present and discuss problems, research results and perspectives on the use of AI techniques in the context of large EO data production and analytics. First, we will discuss data production optimization through the optimal use of computational facilities and the combination of HPC and cloud platforms. In this domain, CNES is leading research on the use of Artificial Intelligence, and more particularly Reinforcement Learning (RL) techniques, to dynamically adapt and optimize data production systems. This AI uses cost functions based on observations of a real data production system to predict and adapt orchestration strategies, optimizing cost, cache storage and computational resources (HPC and cloud). It is also capable of trying new configurations (resource allocations) to explore and adapt to any system, to find execution anomalies, and thus to converge to an optimal orchestration strategy. Secondly, we will discuss information extraction techniques in two distinct contexts: long EO image time series and very high resolution (VHR) images. A large part of this research concerns the combination of data from multiple missions and sensors. In this context, we take the example of the Spot 1 to 5 data, reprocessed by the Spot World Heritage (SWH) programme, and their use to extend the ESA Sentinel-2 time series. The combination of multiple sensor characteristics and database sizes leads to the development of new AI techniques (Deep Learning, Transfer Learning) that enable learning on a large and accurate database, e.g. the Sentinel-2 cloud mask, and its subsequent application to mask detection in Spot 1-5 images. We will also discuss information extraction from very high resolution images such as Pléiades and Spot-5 Supermode, and more particularly optimization techniques that increase both computational efficiency and object detection accuracy, with applications to car and ship detection.
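
The cloud-mask transfer described above can be illustrated with a small transfer-learning sketch: freeze an encoder standing in for a network pretrained on Sentinel-2 cloud masks and fine-tune only a new head on SPOT-like data (all layer sizes and names are hypothetical):

```python
# Transfer-learning sketch: reuse a Sentinel-2 cloud-mask encoder and
# fine-tune only a new head on a small SPOT 1-5 dataset (names hypothetical).
import torch
import torch.nn as nn

encoder = nn.Sequential(                        # stands in for a pretrained
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),  # Sentinel-2 cloud-mask encoder
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
for p in encoder.parameters():                  # freeze pretrained weights
    p.requires_grad = False

head = nn.Conv2d(64, 2, 1)                      # new cloud / no-cloud head for SPOT
model = nn.Sequential(encoder, head)

optim = torch.optim.Adam(head.parameters(), lr=1e-3)
x = torch.randn(2, 4, 128, 128)                 # dummy SPOT-like patches
target = torch.randint(0, 2, (2, 128, 128))     # dummy cloud labels
loss = nn.CrossEntropyLoss()(model(x), target)
loss.backward()
optim.step()
```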

Authors: Masse, Antoine; Lassalle, Pierre; Kettig, Peter; Ducret, Thibault; Delvit, Jean-Marc; Baillarin, Simon
Organisations: French Space Agency (CNES), France
10:10 - 10:25
Exploiting Contextual Features In Superpixels For Land Cover Mapping Using High Resolution Image Time Series (ID: 125)
Presenting: Derksen, Dawa Jozef

Ambitious Earth observation missions such as the Sentinels have been providing the remote sensing community with an abundance of images with spatial, spectral and temporal resolutions never seen before on a global scale. Time series acquired with a short revisit time of 5 days and a spatial resolution of 10 m in several spectral bands allow for precise tracking of natural, agricultural and artificial surfaces, which opens up the possibility of near real-time surface mapping applications. This can be enhanced by combining images from different satellites: radar (Sentinel-1), optical (Sentinel-2), sub-metric optical imagery (Pléiades) and so on. Faced with this huge amount of data, new supervised learning algorithms must be devised to improve the quality of land cover maps without inflating computational costs. Several studies show that discrimination between certain land cover classes can be largely improved by using contextual information, i.e. information contained in the surroundings of the pixel to be classified. In particular, Convolutional Neural Networks have recently gained popularity due to their ability to extract contextual features through supervised end-to-end optimization. However, applying such methods to data of large dimension can be challenging, due to the high computational cost of an optimization with so many variables. In classical, less costly methods, the feature extraction step is done manually, guided by thematic knowledge of the problem, and is then combined with a standard classifier like Random Forest or Support Vector Machine. The aim of this paper is to demonstrate new ways of including contextual information in the land cover mapping process, with the goal of improving classification accuracy while maintaining a relatively low computational cost. The approach presented in this study combines older contextual methods with interesting new ideas from Deep Learning. Calculating contextual features first requires selecting a neighborhood shape, and the most basic choice, a fixed shape such as a square window around the pixel, tends to smooth out high-frequency areas such as corners and thin elements in the final map. On the other hand, if an object segmentation is used to extract the neighborhoods (as in OBIA), textured areas yield very small segments which are unable to capture the diversity of pixels that characterizes these terrains. For this reason, our study investigates the use of neighborhoods that are adaptive to image content while maintaining the compactness and size constraints of fixed-shape neighborhoods. Superpixel methods split the image into compact and evenly spread segments that adhere to the natural boundaries in the image, so features calculated in these neighborhoods have little impact on the geometry of high-spatial-frequency areas. The aim is to demonstrate the effectiveness of superpixel neighborhoods for extracting contextual features, evaluated in terms of classification accuracy, geometric precision and computational load when applied to high-dimensional Sentinel-2 optical time series. Results show that superpixel neighborhoods provide acceptable land cover maps for a lower computational cost than Deep Learning.
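
A minimal sketch of the superpixel-neighborhood idea, assuming a recent scikit-image with SLIC: segment the image, attach each superpixel's mean spectrum to its pixels, and feed the enlarged feature vector to a standard classifier (toy data; not the paper's exact pipeline):

```python
# Superpixel contextual features: SLIC segments, per-segment band means
# appended to each pixel's spectral features (illustrative, single date).
import numpy as np
from skimage.segmentation import slic

image = np.random.rand(256, 256, 4)            # dummy 4-band image
segments = slic(image, n_segments=800, compactness=10, channel_axis=-1)

context = np.zeros_like(image)
for s in np.unique(segments):
    mask = segments == s
    context[mask] = image[mask].mean(axis=0)   # mean spectrum of the superpixel

features = np.concatenate([image, context], axis=-1)  # (256, 256, 8)
# 'features' can feed a standard classifier such as Random Forest.
```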

Authors: Derksen, Dawa Jozef; Inglada, Jordi; Michel, Julien
Organisations: Centre National d'Etudes Spatiales, France

AI4EO (Part2)
11:00 - 12:30
Chair: Diego Fernandez Prieto - ESA-ESRIN

11:00 - 11:15
A Geometrical Approximation of PCA for Hyperspectral Data Dimensionality Reduction (ID: 147)
Presenting: Ivanovici, Mihai

Principal Component Analysis (PCA) is a widely-used statistical tool for multivariate signal and image analysis, including dimensionality reduction, optimized representation, compression, feature extraction, visualization and pixel unmixing in the case of hyperspectral satellite images. Various methods exist for computing PCA and its approximations, including non-linear and machine learning-based approaches. We propose a geometrical construction of an approximation of PCA based on the maximum distances between data points, instead of the classical statistical approach based on computing the eigenvectors of the data covariance matrix. The approach estimates the principal components of a multivariate data set based on the observation that the direction given by the two furthest points is, depending on the correlation of the data, relatively close to that of the first principal component; the assumption is that the direction of maximum distance in a set indicates the direction of evolution of the signal. We then apply the same approach iteratively for the second and third components, and so on. The computation of the maximum distance for each approximated component can be performed in a fully parallel way, improving the order of complexity and the running time of the implementation, which constitutes a major advantage of the proposed approach. We validated the approach on synthetic 2D data with Gaussian distribution, computing two error measures: the angle between the direction of maximum distance and that of the first PCA component, and the displacement between the centre of the segment given by the maximum distance and the data mean. We previously showed that this approach can be successfully used for the visualization of hyperspectral images on the Pavia University data set. In this paper, we compare it with a non-linear PCA approach based on neural networks for the purpose of pixel classification in hyperspectral data. We discuss our experimental results on various data sets, especially the INTA Airborne Hyperspectral Scanner (INTA-AHS) data set, showing that there is interest in computing this sub-optimal approximation of PCA, and then draw conclusions.
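
A small NumPy sketch of the described construction, under the stated assumption that the farthest-pair direction approximates the leading component; the brute-force pair search is for clarity and is the step the authors propose to parallelize:

```python
# Geometric PCA approximation as described: each component is the direction
# of the maximum pairwise distance, then data are deflated and we iterate.
import numpy as np

def geometric_pca(X, n_components):
    X = X - X.mean(axis=0)
    components = []
    for _ in range(n_components):
        # farthest pair (brute force; parallelizable as noted in the abstract)
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        i, j = np.unravel_index(np.argmax(d), d.shape)
        v = X[i] - X[j]
        v /= np.linalg.norm(v)
        components.append(v)
        X = X - np.outer(X @ v, v)              # remove the found direction
    return np.array(components)

X = np.random.multivariate_normal([0, 0], [[3, 1], [1, 1]], size=300)
print(geometric_pca(X, 2))                       # compare with PCA eigenvectors
```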

Authors: Ivanovici, Mihai (1); Machidon, Alina (1); Coliban, Radu (1); Del Frate, Fabio (2)
Organisations: 1: Transilvania University of Brasov, Romania; 2: University of Rome Tor Vergata, Italy
11:15 - 11:30
Production Ready Earth Observation Applications Using Machine Learning (ID: 155)
Presenting: Peressutti, Devis

The availability of open Earth Observation (EO) data through the Copernicus program represents an unprecedented resource for many EO applications, ranging from ocean and land use/land cover (LULC) monitoring to disaster control, emergency services and humanitarian relief. Given the large amount of high spatial resolution data at high revisit frequency, techniques able to automatically extract complex patterns in such spatio-temporal data are needed. In recent years, machine learning (ML), and in particular Deep Learning (DL), techniques have shown ground-breaking results in many computer vision and data processing tasks, such as image and video classification, speech recognition and natural language processing. ML/DL techniques are a subset of Artificial Intelligence (AI) methods that can automatically model complex and meaningful patterns from the input data, patterns that can be easily interpreted by the final user for decision-making. In this presentation we will showcase how the EO team at Sinergise has been coupling the Sentinel Hub services with state-of-the-art ML/DL techniques to develop production-ready EO applications. Open source Python packages have been developed to seamlessly access and process spatio-temporal image sequences acquired by the Sentinel satellite fleet in a timely and automatic manner. On top of this processing pipeline, open source ML/DL models have been developed to tackle complex EO problems. For instance, given the recurrent issue of cloudy scenes in Sentinel-2 satellite images, one of the first EO applications undertaken with ML/DL methods has been cloud detection on single-scene MSI images. Relying on this cloud detector, an ML/DL model for LULC monitoring at country level has been developed using multiple Sentinel data sources. In this presentation we will describe the pipelines used to develop these applications, show how the results compare to state-of-the-art methods, and show how the open source tools can be adopted by users who wish to develop their own EO applications powered by ML/DL algorithms.
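
The cloud detector mentioned above was publicly released as the open-source s2cloudless Python package; a usage sketch assuming its S2PixelCloudDetector interface (parameter names should be verified against the current package documentation):

```python
# Usage sketch of the open-sourced cloud detector (assumes the s2cloudless
# package's S2PixelCloudDetector interface; verify against current docs).
import numpy as np
from s2cloudless import S2PixelCloudDetector

detector = S2PixelCloudDetector(threshold=0.4, average_over=4,
                                dilation_size=2, all_bands=True)
# Sentinel-2 L1C reflectances, shape (n_scenes, height, width, 13 bands)
scenes = np.random.rand(1, 256, 256, 13).astype(np.float32)
cloud_prob = detector.get_cloud_probability_maps(scenes)
cloud_mask = detector.get_cloud_masks(scenes)   # binary per-pixel mask
```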

Authors: Peressutti, Devis; Zupanc, Anze; Aleksandrov, Matej; Sovdat, Blaz; Mocnik, Rok; Batic, Matej; Kadunc, Miha; Milcinski, Grega
Organisations: Sinergise, Slovenia
11:30 - 11:45
Physics-aware And Explainable Machine Learning (ID: 224)
Presenting: Camps-Valls, Gustau

Earth observation (EO) through in situ measurements, models and remote sensing data helps us improve our prediction and modeling capabilities. In the last decade, machine learning has emerged as a promising solution to many of the data analysis problems we face in remote sensing. Current machine learning methods, such as deep neural networks and support vector machines, exploit the complex (spatial and temporal) relations in the data deluge in a semi-automatic manner. However, such approaches have several relevant shortcomings: they typically need large amounts of labeled data, they are often regarded as black-box models with little or no transparent access to the learned relationships, and they hardly respect physical processes such as advection, diffusion or energy/mass conservation. We review the current machine learning paradigms and their main applications in remote sensing and the geosciences, touring classification, change and anomaly detection, as well as parameter retrieval with novel and modern techniques. Noting the previous limitations, we will introduce three advanced machine learning models for EO applications: 1) advances in the emulation of radiative transfer codes and optimized LUT generation (examples in PROSPECT and MODTRAN5); 2) a nonlinear spatio-temporal EOF-like dimensionality reduction technique that learns the complex relations in EO data cubes; and 3) physically-consistent regression that, with minor modifications, can incorporate prior knowledge and constraints. The presented methods outperform standard machine learning methods and pave the way to physics-aware and explainable machine learning for EO data processing.
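
Point 1, emulation, can be illustrated with a toy sketch: a Gaussian process is fitted to a small look-up table generated by a stand-in forward model and then serves as a fast surrogate with uncertainty estimates (the forward model below is a placeholder, not PROSPECT or MODTRAN5):

```python
# Emulation sketch: replace an expensive radiative-transfer run by a Gaussian
# process trained on a small LUT (toy forward model, not PROSPECT/MODTRAN5).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def rt_code(params):                             # stand-in for an RT simulation
    return np.sin(params).sum(axis=1, keepdims=True)

lut_inputs = np.random.uniform(0, np.pi, size=(200, 3))   # LUT node parameters
lut_outputs = rt_code(lut_inputs)

emulator = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
emulator.fit(lut_inputs, lut_outputs)

test = np.random.uniform(0, np.pi, size=(5, 3))
pred, std = emulator.predict(test, return_std=True)  # fast surrogate + uncertainty
```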

Authors: Camps-Valls, Gustau; Gómez-Chova, Luis; Svendsen, Daniel; Bueso, Diego; Martino, Luca; Perez-Suay, Adrian; Piles, Maria; Laparra, Valero; Ruescas, Ana Belen
Organisations: Universitat de València, Spain
11:45 - 12:00
AI4EO5Challenges (ID: 199)
Presenting: Datcu, Mihai

Challenge 1: Volume and heterogeneity. EO images are multi-sensor records, e.g. multispectral, SAR or altimeter data. These are multidimensional signals, acquired by instruments, that carry physical meaning: they measure global land, ocean or atmospheric parameters. Meanwhile, very high resolution EO images observe detailed spatial structures and objects, and satellite image time series observe evolution processes over long periods of time. An important particularity of EO data should therefore be considered: their "instrument" nature, i.e. they sense physical parameters, often outside of the visual spectrum. Moreover, EO product metadata describe location, time of acquisition, instrument parameters, orbit information, product processing level, etc. Additionally, GIS layers and maps on various themes describe related aspects of the observed scenes, and other types of geo-information, such as geo-morphological models, models of evolution and textual descriptions, contribute to the understanding of EO. In-situ information is continuously growing in sensor diversity in larger and larger networks: measurements of air or water content, in-situ photography, measurements of physical parameters, etc. Finally, location information, location-aware multimedia, GPS, tagging, spatial context, internet social networks and mobile communication information are evolving fantastically in diversity and volume, and contain unexpectedly important information.

Challenge 2: Big EO data analytics. Today's techniques, methods and tools for automated data analysis are insufficient for analysis and information extraction from EO data sources. A new goal has become capturing the user's interest, transforming the data into condensed information and knowledge items, and adapting it for direct and easy understanding. The capability of retrieving information interactively and the use of data-driven paradigms are now more necessary than ever due to the huge data volumes involved. Methods of computer vision and pattern recognition are needed for new tasks: detecting, localizing and recognizing objects, and recognizing and extracting semantic descriptions of scenes from sensor data. Important particularities are the extraction of quantitative measures of physically meaningful scene parameters, the registration of multi-sensor multi-temporal data, and the exploitation of the variability of imaging modes to provide different types of information about various structures. All of this results in the need for recognition methods that distinguish a huge variability of scene classes and objects with very good precision.

Challenge 3: Big EO data mining. Big data involves more and more machine or statistical learning for "discovery" functions. The discrepancy between the data volume explosion and the analysis potential is continuously growing, so new solutions are required. Among the most important are: detection of irrelevant data; the design of new sensors based on compressive sensing/sampling, recording smaller data volumes with the pertinent content; data compression; machine/statistical learning algorithms for fast prediction; DNNs for large-scale prediction; content analysis to extract higher-level analytics; and extraction and formalization of knowledge for data classification and understanding. There are several advanced topics, beyond today's techniques and methods, which are impacting EO tremendously: computational imaging, sensor networks, quantum sensors, quantum information theory, quantum signal processing, quantum machine learning and quantum computers.

Challenge 4: Human-machine communication. The new assets of Human Machine Communication (HMC) are predictive, adaptive natural user interfaces that learn and anticipate user behavior and collaborate with the user. HMC shall be based on understanding and learning the user's intentions and context, establishing a truly semantic human-machine dialog to transform non-visual sensor data and information into representations that humans can easily understand.

Challenge 5: Information platforms and architectures. Web-based interactive technologies and tools, using distributed system architectures, have the potential to provide access to and valorize EO information and knowledge. To cope with very demanding requirements regarding the data volumes to be accessed and the complexity of the information to be extracted, analyzed and presented, it is important to adapt to specific applications and to speed up interactive operation. Cloud computing should enable tasks not achievable with current resources, but new methods are still needed, since tools such as Hadoop or MapReduce may soon reach their limits. Potential solutions are foreseen in virtual EO data centre frameworks, connected and communicating across clouds, for an enhanced potential to share hardware resources and data.

Authors: Datcu, Mihai
Organisations: DLR, Germany, University Politehnica Bucharest, Chair Blaise Pascal Paris
12:00 - 12:15
Convolutional Neural Network Assessment For Land Cover Map Production From Sentinel-2 Image Time Series (ID: 126)
Presenting: Poulain, Vincent

Land cover maps consist of an inventory of the biophysical coverage of land for a given time period. They classify the ground content into a variable number of classes: for instance, they identify man-made structures (roads, buildings, etc.), agricultural plots (annual crops, pastures, etc.) or natural areas (water, forest, sand). Their availability is vital for applications in fields such as cartography, city and country planning, and agriculture. Land cover maps are usually created by human operators through visual image analysis. This leads to very long production times, high prices and heterogeneous quality. The increasing availability of open and free satellite data with high revisit frequency, like the EU's Copernicus program, drives the research for automation in land cover map production, which can allow frequent updates of land cover maps at a low cost. Deep Learning is a particular set of machine learning techniques that recently produced big improvements in computer vision applications. Contrary to traditional techniques, deep learning methods learn their own relevant features from the data. They rely for the most part on multi-layer convolutional neural networks, and outperform traditional techniques in computer vision applications like multi-class image classification, object detection and image segmentation. The goal of this work is to assess the usefulness of deep convolutional neural networks for automatic land cover map production using Sentinel-2 image time series. We compare this approach to those currently used in operational settings, which rely on pixel-based (non-contextual) Random Forest classification. The data used for this study are Sentinel-2 images for the entire year 2016 over metropolitan France. Sentinel-2's capabilities - 10 high-resolution spectral bands, 290 km swath, 5-day revisit frequency and good multi-temporal registration - are perfectly adapted to land cover map production. Sparse labelled data (16 classes) was made available by CNES's CESBIO lab for this study. The proposed network is fully convolutional and based on the classical U-Net segmentation network. However, due to the high level of sparsity of the labelled data, the original U-Net architecture generated segmentations that lacked the required level of detail; a land cover map must be fine-grained to suit end-user needs. The network was therefore improved with a cascade of 1x1 convolutional filters whose output is fused with the output of the 3x3 filters, which increases the spatial accuracy of the produced maps. Results of this study highlight the strengths and weaknesses of such a method. They are carefully compared to classical methods, like Random Forest (already used by the French space agency on the same dataset). The evaluation is performed using statistical measures but also visual analysis of the classification maps. This study assesses the ability of the neural network to be generic and to process new data and new classes efficiently, without needing prior information on the features characterizing each class. These criteria are important for the qualification of such a method for operational use and scalability, which is not usually analysed in the literature.
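
A minimal sketch of the fusion idea described above, with a 1x1 convolution cascade preserving per-pixel spectral detail alongside a 3x3 contextual path; fusion here is done by concatenation, one possible reading, and the layer sizes are illustrative, not the paper's exact architecture:

```python
# Fusion of a 1x1 convolution cascade (per-pixel spectral path) with a 3x3
# path, as described; sizes are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.spatial = nn.Sequential(            # contextual 3x3 path
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())
        self.spectral = nn.Sequential(           # 1x1 cascade keeps fine detail
            nn.Conv2d(in_ch, out_ch, 1), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 1), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.spatial(x), self.spectral(x)], dim=1)

block = FusionBlock(in_ch=10, out_ch=32)         # 10 Sentinel-2 bands in
out = block(torch.randn(1, 10, 128, 128))        # (1, 64, 128, 128)
```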

Authors: Poulain, Vincent (1); Poughon, Victor (2); Inglada, Jordi (3); Stoian, Andrei (1); Galimberti, Arnaud (1)
Organisations: 1: Thales Services, France; 2: CNES, France; 3: CNES/CESBIO, France
12:15 - 12:30
Artificial Intelligence for Earth Observation: Smart Analytics at the Satellite Applications Catapult (ID: 228)
Presenting: Jones, Tom

Conventional geo-information analyses remain very labour-intensive, while the volume of available geospatial image datasets continues to grow exponentially. Artificial Intelligence (AI) techniques, including Machine Learning (ML) and Computer Vision (CV), are maturing within the geospatial community due to their unrivalled ability to efficiently extract valuable information in such a world of data abundance. Recognising this, Smart Analytics is an active programme of work established at the Satellite Applications Catapult to accelerate the adoption of these techniques in the UK geospatial community and to stimulate the development of the next generation of analytics techniques. In this talk we shall present an overview of Smart Analytics activities undertaken over the last year, including our landscaping of the UK community and innovative projects around concepts such as labelled data generation and intuitively breaking the barrier between satellite datasets and their potential applications. We shall also give insights into what to expect from the next 12 months of our work.

Authors: Jones, Tom; Muller, Adrien
Organisations: Satellite Applications Catapult, United Kingdom

AI4EO (Part3)
14:00 - 15:30
Chairs: Maria Antonia Brovelli - Politecnico di Milano, Paulo Sacramento - Solenix c/o European Space Agency

14:00 - 14:15
Statistical Distillation of the Earth System Data Cube (ID: 248)
Presenting: Camps-Valls, Gustau

The Earth System Data Cube (ESDC) and its associated Data Lab allow easy access to a harmonized set of variables and climate indicators, globally, weekly, at different spatial resolutions. The cube includes a wide range of variables encoding atmospheric conditions, climate states, the terrestrial biosphere, the terrestrial hydrosphere and land-atmosphere fluxes, as well as variables that allow changes in the subsystems to be attributed to anthropogenic interventions, i.e. land-use change. Its scientific relevance is beyond doubt. Nevertheless, from a purely statistical point of view, the ESDC object is very attractive and poses many different challenges: it is a high-dimensional object that evolves in space and time, with high levels of collinearity, missing data, uneven uncertainties and noise, as well as varying informative content in all directions. This work presents advanced methodological tools from machine learning and statistics to squeeze and analyze the ESDC. We introduce three novel approaches: 1) an information-theoretic analysis of the space-vs-time dimensions which allows us to quantify and detect extreme events; 2) a wavelet-based compression analysis of the cubes to identify the most relevant variables and dimensions; and 3) a nonlinear spatio-temporal EOF-like dimensionality reduction technique that learns the complex relations in the data cubes. The presented general-purpose data analysis methods may lead to a systematic scheme for knowledge discovery in Earth system cubes on solid statistical grounds.
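
As a linear baseline for point 3, a classical EOF decomposition of a (time, lat, lon) cube via SVD; the nonlinear technique in the talk goes beyond this, and the cube below is random:

```python
# EOF-style dimensionality reduction of a (time, lat, lon) cube: reshape to
# (time, space) and take leading principal components (linear baseline).
import numpy as np

cube = np.random.rand(460, 40, 80)               # weekly steps x lat x lon
t, h, w = cube.shape
X = cube.reshape(t, h * w)
X = X - X.mean(axis=0)                           # anomalies per grid cell

U, s, Vt = np.linalg.svd(X, full_matrices=False)
eofs = Vt[:3].reshape(3, h, w)                   # leading spatial patterns
pcs = U[:, :3] * s[:3]                           # their temporal amplitudes
explained = s[:3] ** 2 / (s ** 2).sum()          # fraction of variance captured
```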

Authors: Camps-Valls, Gustau (1); Johnson, Emmanuel (1); Laparra, Valero (1); Bueso, Diego (1); Brandt, Gunnar (2); Fomferra, Norman (2); Permana, Hans (2); Mahecha, Miguel (3)
Organisations: 1: Universitat de València, Spain; 2: Brockmann Consult, Germany; 3: MPI Biogeochemistry, Germany
14:15 - 14:30
FPGA-based approach to efficient on-board data processing using deep neural networks (ID: 236)
Presenting: Czyz, Krzysztof

Earth observation (EO) satellites equipped with on-board high-resolution sensors, e.g. hyperspectral cameras, collect extreme data volumes. Unprocessed data would require dozens of terabytes of on-board satellite memory for buffering before it can be transmitted down to a ground station, and then to a data processing centre. One of the most important drawbacks of such an approach is the limited bandwidth of the data link and its cost; transferring all the collected data can therefore be unprofitable. A solution to this problem may be the use of Deep Neural Networks (DNN) for data processing on board the satellite, so that processed (and already segmented) image data is sent to the ground station, reducing the amount of data to be transferred by orders of magnitude. Nevertheless, such an approach can be computationally intensive and require supercomputing resources, which is challenging, especially on board a small satellite. However, FPGA-accelerated computing is becoming a trigger technology for building low-cost, power-efficient supercomputing systems that accelerate deep learning, analytics and engineering applications. Due to advances in silicon integrated circuit production, new chips are becoming more efficient; employing a FinFET production process also makes them more reliable and resistant to radiation, hence they can be successfully applied on board small satellites operating in Low Earth Orbits. The objective of this presentation is to present a dedicated FPGA-based DNN processing unit intended for small EO satellites. This unit is built on top of the NVIDIA Deep Learning Accelerator (NVDLA), a standardized, open deep learning acceleration architecture. The NVDLA architecture provides interoperability with the majority of modern deep learning networks and frameworks, including TensorFlow. The unit gains performance from the parallel execution of a large number of operations, such as convolutions, activations and normalizations, which are typical of DNN structures. The NVDLA was implemented in a Xilinx Zynq UltraScale+ MPSoC FPGA, providing a significant boost in performance and power consumption compared to non-accelerated processing of DNNs. The main factor limiting the use of NVDLA for space applications is its lack of fault tolerance. Therefore, in order to make the DNN accelerator unit more suitable for operation in a harsh environment, a number of architecture modifications are proposed, most importantly: functionality to detect and correct single-event upset (SEU) errors in registers and memory, modular redundancy of critical circuits, and fault detection and reporting to the host processor. The implementation details and system-on-chip features will be summarized, and the DNN accelerator's efficiency in terms of performance and power consumption will be discussed during the presentation. Finally, we will show how DNNs can be quantized (without lowering their segmentation capabilities) to fit into a very constrained hardware environment.
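
The closing point on quantization can be illustrated with a minimal post-training scheme: symmetric per-tensor int8 quantization of a weight tensor (a generic sketch, not the NVDLA's actual quantization path):

```python
# Post-training weight quantization sketch: map float32 weights to int8 with
# a per-tensor scale, as one way to fit a DNN into constrained hardware.
import numpy as np

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0              # symmetric per-tensor scale
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(64, 3, 3, 3).astype(np.float32)   # a conv kernel
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).max()
print(f"max reconstruction error: {err:.4f}")    # small relative to weight range
```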

Authors: Czyz, Krzysztof; Maciag, Mateusz; Lach, Jacek; Kurczalski, Marcin; Nalepa, Jakub; Ribalta Lorenzo, Pablo
Organisations: FPSpace, Poland
14:30 - 14:45
Accurate and Scalable Remote Sensing Image Search and Retrieval in Large Archives (ID: 257)
Presenting: Demir, Begüm

Due to the continuous advances in satellite technology, recent years have witnessed an explosive growth of remote sensing (RS) image archives. Accordingly, fast and accurate content-based image search and retrieval (CBIR), which aims to find the images most similar to a query image in large-scale archives, has attracted increasing attention in RS. Applications that query image contents from large RS data archives and index them rely on the capability and effectiveness of: 1) the feature extraction techniques in describing RS images and the specific spatial and spectral resolution properties of the data; and 2) the retrieval algorithms in evaluating the similarity among the considered features. Most existing CBIR systems in RS have limitations in dealing with the large-scale image search and retrieval problems we currently face. In this work, we present our latest CBIR system, which: 1) characterizes and exploits the high-level semantic content and spectral information present in RS images; and 2) achieves accurate and scalable RS image indexing and retrieval. In detail, our system includes a hashing method that maps the original feature space into a low-dimensional Hamming (binary) space through a set of hash functions. In this way images are represented by binary hash codes, and image retrieval can be achieved by simply calculating Hamming distances with bit-wise XOR operations. Unlike existing systems, our system includes a strategy to apply hashing to huge amounts of RS images in a time-efficient manner (in terms of both storage and speed) and to reach accurate search capability within huge data archives. The proposed system is thus much better suited to real large-scale RS image retrieval scenarios where the images have highly complex semantic content.
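
A toy sketch of Hamming-space retrieval, with random-hyperplane hashing standing in for the learned hash functions (illustrative only; the system described above learns its hash functions from the data):

```python
# Hamming-space retrieval sketch: random-hyperplane hashing to 64-bit codes,
# then ranking by XOR popcount (an LSH stand-in for the learned hash).
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(10_000, 512))        # archive image descriptors
planes = rng.normal(size=(512, 64))              # 64 random hyperplanes

def hash_codes(x):
    # one bit per hyperplane, packed into 8 bytes = 64 bits per image
    return np.packbits(x @ planes > 0, axis=1)

codes = hash_codes(features)                     # archive hash codes, (n, 8)
query = hash_codes(rng.normal(size=(1, 512)))

dist = np.unpackbits(codes ^ query, axis=1).sum(axis=1)  # Hamming via XOR
top10 = np.argsort(dist)[:10]                    # nearest archive images
```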

Authors: Demir, Begüm
Organisations: Technische Universität Berlin, Germany
14:45 - 15:00
AI4EO – Successful Stories and Open Issues (ID: 268)
Presenting: Zhu, Xiaoxiang

AI4EO stands for Artificial Intelligence (AI) for Earth Observation (EO). AI is currently penetrating many technological areas. Even though the term is used in an inflationary manner today, it usually refers to machine learning with deep neural networks (Deep Learning). Internet giants such as Google, Facebook and Microsoft, with their almost unlimited computing capacities, have achieved spectacular results in image classification, text translation and the game of Go. At the same time, Earth observation has irreversibly arrived in the Big Data era with the Sentinel satellites (and, in the future, with Tandem-L). This requires not only new technological approaches to manage large amounts of data, but also new analysis methods. We are among the pioneers in using Deep Learning in Earth observation and are enthusiastic about its possibilities. Going beyond quick wins obtained by fine-tuning existing architectures for the usual classification and detection tasks, we take particular care of the fact that Earth observation data and problems differ in many respects from the standard imagery found on the internet. In this talk, a wide spectrum of possibilities where Earth observation could benefit tremendously from AI and Data Science methods, such as deep learning, will be presented [1]. This includes hyperspectral data analysis [2], [3], time series data analysis [4], SAR imaging [5], multimodal data fusion [6], very high resolution scene interpretation [7], as well as geo-information extraction from social media data [8]. In particular, we exploit deep learning as an implicit nonlinear model to tackle important challenges such as the monitoring of global urbanization, one of the most important megatrends of global change. Despite these first successes, there is a great need for future research: beyond the current hype, important questions, including theoretical ones, will have to be dealt with. For example, in many cases remote sensing aims at retrieving geophysical or bio-chemical quantities rather than detecting or classifying objects. These quantities include mass movement rates, mineral composition of soils, water constituents, atmospheric trace gas concentrations, and terrain elevation or biomass. Often, process models and expert knowledge exist that are traditionally used as priors for the estimates. This particularity suggests that the dogma of expert-free, fully automated deep learning should be questioned for remote sensing, and that physical models should be re-introduced into the concept. Other examples are the handling of small and erroneous training data sets, networks for complex SAR data, and much more. A joint effort of the community is needed to address these challenges.

References: [1] X. X. Zhu et al., "Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources," IEEE Geosci. Remote Sens. Mag., vol. 5, no. 4, pp. 8–36, Dec. 2017. [2] L. Mou, P. Ghamisi, and X. X. Zhu, "Deep Recurrent Neural Networks for Hyperspectral Image Classification," IEEE Trans. Geosci. Remote Sens., vol. 55, no. 7, pp. 3639–3655, Jul. 2017. [3] L. Mou, P. Ghamisi, and X. X. Zhu, "Unsupervised Spectral–Spatial Feature Learning via Deep Residual Conv–Deconv Network for Hyperspectral Image Classification," IEEE Trans. Geosci. Remote Sens., vol. 56, no. 1, pp. 391–406, Jan. 2018. [4] L. Mou, L. Bruzzone, and X. X. Zhu, "Learning Spectral-Spatial-Temporal Features via a Recurrent Convolutional Neural Network for Change Detection in Multispectral Imagery," arXiv:1803.02642, Mar. 2018. [5] L. H. Hughes, M. Schmitt, L. Mou, Y. Wang, and X. X. Zhu, "Identifying Corresponding Patches in SAR and Optical Images With a Pseudo-Siamese CNN," IEEE Geosci. Remote Sens. Lett., vol. 15, no. 5, pp. 784–788, May 2018. [6] P. Ghamisi, B. Höfle, and X. X. Zhu, "Hyperspectral and LiDAR Data Fusion Using Extinction Profiles and Deep Convolutional Neural Network," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 10, no. 6, pp. 3011–3024, Jun. 2017. [7] L. Mou and X. X. Zhu, "RiFCN: Recurrent Network in Fully Convolutional Network for Semantic Segmentation of High Resolution Remote Sensing Images," arXiv:1805.02091, May 2018. [8] J. Kang, M. Körner, Y. Wang, H. Taubenböck, and X. X. Zhu, "Building instance classification using street view images," ISPRS J. Photogramm. Remote Sens., Mar. 2018.

Authors: Zhu, Xiaoxiang
Organisations: DLR & TUM, Germany
15:00 - 15:15
From Research & Innovation to Operations: Implementing a Set of Space and Security services (ID: 110)
Presenting: Albani, Sergio

The Research, Technology Development and Innovation (RTDI) Unit conducts Research and Innovation (R&I) activities at the European Union Satellite Centre (SatCen) with the aim of providing new solutions to support the operational needs of SatCen and its stakeholders. RTDI is also in charge of fostering cooperation with international organisations such as the European Space Agency (ESA) and the Group on Earth Observations (GEO). To advance the management and exploitation of Earth Observation (EO) and collateral data for improved service provision to Space and Security stakeholders, RTDI is currently implementing a set of services using Big Data, Cloud Computing and Artificial Intelligence technologies. The typical RTDI service development lifecycle includes service design, implementation, testing, validation and integration into the operational chain. In the pre-operational stage, the evaluation of suitable technologies and service applications is supported through participation in several H2020 projects (e.g. BigDataEurope, EVER-EST, NextGEOSS and BETTER) and by collaborating with international organisations (e.g. ESA and GEO). The service design is driven by requirements collected from Space and Security stakeholders (e.g. EU Member States and a number of EU entities); in particular, new technologies should enable effective exploitation of increasing data volumes (foreseeing a major contribution of open data) through automatic tools covering the whole data life-cycle. Starting from these requirements, the identified and developed services cover three main areas: Data Access, Processing and Analysis. Data Access services aim at facilitating the discovery and fetching of relevant data (e.g. geospatial data from satellites and other sources). Services are mainly built on Sentinel data: the Sentinel Data Access service currently guarantees fast and reliable access to Sentinel-1 and Sentinel-2 data via the Copernicus Services Data Hub (ServHub). Through an optimized search, the user enters minimal query parameters to access rapid visualization and a local download mechanism. Data Processing services aim at providing users with image processing capabilities using processing chains customized for Space and Security applications. The Sentinel-1 Pre-Processing service automatically pre-processes Sentinel-1 data, providing a terrain-corrected product ready for use in the user's own GIS. The Change Detection service computes change detection maps from Sentinel-1 imagery within a user-defined interval of interest. Two SAR data processing chains are being developed: Amplitude Change Detection (ACD) and Multi-Temporal Coherence (MTC). Data Analysis services aim at extracting value from the data. The Object Detection service is a demonstrator aiming to identify specific objects of interest for the Space and Security community using Machine Learning techniques. These services have different levels of maturity: the Sentinel Data Access service is deployed and operational, the Data Processing services are in the testing and validation phase, while the Object Detection service is currently under development. The final step will be the implementation of all services in a unitary framework, fully integrated within the SatCen operational workflow.
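
ACD chains of this kind are classically built on a log-ratio of co-registered amplitude images; a toy sketch of that core step on synthetic data (not SatCen's operational chain):

```python
# Amplitude Change Detection (ACD) sketch: log-ratio of two co-registered
# SAR amplitude images, thresholded into a change map (illustrative only).
import numpy as np

before = np.random.gamma(4.0, 1.0, size=(512, 512))   # dummy Sentinel-1 amplitudes
after = before.copy()
after[200:260, 300:380] *= 3.0                         # simulated change

log_ratio = np.log(after + 1e-6) - np.log(before + 1e-6)
threshold = 3 * log_ratio.std()                        # simple global threshold
change_map = np.abs(log_ratio) > threshold
print("changed pixels:", int(change_map.sum()))
```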

Authors: Albani, Sergio; Lazzarini, Michele; Angiuli, Emanuele; Popescu, Anca
Organisations: European Union Satellite Centre, Spain

AI4EO (Part4)
16:00 - 17:30
Chairs: Maria Antonia Brovelli - Politecnico di Milano, Paulo Sacramento - Solenix c/o European Space Agency

16:00 - 16:15
AI4EO Challenges in the context of the Great Green Wall Initiative (ID: 269)
Presenting: Salberg, Arnt

Droughts in the Horn of Africa and the Sahel from the 1970s onwards gave wings to the Great Green Wall (GGW) Initiative. The GGW is a plan to build an almost 8,000 km long, 15 km wide wall of trees across the African continent, from Senegal in the west to Djibouti in the east, acting as a barrier to prevent the spread of the desert. Considering the need to monitor the results of the investments in the Sahel regions, we will explore deep learning techniques applied to multi-sensor time series of Sentinel-1 and Sentinel-2 data. In particular, we will investigate whether deep learning techniques:
- can be applied to detect dry forest in the Sahel regions, with robustness to seasonal challenges,
- can be applied to distinguish between tree-planted areas and sprouting roots,
- are able to monitor the evolution of the dry forest (in terms of deforestation, reforestation and forest degradation),
- can be applied to provide decision-making support, by combining different indicators (climate, soil quality, trend of the dry forest, etc.) to identify areas where investment in reforestation can be more efficient or maximized.

Authors: Waldeland, Anders (1); Salberg, Arnt (1); Marin, Alessandro (2)
Organisations: 1: Norwegian Computing Center, Norway; 2: ESA/ESRIN
16:15 - 16:30
Bridging Climate and Earth Observation in AI-Enabled Scientific Workflows on Next Generation Federated Cyberinfrastructures (ID: 270)
Presenting: Landry, Tom

Since late 2015, the Computer Research Institute of Montreal (CRIM) and Ouranos, a Consortium on Regional Climatology and Adaptation to Climate Change, both located in Montreal, have developed the PAVICS (Power Analytics and Visualization for Climate Science) platform with the support of CANARIE, Canada's Advanced Research and Innovation Network. PAVICS focuses on the creation of standard and custom climate scenarios for impact and adaptation studies; the exploration, analysis and validation of climate model simulations; and the visualization of climate scenarios. Discussions are now underway for PAVICS NexGen to be used by the Canadian Centre for Climate Services (CCCS), an integral player in the Pan-Canadian Framework on Clean Growth and Climate Change. PAVICS is powered by Birdhouse, a software framework offering a collection of services to support data processing in the climate science community. Birdhouse development is led by the German Climate Computing Centre (DKRZ) as a community-driven software ensemble. The framework is being aligned with the recommendations and demands of international conventions of the United Nations (UN) and is in line with the Paris Agreement, aiming to provide processes supporting the realization of the UN Sustainable Development Goals (SDGs). More recently, Birdhouse has become the backend for climate projection data processing for the Copernicus Climate Change Service (C3S). As a very ambitious Earth Observation (EO) initiative headed by the European Commission (EC) in partnership with ESA, Copernicus focuses on several thematic areas such as land, marine, atmosphere and climate change, and makes extensive use of state-of-the-art space components through its Sentinel-series satellites. Advanced support for EO data is currently being implemented in PAVICS through several initiatives. For instance, the platform is being reused by CRIM in the OGC Testbed-14 (TB-14) Earth Observation and Clouds (EOC) thread, sponsored by ESA, which aims to advance cloud API interoperability and application portability as key elements of hybrid cloud computing environments. Other initiatives, such as PAVICS Next Generation (NexGen), planned for the next 3 years, also include one of the very first Canadian data cubes. This data cube will provide Analysis Ready Data (ARD) to scientists and end-users alike, further facilitating the development of EO and climate services while favoring the adoption of data governance best practices. PAVICS NexGen aims to bridge the gap between EO and climate sciences through multidisciplinary workflows and Natural Language Processing (NLP) techniques and tools, such as an ontology browser and Query Understanding Interfaces (QUI). Users will be able to create and discover the applications and workflows most closely matching their own needs and to run them on federated cyberinfrastructures. CRIM, Ouranos and DKRZ are all recent OGC members, and are also active stakeholders in the Earth System Grid Federation (ESGF). The federation collaboratively deploys and maintains software infrastructure for the management, dissemination and analysis of model output (CMIP, CORDEX) as well as observational data (Obs4MIPs). Federated infrastructures, security and data are also being advanced in OGC TB-14 and are key challenges inherent to ESA's Thematic Exploitation Platforms (TEP).
Recent sustainability funding from CANARIE will integrate novel remote sensing services based on Deep Learning (DL) techniques, allowing classification, detection and tracking of terrestrial properties and variables. These new services will allow building and exploiting a GEO image database, and will include basic querying and collaborative annotation, enabling Big Data applications that were previously impossible to perform. PAVICS NexGen also calls for the inclusion of novel Machine Learning (ML) and DL services and pipelines, applied to various EO and climate science use cases such as storm tracking and land change detection. Artificial Intelligence best practices are also experimented with and documented in TB-14. In the Machine Learning task, Synthetic Aperture Radar (SAR) imaging, Very High Resolution (VHR) imagery and IoT sensors such as surveillance cameras are used in a flood mapping and response concept. Participants and sponsors from at least six different countries are jointly developing a proof of concept, and the associated engineering report (ER) will be delivered by CRIM to the OGC. We envision that the series of initiatives presented here will: provide a structuring framework as well as important building blocks for the international scientific community; foster innovation and technological transfer for participating and beneficiary countries; and help develop a climate diplomacy involving scientists, engineers, educators, politicians and citizens, towards a better Future Earth for all.

Authors: Landry, Tom
Organisations: Computer Research Institute of Montreal (CRIM), Canada
16:30 - 16:45
Data analytics applied to the enhancement and improvement of EO products and services in a big data environment (ID: 279)
Presenting: Lorenzo, Alberto

Land Analytics EO Platform is an Earth Observation processing platform with capabilities for applying machine learning, artificial intelligence and a variety of data analytics algorithms. Its design allows installation in virtual machines, either on local hardware or in a cloud environment (tested successfully on AWS, Azure and Indra's private cloud). The advent of the DIAS will improve the performance of the platform if access to data finally becomes more efficient. It is capable of efficiently producing ARD (Analysis Ready Data) and downstream products and services, mostly related to the land environment. The platform is multi-purpose: tested for the production of water bodies and wetlands, its scope widens with each new industrial need it addresses. Additionally, the platform allows the ingestion of intermediate EO products in a Big Data environment, where analytics algorithms are applied in order to perform time-series analysis and extract hidden information using NoSQL databases. Sofia2 (a middleware supporting HDFS, SparkSQL, Hive, Hadoop, Cloudera Impala, Kudu and MongoDB, among others) ingests satellite-based intermediate products into its storage system, and multi-language analytics algorithms (R, Python and Spark) are applied to these databases. The benefits of ingesting information into this type of system are twofold: the capacity to process enormous quantities of data very efficiently (optimal use of cloud infrastructures) and the possibility of using IoT and social media data when deemed necessary or interesting. Analytics methods in this platform are based on time series analysis; on top of that, we apply the analytics methods to Earth Observation data, Earth Observation products and ancillary data using a threefold approach (see the sketch after this list for the first):
1. Analyzing time series to discover hidden patterns in the data. The objective is to classify the results into classes that are not pre-defined but a consequence of the internal organization of the data.
2. Analyzing relations between EO-based time series and ancillary data. The key of this process, again, is that the selection of ancillary data should not be guided by a preconception of which databases are more suitable to match.
3. Using both EO and ancillary data to train (with time series) and apply (with forecasts) prediction models, in order to foresee the response of EO products to a certain phenomenon.
The platform is capable of ingesting and making use of social media information in order to add value to the result. Images provided by social media users are stored in a database to support the validation process. Thus, our validation concept, based on the interpretation of satellite imagery, also includes the possibility of comparing results with in-situ information from the same time as certain satellite images. Moreover, the platform allows the use of comments produced in social media applications to enrich the elements extracted by EO and analytics processes (for instance, providing a real land use layer in addition to the traditional land use / land cover elements).
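
A toy sketch of approach 1: clustering per-pixel NDVI time series into classes that emerge from the data rather than being pre-defined (random data; Sofia2 ingestion and the platform's actual algorithms are not modeled):

```python
# Unsupervised pattern discovery in EO time series (approach 1 above):
# cluster per-pixel NDVI profiles into classes that are not pre-defined.
import numpy as np
from sklearn.cluster import KMeans

ndvi = np.random.rand(10_000, 36)               # 10k pixels x 36 dekads (dummy)
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(ndvi)
labels = kmeans.labels_                          # emergent temporal classes
profiles = kmeans.cluster_centers_               # mean seasonal profile per class
```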

Authors: Lorenzo, Alberto
Organisations: Indra, Spain
16:45 - 17:00
Deep Learning Based Methods For Building Segmentation From Remote Sensing Data (ID: 280)
Presenting: Lobry, Sylvain

A recent trend in Earth Observation (EO) is the strong increase in available data sources. This can be attributed to an increase in the number of sensors being launched and to open access policies allowing wide use of the data. An example is the Sentinel satellites, which acquire data at a high temporal resolution while covering most parts of the world, and which follow a free and open data policy. The main objective of these open policies is to support the emergence of new applications using EO data as a main source. To facilitate the creation of these applications, it is important to develop tools that extract high-level information. In this talk, we present deep learning methods developed in the Laboratory of Geo-information Science and Remote Sensing of Wageningen University and Research (WUR) to extract knowledge from this data. In particular, we focus on building segmentation: a precise building map allows for efficient crisis management and city planning, and can be used in many applications. Namely, we present works on:
- Correction of OpenStreetMap building annotations in rural areas: while OpenStreetMap annotations are of good quality in urban areas, the quality decreases in rural areas due to the lack of volunteers. This work presents a 3-step methodology based on a Convolutional Neural Network (CNN) to automatically correct and update the annotations in such areas.
- Deep active contours for building segmentation: while CNNs allowed a jump in performance for building detection, they are not able to precisely delineate building borders. Active contour models, on the contrary, enforce high-level geometric constraints, but the parameters of these constraints are set globally, leading to a lack of flexibility. In this work, we use a CNN to learn adaptive parameters of an active contour model for each instance; the model is then trained end-to-end.
- Domain adaptation for building segmentation: as CNNs are supervised models, they are highly dependent on the annotated training data, whose acquisition involves a lot of human effort and limits the usage of such methods. We study how a CNN model trained to segment buildings over areas with certain characteristics (e.g. highly populated areas) can be adapted to areas showing different characteristics, using an imperfect ground truth.
By showcasing these three methods, we show, for a specific problem (building segmentation), how deep learning methods enable concrete applications of remote sensing data.

Authors: Lobry, Sylvain (1); Marcos Gonzalez, Diego (1); Vargas-Munoz, John (2); Kellenberger, Benjamin (1); Srivastava, Shivangi (1); Tuia, Devis (1)
Organisations: 1: Wageningen University and Research, Netherlands, The; 2: Institute of Computing - University of Campinas, Brazil
17:00 - 17:15
Automatic And Robust Chain For Urban Reconstruction From Satellite Imagery (ID: 222)
Presenting: Tripodi, Sébastien
(PDF )

Automatic 3D reconstruction of urban scenes from satellite imagery is a popular yet challenging topic in remote sensing. The accuracy required in industrial applications is very high and ever-increasing, which is critical for several fields such as telecommunications and urban simulation. Currently, human interaction plays a key role in 3D scene reconstruction, especially in the extraction of building rooftops, given challenges such as occlusion and complex roof structures. We propose a fully automatic 3D urban reconstruction chain able to process all the world's cities in one year with decent accuracy. The semi-automatic strategies applied by many companies, including Luxcarta, have for years been based on a human-in-the-loop pipeline that generates: 1) epipolar images from satellite imagery; 2) building polygons, digitised manually with high accuracy; 3) DSMs by Semi-Global Matching (SGM) [1]; 4) semi-automatic classification (maps of trees, roads, etc.); 5) 3D models by fusing the DSM, polygons and classification maps. The main bottleneck of this method, used massively in the industry, is the manual digitisation of rooftops. Luxcarta has implemented an end-to-end automatic chain by replacing the manual steps of the previous method. We have implemented an innovative DTM extraction and a method to generate accurate 3D building models for large-scale satellite scenes [2]. The main challenge of this chain is to detect the contour of each building accurately. For this we have developed methods using pattern recognition techniques [3] and enhanced the SGM. This chain gives decent results with accurate 3D buildings, but in certain cases the contour must be corrected manually, e.g. for textureless objects. To solve the accuracy problem and remove manual correction, we believe a Deep Learning (DL) approach is the solution. DL has shown high performance in semantic labelling, in particular for segmentation of rooftops and classification of the natural environment. The output predictions can be used directly as input to our method. For example, a building mask can improve the robustness of our GCP collection process by filtering only the ground points (see the sketch below), as well as the results of our DTM and classification computed in our 3D building reconstruction method. However, the main bottleneck in this technology is the availability of training datasets. Having provided geodata for more than 20 years, Luxcarta has a huge quantity of ground truth data. The DL model we have used to validate this approach is a version of U-Net [4] adapted for aerial images [5]. U-Net is based on the fully convolutional network architecture, but is specialised and extended to work with small training datasets and produce precise segmentations. This approach has been tested and the first results show its efficiency on our data. It can be used to train and predict for a specific area with high accuracy in a short time, and also generalises well to heterogeneous areas. DL combined with our existing method can help respond today to the challenge of the automatic reconstruction of 3D urban scenes. [1] Hirschmuller, H. (2008). Stereo processing by semiglobal matching and mutual information. IEEE Trans. Pattern Anal. Mach. Intell. 30, 328–341. [2] Duan, L. and Lafarge, F. (2016). Towards large-scale city reconstruction from satellites. Proc. of the European Conference on Computer Vision (ECCV). [3] Duan, L. and Lafarge, F. (2015). Image partitioning into convex polygons. Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, US. [4] Ronneberger, O., Fischer, P., Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention (MICCAI), Springer, LNCS, Vol. 9351: 234–241. [5] Huang, B., Lu, K., Audebert, N., Khalel, A., Tarabalka, Y., et al. (2018). Large-scale semantic classification: outcome of the first year of the Inria aerial image labeling benchmark. IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2018), Valencia, Spain.
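To make the mask-filtering step concrete, here is a minimal sketch (hypothetical arrays, not Luxcarta code) of how a predicted building mask can restrict GCP candidates to ground points:

import numpy as np

dsm = np.random.rand(512, 512) * 50.0           # surface heights in metres (placeholder)
building_mask = np.random.rand(512, 512) > 0.9  # True where the network predicts a building

# Keep only pixels outside predicted buildings as ground-point candidates
rows, cols = np.where(~building_mask)
ground_heights = dsm[rows, cols]
print(ground_heights.size, "candidate ground points retained")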

Authors: Tripodi, Sébastien (1); Duan, Liuyun (1); Tasar, Onur (2,3,4); Tarabalka, Yuliya (2); Clerc, Sébastien (3); Fanton d’Andon, Odile (3); Trastour, Fréderic (1); Laurore, Lionel (1)
Organisations: 1: Luxcarta; 2: Inria Sophia-Antipolis, UCA; 3: ACRI-ST; 4: CNES
17:15 - 17:30
A Walk-Through on Machine Learning techniques for Sentinel Big Data Fusion (ID: 260)
Presenting: Aparício, Sara
(PDF )

Copernicus, the largest single Earth observation programme to date, is delivering an unprecedented wealth of imagery. Cloud computing platforms and Artificial Intelligence (AI) are overtaking traditional tools in tackling the challenges of Big Data and the extraction of meaningful information. This study is a walk through the main Machine Learning (ML) techniques for classification purposes, fusing satellite data of different natures, namely synthetic aperture radar (SAR) and multispectral data from the Copernicus Sentinel-1 and Sentinel-2 missions, respectively. Google Earth Engine (GEE) provides both satellite imagery and many ML algorithms, which are very useful for extracting land cover from different sources of imagery. The two main objectives of this study were (1) to explore and determine the best-performing ML classifier among the most popular ones available on Google Earth Engine, and (2) to understand and quantify the improvement in accuracy of the best-performing model (given by its overall accuracy) under different band and index scenarios for multi-temporal land cover classification. The different classifiers were applied to a region of interest (ROI) containing four main land cover types (forest, crop, water and urban). The input for machine learning was one year of satellite data, namely: all bands of Sentinel-2 with the exception of the atmospheric bands; NDVI and NDWI; the VV and VH bands of Sentinel-1 and the VV/VH band ratio; and elevation from the ASTER GDEM. Random Forests showed one of the highest accuracies for land cover mapping (99% overall accuracy and a Kappa index of 0.98). GEE proved particularly useful for multi-temporal and multi-sensor data exploitation, providing scientifically relevant information in a cost- and time-effective manner.
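A minimal sketch of this workflow in the Earth Engine Python API is shown below; the training asset is hypothetical and the classifier name may differ between API versions:

import ee
ee.Initialize()

# Yearly Sentinel-2 median composite plus NDVI (band names per the S2 catalogue)
s2 = (ee.ImageCollection('COPERNICUS/S2')
      .filterDate('2017-01-01', '2017-12-31')
      .median())
stack = s2.addBands(s2.normalizedDifference(['B8', 'B4']).rename('NDVI'))

# Hypothetical FeatureCollection of labelled points over the ROI
training = ee.FeatureCollection('users/example/training_points')
samples = stack.sampleRegions(collection=training, properties=['class'], scale=10)

clf = ee.Classifier.smileRandomForest(100).train(samples, 'class', stack.bandNames())
classified = stack.classify(clf)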

Authors: Aparício, Sara
Organisations: European Space Agency, Italy

WebWorldWind NASA-ESA project
16:00 - 17:30

16:00 - 17:30
WorldWind Open Source Virtual Globe Platform (ID: 107)
Presenting: Voumard, Yann

As an introduction to Web WorldWind, the fundamental ideas that have driven the development of the WorldWind libraries since their beginning will be presented through a bit of history and the philosophy behind them. On this occasion, projects from all over the world using one of the WorldWind implementations will be shown. The inspiring events organised around these technical libraries for bright students and young entrepreneurs will be highlighted thanks to the participation of the winning team of this year’s UN World Challenge. The focus will then shift to the web implementation of the WorldWind concept and the key role played by ESA in its development. An overview of the current development state and features will be given, together with a sneak peek at the development roadmap. The presentation will then continue with a series of examples and demonstrations of concrete applications built in Europe with the Web WorldWind technology, in particular using ESA data. Finally, useful resources for getting started with Web WorldWind will be presented.

Authors: Barois, Olivier (1); Voumard, Yann (3); Draghici, Florin (4); Hogan, Patrick (2); Ifrim, Claudia (5); Balhar, Jakub (5)
Organisations: 1: European Space Agency; 2: NASA Retired, United States of America; 3: Solenix Deutschland GmbH; 4: Qualteh JR; 5: GISAT

Workshop Launcher
09:30 - 12:00

09:30 - 10:00
Vega Space Transportation Systems in Development (including Small Satellite Mission Service and Space Rider) (ESA - Giorgio Tumino) (ID: 409)
Presenting: Tumino, Giorgio

The ongoing Vega-C development programme shall be briefly presented. Vega-C is an improved version of the current Vega with approximately 800 kg of increased performance at the reference orbit (700 km polar Earth orbit), dedicated to larger payloads. The maiden flight is currently scheduled for the end of 2019. Furthermore, the other Vega Space Transportation Systems under development shall be briefly presented: – the Small Satellite Mission Service (SSMS), a satellite dispenser for smaller payloads under development in several configurations, whose Proof of Concept (PoC) flight with Vega is currently planned for mid-2019; – the Space Rider System for payload return, under development based on the IXV experience and capable of hosting up to 600 kg of payloads/experiments in its Multi-Purpose Cargo Bay; its maiden flight, provided completion of the development programme is decided at the next CM-19, is currently planned for 2021; – VENUS for payload orbital transfer: an electric propulsion module for specific applications such as orbit transfer; – Vega-E, preparing the future: further medium/long-term Vega evolutions, including a Vega light version and a heavy version equipped with a LOx-methane liquid engine replacing the third-stage SRM Z-9 and the AVUM.

Authors: Tumino, Giorgio
Organisations: ESA, STS Esrin Italy
10:00 - 10:30
Vega Space Transportation Systems Exploitation Perspectives and the L3 Initiative (ESA - Renato Lafranconi) (ID: 412)
Presenting: Lafranconi, Renato

Vega exploitation records (12 flights successfully carried out so far, with the 13th planned for next 20 November) and perspectives shall be briefly presented. This section shall also include: – a description of the Light satellite Low-cost Launch opportunities (LLL or L3) Initiative, a coordinated Initiative for a comprehensive approach to providing competitive Ariane 6 and Vega/Vega-C launch service solutions suited to the needs of light satellites, including “Proof-of-Concept” flight(s) demonstrating the service; – Vega Space Transportation Systems perspectives for small satellites through the SSMS dedicated dispenser and for payload return through the Space Rider System.

Authors: Lafranconi, Renato
Organisations: ESA, STS Esrin Italy
11:00 - 11:30
Ariane 6 new services in development (including MLS) (ESA - Piero Resta) (ID: 413)
Presenting: Resta, Piero

Ariane 6 development and the related new services for small satellites in LEO shall be briefly presented, including the Multi-Launch System (MLS) under development to make the best use of unused capacity; the relevant Proof of Concept flight is currently planned for 2021.

Authors: Resta, Piero
Organisations: ESA, STS HQ Daumesnil Paris France
11:30 - 12:00
Arianespace Services and Solutions (including smallsat solutions for Vega and Ariane 6) (Arianespace - Marino Fragnito) (ID: 410)
Presenting: Fragnito, Marino

Arianespace, the European launch service provider, shall present Ariane 6 and Vega/Vega-C dedicated solutions for small satellites and the relevant short-term potential launch opportunities. This session shall also include Vega flight records, the Vega-C launch manifest and the Arianespace perspective on the Ariane 6 MLS.

Authors: Fragnito, Marino
Organisations: Arianespace, France

InCubed side event
14:00 - 15:30

14:00 - 15:30
Insights into Investing in Industrial Innovation (InCubed) Workshop (ID: 388)
Presenting: Regan, Amanda

Investing in Industrial Innovation (InCubed) is a new programme element and the first co-funding framework within the Earth Watch Programme. This session will provide an overview of the programme, the kinds of proposals being submitted and the evaluation process, and some examples of recently kicked-off activities will be presented. The session will end with a round-table discussion.

Authors: Regan, Amanda
Organisations: ESA-ESRIN, Italy

DEMO ADAM
16:00 - 17:30

MEEO Data Cube Hands on - ADAM - Advanced DAta Management platform (ID: 380)
Presenting: Natali, Stefano

Digitalisation is crucial in many aspects of our daily lives nowadays – and so is access to data. The concept of ‘Digital Earth’ (DE), as outlined in 1999 by former US Vice-President Al Gore, foresees a “multi-resolution, three-dimensional representation of the planet that would make it possible to find, visualise and make sense of vast amounts of geo-referenced information on physical and social environments” (Gore, A., The Digital Earth: understanding our planet in the 21st century, Photogrammetric Engineering and Remote Sensing, 65(5), 528, 1999). The DE concept is now a reality, with the Copernicus programme providing a fundamental contribution. The challenge is now how to access and extract information from the decades of global and local environmental data generated by in-situ sensors, numerical models, satellites and, more recently, individuals. This implies a change in the data exploitation paradigm, moving towards a massive data analysis approach. One of the main issues to face is the inhomogeneity of data types, formats and geographic projections or, in one word, the variety of the data that describe our planet. ADAM is a platform that implements the Digital Earth concept: it is a cross-domain platform that makes available a large set of multi-year global environmental collections, allowing data discovery, visualisation, combination, processing and download. It implements a "virtual datacube" approach in which data stored in distributed data centres are made available via standardised OGC-compliant interfaces. Dedicated web-based graphical user interfaces as well as web-based notebooks, REST APIs and desktop GIS plug-ins can be used to access and manipulate the data. ADAM is a “horizontal” layer supporting a large variety of vertical (thematic) applications such as climate change monitoring and mitigation, cultural heritage safeguarding, air quality assessment and monitoring, agricultural applications and (re-)insurance in agriculture, security applications, education, and many others. The session presents ADAM, its development and features, and current applications, to trigger discussion on future perspectives.
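Since ADAM exposes standardised OGC-compliant interfaces, a generic OGC client is enough to retrieve data; the sketch below uses OWSLib with a hypothetical endpoint, layer name and time value:

from owslib.wms import WebMapService

wms = WebMapService('https://example-adam-endpoint/wms', version='1.3.0')
img = wms.getmap(layers=['sea_surface_temperature'],   # hypothetical layer
                 srs='EPSG:4326',
                 bbox=(-10.0, 35.0, 20.0, 50.0),
                 size=(800, 400),
                 format='image/png',
                 time='2018-06-01')
open('sst.png', 'wb').write(img.read())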

Authors: Natali, Stefano (1); Mantovani, Simone (2); Folegani, Marco (2)
Organisations: 1: SISTEMA GmbH, Austria; 2: MEEO Srl, Italy

Demo SAP HANA
14:00 - 15:30

Demonstration Of SAP HANA to Enhance the Value Of EO (ID: 368)
Presenting: Sachdev, Paramjeet

Understand how SAP, in combination with Partners like Esri, HERE, and ESA, are enabling Enterprises to harness the power of Geospatial data to enhance cross-organisational insights, drive efficiencies, and build new revenue models. See how SAP have co-innovated with customers to bring enterprise and spatial data together to enable real-time decisions and understand why SAP are offering these services through both traditional and serverless consumption models.

Authors: Sachdev, Paramjeet; Bucci, Luca
Organisations: SAP, United Kingdom

Demo SAP HANA
16:00 - 17:30

Demonstration Of SAP HANA to Enhance the Value Of EO (2) (ID: 411)
Presenting: Sachdev, Paramjeet

Understand how SAP, in combination with Partners like Esri, HERE, and ESA, are enabling Enterprises to harness the power of Geospatial data to enhance cross-organisational insights, drive efficiencies, and build new revenue models. See how SAP have co-innovated with customers to bring enterprise and spatial data together to enable real-time decisions and understand why SAP are offering these services through both traditional and serverless consumption models.

Authors: Sachdev, Paramjeet; Bucci, Luca
Organisations: SAP, United Kingdom

Digital Poster - Exhibition - Drink
17:30 - 19:00

Beyond the Proba-V Mission Exploitation Platform (ID: 100)
Presenting: Clarijs, Dennis

Since 2016, VITO has developed and operated the Proba-V Mission Exploitation Platform (MEP) on a private OpenStack and Hadoop/Spark cluster, well known to ESRIN. This platform is used by the Proba-V user community, has been expanded with access to Sentinel data, and is used as a node in a federation of platforms in various projects (TEP Food Security, NextGEOSS, DataBio, Copernicus AppLab, etc.). Evolutions of the platform are ensured in these and future projects, and integration with the Copernicus DIAS infrastructures is a clear logical step. This talk, however, will discuss the future ‘disruptive revolutions’ which VITO plans as a follow-up activity: to design and develop novel concepts and components in the field of fast data analytics and on-the-fly user-centric processing on massive and heterogeneous EO data archives, in order to further extend the capabilities and functionalities of the Proba-V MEP so that it remains a state-of-the-art exploitation platform for vegetation data: keeping pace with a rapidly evolving IT landscape and answering the needs expressed in end-user requests. In this new activity VITO seeks to collaborate with computer scientists and ICT infrastructure providers, including commercial ones offering unique capabilities. Our focus areas include: •) Applying machine learning (both supervised and unsupervised) as a technique to be used on the platform to boost application performance. Several demonstration prototypes will be developed; as examples we mention supervised streaming cloud detection, forecasting of yield production and phenology, and data fusion to create the capability to move towards ‘super-resolution’ based on different sensors. As a result of this activity, the machine learning capabilities shall be made available to all users of the platform for other usage. The activity shall also prototype with best-in-class machine learning offerings, e.g. Google TensorFlow. •) Data cubes: the scope is not to develop new data cube software, but to use/extend existing software, e.g. to support Python/R analysis and processing on the full archives of Sentinel data on clouds. This will realise the paradigm shift from ‘EO-data file’ processing to ‘EO-data cube’ processing, enabling very fast access to the ‘right’ data – in short, bringing the user’s algorithm to the data cube. •) Streaming processing components for on-the-fly processing, on any data cube, of e.g. vegetation indices, to avoid reprocessing and extensive storage costs and to deliver a user-tailored ‘virtual’ archive (a minimal sketch of this idea follows the abstract). This technology can offer user-tailored products without building up large archives of products using algorithms which will never satisfy all users. It has the potential to provide products which are today offered as pre-processed archives as ‘virtual’ products which are only produced when a user requests them (for a specified AoI, date range, etc.) and with the algorithms the user prefers (e.g. cloud detection, atmospheric correction, etc.). In this activity the need for time series of on-the-fly produced products shall be taken into account. •) Technologies allowing MEP capabilities to move to public cloud infrastructures, i.e. envisaging hybrid solutions to take advantage of the strengths of each platform according to the needs of the job to be done. •) A toolset for time series analytics and visualisation, to bring the most valuable features for exploiting time series to a cloud platform with a web service interface, integrated in notebooks and a user-friendly web client. •) By exploiting all the above points, paving the way for massive data fusion (Proba-V, Sentinel-x, possibly other sensors): this is key for agricultural applications, to provide temporally dense time series.
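The on-the-fly ‘virtual product’ idea can be sketched with a lazy data cube expression. Nothing below is the MEP’s actual code, and the cube layout and band names are assumptions, but it shows how an index is only computed when a user’s request materialises it (using xarray with dask for laziness):

import xarray as xr

# Open a (hypothetical) Sentinel-2 cube lazily, chunked per time step
cube = xr.open_dataset('sentinel2_cube.nc', chunks={'time': 1})

# NDVI defined as a lazy expression: no pixels are processed yet
ndvi = (cube['B8'] - cube['B4']) / (cube['B8'] + cube['B4'])

# Only the user's requested slice is actually computed
subset = ndvi.sel(time=slice('2018-04-01', '2018-06-30')).compute()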

Authors: Goor, Erwin; Clarijs, Dennis; Janssen, Bram
Organisations: VITO, Belgium
starfm4py: The Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) implemented in Python (ID: 138)
Presenting: Mileva, Nikolina

Remote sensing image fusion allows the spectral, spatial and temporal enhancement of images. New techniques for image fusion are constantly emerging, shifting the focus from pan-sharpening to spatiotemporal fusion of data originating from different sensors and platforms. However, the application of image fusion in the field of Earth observation still remains limited. The number and complexity of the different techniques available today can be overwhelming, preventing users from fully exploiting the potential of fusion products. The aim of this study is to make fusion products more accessible to users by providing them with a simple tool for spatiotemporal fusion. This tool will contribute to the better exploitation of data from available sensors, making it possible to bring images to the spectral, spatial and temporal resolution required by the user. The fusion algorithm implemented in the tool is based on the spatial and temporal adaptive reflectance fusion model (STARFM) – a well-established fusion technique in the field of remote sensing, often used as a benchmark by other algorithms. The capabilities of the tool are demonstrated by three case studies using data from Sentinel-2 and Sentinel-3. The first case study concerns change detection at an agricultural site in southern Germany. The other two case studies concentrate on deforestation in the Amazon forest and on urban flooding caused by Hurricane Harvey.
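The central assumption of STARFM can be stated in a few lines: the temporal change observed at coarse resolution is transferred to the fine-resolution image. The sketch below shows this simplest, per-pixel form with placeholder arrays; starfm4py implements the full algorithm with its spectrally and spatially weighted neighbourhood search.

import numpy as np

fine_t0   = np.random.rand(100, 100)  # e.g. Sentinel-2 reflectance at t0 (placeholder)
coarse_t0 = np.random.rand(100, 100)  # e.g. Sentinel-3 at t0, resampled to the fine grid
coarse_t1 = np.random.rand(100, 100)  # e.g. Sentinel-3 at t1, resampled to the fine grid

# Assume the coarse-scale temporal change also holds at fine scale
fine_t1_pred = fine_t0 + (coarse_t1 - coarse_t0)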

Authors: Mileva, Nikolina; Mecklenburg, Susanne; Gascon, Ferran
Organisations: European Space Agency, Italy
Deep Learning with CryoSat: Using a neural network to improve radar altimetry (ID: 281)
Presenting: Ewart, Martin

Radar altimetry is commonly used for monitoring changes within the cryosphere and its contribution to sea-level rise. The revolutionary design of CryoSat-2 (CS2) features SAR and interferometric capabilities that allow us to increase spatial resolution while resolving the angular origin of off-nadir echoes occurring over sloping terrain, making it particularly well adapted to monitoring ice-sheet margins. On top of measuring more precisely the elevation of the Point Of Closest Approach (POCA) – the standard Level-2 product – the CS2 SARIn mode enables the CS2 Swath SARIn approach. This produces elevations beyond the POCA, leading to between one and two orders of magnitude more measurements than Level-2 products, and has enabled surface elevation models at a fine 500 m spatial resolution from each satellite pass. However, there is variability in radar elevation estimates, often due to the penetration of radar waves into snow and firn, yielding differences compared to local airborne LiDAR altimeters, which are less affected by penetration. Local airborne LiDAR campaigns, however, have limited coverage and temporal restrictions, as flights only operate in favourable weather conditions. Therefore, a way to combine the accuracy of local airborne LiDAR with the spatial and temporal advantages of CS2 Swath data is desirable. While neural networks are increasingly being used in a wide variety of domains to enhance predictions and detect changes, they have not previously been applied to radar altimetry to correct for elevation biases within the cryosphere. In this study, we present a novel approach to adjusting for elevation bias: a two-layer neural network is trained to predict the elevation provided by local airborne LiDAR from CS2 Swath data where both datasets are available, and the model is then applied where only CS2 Swath data are present. By using a neural network we remove the need for an individual to build a model that defines the relationships between input variables, many of which are largely unknown. A neural network overcomes this by automatically determining the relationships between variables in its hidden layers during the training cycle, capturing behavioural patterns that are not currently known. We investigate the challenges of building such models and review a variety of configurations and considerations, such as overfitting and overgeneralising. Finally, we present two proofs of concept that show good spatial and temporal transferability across three topographically different study areas in Greenland (Jakobshavn, Storstrømmen and the South East). These models compensate for 70-90% of the mean radar penetration in the CS2 Swath data, while reducing the root mean squared error by 10-17%.
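The learning setup can be sketched as a small regression problem; the feature list below is hypothetical and scikit-learn stands in for whichever framework the authors used:

import numpy as np
from sklearn.neural_network import MLPRegressor

# Features from CS2 Swath data (e.g. elevation, coherence, power); placeholder values
X = np.random.rand(5000, 6)
# Target: co-located airborne LiDAR elevation
y = np.random.rand(5000)

# Two hidden layers, as in the abstract's two-layer network
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500).fit(X, y)
corrected = net.predict(X)  # in practice, applied where only swath data exist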

Authors: Ewart, Martin (2); Gourmelen, Noel (1)
Organisations: 1: The University of Edinburgh, United Kingdom; 2: EarthWave Ltd, United Kingdom
Evaluating Sentinel-1 InSAR services in the Geohazard Exploitation Platform (GEP): a contribution to the CEOS Geohazards Lab initiative (ID: 165)
Presenting: Cigna, Francesca

This work showcases an analysis of Sentinel-1 Interferometric Wide (IW) swath C-band Synthetic Aperture Radar (SAR) imagery in Terrain Observation by Progressive Scans (TOPS) mode using Interferometric SAR (InSAR) processing services hosted within ESA’s Geohazard Exploitation Platform (GEP). This analysis is carried out in the framework of the Geohazards Lab, a new initiative within the Committee on Earth Observation Satellites (CEOS) Working Group on Disasters, which follows on from and collaborates with the CEOS Pilots on thematic activities and the Recovery Observatory. The Lab aims to address priorities of the Sendai Framework for Disaster Risk Reduction 2015-2030 by enabling greater use of Earth observation data and derived products to assess geohazards and their impact, with a focus on disaster risk management. To contribute to these objectives, we have run a number of trials to test the different processing services available in GEP for Sentinel-1 IW data. The InSAR services that we exploit include both conventional and advanced InSAR tools for differential InSAR and the extraction of coherence maps, and Persistent Scatterers (PS) and Small Baseline Subset (SBAS) methods for time series analysis. The geographic focus of our trials is Haiti. This country is known for its susceptibility to various natural hazards, including earthquakes, hurricanes and landslides, and has experienced damage from a number of major events over recent years, for instance the catastrophic MW 7.0 earthquake in 2010 and Hurricane Matthew in 2016. The lessons learnt from the trials provide technical and scientific feedback on the methodologies for InSAR product generation using Sentinel-1 IW data, as well as geohazard information that feeds into the CEOS Recovery Observatory project in Haiti.

Authors: Cigna, Francesca (1); Tapete, Deodato (1); Bally, Philippe (2); Cuccu, Roberto (3,4); Papadopoulou, Theodora (5); Caumont, Hervé (6); Foumelis, Michael (7)
Organisations: 1: Italian Space Agency (ASI), Italy; 2: European Space Agency (ESA), Italy; 3: ESA Research and Service Support, Italy; 4: Progressive Systems Srl, Italy; 5: ARGANS Ltd, France; 6: Terradue Srl., Italy; 7: French Geological Survey (BRGM), France
GeoStorm on EO IPT Poland: a private initiative to provide EO adding value data in a geospatial platform (ID: 162)
Presenting: Savinaud, Mickaël

GeoStorm is a geospatial platform developed by CS SI over several years to handle various types of data from satellites, GIS databases, IoT networks or social networks and to perform analytics on them. It has been instantiated on the Earth Observation Innovative Platform Testbed Poland (EO IPT Poland). This cloud architecture provides a large set of EO data (Sentinel-1/2/3, Landsat 5/7/8 and Envisat) and computing facilities (various virtual machine configurations or a Torque cluster, for example). CS SI has integrated into this platform a set of open-source processors from the Orfeo ToolBox, the SNAP Toolbox and the Sentinel-2 for Agriculture and Theia expertise centres. From the Orfeo ToolBox, we have integrated the BandMath application, allowing users to apply various pixel-based computation formulas to EO data, and the pixel-based classification framework, with a user-friendly interface. From SNAP, we have integrated the SNAP graph descriptor framework, which allows users to submit their own graphs or use pre-existing ones. Concerning Sentinel-2 for Agriculture, we have integrated the Leaf Area Index processor, which runs the MAJA L2A processor to perform atmospheric correction and cloud detection. Based on our experience with the Orfeo ToolBox, we plan to integrate the IOTA2 chain and the Snow Cover chain from the Theia expertise centre. All these processors are integrated in a user-friendly interface which allows users to select an Area of Interest, monitor processing, and visualise and download output products. Moreover, they are integrated in a marketplace which allows processing costs to be estimated. Indeed, GeoStorm provides a processing framework which supports processing from local to national scale, thanks to the Torque cluster available on the EO IPT Platform and a hybrid cloud technology based on Apache Mesos. In order to retrieve EO data from other data providers, CS SI has developed a Python library to search, download and filter EO products from the Scihub, Theia or PEPS providers. All these features give our geospatial platform the capability to generate on demand a large set of added-value EO products from Copernicus data.
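As an example of the kind of processor wrapped by the platform, the sketch below drives the Orfeo ToolBox BandMath application from Python to compute an NDVI; the file name and band indices are illustrative:

import otbApplication

app = otbApplication.Registry.CreateApplication('BandMath')
app.SetParameterStringList('il', ['sentinel2_stack.tif'])
app.SetParameterString('out', 'ndvi.tif')
# NDVI assuming band 4 is red and band 8 is NIR in the input stack
app.SetParameterString('exp', '(im1b8 - im1b4) / (im1b8 + im1b4)')
app.ExecuteAndWriteOutput()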

Authors: Savinaud, Mickaël; Gaudissart, Vincent
Organisations: CS SI, France
SEonSE: Multi-sensor Maritime Situational Awareness And Geospatial Analytics (ID: 161)
Presenting: Daffinà, Filippo

The fast growth of maritime data acquired using different sensors (e.g. commercial and free-of-charge satellite Earth Observation data, AIS) enhances Maritime Situational Awareness (MSA) and the capability to detect and analyse maritime dynamics. The combined adoption of free-of-charge (Sentinel-1, Sentinel-2, Sentinel-3, Landsat) and commercial satellite missions (e.g. COSMO-SkyMed) is operationally used for systematic monitoring of maritime traffic and the detailed analysis of specific phenomena (e.g. anomaly detection), getting the best benefit from revisit time, data availability and resolution. At the same time, AIS operators are providing enhanced services by increasing the number of satellites and adopting different technologies (e.g. micro-satellites), with the aim of providing near-real-time services characterised by a reduced latency, on the order of a few minutes, at global level. The capability to ingest and process this huge amount of geospatial big data makes it possible to provide Geoinformation-based services for different applications to institutional and private entities active in the domains of maritime safety, security, environmental protection and resource management. The process of data fusion between the heterogeneous contents provided by the different information sources now offers the possibility to detect and track cooperative and non-cooperative vessels, to detect anomalous behaviours faster and better, and to monitor maritime trends (e.g. commercial routes, average voyage durations, port activities). The generation and adoption of density maps, using long time series and applying different profiles (e.g. ship type, speed), is a consolidated approach to extract and analyse Maritime Patterns of Life, deriving information about Sea Lines of Communication (SLOC), new emerging routes, chokepoints, standing features, new maritime trends and intensive fishery areas. The adoption of a cloud infrastructure guarantees the right level of scalability of ICT resources and the performance required for big data analysis techniques. In this context, the adoption of machine learning and deep learning algorithms is starting to provide good results in the detection of maritime features (e.g. ship detection, wake detection, oil spill detection, anomaly detection) and in the prediction of vessel behaviour.
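A vessel density map of the kind described reduces, in its simplest form, to binning AIS positions on a grid; the sketch below uses synthetic coordinates over the Mediterranean:

import numpy as np

# Synthetic AIS positions (longitude, latitude); real pipelines bin months of feeds
lon = np.random.uniform(-6.0, 36.0, 100000)
lat = np.random.uniform(30.0, 46.0, 100000)

# 0.1-degree grid: counts per cell form the density map
density, lon_edges, lat_edges = np.histogram2d(
    lon, lat, bins=(420, 160), range=[[-6.0, 36.0], [30.0, 46.0]])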

Authors: Daffinà, Filippo; Quattrociocchi, Dino; Corsi, Marco
Organisations: e-GEOS, Italy
The E2mC Project: Pre-operational Results Combining Social Media And Crowdsourcing For Rapid Mapping (ID: 160)
Presenting: Grandoni, Domenico

The goal of the E2mC H2020 project is to demonstrate the feasibility and usefulness of integrating social media analysis and crowdsourced information within both the Rapid Mapping and Early Warning components of the Copernicus Emergency Management Service (EMS). In recent years, several operational experiences (large earthquakes such as the one in Central Italy in 2016/2017, or large hurricanes and floods such as Harvey in Texas in 2017) have shown the high potential contribution of social media and crowdsourcing to improving the overall quality and timeliness of satellite-based Rapid Mapping services. The E2mC project has accepted these challenges and designed an innovative approach. E2mC has succeeded in implementing a prototype platform (the “Social&Crowd” platform) that implements the modules necessary to demonstrate, under pre-operational conditions, the added value of social media and crowdsourcing in a Rapid Mapping context. In particular, the specific characteristics of the “Social&Crowd” platform are: a) a multi-source social media and news crawling engine; b) a customised geocoding engine based on semantic analysis coupled with open-source gazetteers; c) a deep learning engine to automatically tag media contents and filter out irrelevant content; d) a multi-purpose crowdsourcing platform to manage simple micro-tasks assigned to the crowd, such as keyword translation, media relevance assessment, content geolocation improvement, simple mapping tasks, etc.; e) a web interface to interact with the platform, trigger ad hoc activations of the “Social&Crowd” platform, inspect and download the results, and further integrate them into other generic GIS environments. In this way, the “Social&Crowd” platform demonstrates how crowdsourcing, data mining and Artificial Intelligence can be combined to deliver higher-quality data-driven services (e.g. crowdsourcing data are used to feed AI algorithms for image recognition, while AI is used to automatically remove duplicated images or to detect false positives among images coming from previous disasters). The E2mC project has also made significant progress on the crowdsourcing component and is now actively managing a hybrid crowdsourcing community composed of heterogeneous groups, both general-purpose ones (e.g. BOINC, through CERN) and emergency-specific ones (e.g. HOT, SBTF). In particular, the E2mC project has started a process to establish links and federate with other relevant crowdsourcing initiatives active in the emergency response domain, to join forces and efforts in providing effective and timely answers to Copernicus EMS Rapid Mapping needs. This paper presents the technological achievements of the E2mC project as well as the results of testing and demonstrating the “Social&Crowd” platform both on past events (cold cases) and during real, time-critical Copernicus EMS Rapid Mapping activations. In particular, the results of the demonstrations have been used for a qualitative and quantitative assessment of the benefits and added value brought by the E2mC project to satellite-based mapping activities, alone or in combination with complementary data analysis techniques such as, for example, hydraulic modelling of large floods, where the data generated by the Social&Crowd platform are integrated as ground truth for model calibration in time-critical operational conditions.

Authors: Grandoni, Domenico (1); Corsi, Marco (1); Biscardi, Mariano Alfonso (1); Francalanci, Chiara (2); Pernici, Barbara (2); Scalia, Gabriele (2); Ravanelli, Paolo (2); Fernandez-Marquez, Jose Luis (4); Mondardini, Rosy (4); Allenbach, Bernard (3); Benatia, Fahd (3)
Organisations: 1: e-GEOS, Italy; 2: Politecnico di Milano, Italy; 3: University of Strasbourg, France; 4: University of Geneva, Switzerland
Rheticus® Aquaculture: Satellite Support For Smart Aquaculture (ID: 158)
Presenting: Drimaco, Daniela

In the last few years, the European Blue Growth strategy has recognised seas and oceans as drivers of the European economy, with great potential for innovation and growth. The European Commission has identified aquaculture as a key component of both the Common Fisheries Policy and the Blue Growth Agenda, with high potential for sustainable jobs and growth. The aquaculture sector in Europe is highly competitive, whilst its development is booming in many other parts of the world. Companies face many costs and challenges that represent a great effort for them, as the aquaculture sector is mainly dominated by SMEs with limited funding capabilities. In this regard, aquaculture activities need to be optimised in order to maximise profitability, fulfil constraints set by environmental legislation, and avoid situations that put production activities and the marine environment at risk. There is room to support the optimisation of farming activities (e.g. estimation of shellfish growing time in water, feeding rate based on sea surface temperature and chlorophyll, days to market size, etc.), crisis management (e.g. storm surge forecasts, harmful algal bloom alerts, etc.), compliance with regulatory duties, the careful planning of aquaculture development, and a posteriori environmental analysis/characterisation for licensing or certification. Moreover, aquaculture SMEs suffer from a huge lack of access to funding and investment. Optimised and well-supported finfish and shellfish farming can thus more easily attract investors and insurance companies, and gain licensing and certification. In this framework, Rheticus® Aquaculture (developed by Planetek Italia in partnership with BlueFarm) addresses the real needs of aquaculture professionals through the exploitation of EO data for the optimisation of their activities, the monitoring of production sites and the maximisation of profitability. EO data regularly provide synoptic and useful information on the environment thanks to the European Copernicus Programme. To boost their usefulness, however, Rheticus® combines EO data with simulation models of finfish and/or shellfish growth rates, together with information about market prices and profitability trends. In particular, Rheticus® Aquaculture, by ingesting CMEMS products (Chlorophyll-a Concentration, Sea Surface Temperature, Water Transparency, Turbidity, Dissolved Oxygen, Significant Wave Height, Salinity and Current Velocity), implements a new business model that provides accurate, updated, user-friendly and market-oriented information for: • identifying the best locations for new aquaculture farms; • monitoring and forecasting environmental conditions for operational aquaculture; • estimating product growth rates, days to market size, and product values in comparison with market prices and profitability trends; • aquaculture development planning and a posteriori environmental analysis and characterisation. Through the Rheticus® web interface, end-users are able to obtain daily key information on their production sites and 4-day forecasts of key parameters connected to good operational conditions. Moreover, they can access pre-set weekly reports with summary information on their farming sites and an outlook on their production, supporting site management and boosting profitability.

Authors: Ceriola, Giulio; Aiello, Antonello; Drimaco, Daniela
Organisations: Planetek Italia s.r.l., Italy
The Research and User Support (RUS) Service: an innovation catalyst platform for Sentinel Data Users (ID: 149)
Presenting: Mora, Brice

The RUS Service aims to promote the uptake of Copernicus data and supports the scaling up of R&D activities with Copernicus data. The RUS Service is configured in a scalable cloud environment that offers the possibility to remotely store and process EO data. It offers support from a helpdesk and a team of EO and IT experts who can address any request, from beginners to skilled practitioners. Cloud ICT resources are procured with Free and Open-Source Software and are tailored to user needs. The RUS Service also offers on-site training sessions, webinars and online materials. The RUS Service is offered at no cost and is available to a large community of users and types of institutions. The objective of this talk is to present the different aspects of the service and the latest evolutions of the offer. In particular, we plan to discuss the recent availability of advanced visualisation tools, the continuous update of the image processing tools and, more importantly, the new possibility to set up cloud computing platforms that can be shared across multiple users, in the context of the newly developed European DIAS. The RUS Service is funded by the EC, managed by ESA, and operated by Communications & Systèmes – Systèmes d’Informations (CS SI) and its partners: Serco SpA, Noveltis, Along-Track, and CS Romania.

Authors: Mora, Brice (1); Guzzonato, Eric (1); Bonneval, Béatrice (2); Palazzo, Francesco (2); Remondière, Sylvie (2)
Organisations: 1: Communications & Systèmes, France; 2: Serco
Broadview Radar Altimetry Toolbox (ID: 146)
Presenting: Ambrózio, Américo

The universal altimetry toolbox BRAT (Broadview Radar Altimetry Toolbox) is a collection of open-source tools and tutorial documents designed to facilitate the processing of radar altimetry data. It can read data from all previous and current altimetry missions. It now also incorporates the capability to read the upcoming Sentinel-3 L1 and L2 products. ESA endeavoured to develop and supply this new capability to support the users of the Sentinel-3 mission. The BRAT suite is mostly made up of command-line tools, of which BratGUI is the front-end. BRAT can be used in conjunction with MATLAB/IDL (via reading routines) or C/C++/Python/Fortran (via a programming API), allowing users to obtain the desired data while bypassing the data-formatting hassle. BRAT can also be used simply to visualise data quickly, or to translate the data into other formats such as NetCDF, ASCII text files, KML (Google Earth) and raster images (JPEG, PNG, etc.). Several kinds of computations can be done within BRAT, involving both user-defined combinations of data fields, which can be saved for later use, and BRAT’s predefined formulas from oceanographic altimetry. BRAT also includes the Radar Altimeter Tutorial, which contains an extensive introduction to altimetry, showing its applications in different fields. Use cases are also available, with step-by-step examples covering the toolbox usage in different thematic contexts. Both the toolbox and the tutorial can be accessed through http://earth.esa.int/brat or http://www.altimetry.info/.

Authors: Benveniste, Jérôme (1); Garcia-Mondejar, Albert (2); Escolà, Roger (2); Moyano, Gorka (2); Roca, Mònica (2); Terra-Homem, Miguel (3); Friaças, Ana (3); Martinho, Fernando (3); Schrama, Ernst (4); Naeije, Marc (4); Ambrózio, Américo (5); Restano, Marco (6); Ambrozio, Americo (7)
Organisations: 1: ESA-ESRIN, Largo Galileo Galilei 1, Frascati, Italy; 2: isardSAT Ltd., United Kingdom.; 3: DEIMOS Engenharia, Portugal; 4: TU Delft, Faculty of Aerospace Engineering, Holland.; 5: DEIMOS c/o ESA/ESRIN, Largo Galileo Galilei 1, Frascati, Italy; 6: SERCO c/o ESA/ESRIN, Largo Galileo Galilei 1, Frascati, Italy; 7: DEIMOS c/o European Space Agency
SAR Altimetry Processing On Demand Service For CryoSat-2 And Sentinel-3 At ESA G-POD (ID: 144)
Presenting: Ambrózio, Américo

The scope of this presentation is to feature to users the G-POD SARvatore service for the exploitation of CryoSat-2 and Sentinel-3 data, which was designed and developed by the Altimetry Team at ESA-ESRIN EOP-SER. The G-POD service, coined SARvatore (SAR Versatile Altimetric Toolkit for Ocean Research & Exploitation), is a web platform that allows any scientist to process CryoSat-2 SAR/SARin and Sentinel-3 SAR data on-line, on demand and with user-selectable configurations, from L1A (FBR) data products up to SAR/SARin Level-2 geophysical data products. The G-POD graphical interface allows users to select a geographical area of interest within the time frame of the CryoSat-2 SAR/SARin FBR and Sentinel-3 L1A data products available in the service catalogue. The processor prototype is versatile, allowing users to customise and adapt the processing to their specific requirements by setting a list of configurable options. Pre-defined processing configurations (Ocean, Inland Water, Ice and Sea-Ice) are available for the Sentinel-3 service. After task submission, users can follow the status of the processing in real time. The output data products are generated in standard NetCDF format (using the CF Convention) and are therefore compatible with the Multi-Mission Radar Altimetry Toolbox (BRAT, http://www.altimetry.info/toolbox/) and other typical tools. The following upgrades have recently been introduced: 1) inclusion of SAR echo and SAR RIP (Range Integrated Power) waveforms in the NetCDF files; 2) inclusion of stack data in the NetCDF files. Initially, the processing was designed and uniquely optimised for open ocean studies. It was based on the SAMOSA model developed for the Sentinel-3 Ground Segment using CryoSat data (Cotton et al., 2008; Ray et al., 2014). However, since June 2015, a new retracker (SAMOSA+) has been offered as a dedicated retracker for the coastal zone, inland water and sea-ice/ice-sheets. Following the launch of Sentinel-3, a new flavour of the service has been initiated, exclusively dedicated to the processing of Sentinel-3 mission data products. The scope of this new service is to maximise the exploitation of the Sentinel-3 Surface Topography Mission’s data over all surfaces, providing users with specific processing options not available in the default processing chain. The service is open and free of charge (supported by the ESA SEOM Programme Element) for worldwide scientific applications and is available at https://gpod.eo.esa.int/services/CRYOSAT_SAR/.
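Because the outputs are standard CF-convention NetCDF, they can be inspected with any generic tool; a minimal sketch with xarray follows (the file and variable names are hypothetical):

import xarray as xr

ds = xr.open_dataset('sarvatore_output_l2.nc')
print(ds.data_vars)     # e.g. retracked range, surface height, waveforms
ssh = ds['ssh']         # hypothetical variable name for surface height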

Authors: Benveniste, Jérôme (1); Dinardo, Salvatore (2); Sabatino, Giovanni (3); Restano, Marco (4); Ambrózio, Américo (5)
Organisations: 1: ESA-ESRIN, Largo Galileo Galilei 1, Frascati, Italy; 2: He Space/EUMETSAT, Eumetsat-Allee 1, 64295 Darmstadt; 3: Progressive Systems c/o ESA/ESRIN, Largo Galileo Galilei 1, Frascati, Italy; 4: SERCO c/o ESA/ESRIN, Largo Galileo Galilei 1, Frascati, Italy; 5: DEIMOS c/o ESA/ESRIN, Largo Galileo Galilei 1, Frascati, Italy
Lake Bracciano Water Level Variation from Sentinel-3 Measurements Processed at the GPOD SARvatore Service (ID: 143)
Presenting: Benveniste, Jérôme

Lake Bracciano is a lake of volcanic origin located in the Italian region of Lazio, 32 km northwest of Rome. It is one of the major lakes of Italy and has a circular perimeter of approximately 32 km. Its inflow comes from precipitation only, as there are no inflowing rivers. As the lake serves as a drinking water reservoir for the city of Rome, it has been under control since 1986 to avoid the pollution of its waters. For this reason, Bracciano is among the cleanest lakes in Italy. The absence of motorised navigation favours sailing, canoeing and swimming. The Sentinel-3A satellite, successfully launched on 16 February 2016, carries the SAR Altimeter (SRAL). SRAL is the main topographic instrument and is expected to provide accurate topography measurements over sea ice, ice sheets, rivers and lakes. It operates in dual-frequency mode (Ku and C bands) and is supported by a microwave radiometer for atmospheric correction and a DORIS receiver for orbit positioning. Sentinel-3A overflies the lake, collecting data every 27 days. According to in-situ measurements, the Lake Bracciano water level decreased significantly between March and December 2017. Therefore, considering the 27-day repeat period of Sentinel-3, water level variations detected by the SRAL altimeter can be regularly compared to in-situ measurements to infer the performance of the instrument over such a very small lake. In this study, Sentinel-3 products made available by the ESA GPOD SARvatore online and on-demand processing service for Sentinel-3 (https://gpod.eo.esa.int/services/SENTINEL3_SAR) have been generated and analysed. The SARvatore service exploits the computational power provided by the ESA Grid Processing on Demand (GPOD) system, a generic GRID-based operational computing environment where specific data-handling Earth Observation services can be seamlessly plugged in. One of the goals of GPOD is to provide users with a fast computing facility without the need to handle bulky data. SARvatore for S3 products have been processed in SAR mode at 20 Hz (330 m along-track resolution) and 80 Hz (83 m) and retracked with the advanced inland-water SAMOSA+ retracker. The results obtained have been compared to official Sentinel-3 inland water products, including water level estimates from physical and empirical retrackers. Considering the signal degradation recorded in L1b waveforms (multiple peaks), well-known criteria have been adopted to correctly filter the data, improving the quality of the estimates. Future work will aim at improving the SAR processing by properly selecting the waveforms composing the stack before multi-looking (e.g. as successfully done in the ESA-funded CRUCIAL project, research.ncl.ac.uk/crucial). Following the launch of the Sentinel-3B satellite in April 2018, the possibility of selecting additional water bodies at Sentinel-3 A/B crossovers will be investigated.

Authors: Restano, Marco (1); Dinardo, Salvatore (2); Benveniste, Jérôme (3); Ambrozio, Americo (4)
Organisations: 1: SERCO c/o ESA/ESRIN, Largo Galileo Galilei 1, Frascati, Italy; 2: He Space/EUMETSAT, Eumetsat-Allee 1, 64295 Darmstadt; 3: ESA-ESRIN, Largo Galileo Galilei 1, Frascati, Italy; 4: DEIMOS c/o European Space Agency
GEOSS Preparing For A New Era Of Big (EO) Data Management (ID: 141)
Presenting: van Bemmelen, Joost

The Group on Earth Observations (GEO) is an intergovernmental organization working to improve the availability, access and use of Earth observations for the benefit of society. GEO works to actively improve and coordinate global EO systems and to promote broad, open data sharing. A central part of GEO’s mission is to build the Global Earth Observation System of Systems (GEOSS), a set of coordinated, independent Earth observation, information and processing systems that interact and provide access to diverse information for a broad range of users in both the public and private sectors. GEOSS links these systems through its GEOSS Platform, which facilitates the sharing of environmental data and information collected from the large array of observing systems contributed by countries and organizations within GEO. Via the GEOSS Platform, GEO ensures that these data are accessible, of identified quality and provenance, and interoperable, to support the development of tools and the delivery of information services. Furthermore, the GEOSS Platform provides a set of instruments and tools that can be customized and used centrally, as well as from community-owned systems, according to specific user needs related to the discovery, visual inspection, access and use of Earth observations. Several examples have already been implemented and are being used successfully by various communities. The paper will elaborate on these examples and provide an outlook on future enhancements that will further improve the use of the more than 400 million EO data records already accessible via GEOSS.

Authors: van Bemmelen, Joost (1); Nativi, Stefano (2); De Salvo, Paola (3); Colangeli, Guido (4); Santoro, Mattia (5)
Organisations: 1: ESA-ESRIN, Italy; 2: CNR-JRC; 3: GEO Secretariat; 4: RHEA; 5: CNR-IIA
EO Data for End Users: Creating Meaningful User Experiences (UX) to Deliver More Value to Professional Researchers and General Public (ID: 134)
Presenting: Surodina, Svitlana

Breakthroughs in data technologies and the availability of computational power mean that wider, often non-specialist, audiences can benefit from access to Earth observation data and products based on them. How can we create platforms that enable better insights and more efficient use of these data resources? Our research suggests that understanding user profiles and their behavioural patterns, in order to build a relevant UX for each type of user, can dramatically improve end results, increase precision and quality, and reduce the time required to accomplish tasks. Analytic techniques for user-type clusterisation based on unsupervised machine learning and K-means can enable intelligent automation, personalisation and mass-customisation of interfaces, visualisations and data representations.
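A minimal sketch of the clusterisation step, with hypothetical behavioural features, might look as follows:

import numpy as np
from sklearn.cluster import KMeans

# Columns (hypothetical): session length, queries per session,
# downloads per month, share of API vs. GUI usage
behaviour = np.random.rand(2000, 4)

user_types = KMeans(n_clusters=4, random_state=0).fit_predict(behaviour)
# Each cluster id can then drive a tailored interface and visualisation set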

Authors: Surodina, Svitlana; Nimets, Anastasiia
Organisations: Skein, United Kingdom
Close Range Hyperspectral Remote Sensing Image Processing for Banana Disease Detection (ID: 101)
Presenting: Liao, Wenzhi

Bananas are one of the most appealing fruits in the world; according to the United Nations, global banana exports reached about 18 million tons in 2015. Black Sigatoka (BS) is a constant threat to banana farmers worldwide. It can cause yield losses of more than 30%, particularly on small farms. Detection of BS is difficult: once visible symptoms appear in the leaves, the whole crop may already be compromised. Therefore, early detection of BS is very important to prevent the disease from spreading and to reduce damage to crop production. Hyperspectral (HS) imagery is relatively recent and has not been widely applied in plant pathology. Most work has focused on measuring crop damage using satellite images and, more recently, using cameras mounted on unmanned aerial vehicles (UAVs). Typically, the aim of HS imaging in remote sensing is the discrimination of healthy and unhealthy plants. In particular, close-range hyperspectral remote sensing imaging extends the radius of operation and scale, and enhances traditional measurement techniques in terms of radiometric, spectral, spatial and temporal resolution. However, challenges remain in processing and interpreting these data. One challenge is preprocessing (e.g. denoising, deblurring, etc.), especially for real-time observations. Spatial distortion due to the movements of observed objects hampers the interpretation of the data. Last but not least, labelling ground-truth data is very difficult, especially at the early stage, when symptoms of plant diseases are not obvious. In this paper, we present our current work on close-range hyperspectral image processing (covering the visible and near-infrared spectrum) and its application to detecting pre-symptomatic BS responses in banana leaves. We will also demonstrate that machine learning on time-series HS image analysis can enable earlier detection of potential banana diseases. Both the method’s details and the results of a comprehensive test will be presented at the ESA Φ-week event.
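As a hedged sketch of the classification task (not the authors' method, which is presented at the event), per-pixel spectra can be fed to a standard classifier:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One 200-band spectrum per leaf pixel; labels 0 = healthy, 1 = pre-symptomatic
spectra = np.random.rand(3000, 200)
labels = np.random.randint(0, 2, 3000)

clf = RandomForestClassifier(n_estimators=200).fit(spectra, labels)
pred = clf.predict(spectra)  # in practice, evaluated on held-out leaves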

Authors: Liao, Wenzhi
Organisations: Ghent University, Belgium
EO4wildlife A Cloud Platform to Exploit Satellite Data for Animal Protection (ID: 133)
Presenting: Castel, Fabien

All the new datasets provided by the Copernicus satellites open the way for innovative scenarios. In the domain of wildlife protection, combining animal tracking data with remotely sensed Earth observation data is appealing. To reach such a capability, EO4wildlife proposes an open cloud platform with a toolbox of interoperable data processing services and features to connect to animal tracking databases, access large data collections from the Copernicus Marine core service, sample relevant environmental indicators, and finally run environmental models in a scalable processing environment. Exploiting the rich Copernicus datasets is a challenge for scientists. A wide diversity of products is available through different interfaces, but this profusion of options can be overwhelming for scientists who do not have the technical capabilities to access and process the data. EO4wildlife aims to provide easy access to a comprehensive set of EO datasets. EO4wildlife is a cloud application, deployed over the Internet with a flexible infrastructure, "big data" storage and scalable web services. It is structured in distinct layers. An infrastructure layer provides flexibility and efficiency, making it easy to scale out processing and storage capabilities. A platform layer deals with applicative component deployment, resource management and security issues. A software layer allows users to access the required data and run the services available on the platform. There are many benefits to such an approach. Besides the economic and practical aspects, there is a strong incentive for sharing: on the platform, everyone can be both a producer and a consumer, and can discover new opportunities in the catalogue of resources and added-value services, which is continuously enriched by new members. The platform hosts a series of data analytics services, divided into three main categories. The first is data pre-processing, cleaning and aggregation, an important step when dealing with potentially imprecise and noisy information such as animal positions. The second category is data mining and contains services processing animal tracks and satellite marine observations in order to model animals’ use of space and correlate this information with available environmental observations. The last category contains fusion services. These services make use of multiple data sources to better estimate animals’ positions and behaviour and to model animals’ habitats. The platform is composed of several functional components. An internal data catalogue aggregates georeferenced product metadata from various external sources. An ingestion component retrieves these data on demand for exploitation by the platform services. The service manager component allows developers to manage the lifecycle and execution of their services. At the end of the chain, EO4wildlife provides built-in visualisation features for standard geographic data (OGC WMS/WFS standards). The service management mechanism is built on the containerisation concept (Docker). By encapsulating each service in an independent and self-sufficient container, the platform ensures total freedom for service developers (avoiding language, framework or library constraints) and portability on the cloud. Kubernetes, an orchestration technology, is used to manage the container life cycle so that the underlying infrastructure becomes totally transparent.
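The environmental sampling step (co-locating a Copernicus Marine variable with track positions) can be sketched as an interpolation along the track; all file, variable and coordinate names below are assumptions:

import numpy as np
import xarray as xr

sst = xr.open_dataset('cmems_sst.nc')['analysed_sst']  # dims assumed: time, lat, lon

# A short animal track: position and timestamp per observation
lon = xr.DataArray([-17.2, -16.8, -16.1], dims='obs')
lat = xr.DataArray([14.5, 14.9, 15.3], dims='obs')
time = xr.DataArray(np.array(['2018-05-01', '2018-05-02', '2018-05-03'],
                             dtype='datetime64[ns]'), dims='obs')

# One SST value per track point, interpolated in space and time
samples = sst.interp(lon=lon, lat=lat, time=time)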

Authors: Castel, Fabien (1); Rodera, Daniel (2); Correndo, Gianluca (3)
Organisations: 1: Atos Integration, Toulouse, France; 2: Atos Research & Innovation, Madrid, Spain; 3: University of Southampton, IT Innovation Centre, Southampton, United Kingdom
openEO: An Open Interface To Allow Standardized Communication With EO Service Providers (ID: 132)
Presenting: Schramm, Matthias

Recent developments in the Earth Observation (EO) sector have led to the parallel rise of several EO cloud service providers and programmes. The resulting variety of customized solutions from back-end providers forces users to choose between very different data platforms and interfaces. This prevents users from interacting with different back-ends using the same code, and also hinders comparison between offerings in terms of the available (EO) datasets, processes and their respective costs. The H2020-funded project openEO aims to connect users and EO service providers with an interface that allows standardised communication between them. On the user’s front-end side, software libraries are being created for the most widely used data science languages: Python, R and JavaScript. Instances of the openEO interface are mounted at distinct back-end service providers to enable direct communication with the user communities. In this way, openEO brings together federated user communities and EO service providers. We will show the current state of the ongoing project and discuss next steps. The openEO interface shall offer an additional way for EO service providers to connect to their customers. At the same time, users may contact various service providers in a standardised manner and thus create broader communities. Four European SMEs and two private research centres are taking part in the development of the communication interface, representing a cross-section of the EO data infrastructures and standards that can be connected to the interface. EO cloud services with file-based metadata infrastructures are compatible with the openEO API, as are data management approaches and interfaces such as GRASS GIS, GeoTrellis, Rasdaman, or Sentinel Hub. Independently of the underlying organisation of the EO data at the back-end providers (e.g. by granule, as collections of GRIB or NetCDF files, as one or several arrays in an array database, etc.), openEO will allow users to work on a ‘data cube view’ of the EO imagery and directly filter, aggregate or map functions over all data cube dimensions. While the final communication interface shall cover processes of the whole EO data life cycle, the main focus is currently on EO data discovery, processing of image collections, and interacting with and downloading results. The performance of a given operation will depend on its type and on the underlying data organisation. openEO is an open interface and shall be adapted to user needs. We try to widen the already existing user community by responding to user needs, gathered from direct discussions and questionnaires; this helps identify additional processes that should be implemented for the openEO interface with priority. To guarantee its usability, four use cases have been designed for five pilot users from the Austrian Federal Ministry of Sustainability and Tourism, Action Against Hunger, the International Centre for Integrated Mountain Development, the Food and Agriculture Organization of the United Nations, and the Autonomous Province of Bolzano.
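For a flavour of the standardised user side, here is a minimal sketch using the openEO Python client; the back-end URL and collection id are placeholders, and the exact process vocabulary depends on what the chosen back-end exposes:

    import openeo

    # Connect to a (hypothetical) openEO back-end; the same code is meant to
    # work against any provider implementing the openEO API
    connection = openeo.connect("https://openeo.example.org").authenticate_oidc()

    # Build a data cube view: filter by collection, space, time and bands
    cube = connection.load_collection(
        "SENTINEL2_L2A",  # placeholder collection id
        spatial_extent={"west": 16.1, "south": 48.1, "east": 16.6, "north": 48.4},
        temporal_extent=["2018-05-01", "2018-09-01"],
        bands=["B04", "B08"],
    )

    # Aggregate over the temporal dimension and download the result; the
    # processing itself runs at the back-end, not on the user's machine
    composite = cube.reduce_dimension(dimension="t", reducer="mean")
    composite.download("mean_composite.tiff")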

Authors: Schramm, Matthias (1); Pebesma, Edzer (2); Wagner, Wolfgang (1); Soille, Pierre (3); Kadunc, Miha (4); Gorelick, Noel (5); Verbesselt, Jan (6); Reiche, Johannes (6); Appel, Marius (2); Dries, Jeroen (7); Jacob, Alexander (8); Gößwein, Bernhard (1); Neteler, Markus (9); Gebbert, Soeren (2); Briese, Christian (10); Kempeneers, Pieter (3)
Organisations: 1: Vienna University of Technology, Department of Geodesy and Geoinformation, Austria; 2: University of Münster, Institute for Geoinformatics; 3: European Commission DG Joint Research Centre; 4: Sinergise Laboratorij Za Geografske Informacijske Sisteme Doo; 5: Google Switzerland; 6: Wageningen University and Research, Laboratory of Geo-information Science and Remote Sensing; 7: VITO; 8: Eurac Research, Institute for Earth Observation; 9: mundialis GmbH & Co. KG; 10: EODC Earth Observation Data Centre for Water Resources Monitoring GmbH
RSS: Tailored Cutting-edge Solutions For EO Open Science And FutureEO (ID: 130)
Presenting: Rivolta, Giancarlo

The ESA Research and Service Support (RSS) service makes available various solutions to facilitate EO Open Science and innovation. Such solutions, initially designed to support the EO research process in all its phases, have since been extended and evolved in order to satisfy, comprehensively and in a timely manner, the emerging needs of an ever-growing community of EO data users. Compared to ten years ago, today the RSS user community includes new types of data users such as start-ups, educators, students and data scientists. To respond to the requirements defined by these new users, alongside the existing ones, RSS has developed new ad hoc solutions. Examples of the available categories of solutions include the RSS CloudToolbox, designed for autonomous users and equipped with customisable resources and software; the RSS algorithm development environment, designed for users needing support during the development phase; and the RSS scalable processing environment, supporting parallel computation and capable of dynamically allocating cloud resources as needed for research projects with challenging timelines. All these types of solutions can be tailored to the requirements provided by the users, and are applicable to any kind of EO data, including ESA Heritage missions, Earth Explorers, Sentinels, and Third Party Missions. Besides algorithm development and data processing support, it is worth mentioning the RSS e-collaboration environment, providing thematic Wikis and Forums to EO data user communities (e.g. CryoSat, Biomass, Data Science students, etc.), as well as the RSS OGC services supporting fast and efficient EO data visualisation. The RSS service model has been conceived to support EO data exploitation during all phases of the innovation process, covering feasibility assessment, research, development, prototyping, demonstration and validation. Upstream of the innovation process, RSS provides tools and training to support EO education, both for universities and industry, thus contributing to the professional growth of the new generation of innovators and data scientists. Downstream of the innovation process, RSS offers interested scientists and developers the possibility to share their EO applications, based on their own algorithms, as Web services within selected user communities (e.g. for beta testing) and/or to open such applications to the wider community once fully validated. In this paper we introduce the operational RSS solutions that are currently available for EO Open Science, as well as the new solutions that are envisaged for future EO data users.

Authors: Rivolta, Giancarlo (1,2); Cuccu, Roberto (1,2); Sabatino, Giovanni (1,2); Delgado, José Manuel (1,2); Van Bemmelen, Joost (3)
Organisations: 1: Progressive Systems, Italy; 2: ESA Research and Service Support; 3: ESA/ESRIN
Tests of 3D Mapping Technologies Applied To The ESA PANGAEA Program (ID: 127)
Presenting: Santagata, Tommaso

The PANGAEA (Planetary Analogue Geological and Astrobiological Exercise for Astronauts) ESA training course is designed to prepare European astronauts to become effective partners of planetary scientists and engineers in designing the next exploration missions, and to give them a solid knowledge of the geology of the solar system by studying several caves, especially lava tubes, through geological field training courses and tests of new technologies. In November 2017, the PANGAEA-X expedition ventured into the “Cueva de los Verdes” lava tube in Lanzarote, one of the world’s largest volcanic cave complexes with a total length of about 8 km, with the aim of testing some of the most innovative technologies for 3D mapping. Precisely measuring the geometry of lava caves will allow scientists to improve their models and better understand their evolution on other celestial bodies. During the five days of tests, the Leica Pegasus Backpack and the new BLK360 imaging laser scanner were used to measure about one kilometre of this lava tube. The Pegasus Backpack is a wearable 3D mobile mapping solution that collects geometric data even without a satellite signal. This instrument synchronises images collected by five cameras and two 3D imaging LIDAR profilers, the laser equivalent of radar. It enables accurate mapping where satellite navigation is unavailable, such as in caves. ESA astronaut Matthias Maurer learned how to operate the equipment in just 20 minutes. The upper levels of the Cueva de los Verdes were mapped using the BLK360 scanner, the smallest and lightest imaging scanner on the market at only 1 kg, which made it possible to obtain 360° images of the environment in just three minutes at the press of a button, with the scans aligned directly through a tablet app. In less than three hours, the data from both instruments yielded a complete 3D model of a 1.3 km section of the lava tube. The test results showed the advantages of integrating data from different instruments to obtain better results and to 3D-map different environments.
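Merging the two instruments' scans amounts to registering their point clouds in a common frame. A minimal sketch of such a registration step using the Open3D library and point-to-point ICP (file names and the coarse initial alignment are placeholders; the expedition's actual processing chain is not described here):

    import numpy as np
    import open3d as o3d

    # Placeholder file names, for illustration only
    backpack = o3d.io.read_point_cloud("pegasus_backpack_section.ply")
    blk360 = o3d.io.read_point_cloud("blk360_upper_level.ply")

    # Coarse initial alignment (identity here; in practice derived from
    # common targets or a manual pre-alignment)
    init = np.eye(4)

    # Refine with point-to-point ICP, matching points within 0.5 m
    result = o3d.pipelines.registration.registration_icp(
        blk360, backpack, 0.5, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )

    # Apply the estimated rigid transform and merge the two scans
    blk360.transform(result.transformation)
    o3d.io.write_point_cloud("merged_lava_tube.ply", backpack + blk360)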

Authors: Santagata, Tommaso (1); Del Vecchio, Umberto (1); Sauro, Francesco (2); Bessone, Loredana (3); Kadded, Farouk (4); Goudard, Raphael (4); Madero, Elena Mateo (5)
Organisations: 1: VIGEA - Virtual Geographic Agency, Reggio Emilia, Italy; 2: Department of Biological, Geological and Environmental Sciences, Italian Institute of Speleology, Bologna University; 3: Directorate of Human Space Flight and Operations, European Space Agency, Linder Höhe, 51147 Köln, Germany; 4: Leica Geosystems France; 5: Geopark of Lanzarote, Cabildo Insular, Lanzarote, Spain
In Situ Data Collection and Land Cover Validation with LACO-Wiki Mobile (ID: 124)
Presenting: See, Linda

LACO-Wiki is an openly available online land cover validation tool (https://www.laco-wiki.net) that allows users to upload a land cover map, create a sample, interpret the sample using very high resolution satellite imagery (or imagery provided by the user) and calculate an accuracy report. In the ESA-funded CrowdVal project, we have developed a mobile version of LACO-Wiki that allows the sample created in LACO-Wiki to be viewed on a mobile phone and validated in situ, with users directed to the sample locations. The land cover classes are displayed, and users can then verify or correct the land cover class. Algorithms have been added that optimize the creation of the sample, taking into account distance from roads and other constraints such as topography and rivers. Another feature is opportunistic data collection, i.e. the ability to gather land cover data at any location or while driving along a road. Such data collection can be useful for verifying visually interpreted samples or for complementing training data for the development of land cover maps. LACO-Wiki Mobile will be released as an open source project to encourage further use and development by anyone interested in in situ data collection of land cover and land cover validation.
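The accuracy report at the end of such a workflow is essentially a confusion matrix between the mapped class and the class verified in situ. A minimal sketch of the standard computation (class names and sample data are invented for illustration):

    import numpy as np

    # Invented example: mapped class vs. class verified in situ, per sample
    classes = ["cropland", "forest", "urban"]
    mapped   = ["cropland", "cropland", "forest", "urban", "forest", "cropland"]
    verified = ["cropland", "forest",   "forest", "urban", "forest", "cropland"]

    idx = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((len(classes), len(classes)), dtype=int)
    for m, v in zip(mapped, verified):
        cm[idx[v], idx[m]] += 1  # rows: reference (in situ), columns: map

    overall = np.trace(cm) / cm.sum()
    producers = np.diag(cm) / cm.sum(axis=1)  # 1 - omission error, per class
    users = np.diag(cm) / cm.sum(axis=0)      # 1 - commission error, per class

    print(f"Overall accuracy: {overall:.2%}")
    for c in classes:
        print(f"{c}: producer's {producers[idx[c]]:.2%}, user's {users[idx[c]]:.2%}")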

Authors: Fritz, Steffen (1); See, Linda (1); Perger, Christoph (1); Dresel, Christopher (1); Mora, Brice (2); Pascaud, Mathieu (2); Ligeard, Frédéric (2); Joshi, Neha (3)
Organisations: 1: International Institute for Applied Systems Analysis (IIASA), Austria; 2: CS, France; 3: GISAT, Czech Republic
Artificial Intelligence for Earth Observation, AI4EO: Beyond End-To-End Inductive Deep Learning-From-Data to Accomplish An Artificial Mind In Electronic Brain Paradigm Capable of Hybrid Inference (ID: 121)
Presenting: Baraldi, Andrea

“Artificial intelligence (AI) was founded as an academic discipline at a conference on the campus of Dartmouth College in Hanover, New Hampshire, United States, in the summer of 1956, and in the years since has experienced several waves of optimism, followed by disappointment and the loss of funding, followed by new approaches, success and renewed funding. For most of its history, AI research has been divided into subfields that often fail to communicate with each other” [1]. According to the portion of AI traditionally known as cybernetics, it is neither convenient nor possible to mimic biological mental functions, e.g., human reasoning, by an artificial mind whose physical support is not an electronic brain implemented as an artificial distributed processing system, known as a complex system or artificial neural network (ANN). Hence, the “connectionist approach” to AI adopted by traditional cybernetics postulates an “artificial mind in electronic brain” paradigm, alternative to the symbolic approach of “traditional” AI, which investigates an artificial mind independently of its physical support, such as in a Von Neumann computational architecture [2]. Although inductive learning from data in deep ANNs, known as deep learning-from-data (DL), has historical roots going back decades [3], nobody was able to figure out how to train ANNs from supervised (labeled) data samples until 1986, when Prof. G. Hinton (Univ. of Toronto and by now also a Google researcher) showed that the idea of backpropagation could train ANNs, including DL nets. Because of the lack of computational power at the time, it was not until 2012 that Hinton, known as the father of DL, was able to demonstrate his breakthrough in image recognition (synonym of pattern matching in imagery) by deep convolutional neural networks (DCNNs) [4]. This DCNN breakthrough set the stage for the recent progress in “connectionist” AI known as DL. Since 2012, due to considerable progress in areas such as game playing and pattern matching, encompassing both speech recognition and image recognition, considerable enthusiasm about DL has been spreading across the popular press, short-term-thinking governments and companies, which provide the funding for research, and relevant portions of the scientific community focused on publish-or-perish and short-term incremental research to improve performance on benchmark data sets. As a consequence, in the mainstream computer vision (CV) and remote sensing (RS) literature, breaking points and failure modes of DL have been largely overlooked [5].

In spite of the current hype about DL, an increasing number of long-term AI experts is suggesting that DL is not enough and must be supplemented by other techniques to achieve artificial general intelligence (AGI), synonym of hybrid inference, combining deductive/top-down/learning-by-rule and inductive/bottom-up/learning-from-examples inference. Among this critical community of AI experts, in 2013 Hinton claimed that “DCNNs are doomed” [6]. In Sept. 2017, Hinton stated that “science progresses one funeral at a time… The future depends on some graduate student who is deeply suspicious of everything I have said, including backpropagation… I don't think it is how the brain works. We clearly don't need all the labeled data… My view is throw it all away and start again" [7].

These statements should not sound surprising if we consider that, since 2013, it has been well known that DCNNs are easily fooled in classification tasks by slight changes in images indistinguishable to human visual perception [8], [9]. Although the linchpin of success of any information processing system is known to be localized at the levels of understanding of system design and knowledge/information representation, rather than algorithm and implementation [10], the lack of robustness of DCNNs to changes in input data has led the mainstream CV community to propose no change in ANNs at the level of abstraction of system design and visual knowledge/information representation; increasingly complex DCNN algorithms and implementations have been proposed instead [11].
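The fooling phenomenon of [8], [9] is straightforward to reproduce. Below is a minimal sketch of the fast gradient sign method of Goodfellow et al., a standard adversarial-example technique from that literature (not a method proposed in this abstract); the untrained toy network, label and epsilon are placeholders:

    import torch
    import torch.nn as nn

    # Toy classifier for illustration; in [8], [9] the victim is a trained DCNN
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    loss_fn = nn.CrossEntropyLoss()

    x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "image"
    y = torch.tensor([3])                             # its (assumed) true label

    # Gradient of the loss with respect to the input image
    loss = loss_fn(model(x), y)
    loss.backward()

    # FGSM: a small perturbation, imperceptible to human vision for small
    # epsilon, that can nonetheless flip the predicted class
    epsilon = 0.05
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

    print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))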
In the meanwhile, a minority of AI experts has kept warning the scientific community and the general public involved in the current hype about DL. In [12], the quote is that, in a fundamental sense, DL and deep reinforcement learning (RL) are “no more advanced today in 2017 than back in the 1990s… The reasons have much to do with the emphasis being on “performance” on a single task, at whatever the training cost. Most DL systems still take millions of simulated steps, because they all start with tabula rasa - a blank slate. Humans never begin any task with a blank slate. So, I would hope that DL researchers today would give up the futile question of doing DL tabula rasa, and return to find efficient ways of teaching DL agents new tasks, after they have suitably pre-programmed the agent with the elements of the task (which could come from previous training). Till this problem is seriously addressed, progress will remain slow.” In [13], the quote is: “DL is actually (function) regression (and classification) analysis on steroids. Fundamental scientists, however, continue pushing from the opposite direction towards (physical model-based cause-effect) modelling and understanding rather than (statistical model-based data) crunching… Current approaches to AI and machine learning (ML) are statistical in nature, they perform well to describe data but provide little to no understanding of causal mechanisms. As a consequence, they also fail to be scalable to domains for even slightly different data… AI and ML, hopefully, will get more into model-driven approaches leaving traditional statistics behind and incorporating algorithmic universal first principles. This means pushing fundamental science rather than simply throwing more computational resources to solve (every function regression problem) as current AI and ML do.” In [14], the quote is: “DL has been incredibly successful in recent years, but it is still merely a tool for classifying items into categories or for nonlinear regression. AI needs to go way beyond classification and regression because plentiful labeled data is difficult to obtain and we do not know how to build sophisticated background knowledge or reasoning capabilities into DL systems… Thought is not a vector, and AI is not a problem in statistics.” In [15], the quote is: “As is very often acknowledged in DL papers, without massive machine parallelism (MP), DL methods, based on gradient descent, are intractably slow. Yet, the forms of MP used in DL are patently biologically implausible. The brain uses 20 watts and is necessarily leveraging different forms of MP. There is something fundamentally different about the biological algorithm of intelligence… It would probably serve the DL community well to take a fresh and deep look at the paradigm-shifting power of sparse distributed representation, a.k.a. cell assemblies, enabling learning that requires vastly less data and is algorithmically likely at least several orders of magnitude more efficient than current DL learning methods on a combined time/power metric.”

In [16], [17], the quote is: “DNNs are the most overrated ML models in the history of AI… I do believe DL is approaching a wall, but it is not too late to make a sharp turn. Essentially, DL is being driven by three forces: (a) The actually tangible exceptional performance over traditional methods. (b) The hype from media and blogs claiming that DL is at par with human-level performance. Some research papers even make it worse by having titles like “DL is approaching/surpassing human-level performance on …”. That just further adds fuel to the already overhyped capabilities of recent advances in AI… In reality, DL models are struggling to even do what a tiny fruit fly insect’s brain is capable of doing. (c) The money that is being heavily invested in AI research is huge. The first and last points are genuine, but the hype is actually bad and some of that hype is also affecting what researchers think is possible and not possible with DL. Other AI researchers like Geoffrey Hinton think that we need to actually start all over. The fact is that DL alone is not enough to achieve AGI. We even have the no free lunch theorem, which states that any algorithm that performs well on a particular set of tasks will pay for that by performing poorly on the other remaining sets of tasks. A few points I agree with to put across my arguments are the following. (i) DL is data hungry. Unlike humans, animals or insects, DL requires millions of training iterations or examples in order to learn useful mapping functions. Unlike DL solutions, an AGI is supposed to be efficient when it comes to learning quickly from as little data as possible. (ii) DL models actually lack strong transfer learning capabilities such that representations are transferred from one task to another in order to make efficient use of available training data. (iii) Injecting priors into DNN models is extremely hard (actually, a priori knowledge is encoded into DCNNs by design, non-adaptive to data, in terms of number of processing units, number of layers, receptive field size, inter-filter spatial stride, layer functionality, either convolutional or subsampling, etc.). We know that animal/human brains have instincts that help them learn quickly; not everything can be learnt from data, yet most researchers are obsessed with letting DL models learn directly from data (end-to-end) with little to no priors. There is no need to learn everything from scratch if some (priors available in addition to data) are constant across the world. (iv) They are mostly supervised models, but a lot of AI is unsupervised. (v) Approaches like deep reinforcement learning (RL) are actually notoriously hard to train in reality. Deep RL may work for nice and clean virtual worlds like games but the real world is too harsh for RL to work well. (vi) DL models are mainly differentiable and hence this limits them, because not every AI problem can be reduced to a differentiable form. (vii) DL models do not model data the way we would like them to. For example, humans learn concepts in vision or language, concepts like a dog has four legs while a chicken has two legs and two wings. DCNNs, when trained on dog or chicken images, will not form high-level concepts (such as the relationships part-of and subset-of) about dogs and chickens. I think Gary Marcus was actually even lenient (in his paper [18]), because others like Geoffrey Hinton think that we need to actually start all over.”

Last but not least, in [18] the quote is: “I present ten concerns for DL, and suggest that DL must be supplemented by other techniques if we are to reach AGI. Comment 1. Is DL approaching a wall, much as I anticipated at the beginning of the resurgence (2012), and as AI leading figures like Hinton have begun to imply in recent months? DL is not likely to disappear, nor should it. But five years into the field’s resurgence seems like a good moment for critical reflection on what DL has and has not been able to achieve. Comment 2. What DL is, and what it does well. Deep learning, as it is primarily used, is essentially a statistical technique for classifying patterns (numeric pattern matching), i.e., for deciding which of a set of categories a given input data vector belongs to, based on sample labeled data, using deep ANNs. With enough imagination, the power of classification is immense; outputs can represent words, places on a Go board, or virtually anything else. In a world with infinite data and infinite computational resources, there might be little need for any other technique. Comment 3. Limits on the scope of DL. We live in a world in which data are never infinite (actually, we live in a world affected by the data-rich information-poor syndrome, DIPS [20]), systems frequently have to generalize beyond the specific data that they have seen, and the ability of formal proofs to guarantee high-quality performance is more limited. Here are ten challenges faced by current DL systems. Challenge 3.1. Deep learning thus far is data hungry, while human beings can learn abstract relationships and language-like rules in a few trials and from a small number of unlabeled samples. Challenge 3.2. DL thus far is shallow and has limited capacity for (knowledge) transfer. Even down-to-earth concepts like “ball” or “opponent” (or part-of or subset-of) can lie out of reach. DL doesn’t really understand what a tunnel, or what a wall is; it has just learned specific contingencies for particular scenarios. Transfer tests, in which the deep RL system is confronted with scenarios that differ in minor ways from the ones on which the system was trained, show that deep RL’s solutions (in pattern extraction from data) are often extremely superficial. Challenge 3.3. DL thus far has no natural way to deal with hierarchical structure (such as a network of networks). Chomsky has long argued that language has a hierarchical structure, in which larger structures are recursively constructed out of smaller components. One can see indirect evidence for this in the struggles with transfer in the field of robotics, in which systems generally fail to generalize abstract plans well in novel environments. The core problem, at least at present, is that DL learns correlations between sets of features that are themselves “flat” or nonhierarchical, as if in a simple, unstructured list, with every feature on equal footing. Hierarchical structure (e.g., syntactic trees that distinguish between main clauses and embedded clauses in a sentence) is not inherently or directly represented in such systems, and as a result DL systems are forced to use a variety of proxies that are ultimately inadequate, such as the sequential position of a word presented in a sequence (where syntax and semantics are completely lost). Challenge 3.4. DL thus far has struggled with open-ended inference. If you can’t represent nuance like the difference between “John promised Mary to leave” and “John promised to leave Mary”, you can’t draw inferences about who is leaving whom, or what is likely to happen next. Humans, as they read texts, frequently derive wide-ranging inferences that are both novel and only implicitly licensed, as when they, for example, infer the intentions of a character based only on indirect dialog. Challenge 3.5. DL thus far is not sufficiently transparent. The relative opacity of “black box” ANNs has been a major focus of discussion in the last few years. How much that matters in the long run remains unclear. If systems are robust and self-contained enough, it might not matter; if it is important to use them in the context of larger systems, it could be crucial for debuggability (according to the well-known engineering principles of modularity, regularity and hierarchy typical of scalable systems [19]). Challenge 3.6. DL thus far has not been well integrated with prior knowledge. The dominant approach in DL is hermeneutic, in the sense of being self-contained and isolated from other, potentially useful knowledge. (Physical models and universal first principles, such as) Newton’s laws, for example, are not explicitly encoded; the system instead (to some limited degree) approximates them by learning contingencies from raw, pixel-level data. Typical researchers in DL appear to have a very strong bias against including prior knowledge even when (as in the case of physics) that prior knowledge is well known. It is also not straightforward in general how to integrate prior knowledge into a DL system, in part because the knowledge represented in DL systems pertains mainly to (largely opaque) correlations between features, rather than to abstractions like quantified statements (e.g. all men are mortal), or generic (violable) statements like dogs have four legs or mosquitos carry West Nile virus. A related problem stems from a culture in ML that emphasizes competition on problems that are inherently self-contained, with little need for broad general knowledge. This tendency is well exemplified by the ML contest platform known as Kaggle, in which contestants vie for the best results on a given data set. Everything they need for a given problem is neatly packaged, with all the relevant input and output files. The trouble, however, is that life is not a Kaggle competition. Real-world learning offers data much more sporadically, and so-called open-ended problems aren’t so neatly encapsulated. Should I major in math or neuroscience? No training set will tell us that. Problems that have less to do with categorization and more to do with commonsense reasoning essentially lie outside the scope of what DL is appropriate for. Such apparently simple problems require humans to integrate knowledge across vastly disparate sources, and as such are a long way from the sweet spot of DL-style perceptual classification. Instead, they are perhaps best thought of as a sign that entirely different sorts of tools are needed, along with DL, if we are to reach human-level cognitive flexibility, identified as AGI. Challenge 3.7. DL thus far cannot inherently distinguish causation from (statistical) correlation. If it is a truism that causation does not equal correlation, the distinction between the two is also a serious concern for DL. Roughly speaking, DL learns complex correlations between input and output features, but with no inherent representation of causality. A DL system can easily learn that height and vocabulary are, across the population as a whole, correlated, but less easily represent the way in which that correlation derives from growth and development. Challenge 3.8. Deep learning presumes a largely stable world, in ways that may be problematic. The logic of DL is such that it is likely to work best in highly stable worlds, like the board game Go, which has unvarying rules, and less well in systems such as politics and economics that are constantly changing. To the extent that DL is applied in tasks such as stock prediction, there is a good chance that it will eventually face the fate of Google Flu Trends, which initially did a great job of predicting epidemiological data from search trends, only to completely miss things like the peak of the 2013 flu season. Challenge 3.9. DL thus far works well as an approximation, but its answers often cannot be fully trusted. DL systems are quite good at some large fraction of a given domain, yet easily fooled. An ever-growing array of papers has shown this vulnerability, and no robust solution has been found yet. Challenge 3.10. DL thus far is difficult to engineer with. While airplane design relies on building complex systems out of simpler systems for which it was possible to create sound guarantees about performance, machine learning lacks the capacity to produce comparable guarantees in compliance with robust structured engineering criteria (such as modularity, regularity and hierarchy [19])… My own largest fear is that the field of AI could get trapped in a local minimum, dwelling too heavily in the wrong part of intellectual space, focusing too much on the detailed exploration of a particular class of accessible but limited models that are geared around capturing low-hanging fruit, potentially neglecting riskier excursions that might ultimately lead to a more robust path… Another potentially valuable place to look is human cognition. There is no need for machines to literally replicate the human mind, which is, after all, deeply error prone, and far from perfect. But there remain many areas, from natural language understanding to commonsense reasoning, in which humans still retain a clear advantage; learning the mechanisms underlying those human strengths could lead to advances in AI, even if the goal is not, and should not be, an exact replica of the human brain. For many people, learning from humans means neuroscience; in my view, that may be premature. We don’t yet know enough about neuroscience to literally reverse engineer the brain, per se, and may not for several decades, possibly until AI itself gets better. AI can help us to decipher the brain, rather than the other way around. Either way, in the meantime, it should certainly be possible to use techniques and insights drawn from cognitive and developmental psychology, now, in order to build more robust and comprehensive artificial intelligence, building models that are motivated not just by mathematics but also by clues from the strengths of human psychology.”

In line with Marcus’ concerns, the RS discipline is increasingly involved in the recent hype about DL [5]. The RS discipline is a meta-science, like engineering, whose goal is to transform knowledge of the world, provided by other scientific disciplines, into useful user- and context-dependent solutions in the world. Specific solutions (algorithms) provided by other disciplines, including inductive ML and CV, should be well understood by the RS community before use. Unfortunately, to date, this has not been the case. For example, existing Earth observation (EO) image processing software toolboxes, either commercial or open source, consist of overly complicated collections of inherently ill-posed inductive machine learning-from-data algorithms to choose from based on heuristics. By now, the new release of Trimble Definiens includes a DCNN instantiation as yet another CV function to be selected, based on heuristics, from the ever-increasing Definiens software library, where the DCNN function is treated as a black box no typical RS user or practitioner will ever be knowledgeable of. Following papers like [5], this trend in the RS community is likely to strengthen. In practice, neither paper [5] nor existing EO image processing software toolboxes recommend a CV system design (architecture) in compliance with the well-known engineering principles of modularity, regularity and hierarchy typical of scalable systems, because "scalability of open-ended evolutionary processes depends on their ability to exploit functional modularity, structural regularity and hierarchy" [19]. The ongoing lack of insight into cognitive (information-as-data-interpretation) processes affecting the RS and CV literature is indeed enigmatic. The risk for the RS meta-science is to stay trapped in a local minimum of intellectual space. This holds so true that a large majority of CV systems proposed in the RS literature are in contrast with common sense. For example, in vision, synonym of scene-from-image reconstruction and understanding, spatial information dominates color information [21]. This unquestionable fact is familiar to everybody wearing sunglasses: human chromatic and achromatic visions are nearly as effective. Hence, to prove it fully exploits spatial information in addition to color information, any CV system should be required to perform nearly as well in panchromatic and chromatic vision [22]. On the contrary, the dominant trend in digital Earth big data analytics is "time first, space later" [23], largely implemented as "time first, space never", equivalent to pixel-based image analysis, which the community has tentatively tried to abandon since the 1970s. For example, the Google Earth Engine (GEE) is either pixel-based ("time first, space never") or local window-based, but spatial topology-non-preserving (non-retinotopic). In 2018, though considered a reference standard in digital Earth big data analytics and raster databases [24], GEE pursues 1D image analysis rather than 2D image analysis (topology-preserving, retinotopic). Unfortunately, 1D image analysis is invariant to permutations in the order of presentation of the 1D vector data sequence of local features, either pixel-based or local window-based. 1D image analysis, being order-insensitive to permutations in local features, has nothing to do with vision, because vision is 2D image analysis, synonym of topology-preserving feature mapping, sensitive to permutations in the input data sequence [25].

In [22], in agreement with the recommendations by Marcus [18], it is claimed that the RS community will not move forward in coping with the five Vs of EO big data analytics, specifically variety, veracity, volume, velocity and value, until it unequivocally acknowledges its membership in the interdisciplinary realm of cognitive science. Cognitive science is the interdisciplinary scientific study of the mind (information-as-data-interpretation, which is the theory of information, qualitative and equivocal, dual to information-as-thing, which is quantitative and unequivocal, known as Shannon's theory of data transmission/communication [26]) and its processes, where mind (software) and brain (distributed processing hardware) cannot be disentangled according to cybernetics, embracing an “artificial mind in an electronic brain” paradigm. Cognitive science examines what cognition (learning, adaptation, self-organization) is, what it does and how it works. It especially focuses on how information/knowledge is represented, acquired, processed and transferred, either in the neuro-cerebral apparatus of living organisms or in machines. A few useful quotes from the literature are reported in [22]. “There is no semantics in (sensory) data” [26]. In biological cognitive systems, “there is never an absolute beginning”, because the (deductive, top-down) genotype provides initial conditions to the (inductive, bottom-up) phenotype [27]. “One of David Marr’s key ideas is the notion of constraints [28]. The idea that the human visual system embodies constraints that reflect properties of the world is foundational. Indeed, this general view seemed (to me) to provide a sensible way of thinking about Bayesian approaches to vision. Accordingly, Bayesian priors are Marr’s constraints. The priors/constraints have been incorporated into the human visual system over the course of its evolutionary history, according to the “levels of understanding” manifesto extended by Tomaso Poggio in 2012 [28]” [29]. For a useful example of reverse engineering primate visual perception, where primate visual perception phenomena are adopted to constrain an inherently ill-posed CV system to become better posed for numerical solution, refer to [30].

Keywords: artificial intelligence; artificial neural network; Bayesian (hybrid, combined deductive and inductive) approach to vision; cognitive science; computer vision; deep convolutional neural network; deep learning-from-data; Earth observation (EO) image understanding; human vision; remote sensing.

References: [1] Artificial intelligence, Wikipedia. Date: 16 Jan. 2018. [Online] Available: https://en.wikipedia.org/wiki/Artificial_intelligence [2] R. Serra and G. Zanarini, Complex Systems and Cognitive Processes, Berlin: Springer-Verlag, 1990. [3] Haohan Wang and Bhiksha Raj, 2017. On the Origin of Deep Learning. arXiv:1702.07800v4. [4] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012. [5] D. Tuia et al., "Deep Learning in Remote Sensing", IEEE Geoscience and Remote Sensing Magazine, Dec. 2017. [6] Geoffrey Hinton, Advanced Machine Learning: Taking Inverse Graphics Seriously, Department of Computer Science, University of Toronto, 2013. [7] Artificial intelligence pioneer, G. Hinton, says we need to start over (2017). https://www.axios.com/artificial-intelligence-pioneer-says-we-need-to-start-over-1513305524-f619efbd-9db0-4947-a9b2-7a4c310a28fe.html [8] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus (2013). Intriguing properties of neural networks. arXiv, cs.CV. https://arxiv.org/abs/1312.6199 [9] Nguyen, A., Yosinski, J., & Clune, J. (2014). Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images. arXiv, cs.CV. https://arxiv.org/abs/1412.1897 [10] Marr, D. (1982). Vision. New York, NY: Freeman and Co. [11] Alex Dimakis, Leveraging GANs to combat adversarial examples, Approximately Correct, 2018. http://approximatelycorrect.com/2018/03/02/defending-adversarial-examples-using-gans/ [12] Sridhar Mahadevan, What is the future of reinforcement learning?, Quora, 2018. [13] Hector Zenil, What are the main criticism and limitations of deep learning?, Quora, 2018. [14] Oren Etzioni, What shortcomings do you see with deep learning?, Quora, 2017. [15] Rod Rinkus, What shortcomings do you see with deep learning?, Quora, 2017. [16] Chomba Bupe, What are the most 'overrated' machine learning models?, Quora, 2018. [17] Chomba Bupe, Is Deep Learning fundamentally flawed and hitting a wall? Was Gary Marcus correct in pointing out Deep Learning's flaws?, Quora, 2018. [18] Marcus, G. (2018). Deep Learning: A Critical Appraisal. arXiv:1801.00631. Date: 16 Jan. 2018. [Online] Available: https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf [19] Lipson, H. (2007). "Principles of modularity, regularity, and hierarchy for scalable systems," Journal of Biological Physics and Chemistry, 7, 125–128. [20] Bernus, P., and O. Noran. 2017. Data Rich – But Information Poor. In: Camarinha-Matos L., Afsarmanesh H., Fornasiero R. (eds) Collaboration in a Data-Rich World: PRO-VE 2017. IFIP Advances in Information and Communication Technology, 506: 206-214. [21] Matsuyama, T. & Hwang, V. S. (1990). SIGMA – A Knowledge-based Aerial Image Understanding System. New York, NY: Plenum Press. [22] Baraldi, A., 2017. “Pre-processing, classification and semantic querying of large-scale Earth observation spaceborne/airborne/terrestrial image databases: Process and product innovations”. Ph.D. dissertation in Agricultural and Food Sciences, University of Naples “Federico II”, Department of Agricultural Sciences, Italy. Ph.D. defense: 16 May 2017. DOI: 10.13140/RG.2.2.25510.52808. Accessed 30 Jan. 2018. https://www.researchgate.net/publication/317333100_Pre-processing_classification_and_semantic_querying_of_large-scale_Earth_observation_spaceborneairborneterrestrial_image_databases_Process_and_product_innovations [23] G. Camara, e-Sensing: Big Earth observation data analytics for land use and land cover change information, Big Data From Space Conference 2017, Toulouse, France, Dec. 2017. [24] J. Wagemann et al., "Geospatial web services pave new ways for server-based on demand access and processing of Big Earth Data," International Journal of Digital Earth, 2017. [25] Tsotsos, J. K., Analyzing vision at the complexity level. Behavioral and Brain Sciences, 13, 423-469, 1990. [26] Capurro, R., and B. Hjørland. 2003. “The concept of information.” Annual Review of Information Science and Technology 37: 343-411. [27] J. Piaget, Genetic Epistemology, New York: Columbia University Press, 1970. [28] T. Poggio, “The Levels of Understanding framework, revised,” Computer Science and Artificial Intelligence Laboratory, Technical Report, MIT-CSAIL-TR-2012-014, CBCL-308, May 31, 2012. [29] P. Quinlan, Marr’s Vision 30 years on: From a personal point of view, 2012. [30] James DiCarlo, Keynote: The Science of Natural Intelligence: Reverse engineering primate visual perception, CVPR17. https://www.youtube.com/watch?v=ilbbVkIhMgo

Authors: Baraldi, Andrea (1); Tiede, Dirk (2); Sudmanns, Martin (2); Lang, Stefan (2)
Organisations: 1: Italian Space Agency, Italy; 2: Department of Geoinformatics – Z_GIS, University of Salzburg, Austria
Sentinel Hub - where are the limits to on-the-fly processing? (ID: 114)
Presenting: Milcinski, Grega

Sentinel Hub has been around for slightly more than two years now, more or less as long as Sentinel-2 has been operational. It has evolved quite a bit - from the "Postcards from space" application, which was able to visualize a small tile anywhere in the world in five seconds, to a satellite imagery powerhouse processing more than one million requests every day, the vast majority of them in under one second. We will provide the latest updates about the platform and share lessons learned on building Earth observation processing on cloud platforms, comparing the experience not just of Amazon Web Services but also of three of the four DIAS platforms, which will be online by the time of Φ-week. Multi-temporal processing, long time-series analysis, generation of a global cloudless mosaic, data fusion: where is the limit of what we can do without significant pre-processing and HPC infrastructure?
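On-the-fly rendering of this kind is exposed through standard OGC interfaces. A minimal sketch of a WMS GetMap request against the documented Sentinel Hub OGC endpoint pattern (the instance id and layer name are placeholders configured per account):

    import requests

    # Placeholders: each account configures its own instance id and layers
    INSTANCE_ID = "<your-instance-id>"
    url = f"https://services.sentinel-hub.com/ogc/wms/{INSTANCE_ID}"

    params = {
        "SERVICE": "WMS",
        "REQUEST": "GetMap",
        "VERSION": "1.3.0",
        "LAYERS": "TRUE-COLOR",         # placeholder layer name
        "CRS": "EPSG:4326",
        "BBOX": "41.7,12.3,41.9,12.8",  # lat/lon order in WMS 1.3.0 EPSG:4326
        "WIDTH": 512,
        "HEIGHT": 512,
        "FORMAT": "image/png",
        "TIME": "2018-06-01/2018-06-30",  # rendered on the fly from this period
    }

    resp = requests.get(url, params=params, timeout=60)
    resp.raise_for_status()
    with open("true_color.png", "wb") as f:
        f.write(resp.content)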

Authors: Milcinski, Grega; Kolaric, Primoz; Mocnik, Rok; Repse, Marko; Kadunc, Miha; Batic, Matej
Organisations: Sinergise, Slovenia
JupyTEP IDE as a Cloud-Based Integrated Software Infrastructure for EO Data Processing (ID: 112)
Presenting: Zinkiewcz, Daniel

JupyTEP IDE (https://wasat.github.io/JupyTEPIDE/) is open source, cloud-based software built on Jupyter notebooks and tightly integrated with Docker for service isolation and deployment. The idea behind JupyTEP IDE is to build a Jupyter-notebook IDE for EO data processing: an environment for EO data scripting, algorithm building, solution validation and dynamic testing of processing solutions. As "all-in-one" software, JupyTEP IDE integrates the most common EO, GEO and GIS tools, libraries and toolboxes for EO data (SNAP, OTB), vector and raster data processing (GRASS, GDAL, PostGIS, etc.), and visualization and presentation in the most suitable form. The Jupyter approach and an extended Python environment, integrated with Docker, allow interconnection with most existing services (WPS, WMS) and tools for geodata storage and distribution (PostGIS, GeoServer, etc.). In terms of building a platform infrastructure, JupyTEP IDE provides highly configurable and optimized software for any EO cloud infrastructure. The general approach in JupyTEP IDE is based on the configuration, customization, adaptation and extension of Jupyter, Docker and Spark components and of the EO data cloud infrastructure. At the development stage, JupyTEP IDE runs on EOCloud and is integrated with the EO data repository and search engine, allowing development and EO data processing tasks to be performed in place, without any data transfer. Alpha versions of the JupyTEP IDE environment are accessible at http://jupytepide.wasat.pl, where users can test it and create their first EO-based Jupyter notebooks. On the infrastructure (cloud) side, JupyTEP IDE is built on top of the Docker environment, integrating Docker, Docker Compose, Docker Engine and Docker Swarm composed in separate JupyTEP Docker stacks. The cluster management and orchestration features embedded in the Docker Engine are built into the JupyTEP IDE environment. Based on this Docker infrastructure, JupyTEP IDE enables high-performance EO data processing, with environment isolation, for a large number of users and services. JupyTEP IDE is designed as a part of the EO data cloud infrastructure and of the network of exploitation platforms software. In order to cope efficiently with the problems present in EO data cloud processing, JupyTEP IDE meets the following objectives:
• EO tool integration: significant leveraging of the available EO developer tools (readers, mappers, libraries);
• Interoperability: the ability to integrate the notebooks into existing solutions;
• Multi-user/multi-tenancy: a web-based solution serving multiple different groups of users with proper isolation and mutualisation of cloud resources;
• Scalability: the ability to process a growing amount of EO data for an expanding community of developers and scientists;
• Parallelization: the use of a large amount of resources to perform more work at the same time, or the same work in a shorter period.
The full adaptation and reuse of open source components is the main paradigm in JupyTEP IDE development. JupyTEP IDE and its Jupyter notebooks, as an implemented set of reusable open source components, aim for integration into the Network of Platforms architecture. This should be time- and cost-effective, as flexible as possible, and realized in line with ESA’s up-to-date plans.
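For a flavour of a typical notebook cell on such a platform, here is a minimal sketch reading a raster band with GDAL and summarizing it (the file name is a placeholder; in JupyTEP IDE the data would already sit in the attached EO repository):

    import numpy as np
    from osgeo import gdal

    # Placeholder path; on the platform this would point into the EO repository
    ds = gdal.Open("S2_tile_B08.tif")
    band = ds.GetRasterBand(1)
    data = band.ReadAsArray().astype(np.float32)

    # Mask the band's nodata value, if one is defined, then summarize
    nodata = band.GetNoDataValue()
    valid = data[data != nodata] if nodata is not None else data
    print(f"size: {ds.RasterXSize} x {ds.RasterYSize}, "
          f"mean: {valid.mean():.1f}, max: {valid.max():.1f}")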

Authors: Zinkiewcz, Daniel (1); Bednarczyk, Michał (1,2); Rapiński, Jacek (2)
Organisations: 1: Wasat Sp. z o.o., Poland; 2: University of Warmia and Mazury, Olsztyn, Poland
Spot World Heritage (SWH): exploring the past (ID: 111)
Presenting: Nosavan, Julien

SPOT 1-5 satellites have collected more than 25 million images all over the world during the 30 years from 1986 to 2015, which represents a unique historical dataset. Spot World Heritage (SWH) is the CNES initiative to preserve and promote this SPOT archive by providing new enhanced products to users. A first step began in 2015 with the start of the repatriation of the SPOT data hosted in the remote Direct Receiving Stations spread across the world, in order to complete the centralized CNES SPOT 1-5 archive. Since 2017, the SWH initiative has moved into a new operational phase with the launch of the official CNES SWH project and its first activities. Thus, while remote data are still being repatriated, CNES has started extracting SPOT 1-5 data from the CNES archive system to make them available for valorization processing. Meanwhile, valorization processing is being put in place to provide the first enhanced SWH products in 2019. The L1A product will be the first image product (GeoTIFF+DIMAP), including basic radiometric corrections and a preliminary cloud cover estimation; this product will be principally based on the current SPOT N1A scene format, which has been the reference for years. This L1A level will replace the current SPOT 1-5 raw GERALD archive level and will form the new official SPOT archive. The L1B product will be in segment format and will provide geometric corrections in line with the Sentinel-2 L1B product. First of all, inter-band registration will be reprocessed with optimized L1B ground parameters, and the geometric model will be refined with Sentinel-2 Global Reference Images and a Digital Elevation Model (Planet Observer). On the radiometric side, L1B will include new corrections providing technical masks (water, cloud, …) and SPOT 5 THR processing using a new denoising algorithm based on the Non-Local Bayes technique. The L1C product will be the orthorectified product in Top-Of-Atmosphere reflectance, again using Sentinel-2 references. The SWH software architecture relies on strong reuse of existing tools to minimize development costs and secure validation phases, given the volume of data to process. SWH processing will take place at the CNES High Performance Computing Centre, to take advantage of the proximity of the SPOT 1-5 archive, and will use Big Data technologies such as the Elastic Stack for production cataloguing and supervision. The whole SPOT 1-5 archive is expected to be valorized and will be accessible to users.
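For context, the conversion to Top-Of-Atmosphere reflectance follows the standard formula rho = pi * L * d^2 / (E_sun * cos(theta_s)). A minimal sketch, assuming a linear DN-to-radiance calibration; the gain, band solar irradiance and geometry below are invented placeholders, not SWH calibration coefficients:

    import numpy as np

    def toa_reflectance(dn, gain, esun, sun_elev_deg, d_au):
        """DN -> TOA reflectance: rho = pi * L * d^2 / (E_sun * cos(theta_s))."""
        radiance = dn / gain                       # linear calibration
        theta_s = np.radians(90.0 - sun_elev_deg)  # solar zenith angle
        return np.pi * radiance * d_au**2 / (esun * np.cos(theta_s))

    # Placeholder values, for illustration only
    dn = np.array([[120, 135], [98, 210]], dtype=np.float64)
    print(toa_reflectance(dn, gain=1.8, esun=1858.0, sun_elev_deg=42.0, d_au=1.01))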

Authors: Nosavan, Julien; Moreau, Agathe
Organisations: CNES, France
Open Collaborative Astronomy in the Era of Data-driven Science (ID: 108)
Presenting: Kendrew, Sarah

In this talk, I would like to present the perspective of the field of Astronomy on the themes of Φ-week, to foster new knowledge exchange and sharing of expertise. Earth Observation and Astronomy share many of the same opportunities and challenges presented by the availability of petabyte-scale datasets, cheap data storage and fast networks for global connectivity. Our imaging technologies and our data processing and analysis challenges have many similarities, albeit for different goals. The James Webb Space Telescope is the next-generation flagship infrared observatory for astronomy – a NASA/ESA/CSA collaboration to be launched in 2019. JWST will be operated from the Space Telescope Science Institute in Baltimore, MD (the home of scientific operations for the Hubble Space Telescope). The astronomy community has been at the forefront of science in adopting open approaches to data sharing, software development, and open access to scientific literature. I will describe how some of these principles will be adopted for JWST, for example through the Early Release Science programme, which promotes a fast return of processed data and software tools to the global community, and through the adoption of Astropy, the Python library for astronomy built by a distributed open network of contributors. As well as such top-down initiatives to promote open science, Astronomy has numerous community-driven initiatives. The .Astronomy conference (“dot-astronomy”) has for 10 years been at the forefront of bottom-up community building. We hosted the first astronomy-themed Hack Days at the conference, which have sparked similar events at larger conferences around the world. We invite scientists to showcase their innovative work, such as novel approaches to research or public engagement via citizen science or the exploitation of astronomical data; to provide tutorial sessions; and to collaborate on new ideas and projects during the Hack Day. We encourage creativity and exploration rather than driving concrete outcomes, and promote a model of open, collaborative and inclusive science. By encouraging participation from other disciplines, we promote knowledge exchange between subjects – we have in the past had speakers and participants from theoretical physics, library science, ecology and the humanities. A major survey conducted among participants in 2017 has allowed us to measure the impact our conference has had on our community and to learn lessons for the future.

Authors: Kendrew, Sarah
Organisations: European Space Agency, United States of America
ESA EO Level 2 Product Generation in Operating Mode as Pre-condition to Future Intelligent EO Imaging Satellites, Semantic Content-Based Image Retrieval and Incremental Information/Knowledge Discovery in Digital Earth EO Big Data (ID: 106)
Presenting: Baraldi, Andrea

The visionary goal of a Global Earth Observation System of Systems (GEOSS) implementation plan for years 2005-2015 proposed by the Intergovernmental Group on Earth Observations (GEO)-Committee on Earth Observation Satellites (CEOS) was multi-source Earth Observation (EO) big data transformation into timely, comprehensive and operational EO value-adding information products and services. To date the GEOSS mission in dealing with the five Vs of EO big data (volume, velocity, variety, veracity and value) cannot be considered fulfilled by the remote sensing (RS) community. In addition, no EO semantic content-based image retrieval (SCBIR) system, suitable for semantics-enabled knowledge discovery in EO big image bases, has ever been developed in operating mode. This work presents an integrated EO Image Understanding for Semantic Querying (EO-IU4SQ) system as proof-of-concept of a GEOSS capable of systematic ESA EO Level 2 product generation, never accomplished to date at the ground segment, as necessary not sufficient pre-condition to EO-SCBIR. By definition an ESA EO Level 2 information product comprises: (i) a single-date multi-spectral (MS) image corrected for atmospheric, adjacency and topographic effects, stacked with (ii) its data-derived general-purpose, user- and application-independent scene classification map (SCM), including quality layers cloud and cloud-shadow. Our working hypothesis is: ‘human vision -> computer vision (CV) >> EO-IU in operating mode >> systematic ESA EO Level 2 product generation -> EO-SCBIR -> GEOSS', where symbol ‘->’ denotes relationship part-of pointing from the supplier to the client and symbol ‘>>’ denotes relationship superset-of. This working hypothesis postulates that GEOSS depends on the development of an inherently ill-posed CV system capable of scene-from-image reconstruction and understanding in operating mode, which requires a priori knowledge in addition to sensory data to become better posed for numerical solution. Equivalent to genotypic initial conditions, a priori knowledge is encoded into the CV system by design. In a Bayesian approach to vision, Bayesian priors, also known as Marr’s constraints, have been incorporated into the human visual system over the course of its evolutionary history. Inferred from well-known human visual perception phenomena, such as the Mach bands visual illusion, we propose an original set of CV system requirements as Bayesian priors/constraints. In agreement with biological cognitive systems that never start from an absolute beginning (tabula rasa) in a solution space, the proposed hybrid (combined deductive and inductive) CV system explores the neighborhood of its genotypic initial conditions by phenotypic learning-from-examples capabilities in compliance with human visual perception. Alternative to feedforward inductive learning-from-data inference systems constrained by heuristics which dominate the RS and CV literature, the proposed hybrid closed-loop EO-IU4SQ system is capable of incremental semantic learning starting from systematic ESA EO Level 2 product generation as initial condition. When semantic enrichment of EO imagery, synonym of intelligence/cognition guaranteed by a CV system, moves backward from the ground segment to the EO imaging sensor mounted on board, then Future Intelligent EO imaging Satellites (FIEOS), conceived in the early 2000s, become feasible. 
Keywords: Bayesian hybrid approach to vision; cognitive science; computer vision; Earth observation (EO) image understanding; ESA EO Level 2 product; Future Intelligent EO imaging Satellites (FIEOS); human vision; radiometric calibration; semantic content-based image retrieval; world ontology.

Authors: Baraldi, Andrea (1); Tiede, Dirk (2); Sudmanns, Martin (2); Lang, Stefan (2)
Organisations: 1: Italian Space Agency, Italy; 2: Department of Geoinformatics – Z_GIS, University of Salzburg, Austria
The ESA Earth Observation Φ-week project (ID: 105)
Presenting: Sathnur, Ashwini

The aim of the project is to create innovative solutions, based on an Information and Communication Technologies (ICT) product in the Life Sciences research areas of the Space Science and Space Technologies sectors. This solution and product would be created for deployment on the International Space Station.

Authors: Sathnur, Ashwini
Organisations: United Nations Development Programme, India
SatNOGS - Open Source Satellite Ground Station Network (ID: 393)
Presenting: Papadeas, Pierros

Over the last decades, an increasing number of academic and research institutes have become capable of launching their experiments into Low Earth Orbit (LEO). Due to the nature of LEO, communication with a satellite is possible for only a few minutes per day for a given location. This raises the need for multiple ground stations in several geographic locations. Although such an infrastructure is possible, it is most of the time both complicated and expensive for a research or educational entity to obtain. Given that each ground station has low per-day utilization for the project's own satellite, its idle time can be used for the reception of other missions. SatNOGS is an open source and open hardware project that addresses this problem by interconnecting all participating ground stations, offering each station's idle time to users of the SatNOGS network. This way, each satellite-owning entity can maintain as little as a single ground station and, by participating in the SatNOGS network, benefit from the idle time of other stations to increase its communication coverage. The SatNOGS infrastructure orchestrates the scheduling of each ground station while allowing the owner of a ground station to retain complete control over their hardware. A detailed database of all the satellites currently in orbit, their orbital elements and transceiver information provides all the data necessary for satellite communication. The SatNOGS client is the software that runs on a computer at the ground station and communicates with the network to be informed of scheduled observations. Once a scheduled satellite is about to rise over the horizon, the SatNOGS client coordinates the antenna rotator movement to track the satellite, while executing a GNU Radio script that operates the software-defined radio, compensates the Doppler frequency shift in real time, and demodulates, decodes and records the received signal. Once the observation is complete, the client software posts all the decoded data and a recording of the observation to the network. Furthermore, all recorded data are accessible to SatNOGS users, and an intuitive web graphical representation of satellite data is offered for visualization. Finally, SatNOGS Rotator is a fully open software and hardware antenna rotator that mainly consists of 3D-printed parts and readily available materials. As of 2018, there are more than 100 ground stations in the SatNOGS Network, tracking 280+ satellites in VHF, UHF and S-band with more than 2000 observations per day, totalling more than 22M telemetry packets already acquired and stored.
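
As an aside for readers new to the tracking step, the first-order Doppler correction the client applies can be sketched in a few lines. The beacon frequency and range rate below are illustrative values, not taken from the abstract; SatNOGS itself derives the range rate from orbit propagation inside its GNU Radio flowgraph.

```python
# Minimal sketch (hypothetical values): first-order Doppler correction for a
# LEO pass, of the kind a ground-station client applies while tracking.
C = 299_792_458.0  # speed of light, m/s

def doppler_corrected_rx(f_downlink_hz: float, range_rate_ms: float) -> float:
    """Frequency to tune the receiver to, given the satellite's range rate.

    range_rate_ms > 0 means the satellite is receding (signal shifts down),
    so we tune below the nominal downlink frequency, and vice versa.
    """
    return f_downlink_hz * (1.0 - range_rate_ms / C)

# Example: a 437.5 MHz UHF beacon approaching at 7 km/s appears ~10.2 kHz high.
print(doppler_corrected_rx(437.5e6, -7_000.0))  # ~437510215 Hz
```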

Authors: Papadeas, Pierros; Papamatthaiou, Matthaios; Tsiligiannis​, Vasileios; Kosmas, Eleftherios; Zisimatos, Agisilaos; Shields, Corey; Papadeas, Dimitrios; Damkalis, Alfredos-Panagiotis
Organisations: Libre Space Foundation, Greece

Future EO (part1)
09:00 - 10:30
Chair: Amanda Regan - ESA-ESRIN

09:00 - 09:20
From Student Projects to Satellite Constellations (ID: 295)
Keynote: Praks, Jaan
(PDF )

Rapidly declining launch costs and affordable small-satellite technology are disrupting the space field. Small agile teams and start-up companies are changing the game and democratizing access to space. Earth Observation is one of the first fields where small satellites are about to bring new-generation services and applications on a commercial basis. When did this development start, and where will it lead us? What happens to EO science? Can these new sensors help us handle global problems? These and many other topics are discussed in the talk.

Authors: Praks, Jaan
Organisations: Aalto University, Finland
09:20 - 09:35
Big Birds And Small Satellite Swarms - How To Make Them Work Together? (ID: 109)
Presenting: Jochum, Markus
(PDF )

An impressive number of heritage missions will be surrounded by an ever-increasing number of massive small-satellite swarms. Will one technology replace the other? Is there an opportunity for both concepts to find a commercially viable market niche, or even to collaborate across missions (e.g. in tactical tip-and-cue monitoring scenarios)? The talk will highlight opportunities, drawbacks and practical challenges, looking at the example of radar missions.

Authors: Kaptein, Alexander; Jochum, Markus; Janoth, Juergen
Organisations: Airbus Defence and Space GmbH, Germany
09:35 - 09:50
Ongoing Mission Data System Innovation For EO Mission (ID: 116)
Presenting: Reggestad, Vemund
(PDF )

In recent years, a number of innovation activities have started in the area of Mission Data Systems for Earth Observation missions. They promote the adoption of new technologies in the domain of EO missions and enable new opportunities. This paper gives an overview of the ongoing innovation activities and their objectives. Clearly, the adoption of these technologies in mainstream operations will depend on future mission needs and the operational scenarios of those missions. To avoid making predictions about technology adoption by future missions, this paper presents each technology by describing the use cases it enables. The following paragraphs contain a short description of each of the innovation initiatives.

A. Utilization of File-Based Operations (FBO). In order to answer the needs of future EO missions, ESA is investigating the usage of the CCSDS File Delivery Protocol (CFDP) for EO missions via an ongoing TRP study. The protocol is already planned to be heavily used within Science missions, and hence should be a good candidate for future EO missions as well. The introduction of CFDP for EO missions would bring a number of benefits, both for transferring files between spacecraft and ground segments and between spacecraft over inter-satellite links (e.g. in the context of data relay satellites or constellations). Reliable CFDP (Class 2) would address the future issue of lossy data downlinks; a toy illustration of this selective-retransmission idea follows the abstract. CFDP supports selective data downlinks and prioritization, allows easier identification of complete sets of data (e.g. one measurement), and could improve the capability to implement emergency services with minimum-latency requirements.

B. Utilization of DTN. Delay/Disruption Tolerant Networking (DTN) is, for example, heavily used in various communication scenarios with the International Space Station, where its powerful features have been demonstrated. In particular, in scenarios where the complexity of the communication network is growing, DTN is an enabling technology. DTN has, however, not yet been utilized for traditional EO missions. ESA is therefore studying the benefits of, and the best architectures for, applying DTN to EO use cases. Of particular interest for this technology are spacecraft with inter-satellite links; but for traditional spacecraft too, the flexibility of DTN can bring several advantages, for example when combined with optical communication, where weather conditions cannot be predicted up front.

C. More Flexible Mission Planning Software. ESOC is in the process of migrating to a more flexible Mission Planning Software (MPS) originally developed for the needs of planetary missions. The adoption of this MPS system is a prerequisite and enabler for incorporating more advanced planning techniques, such as AI-based optimization algorithms, into the MPS planning cycles for EO missions. Identifying use cases for such technologies would be a very interesting follow-up activity to the current migrations.

D. LTDP for Operational Data. As of CMIN16, the Directorate of Operations is also part of the ESA Data Heritage Programme (LTDP+). This implies that activities have started to ensure the availability and accessibility of operational data for various Earth Observation missions. One of the main issues related to LTDP for operational data is lowering the learning curve required to successfully exploit such data. This exercise will first be done with the known use cases for these data; however, it is expected that easy access to operational data will enable new research beyond what we can predict up front.

E. Preparation activities for next-generation M&C systems based on EGS-CC. The development of the EGS-CC system is progressing at full speed. Adoption of this technology will be a key issue in the monitoring and control (M&C) domain for the coming years. Two activities are therefore under preparation to actively prepare for the adoption of EGS-CC for the Copernicus missions, paving the way for other EO missions to follow.

F. OPS-SAT. ESOC is also preparing a flexible CubeSat platform for in-orbit demonstration of a wide range of operational technologies, including, among others, the utilization of CCSDS MO services as a replacement for PUS-based communication with the spacecraft. Various experiments are also planned to test approaches to the security and compression of telemetry data, of particular interest for EO missions.
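
To make the Class 2 benefit concrete, here is a toy, non-CCSDS sketch of the underlying idea: the receiver reports only the missing file segments (NAKs) and the sender retransmits just those, so a complete file survives a lossy downlink. The segment size and loss rate are arbitrary illustrative values.

```python
# Illustrative sketch only (not a CCSDS implementation): the core idea of
# CFDP Class 2 reliable delivery is selective repeat over a file, where the
# receiver NAKs missing segments and the sender resends only those.
import random

def transfer(file_bytes: bytes, seg: int = 1024, loss: float = 0.3) -> bytes:
    segments = {i: file_bytes[i:i + seg] for i in range(0, len(file_bytes), seg)}
    received = {}
    pending = set(segments)                 # offsets not yet delivered
    while pending:                          # each round = one NAK cycle
        for off in sorted(pending):
            if random.random() > loss:      # segment survives the lossy link
                received[off] = segments[off]
        pending.difference_update(received) # receiver NAKs only the gaps
    return b"".join(received[o] for o in sorted(received))

data = bytes(range(256)) * 40
assert transfer(data) == data               # file is complete despite losses
```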

Authors: Reggestad, Vemund
Organisations: ESA, Germany
09:50 - 10:05
HYBRIS: An Earth Observation Hyperspectral CubeSat Mission in Synergy with ESA Sentinels Missions (ID: 142)
Presenting: Piro, Alessandro
(PDF )

Hyperspectral observation in the VIS and NIR from satellites allows the simultaneous imaging of the Earth's surface and the accurate acquisition of the corresponding spectral responses. The technological development of off-the-shelf miniaturized instrumentation allows the utilization of these instruments also as nano-satellite payloads. Moreover, the low cost of nano-satellite development and launch has made them the latest trend in satellite technology. The proposed HYperspectral BRIdge for Sentinels (HYBRIS) mission is a polar-orbiting 3U CubeSat for hyperspectral applications that will provide acquisitions in the VNIR spectral range (400-1000 nm). The hyperspectral observations provided by HYBRIS will be integrated with Sentinel-2 and Sentinel-3 data through an un-mixing approach, with the aim of obtaining a fused, inter-calibrated product with improved spectral and spatial resolution. Moreover, the proposed payload can serve as a low-cost precursor for testing the feasibility of a possible future Sentinel mission for hyperspectral imaging, implementing innovative technical solutions and applications. Hyperspectral observation from nano-satellites has been prone to several issues that have been largely overcome by current off-the-shelf components. In this context, the main identified issues related to hyperspectral imaging, and worsened by the miniaturization of the components, are presented and discussed.
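
The un-mixing approach mentioned above can be illustrated with a minimal linear-mixing sketch: per-pixel abundances of known endmember spectra are recovered by non-negative least squares. The endmember matrix and pixel spectrum below are random stand-ins, not HYBRIS or Sentinel data.

```python
# Hedged sketch of linear spectral un-mixing: estimate non-negative
# abundances a such that the observed pixel spectrum y ~= E @ a.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_bands, n_members = 60, 4
E = rng.random((n_bands, n_members))                   # columns = endmembers
true_a = np.array([0.5, 0.3, 0.2, 0.0])                # ground-truth abundances
y = E @ true_a + 0.01 * rng.standard_normal(n_bands)   # observed pixel

a_hat, _ = nnls(E, y)            # solve min ||E a - y|| subject to a >= 0
a_hat /= a_hat.sum()             # enforce sum-to-one (simplest post-hoc way)
print(np.round(a_hat, 3))        # close to [0.5, 0.3, 0.2, 0.0]
```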

Authors: Piro, Alessandro (1); Casella, Daniele (1); Pinori, Sabrina (1); Di Ciolo, Lorenzo (1); Cappelletti, Chantal (2); Battistini, Simone (2); Graziani, Filippo (2)
Organisations: 1: Serco Spa, Italy; 2: Gauss Srl, Italy
10:05 - 10:20
Accurate Segmentation of Hyperspectral Images Using Deep Neural Networks – Are We There Yet? (ID: 235)
Presenting: Nalepa, Jakub
(PDF )

Deep neural networks (DNNs) have achieved unprecedented success in a wide range of pattern recognition tasks, including medical imaging, speech recognition, text processing, and satellite imaging. The extremely rapid development of remote sensors has made the acquisition of hyperspectral image (HSI) data, with up to hundreds of spectral bands over a given spatial area, much more affordable. Although hyperspectral images have already been shown to be useful for the accurate identification of a variety of materials, the efficient analysis and segmentation of such imagery has become a major challenge in practical applications, and is currently being tackled by the machine-learning and image-processing communities worldwide. The classification performance of any DNN strongly depends on its architecture, hyper-parameter values, and, most of all, the quality (and the amount) of the available training data. The architectures are commonly designed by human experts; however, there exist algorithms aimed at automating this cumbersome process. In the first part of the talk, we will discuss the current advances in deep learning-powered HSI segmentation techniques. We will go for a thorough (yet concise) journey through the state of the art: we will show how evolutionary algorithms can be exploited for (i) designing deep architectures (we will focus on memetic algorithms, being the hybrids of evolutionary techniques and local-refinement routines), and for (ii) optimizing the DNN hyper-parameter values (the particle swarm optimization technique will be presented). Although DNNs are currently being applied to HSI, such applications are still relatively fresh. This is due to the difficulty humans have in understanding multiple hyperspectral bands, and thus the lack of high-quality annotated sets that could be effectively used for training. We will discuss the current advances in band selection algorithms that help extract only those bands from HSI that convey the most important information, and hence make such data more understandable. Also, we will go through HSI augmentation techniques. The state-of-the-art journey will be concluded with HSI visualization approaches: we will present algorithms based on manifold alignment, band selection, and band fusion. The second part of the presentation will be devoted to our latest convolutional neural network, exploited to segment two benchmark HSI datasets: Salinas Valley (224 bands, 16 classes, 512 x 217 pixels with a spatial resolution of 3.7 m, AVIRIS sensor) and Pavia University (103 bands, 9 classes, 610 x 340 pixels with a spatial resolution of 1.3 m, ROSIS sensor). We will show that our initial design of a (fairly shallow) DNN provides high-quality segmentation (overall multi-class accuracy was over 84% for Salinas, and over 78% for Pavia) in a very short time. Also, it can be quantized to reduce its size (making it applicable in very hardware-constrained environments, e.g., on a small satellite) without lowering its segmentation capabilities (the size of the trained DNN was almost halved for Salinas with only a 1% decrease in classification accuracy). Finally, we will present our rigorous validation procedure (backed up with statistical tests) to show how to assess emerging HSI segmentation algorithms in a fair and thorough manner.
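
For orientation, a fairly shallow patch-based CNN of the general kind discussed above can be declared in a few lines of Keras. This is an illustrative architecture only, not the authors' network; the band and class counts are set to the Pavia University figures quoted in the abstract, and the patch size is an assumption.

```python
# Minimal sketch of a shallow CNN for patch-based HSI classification.
import tensorflow as tf

n_bands, n_classes, patch = 103, 9, 5      # Pavia University figures; 5x5 patch assumed
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu",
                           input_shape=(patch, patch, n_bands)),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_patches, train_labels, ...) would follow with real data.
```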

Authors: Nalepa, Jakub (1,3); Marcinkiewicz, Michal (1); Ribalta Lorenzo, Pablo (1); Czyz, Krzysztof (2); Kawulok, Michal (3)
Organisations: 1: KP Labs, Poland; 2: FP Instruments, Poland; 3: Silesian University of Technology, Poland
10:20 - 10:35
Ocean Property characterization Over Italian Waters from a CubeSat with novel Digital micromirror device imaging system (ID: 369)
Presenting: Twardowski, Michael
(PDF )

A core technology for ocean and atmospheric sensing is passive imaging of reflected solar radiation, typically detected with multispectral CCD-array-based systems. General challenges in adapting such imaging technology to CubeSat platforms over the littoral environment include 1) sufficient signal-to-noise for adequate algorithm retrieval of environmental properties, 2) acceptable photon efficiency, 3) efficient information transmission to the ground station that enables the rendering of high-quality images given severe data transmission limitations, and 4) saturation, blooming and edge-effect problems with water adjacent to bright land and clouds. We are developing a novel optical acquisition hardware architecture for a pushbroom-type CubeSat imager based on a Digital Micromirror Device (DMD), together with an improved backend compression processing scheme to optimize information transmission given data bandwidth limits. SNR will also be substantially improved relative to CCD arrays. A DMD consists of millions of electrostatically actuated micro-mirrors that can be used to control light collection dynamically for each individual pixel equivalent. The imager will allow for adaptive optimization of spectral resolution, spatial resolution, and SNR based on the particular scene being imaged. The spectral range of the DMD imager will cover 350 to 900 nm over 1600 bands, capable of resolving a host of ocean and aerosol parameters, including ocean turbidity, phytoplankton, productivity, and bathymetry from published algorithms. Projected spatial resolution will be as fine as 20 m over a 50 km swath. The concept would have tremendous research potential as an imaging system dedicated to monitoring Italian coastal waters. The CubeSat group at SSC Pacific is developing the satellite operations program plan.
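
The adaptive trade-off between spectral resolution and SNR that the DMD enables can be sketched with a back-of-envelope calculation: binning k adjacent bands boosts shot-noise-limited SNR by roughly sqrt(k). The binning factors below are arbitrary; only the 1600-band, 350-900 nm figures come from the abstract.

```python
# Back-of-envelope sketch: spectral binning vs. shot-noise-limited SNR.
import numpy as np

full_bands = 1600                     # native bands over 350-900 nm (abstract)
span_nm = 900 - 350

for k in (1, 4, 16, 64):              # candidate binning factors (assumed)
    bands = full_bands // k
    res_nm = span_nm / bands
    snr_gain = np.sqrt(k)             # photon (shot) noise scales as sqrt(N)
    print(f"bin {k:3d}: {bands:4d} bands, {res_nm:5.2f} nm/band, "
          f"SNR x{snr_gain:4.1f}")
```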

Authors: Twardowski, Michael (1); Brando, Vittorio (2); Ouyang, Bing (3); Sanborn, Graham (3)
Organisations: 1: Florida Atlantic University, United States of America; 2: Istituto di Scienze dell’Atmosfera e del Clima, CNR, Rome, Italy; 3: SPAWAR Systems Center – Pacific, San Diego, CA, USA

Future EO (part2)
11:00 - 12:30
Chair: Amanda Regan - ESA-ESRIN

11:00 - 11:15
Timely, Reliable Information for Critical Decision Making (ID: 237)
Presenting: Ulmer, Andrew
(PDF )

Capella Space is uniquely positioned to build and operate a constellation of small, powerful SAR (Synthetic Aperture Radar) satellites. Our constellation will track changing conditions on Earth and provide timely, reliable information for critical decision making. Unlike traditional radar systems, Capella’s constellation of 36 small and nimble satellites will provide cost-effective, hourly revisits and quick turnaround of information for locations around the globe.
Timing: In preparation for full-scale operation of our constellation, Capella Space plans to launch its first satellite in Q4 2018, intended for internal commissioning and end-to-end operational testing. Capella plans a second launch in Q1 2019, from which we will provide access to data for partners and plan pilot trials. We plan to launch our first 6 satellites in Q3 2019 and to scale to our full constellation of 36 satellites by 2021. With the ability to continually update our satellite technology and refresh the satellites in our constellation, customers will maintain continuous access to leading-edge and customizable SAR capabilities.
About Capella: Capella Space is a Silicon Valley, venture-backed satellite imaging company launching the first U.S. commercial Synthetic Aperture Radar (SAR) satellite. With offices in San Francisco and Boulder, Capella’s team is bringing innovation to SAR and small-satellite design and operations to help realize a new era in remote sensing.

Authors: Ulmer, Andrew
Organisations: Capella Space, United States of America
11:15 - 11:30
"HORUS Cluster: the S5Lab CubeSat-based multi-angle and multi-spectral Earth Observation system" (ID: 238)
Presenting: Pellegrino, Alice
(PDF )

The CubeSat standard has drawn a considerable amount of attention as a vehicle to save cost and time in space missions, evolving from a purely educational tool to a standard platform for technology demonstration and scientific experiments in space. Small satellites have become the core business of several start-up companies related to the space field, such as SkyBox, Planet Labs, PlanetiQ, Spire, etc. Indeed, many of them entered the market with EO imaging CubeSat constellations and clusters offering daily revisit capability and reasonable resolution by using nadir-pointing sensors. For EO environmental monitoring these kinds of data are often not sufficient, and imagery acquired with off-nadir view angles may offer significant improvements in different applications, such as the quantification of atmospheric properties. The MISR (Multi-angle Imaging SpectroRadiometer) sensor on board the NASA EOS TERRA satellite, launched into a polar orbit in 1999, is currently the only system able to provide imagery with these features. It records images of the Earth simultaneously at nine different angles in four spectral bands (red, green, blue and near-infrared). This system has provided valuable data during its long operational lifetime, but the TERRA spacecraft is a very large satellite, equipped with heavy and expensive instrumentation. Taking advantage of the miniaturization of electronics and on-board systems for current electro-optical systems, the basic concept of MISR could be implemented on a nano-spacecraft. In this context, the Sapienza Space Systems and Space Surveillance Laboratory (S5Lab) research team at the University of Rome “La Sapienza” proposed the “HORUS Cluster”, a new concept conceived to reproduce MISR’s performance at a lower cost by using the CubeSat standard. The proposed system ensures continuity of multi-angle and multi-band data of the Earth's atmosphere and surface by splitting the MISR instrument's main capabilities across a cluster of four 6U CubeSats, operating in the same orbital plane and looking at nine different view angles in four spectral bands. The paper describes the feasibility study of a CubeSat-based multi-angle and multi-spectral Earth Observation system. It outlines the main technical and scientific objectives of the mission, including system requirements and key performance parameters. The envisaged technical solutions to implement the HORUS concept in a cluster of four 6U CubeSats are described, discussing how the payload can be hosted on board and operated in synergy with traditional and well-established EO in-orbit missions, such as the ESA Sentinels.
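
A rough feel for how multiple view angles map onto along-track spacing can be had from a flat-Earth approximation. This is illustrative only: the 600 km altitude is an assumption, and the angles are MISR's published camera angles rather than HORUS design values.

```python
# Geometry sketch (flat-Earth approximation): along-track offset and time
# delay at which a LEO satellite sees the same target at a given view angle.
import math

MU, RE = 3.986e14, 6_371e3             # Earth GM (m^3/s^2) and radius (m)
h = 600e3                               # assumed orbit altitude
v_orb = math.sqrt(MU / (RE + h))        # circular orbital speed
v_grd = v_orb * RE / (RE + h)           # sub-satellite ground speed

for theta_deg in (26.1, 45.6, 60.0, 70.5):    # MISR camera angles
    d = h * math.tan(math.radians(theta_deg)) # along-track ground offset
    print(f"{theta_deg:4.1f} deg -> offset {d/1e3:6.1f} km, "
          f"dt {d / v_grd:5.1f} s")
```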

Authors: Pellegrino, Alice; Santoni, Fabio; Curianò, Federico; Gianfermo, Andrea; Feliciani, Francesco
Organisations: University of Rome "La Sapienza", Italy
11:30 - 11:45
Image Information Mining at Planet (ID: 250)
Presenting: Marchisio, Giovanni
(PDF )

Planet operates the largest constellation of Earth-observing satellites in human history, collecting high-resolution (3.7 m) imagery of the entire Earth’s surface each day the sky is clear. Intersecting this new satellite data source with modern deep learning solutions allows us to do at least two revolutionary things: 1) establish a robust spatiotemporal baseline against which to measure change daily; and 2) enable the training of new deep learning architectures on dense time series of high-resolution remotely sensed data. A new family of monitoring solutions is beginning to emerge from this effort. For instance, the reduction in sensor size and cost, with simultaneous improvements in resolution and revisit rates, is unlocking unprecedented opportunities for timely, high-resolution, wall-to-wall mapping of the world’s tropical forests. This offers new possibilities to develop different MRV systems for REDD+. ‘Next-generation’ (automated, analytical) MRV tools can be developed not only to improve accuracy, efficiency, and capacity, but also to inform low-carbon financial product innovation. The same deep learning models for semantic segmentation can be jump-started on the results of high-resolution LULC classification maps obtained from classical computer vision analyses, and adapted to track both deforestation and urban growth in the world’s fastest-growing cities (secondary cities in Asia and Africa). An additional innovation is Planet’s maritime capability to provide daily high-resolution coverage of the Earth’s open water and maritime areas. We already routinely cover vast areas of the Mediterranean, Red Sea, Java Sea, East and South China Sea, Yellow Sea, Sea of Japan and Gulf of Mexico every day. These high-cadence, broad-area capabilities allow us to identify and track vessels and objects, detect maritime movement/activity (e.g., shipping routes and ports), validate AIS data feeds and protect against spoofing, and uncover activity in unmonitored areas. By training several flavors of convolutional neural networks to detect vessels and capture transshipment events, Planet can monitor traffic patterns in multiple AOIs on a daily basis in areas as large as 1,000,000 km2. We populate the results of these ship information feeds into a database, which is accessible via the Planet Collections API. The Collections API is the endpoint for querying and reading the collection of objects identified in a spatial information feed. Through it, our users can see the collections they have access to and search them for specific data of interest. Longer-term goals on our roadmap are the development of a general-purpose, scalable analytic architecture for change detection, with the aim of producing daily change heat maps and relatively short-cadence (weekly, monthly) thematic information layers at scale. The first would reduce the search area for the more refined analytics that produce the second. We are developing some of these models with the aim of deploying them on board our spacecraft in the future.
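
As a generic illustration of the "daily change heat map" idea (not Planet's production pipeline), per-pixel spectral change between two co-registered acquisitions can be computed as follows; the arrays are synthetic stand-ins.

```python
# Sketch: per-pixel change magnitude between two co-registered images.
import numpy as np

def change_heatmap(t0: np.ndarray, t1: np.ndarray) -> np.ndarray:
    """Normalised per-pixel spectral change between two (H, W, bands) images."""
    diff = np.linalg.norm(t1.astype(float) - t0.astype(float), axis=-1)
    return diff / (diff.max() + 1e-9)   # scale to [0, 1] for visualisation

rng = np.random.default_rng(1)
t0 = rng.random((256, 256, 4))
t1 = t0.copy()
t1[100:140, 60:120] += 0.5              # inject a synthetic change patch
hot = change_heatmap(t0, t1)
print(hot.mean(), hot[120, 90])          # the changed area scores near 1.0
```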

Authors: Marchisio, Giovanni; Erinjippurath, Gopal; Goldenberg, Benjamin; Ferraro, Matthew; George, Matt; Martinez Manso, Jesus; Wilson, Nicholas; Uzkent, Burak; Kargol, Agata; Gonzalez-Riveo, Manuel; Clough, Christian; Nair, Ramesh; Whipps, Henry; Zuleta, Ignacio; Soenen, Scott
Organisations: Planet, United States of America
11:45 - 12:00
Video From Space - A New Dimension In Earth Observation (ID: 262)
Presenting: Teo, Xu
(PDF )

A new dimension in Earth Observation (EO) is about to come into its own: time. This is achieved not just through multiple revisits over a specific location on Earth in a single day, but also through the ability to see much more in one imaging opportunity than a snapshot. Armed with the ability to “stare” and record Very High Resolution (VHR) full-colour video of a target for up to 120 seconds from a range of viewing angles, satellites of the Vivid-i constellation will change the way we view our dynamic planet. They will enable new depths of analysis and much-improved situational awareness, as well as a deeper understanding of what is happening ‘on location’. Such a capability of fusing both aspects of time becomes important for high-value assets in dynamic environments such as Oil & Gas installations, mining infrastructure, ports or transportation hubs worldwide, to name a few. The multi-angle video data acquisitions allow advanced parameters to be derived, producing outputs such as densified point clouds and allowing 3-dimensional reconstructions, which could be critical for applications such as disaster response. They will also create the opportunity for motion analysis via the detection of both the speed and direction of moving targets such as vehicles and ships. The high revisit rate of the constellation will also allow close-to-persistent monitoring, a necessity not only when observing critical, time-sensitive targets, but also when regional cloud cover is a problem. When tied in with in-situ auxiliary streams of “Big Data” such as social media (through the use of Natural Language Processing, NLP) or closed-circuit television (CCTV) footage, VHR EO can provide highly advanced analytics and insights to better inform the end user, aiding them in critical decision-making situations. In the energy sector, publicly available data such as energy consumption and spot price trends could be utilised to promptly task satellites to monitor particular power stations and their operations to predict availability on short notice. Combining these data streams will enable us not just to react with increased timeliness, but also to forecast future occurrences of critical scenarios, especially when paired with the growing use of Machine Learning techniques. Daily VHR monitoring with video capability, at multiple times a day, will create new and exciting opportunities in a geolocated world and benefit customers in a variety of industries, especially those in high-value, high-importance and high-risk environments. The ultimate goal of such EO application efforts is the insights they offer, so join us as we look to explore and exploit this exciting and growing realm of EO.
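
The motion-analysis idea reduces, once an object has been detected in two frames, to simple kinematics: pixel displacement times GSD divided by the frame interval. The 1 m GSD and centroid values below are assumptions for illustration, not Vivid-i specifications.

```python
# Sketch: ground speed and heading of a tracked object from two video frames.
import math

def ground_speed(p0, p1, gsd_m: float, dt_s: float):
    """Speed (m/s) and heading (deg clockwise from image 'up', i.e. north
    if the frames are north-up) from two (x, y) pixel centroids."""
    dx = (p1[0] - p0[0]) * gsd_m          # east displacement, metres
    dy = (p1[1] - p0[1]) * gsd_m          # row displacement (down = positive)
    speed = math.hypot(dx, dy) / dt_s
    heading = math.degrees(math.atan2(dx, -dy)) % 360
    return speed, heading

# Ship centroid moves 12 px right and 5 px up between frames 2 s apart, 1 m GSD:
print(ground_speed((100, 200), (112, 195), gsd_m=1.0, dt_s=2.0))
# -> (6.5 m/s, ~067 deg), i.e. roughly 12.6 knots heading east-north-east
```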

Authors: Teo, Xu; Hausknecht, Peter
Organisations: Earth-i Ltd, United Kingdom
12:00 - 12:15
Automatic Land use and Land Cover Classification by Bird'sAI (ID: 277)
Presenting: van der Maas, Rosalie
(PDF )

International organizations, NGOs, (local) governments, and private-sector actors often require accurate measurements of land cover and land use (change) for strategic decision making, impact assessment, law/subsidy enforcement, administration, and management. They currently lose a lot of time, money, and resources on labor-intensive measuring schemes to monitor this at regular (often yearly, five-yearly, or ten-yearly) intervals. Aside from the cost of monitoring, a low chance of detection also results in land use and subsidy violations, as well as missed environmental early-warning signs. Bird'sAI's answer to this challenge is to provide accessible, low-cost and highly automated land cover mapping and monitoring to information-driven entities across the globe. We do this by running artificially intelligent (AI) image recognition models on open satellite data within our own data infrastructure. In doing so we aim to maximise automation, case-by-case uniformity, and scalability.

Authors: van der Maas, Rosalie
Organisations: Bird'sAI, The Netherlands
12:15 - 12:30
The Earth Observation Geo Spatial evolution : next technology challenges over the value chain and new business models (ID: 286)
Presenting: Grandoni, Domenico
(PDF )

The aim of this keynote speech is to present how the impact of New Space, together with new business models, is changing the EO market. The geo-information domain is changing fast: the evolution of new space systems, with the possibility to conceive and realize, through the miniaturization of on-board functions and advanced digital processors, a new class of space-based radars, new-generation optical sensors, cost-effective microsatellite constellations and satellite formations, opens the way to a new class of space services and applications in Earth Observation. Constellations undoubtedly represent an interesting breakthrough and trend in the democratization of space, and an opportunity for a broad range of institutions, including emerging innovative start-ups, research centers and universities, to contribute to space activities. The possibility to complement “traditional” space systems, based on large space infrastructure and very high-end performance payloads and sensors, allows us today to conceive very high-revisit observation capabilities and, in perspective, to realize a quasi-persistent surveillance of our planet. In the coming years a huge amount of data will be generated in space by radar, optical and hyperspectral sensors; this exponential growth in the amount of data will open the way to a completely new class and generation of geospatial service and application platforms. The use of advanced Information Technology domains such as data analytics and big data, the unprecedented growth of computing power, and the evolution of machine learning / AI algorithms make it possible to conceive new applications and services and to reach new institutional and commercial user communities, and will finally represent, more than an evolution, a true revolution in business models. The big IT/ADV companies are investing in space and we are benefiting from it; to meet this challenge we need to invest in new algorithms that can manage such quantities of data in real time and derive information flows for a broad range of user communities. Also fundamental is the role of new innovative companies dedicated to data analytics. EO data and information reports are live ingredients of the new dashboard, dedicated both to decision makers and to in-field operators. e-GEOS is managing the largest radar constellation of four “big class” satellites. We are investing in AI and cloud systems to rethink all the algorithms to deal with such amounts of data, and at the same time we are asking ourselves what revolution in the business approach is required to serve customers with the best results at a suitable price. The e-GEOS Matera Space Centre, one of the most important EO data hubs in the world, is ready for this mission, and the company is investing in new antennas to be able to gather data along the Mediterranean region.

Authors: Comparini, Massimo Claudio; Grandoni, Domenico
Organisations: e-GEOS, Italy

Future EO (part3)
14:00 - 15:30
Chair: Fabio Santoni - Università di Roma "La Sapienza"

14:00 - 14:15
High-performance low-SWaP EO payload for smallsats (ID: 302)
Presenting: Geyl, Roland
(PDF )

Safran is developing an ultra-low-SWaP, high-performance EO payload for smallsats starting from 18U size. The payload is able to deliver 1.5-m GSD imagery from a 500 km orbit with 190-mm aperture multispectral optics fitted with a 30 Mp 2D focal-plane sensor. The associated image processing capability shall enable 0.75-m GSD in super-resolution mode and many other on-board functions. In addition, an IR channel can be added as an option to increase the payload's overall efficiency and suit a greater variety of applications, from conventional Earth observation and tactical imagery to debris detection or interplanetary science missions.
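
As a sanity check on the quoted figures (and only that; Safran's actual optical budget is not given here), the Rayleigh diffraction limit for a 190 mm aperture at 500 km, assuming a 550 nm wavelength, lands in the same regime as the stated 1.5-m GSD.

```python
# Diffraction-limited ground resolution via the Rayleigh criterion:
# theta = 1.22 * lambda / D, projected to the ground from the orbit altitude.
wavelength = 550e-9      # m, mid-visible (assumed)
aperture = 0.190         # m (from the abstract)
altitude = 500e3         # m (from the abstract)

ground_res = 1.22 * wavelength * altitude / aperture
print(f"{ground_res:.2f} m")   # ~1.77 m, consistent with ~1.5 m GSD sampling
```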

Authors: Geyl, Roland (1); Rodolfo, Jacques (1); Girault, Jean-Philippe (2)
Organisations: 1: Safran Reosc, France; 2: Safran Electronics & Defense
14:15 - 14:30
Towards the Internet of Flying Objects (ID: 309)
Presenting: Coliolo, Fiorella
(PDF )

In recent years the Apulian Aerospace Cluster (DTA - Distretto Aerospaziale Pugliese) has invested a great deal of effort in the development and deployment of an RPAS testbed inside Grottaglie Airport, which was qualified by the National Civil Aviation Authority (ENAC) as an integrated logistics platform for research, development and testing in 2014. This means that the GTA (Grottaglie Testbed Airport) represents an enormous opportunity for testing, validating and demonstrating flying assets like RPAS, HAPS and even suborbital flights: not only because of the relevant investment costs, but also because of the number of constraints and regulations involved in experimenting with RPAS in an ATM environment where the flight areas are controlled. Thanks to past, current and future projects, the GTA can provide services to simplify the different phases of testing of different platforms. Among them, the GTA aims to provide services for: • Mission Planning; • Mission Control System; • Mission Simulation Platform; • Communication Channels Emulation Platform; • Cyber Security assessment; • Mission payload data processing & post-processing. Moreover, the GTA also provides infrastructure – hangars and other facilities – as a service, together with the capability to set up and provide educational and training programs. Finally, the GTA is fully connected via the GARR Network to the RECAS supercomputer centre, capable of providing the GTA with 8,000 cores and 3,500 TB, together with a GPU-based HPC grid composed of 800 cores, each equipped with an NVIDIA K40 GPU. Considering this, the GTA represents a perfect remote companion for complementing and implementing a distributed Φ-lab vision, in order to co-create, test and experiment with innovative NewSpace EO applications and architectures composed of constellations of heterogeneous orbiting satellite platforms together with a network of other sensing and computational nodes such as HAPS, drones and in-situ sensors. Different innovative concepts of operations and technologies, from Artificial Intelligence to blockchain, from IoT integration to cybersecurity management, can then be effectively investigated to understand the possibilities provided by these new architectures. The final goal is to provide the needed information at the right time according to the new SpaceStream paradigm, overcoming the separation between data nodes, processing nodes and communication nodes to reach the goal of timeliness.

Authors: Coliolo, Fiorella (1); Acierno, Giuseppe (1); Romani, Marco (2); Abbattista, Cristoforo (2); Morsillo, Francesco (3)
Organisations: 1: Apulian Aerospace Cluster; 2: Planetek Italia s.r.l., Italy; 3: SITAEL S.p.A.
14:30 - 14:45
PandaSat – A New Constellation For Nature Conservation (ID: 312)
Presenting: Shapiro, Aurelie
(PDF )

The World Wide Fund for Nature (WWF-UK, Germany, Cameroon), Stanford University, and the University of Colorado Boulder are collaborating for the first time to build PandaSat, applying new space and engineering technologies to the most urgent conservation applications. We propose “PandaSat”, a new satellite constellation that is innovative and completely unprecedented in the conservation NGO community: WWF and its partners will be the first organization of its kind to design, develop, launch and manage its own satellite constellation and tracking system for conservation applications, from tracking endangered species to geo-locating illegal trade. PandaSat provides a new approach to wildlife tracking and monitoring for conservation applications. Existing satellite tracking and communication systems are bulky and expensive. They are often designed and planned years in advance, with relatively little technological evolution to meet changing needs. PandaSat is developing low-cost, accessible satellites and tracking tags which are open source and flexible, so that they can be multiplied to expand and adapt over time; as new technologies come online or new challenges arise, PandaSat can be fitted to conservation’s most urgent needs. PandaSat is currently in the design phase and seeking partners to support the first satellite launch by 2020.

Authors: Shapiro, Aurelie (1); Harper, Samuel (2); Manchester, Zachary (3); MacCurdy, Robert (4)
Organisations: 1: WWF-Germany; 2: Global Data Labs; 3: Stanford University; 4: University of Colorado Boulder
14:45 - 15:00
Deep Learning For Enhanced On-Board Autonomy: Earth Observation Applications (ID: 317)
Presenting: Feruglio, Lorenzo
(PDF )

Specific technological innovations are required to accomplish more ambitious commercial and scientific goals for Earth Observation missions. One of the key areas of potential innovation is mission autonomy: an increased degree of on-board autonomy helps in implementing more effective mission operations. In particular, functionalities like event detection, autonomous planning and goal management, if implemented on board, introduce several benefits to the way operations are managed: 1) spacecraft are able to focus on specific, interesting targets autonomously; 2) the decision-making loops in the Mission Control Centres, which often introduce delays, can be avoided; 3) data sent to ground can be prioritized and selected considering the objectives of the mission. The characteristic that enables the presented level of autonomy is the ability to extract useful information from the spacecraft's imaging payloads directly on board. At AIKO, we have developed an on-board deep learning algorithm to perform state-of-the-art detection on images acquired by payloads, making high-level information from payload data available on the spacecraft for enhanced operations planning.
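
Benefit 3) above, prioritizing downlink data by mission objectives, can be caricatured in a few lines: keep the highest-scoring acquisitions that fit within the pass's downlink budget. This is a conceptual sketch, not AIKO's software; all names and numbers are invented.

```python
# Greedy downlink prioritisation driven by on-board detection scores.
def select_for_downlink(acquisitions, budget_mb: float):
    """acquisitions: list of (image_id, detector_score, size_mb) tuples."""
    chosen, used = [], 0.0
    for img_id, score, size in sorted(acquisitions, key=lambda a: -a[1]):
        if used + size <= budget_mb:    # take it only if it still fits
            chosen.append(img_id)
            used += size
    return chosen

queue = [("img1", 0.92, 120), ("img2", 0.15, 80), ("img3", 0.78, 150)]
print(select_for_downlink(queue, budget_mb=300))   # ['img1', 'img3']
```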

Authors: Feruglio, Lorenzo; Franchi, Loris; Varile, Mattia
Organisations: AIKO SRL, Italy
15:00 - 15:15
A sub 5-m GSD remote sensing payload for a 3U Cubesat (ID: 320)
Presenting: Cronje, Matthys Louwrens
(PDF )

We present the development of a novel optical payload which addresses important trends in the remote sensing industry. These include the demand for higher spatial, spectral and radiometric resolution and more frequent imaging, on a more affordable platform. A key driver for this project is rapid time-to-space and the commercialization of a CubeSat payload. The modular, standardized, off-the-shelf approach of CubeSat suppliers results in a configuration for remote sensing payloads with non-optimal use of the available space. The proposed payload design is focused on making full use of the available space within a 2U form factor to integrate the optical front-end, an attitude and control system, and the focal-plane electronics. The following design philosophy and parameters were used: 1. Maximize the use of the 100x100mm opening on the satellite for the optical front-end aperture to achieve

Authors: Cronje, Matthys Louwrens (1); Du Toit, Johann (1); De Swart, Ana-Mia (1); Steyn, Hano (1); Grobbelaar, Eben (1); Kearney, Mike-Alec (2)
Organisations: 1: Simera Sense, Somerset West, South Africa; 2: CubeSpace, Stellenbosch, South Africa
15:15 - 15:30
Levers of the New Space Economy (ID: 321)
Presenting: Lausten, Kevin
(PDF )

DigitalGlobe is an active player driving success in the new space economy. Through the launch and operation of next-generation satellites, the provisioning of imagery data, derivative products and analytics tools through cloud APIs, and the development of collaborative business models, we are driving the space industry into new markets and verticals around the world. This unique blend of technical capabilities, business processes and ecosystem partnerships helps to demonstrate the opportunities, challenges and future state of the new space economy.
Imagery component: With a constellation of high-resolution multispectral imagers, DigitalGlobe delivers unique datasets for users and business partners to understand the Earth in new and innovative ways. The current constellation of EO imaging systems is used by researchers investigating coastal erosion, land cover change, vegetation health and agricultural yield estimation (to name a few). These applications are facilitated by a set of next-generation satellites operating at sub-meter resolution with multispectral capabilities that sense a wide range of surface characteristics. The future constellation will allow for intra-day revisits of sites around the world, unlocking a whole new set of use cases.
Cloud analytics: Exposing DigitalGlobe content online in an analytics-ready form, in concert with other EO data such as Sentinel-2, enables support for a range of solutions at scales not previously possible. The elastic character of the cloud, coupled with tremendous amounts of EO data, enables developers to begin experimenting with new product concepts and to unlock innovations not previously possible. In addition, cloud-based processing and storage of satellite data enables a set of new business models that widen the pool of users beyond traditional remote sensing scientists.
Collaborative business models: Partnerships are critical to developing the new space economy. A successful partnership is one where both parties achieve their desired objectives, and new business models are required to capitalize on technology innovations and achieve a set of mutually desired outcomes. DigitalGlobe has implemented a set of collaborative approaches, in place with organizations around the world, that enable third parties to build new products, services and business offerings.
New markets: Joining together a powerful EO constellation with cloud infrastructure and the business models needed to drive a collaborative ecosystem, DigitalGlobe is facilitating success in a range of new industries and markets. FinTech, insurance and telecommunications are not industries traditionally represented as core to the remote sensing space. However, in the new space economy, DigitalGlobe and its market partners are bringing innovative new solutions to these emerging markets.

Authors: Lausten, Kevin
Organisations: DigitalGlobe, United States of America

Future EO (part 4): High Altitude Pseudo-Satellites
16:00 - 18:00
Chair: Thorsten Fehr - ESA

16:00 - 16:15
Facts and Figures of Earth Observation Services from High Altitude Pseudo-Satellites (HAPS) (ID: 239)
Presenting: Gonzalo, Jesús
(PDF )

Most High Altitude Pseudo-Satellites (HAPS) are nowadays in the design and development phases. Moreover, many of them have Earth Observation (EO) as one of their target markets, where high resolution and persistent monitoring are considered quantum leaps with respect to current data providers. In parallel, the aerospace industry is preparing the development of dedicated payloads, evolving space and airborne concepts to the new operational environment. Both active and passive instruments are under study, with promising tests already performed on balloons and airplanes. This paper presents a comprehensive analysis of the performance of EO services and products, to assess the capabilities and limitations of platforms and sensors. This includes technology surveys, geometric and radiometric budgets, operational performance evaluation (e.g. revisit time), data processing and storage analysis, communication link budgets, and mass/volume estimations. Finally, a synthesis exercise based on the above results provides simplified models to preliminarily evaluate the expected performance of several kinds of instruments, together with their dimensioning figures, from the major technical and operational requirements.
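
One of the listed budgets, the communication link budget, is easy to make concrete: the free-space path loss from a 20 km platform is dramatically lower than from orbit. The altitude, elevation angle and frequency below are assumed values for illustration, not figures from the paper.

```python
# Free-space path loss: FSPL(dB) = 20 log10(d_km) + 20 log10(f_GHz) + 92.45
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# 20 km platform altitude, 45-degree elevation slant path, 2.4 GHz link:
slant_km = 20 / math.sin(math.radians(45))
print(f"{fspl_db(slant_km, 2.4):.1f} dB")  # ~129 dB, roughly 26 dB better
                                           # than a 600 km LEO pass overhead
```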

Authors: Gonzalo, Jesús; López, Deibi; Domínguez, Diego; Escapa, Alberto; García-Gutiérrez, Adrián
Organisations: University of León, Spain
16:15 - 16:30
Experience With Stratospheric Flight of Airbus Zephyr HAPS and Potential for Earth Observation (ID: 249)
Presenting: Tidswell, Roger David
(PDF )

High Altitude Pseudo-Satellites (HAPS) have the potential to be a transformative technology for Earth observation, offering a complement to existing satellite-based systems or opening up completely new capabilities and services through a combination of persistence and flexibility. Airbus Zephyr is a solar-electric HAPS UAV which is designed to complement and extend existing satellite networks and services by providing affordable, persistent, local, satellite-like capability. Zephyr offers a ‘skyhook’ to support Airbus or third-party payloads and may be operated in ‘pseudo-SAT’ mode, with a quasi-geostationary flight pattern to provide GEO-like capability, or in ‘pseudo-UAV’ mode, with a dynamic flight pattern to complement LEO missions but with more flexibility. In ‘pseudo-SAT’ mode, Zephyr offers significant potential for local persistent Earth observation applications in missions lasting 30 days or more. Coupled with change detection, this could enable early detection before events escalate, and the ability to plan and implement measures more effectively. This has application to border protection, maritime security, migration, environment and pollution monitoring, contingency planning and humanitarian aid. An overview of recent Zephyr experience with stratospheric flight is provided. This includes both initial testing of payloads using balloon launch and the recent flight of the production-model Zephyr S platform, which follows nearly 1,000 hours of flight in the development programme. The implications and potential for new capabilities and services in Earth observation are discussed.

Authors: Tidswell, Roger David; Barker, Alexandra
Organisations: Airbus, United Kingdom
16:30 - 16:45
The Role Of HAPS In Future Multilayer EO Systems. (ID: 297)
Presenting: Sills, Liam Ronald
(PDF )

We outline a future EO concept utilising current and near-term satellite, HAPS and UAV technology. These platforms have a wide range of performance characteristics and CONOPS (concepts of operations) that complement each other. Together they can provide situational awareness of locations around the globe at a range of revisit times, resolutions and data types. The flexibility of assets with complementary CONOPS gives rise to applications across a number of sectors. With three layers of EO platforms, namely:
- satellites, with global access or coverage and a range of possible resolutions;
- HAPS, equipped with payloads giving higher resolution and persistent imaging;
- drones, providing the highest resolution and access to environments at the human level;
our ability to detect and respond to specific events will be greatly increased. We report on the analysis of scenarios using current or expected system specifications, and highlight any areas where more development is required to make data collection and delivery to end users relevant and timely. HAPS in such coordinated systems play a key role in bridging the gaps in access, resolution and persistence found in current EO systems.

Authors: Sills, Liam Ronald
Organisations: SSTL, United Kingdom
16:45 - 17:00
Future Earth Observation today: high altitude balloon EO systems (ID: 298)
Presenting: Peris Marti, Izan

Stratospheric balloons are the only High Altitude Platform (HAPS) concept that is operational today. Future stratospheric drones and airships will offer enhanced capabilities to provide stationary platforms. However, both are facing the same challenge today: the lack of a test platform to mature Earth Observation payload technologies and vehicle subsystems. Zero 2 Infinity (Z2I) has developed a balloon-based test bench for these payloads and has also been working on specific HAPS payload validation missions over the past year. Additionally, Z2I is developing an agile balloon-based EO payload system.

Authors: Peris Marti, Izan; Garcia Bravo, Jose Luis
Organisations: Zero 2 Infinity S.L., Spain
17:00 - 17:15
AlphaLink: The next-generation High-Altitude Platform (ID: 300)
Presenting: Cracau, Daniel
(PDF )

AlphaLink is the next-generation High-Altitude Platform (HAP). AlphaLink is the first HAP with a modular approach, avoiding an extreme wingspan and hence reducing high structural loads. Because the joints do not transmit bending moments, the structural weight is significantly reduced. Powered by solar energy, AlphaLink can be operated continuously 365 days a year at 40° N/S latitude while carrying a distributed payload of 450 kg. Maintenance is possible during flight, as individual aircraft can be replaced, which enables real long-term operation for the very first time. AlphaLink will bring internet to remote places that currently lack sufficient infrastructure to enjoy the benefits of the World Wide Web. Operating from the stratosphere at 20 km, AlphaLink will also facilitate long-term surveillance as part of ad hoc disaster management or the monitoring of high-risk wildlife areas.

Authors: Cracau, Daniel (1); Köthe, Alexander (1,2)
Organisations: 1: AlphaLink; 2: AlphaLink, Institute of Aerospace and Aeronautics TU Berlin
17:15 - 17:30
ORCA: A New Platform for a Wide Range of Applications (ID: 307)
Presenting: Rammos, Irina
(PDF )

Various low-cost (small) satellite hardware exists for LEO constellations, yet small satellites face issues such as high manufacturing and launch costs, the creation of space debris, and short operational lifetimes. Alternative platforms are being developed, such as HAPS and RPAS. These, however, face issues of their own, such as limited coverage, the cost of manufacturing and operations, and regulatory limitations. Developed by SkyfloX, ORCA stands for Optical and RF Constellation on Aircraft. It proposes to use commercial airliners as a ‘platform’ to carry small satellite-like payloads. Several equipped aircraft form a ‘constellation’ which can support several Earth Observation and telecommunication services. ORCA is a new source of Earth Observation data, offering unprecedented revisit potential (sub-hourly when fully deployed) combined with high spatial resolution (due to lower operating altitudes compared to LEO satellites) and very low cost (no launch costs, no dedicated platform, no need for payloads with space-environment specs, and no constellation operations). It is a complementary layer of EO data: for example, the fusion of Copernicus reference EO data with the high spatio-temporal data series obtained by ORCA can create a platform with unprecedented technical capabilities, addressing current operational gaps and allowing the development of new EO applications and markets, forming a catalyst for downstream industry growth. ORCA is a highly sustainable system, as it uses existing platforms/infrastructure and does not create space debris. SkyfloX runs, as prime contractor, the ESA co-funded activity ‘Demonstration of ORCA Constellation Services’, with an airliner (TUI) and experts in aircraft certification, EO, telecoms, legal affairs and EO markets, addressing ORCA from the technical, business and legal points of view, including an initial flight test campaign, consolidating ORCA capabilities. Initial results of this activity will be presented.

Authors: Rammos, Emmanuel; Rammos, Irina; Heijmann, Tim; Rammos, Lyssandre
Organisations: SkyfloX, The Netherlands

EO Open Science

AI4EO (Part5)
09:00 - 10:30
Chairs: Patrick Helber - German Research Center for Artificial Intelligence (DFKI), Sveinung Loekken - ESA- ESRIN

09:00 - 09:20
Cognitive Discovery: Pushing the Frontiers of R&D with AI (ID: 314)
Keynote: Bekas, Costas

We are experiencing an unprecedented increase in the volume of data. Next to structured data originating from sensors, experiments and simulations, unstructured data in the form of text and images poses great challenges for computing systems. Cognitive computing aims to extract knowledge from all kinds of data sources and applies powerful big data and analytics algorithms to support decision making and create value. In this context, large-scale machine learning and pattern recognition play a central role. Advances in algorithms as well as in computing architectures are much needed in order to achieve the full potential of cognitive computing. We will discuss changing computing paradigms and algorithmic frontiers, and we will provide practical examples of recent state-of-the-art cognitive solutions in key areas such as novel materials design.

Authors: Bekas, Costas
Organisations: IBM Research, Switzerland
09:20 - 09:35
Fuelling the Artificial Intelligence Revolution with Gaming (ID: 287)
Presenting: Nardone, Carlo
(PDF )

Artificial Intelligence is impacting all areas of society, from healthcare and transportation to smart cities and energy. AI won’t be an industry; it will be part of every industry. NVIDIA invests both in internal research and in platform development to enable its diverse customer base across gaming, VR, AR, AI, robotics, graphics, rendering, visualisation, HPC, healthcare and more. This talk will introduce the hardware and software platform at the heart of this Intelligent Industrial Revolution: NVIDIA GPU Computing. It will provide insights into how academia, enterprises and startups are applying AI, as well as offer a glimpse into state-of-the-art research from worldwide labs and internally at NVIDIA, demoing examples including combining robotics with VR and AI in an end-to-end simulator to train intelligent machines. Beginners might like to try our free online 40-minute class using GPUs in the cloud: www.nvidia.com/dli

Authors: Nardone, Carlo; Lowndes, Alison B
Organisations: NVIDIA, United Kingdom
09:35 - 09:50
FDL Europe AI4EO: DISASTER RESPONSE / INFORMAL SETTLEMENTS (ID: 301)
Presenting: Parr, James
(PDF )

AI4EO: DISASTER RESPONSE. Disaster events such as earthquakes, hurricanes, and floods cause loss of human lives and substantial economic damage. Lack of information about affected communities and the level of damage restricts first-responder efforts and hinders efficient response coordination by authorities. The FDL Europe 2018 team enriched high-resolution optical satellite images with multi-temporal, low-resolution multi-spectral optical and radar satellite imagery to automate the creation of EO-based disaster impact maps for first responders, affected communities and aid/assistance coordinators.
AI4EO: INFORMAL SETTLEMENTS. One-third of the world’s urban population lives in informal settlements. People living in these areas often have no security of tenure, they often lack basic services and city infrastructure, and housing often does not comply with planning and building regulations. Yet quite often, the location and size of these settlements are simply unknown. The FDL Europe 2018 team combined EO and AI to identify and test spectral and textural networks to automate the mapping of these settlements, to enable governments, aid organisations and businesses to better protect and support these communities, enable infrastructure planning and promote long-term security.

Authors: Parr, James; Gram-Hansen, Bradley; Helber, Patrick; Azam, Faiza; Kopackova, Veronika; Bilinski, Piotr; Rudner, Tim; Bischke, Ben; Pelich, Romana; Fil, Jakub; Russwurm, Marc
Organisations: Frontier development lab, United Kingdom
09:50 - 10:05
SafeScale, a cloud agnostic management platform (ID: 129)
Presenting: Dorgan, Sébastien

SafeScale offers APIs and CLI tools to deploy versatile computing clusters that span multiple clouds. These APIs and CLIs are divided into three service layers: SafeScale Broker to manage cloud infrastructure, SafeScale Perform to manage cloud computing platforms, and SafeScale Security to secure user environments. SafeScale Broker is a truly portable IaC (Infrastructure as Code) tool to automate the creation and management of virtual infrastructure on any cloud. SafeScale Broker has been created to face the instability and heterogeneity of the cloud computing market, to help European cloud users sustain their investment in this teeming context, and to mutualize computing power so as to be able to compete with mainstream US and Chinese cloud providers. SafeScale Perform is a PaaS (Platform as a Service) system designed to provide on-demand versatile computing platforms on any cloud. Platforms deployed with SafeScale Perform can manage any type of workload (Big Data, Deep Learning, containerized applications) without needing to manage computing resources explicitly. SafeScale Security makes it possible to add encryption, accounting and authentication to cloud applications and services with minimum fuss, and provides advanced features such as User Federation, Identity Brokering and Social Login.
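
For illustration only (SafeScale's own Broker API is not reproduced here), the general idea of cloud-agnostic infrastructure provisioning can be sketched in Python with Apache Libcloud, which exposes the same calls across providers; the credentials and region below are placeholders.

```python
# Illustrative only: SafeScale's own APIs differ. Apache Libcloud shows the
# general cloud-agnostic idea: the same calls provision nodes on any provider.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def make_driver(provider, key, secret, **kwargs):
    """Return a compute driver; swapping `provider` retargets another cloud."""
    return get_driver(provider)(key, secret, **kwargs)

# Placeholder credentials; other providers (OpenStack, GCE, ...) work alike.
driver = make_driver(Provider.EC2, "ACCESS_KEY", "SECRET_KEY", region="eu-west-1")
sizes = driver.list_sizes()
images = driver.list_images()
node = driver.create_node(name="demo-node", size=sizes[0], image=images[0])
```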

Authors: Dorgan, Sébastien
Organisations: CS SI, France
10:05 - 10:20
HyperLabelMe: Benchmarking Image Classifiers (ID: 219)
Presenting: Ruescas, Ana B.
(PDF )

Classification of remote sensing data is an active field of research and one of the topics that produces the most publications per year. However, many of the contributions are based on a few samples or images, and this scarcity of accurately labeled data poses a problem for the comparison and replication of approaches and results. With the aim of addressing such problems, we introduce HyperLabelMe, a web system that allows automatic benchmarking of supervised remote sensing image classifiers. The system is available at http://hyperlabelme.uv.es. HyperLabelMe contains a large dataset of harmonized and labeled multispectral and hyperspectral data, representative and independent data from different land cover/use sets, which can be downloaded by registered users. The system provides a table containing metadata for each dataset, which are stored in simple plain compressed text files. The training datasets come in pairs -spectra + labeled land cover/use- and the test data are simply the spectra. The system provides snippets for reading the distributed data files in MATLAB, Python and R. Users must run their own classifiers off-line and submit their predictions for testing and comparison. The system reports their results computed using only the test sets: it generates the confusion matrix and its overall accuracy, the Cohen kappa coefficient, confidence intervals, significance levels, and user's and producer's accuracy. There is also a "Hall of Fame" page in HyperLabelMe, where users can compare their results, methods, and strategies against those of other users. HyperLabelMe is an easy and fair tool for the comparison of algorithms and approaches based on an open philosophy, allowing data sharing and reproducibility. The system is in its first version, and it is expected to grow with the contribution of more researchers and institutions. This leads to an increase in experimentation, and a better understanding of the performance of supervised classifiers not only in the remote sensing field, but also in the computer vision, image and signal processing, applied statistics and machine learning areas.
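
As an aside, the metrics the system reports can be reproduced locally; a minimal sketch using scikit-learn on toy label vectors (y_true and y_pred are stand-ins, not HyperLabelMe data):

```python
# Minimal sketch of the reported metrics, computed with scikit-learn.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

y_true = np.array([0, 0, 1, 1, 2, 2, 2])   # stand-in reference labels
y_pred = np.array([0, 1, 1, 1, 2, 2, 0])   # stand-in classifier predictions

cm = confusion_matrix(y_true, y_pred)       # rows: true class, columns: predicted
oa = accuracy_score(y_true, y_pred)         # overall accuracy
kappa = cohen_kappa_score(y_true, y_pred)   # chance-corrected agreement

# Producer's accuracy = per-class recall; user's accuracy = per-class precision
producers = cm.diagonal() / cm.sum(axis=1)
users = cm.diagonal() / cm.sum(axis=0)
print(cm, oa, kappa, producers, users)
```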

Authors: Muñoz-Marí, Jordi; Gómez-Chova, Luis; Ruescas, Ana Belen; Mateo-García, Gonzalo; Izquierdo-Verdiguier, Emma; Campos Taverner, Manuel; Camps-Valls, Gustau
Organisations: Image Processing Laboratory, Universitat de València, Spain

AI4EO (Part6)
11:00 - 12:30
Chairs: Sveinung Loekken - ESA- ESRIN, Pierre-Philippe Mathieu - ESA- ESRIN

11:00 - 11:15
AI4EO: Artificial Intelligence Platform for Earth Observation Data Scientists (ID: 150)
Presenting: Pomente, Andrea
(PDF )

We propose a web-based platform to bring Artificial Intelligence models directly into the browser. The use of Artificial Intelligence for Earth Observation (AI4EO) is one of the most important trends nowadays. An important growth in the number of people working in this area is likely in the near future; this creates the necessity to provide new platforms and tools aimed at simplifying access to this set of new technologies. The aim of our work is to provide a platform for remote sensing engineers and researchers in the Earth observation field who want to use new state-of-the-art Machine Learning techniques directly in their browser, without having to perform time-consuming installations or possess coding skills. The platform allows users to import and visualize raster data in a typical WebGIS environment and select the area of interest on which to perform a certain Machine Learning application. Users can take advantage of pre-trained neural networks provided by the platform itself to obtain different results such as ship detection, land cover classification, change detection, etc. The platform core is based on deeplearn.js, an open-source library powered by Google that brings machine learning to the web. This allows our users to use modern Deep Learning architectures and make predictions through WebGL GPU technology. AI4EO is fully compatible with the most common Deep Learning frameworks such as TensorFlow and Keras; this means that our users are also able to design and train custom models using their favourite framework in their local environment and subsequently import them into our proposed platform. Results obtained using a specific model can be exported in the most common formats, letting users produce graphs or perform other post-processing tasks. One of the main AI4EO perspectives is to foster the creation of an active community focused on applying these new methods to Earth observation data. To reach this objective our platform will allow the exchange of neural network architectures, designed to solve a specific problem, among the users.
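
By way of a hedged illustration (the platform's actual import format is not specified in this abstract), one common route from a locally trained Keras model to browser-ready weights is the TensorFlow.js converter:

```python
# Hedged illustration: train in Keras locally, export weights for the browser.
# The platform's own import format may differ; this shows the general pattern.
import tensorflow as tf
import tensorflowjs as tfjs

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 13)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # e.g. 10 land-cover classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
# ... train on local imagery here ...
tfjs.converters.save_keras_model(model, "./web_model")  # files served to the browser
```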

Authors: Pomente, Andrea; De Laurenti, Leonardo; Del Frate, Fabio
Organisations: University of Rome Tor Vergata, Italy
11:15 - 11:30
Machine Learning Tool for Calibration of Hyperspectral data (MATCH) (ID: 227)
Presenting: Esposito, Marco

We report on the first step performed towards the development of a framework to cross-calibrate the data produced by commercial small satellite constellations against the data produced by the Copernicus programme and other institutional reference satellites. The framework tool is called MATCH, and its development is part of ESA efforts to stimulate AI for EO within the framework of the newly established Φ-lab. MATCH will leverage the analytical power of machine learning to enable the scientific use of commercial data blended with institutional data, thus making use of the best features of the two worlds: the unprecedented high temporal resolution of data produced by large commercial constellations, and the indisputably high quality standard of the data produced by Copernicus satellites. The concept is the complementary use of constellations of small satellites with institutional satellites within a single, automated framework: combining high quality data from Copernicus with the higher revisit time and new data products offered by small satellites will significantly improve the availability of reliable and versatile remote sensing data. The hyperspectral data from space are acquired by HyperScout, while the reference high quality data are acquired by Sentinel-2A and 2B. HyperScout is the first ESA hyperspectral instrument for small satellites that has already been developed and integrated into a 6U cubesat, and it was launched on the 2nd of February 2018. We will also report on the status of HyperScout-1, currently in space, and the preliminary results of the in-orbit demonstration mission. HyperScout is the first ever miniaturized hyperspectral instrument for nanosatellites, developed by cosine under ESA GSTP contracts, producing up to 50 bands in the VNIR spectral range. HyperScout is expected to produce hyperspectral data sets all over the world, and for the initial demonstration period of 6 months cosine is leading the operations planning as well as airborne cross-calibration activities.
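
A toy sketch of the cross-calibration idea, not the MATCH implementation: learn a mapping from small-satellite spectra to coincident reference-band values over matched pixels, then apply it to new acquisitions; all arrays below are synthetic stand-ins.

```python
# Toy sketch only: regress reference-band values from small-satellite spectra
# over matched pixels, then apply the learned cross-calibration mapping.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
cubesat_spectra = rng.uniform(0, 1, size=(5000, 45))     # hypothetical 45-band pixels
reference_bands = cubesat_spectra[:, ::4] * 0.9 + 0.02   # stand-in "reference" truth

model = RandomForestRegressor(n_estimators=100)
model.fit(cubesat_spectra, reference_bands)              # learn the calibration mapping
harmonised = model.predict(cubesat_spectra[:10])         # calibrated reflectances
```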

Authors: Esposito, Marco; Vercruyssen, Nathan
Organisations: cosine measurement systems, Netherlands, The
11:30 - 11:45
Data Science for Space (ID: 284)
Presenting: Dzeroski, Saso
(PDF )

Data related to space is increasing at an astronomical rate. This includes data about Earth collected via Earth observation missions, data about celestial objects collected from Earth or space, and data about various operational aspects of different space operations. Opportunities abound to apply data science approaches to these data and unlock their potential, increasing the scientific value of the data collected or improving the engineering aspects of space operations. The talk will discuss a selection of recent developments in data science, including machine learning methods for mining big and complex data, and ontologies for data science. It will also give examples of using such methods for analyzing space-related data, including the estimation of forest properties from remotely sensed data and the analysis of data about the operations of the Mars Express Orbiter. Finally, opportunities for further use of data science approaches on space-related data will be discussed.

Authors: Dzeroski, Saso
Organisations: Jozef Stefan Institute, Slovenia
11:45 - 12:00
Artificial Intelligence for Space Operations (ID: 328)
Presenting: Donati, Alessandro
(PDF )

Artificial Intelligence has been researched and used in operations at ESOC for more than fifteen years. Here we present the summary results of research activities performed by the AI Operations Team in the areas of early detection, diagnostics, dependency analysis and planning for Earth observation spacecraft constellations. Being able to quickly identify anomalous behaviour makes it possible to reduce downtime and keep our spacecraft healthy. The current approach built into our Mission Control System is to use out-of-limits checks. However, specific behaviours are anomalous even if they are within limits. Novelty Detection makes it possible to detect new behaviours in all parameters. It automates the process of having engineers look every day at 20,000 parameters and notice new behaviours. Novelty Detection needs very little configuration and can run unattended. Novelty Detection is currently used by Flight Control Engineers to automatically detect potential anomalies. The ESA Patent Group has decided to protect this Novelty Detection monitoring paradigm by filing an international patent application. Once engineers have realized that an anomaly has happened, they face the problem of identifying the possible causes and other effects of this anomaly. In some cases, it is not possible to know if this is the first occurrence of an anomaly or if it happened before and went unnoticed. DrMUST performs pattern matching to find similar behaviours, and correlation analysis to find which other subsystems or parameters are connected with a given anomaly. DrMUST can be used not only for anomaly investigation but also to perform characterizations. DrMUST is protected by granted European and US patents. Fully understanding the relationships and interactions between thousands of parameters is a large and complex task. However, it is vital in order to make better utilization of spacecraft resources, as a reduced uncertainty allows for reduced operational margins. In addition, a good understanding of the telemetry relationships is a precondition for operations preparation. DependencyFinder automatically analyses the interactions between telemetry parameters, providing mathematical evidence of known, unknown and surprising parameter relationships. The results from the DependencyFinder have been analyzed by the Mars Express Flight Control Team. They found these dependencies very useful in confirming some patterns and interactions known and seen on the spacecraft. In particular, they found useful the ability to quantify interactions seen and known intuitively, though not previously proven. A Coverage Planning problem consists of finding a way to cover all the parts of an area of arbitrary shape. In Earth imaging applications, an EO satellite is expected to image all the points of an area, taking into account the available on-board memory and the other existing constraints. An Ant Colony Optimization (ACO) algorithm is employed to solve the planning problem: real-world ant colonies are able to find the shortest paths between their nests and a food source, using no direct communication with each other. This mechanism is called stigmergy, a means of indirect coordination of a number of individuals through their environment.
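
To illustrate the general idea only (ESA's patented Novelty Detection paradigm is not reproduced here), a minimal sketch of flagging a telemetry parameter whose recent behaviour drifts from its historical distribution while remaining within limits:

```python
# Toy illustration of the novelty-detection idea, not ESA's patented method:
# flag a parameter whose recent mean shifts away from its history, even
# though every sample would still pass an out-of-limits check.
import numpy as np

def is_novel(history, recent, z_threshold=4.0):
    """Flag recent samples whose mean deviates beyond z_threshold sigmas."""
    mu, sigma = history.mean(), history.std()
    z = abs(recent.mean() - mu) / (sigma + 1e-9)
    return z > z_threshold

rng = np.random.default_rng(1)
history = rng.normal(20.0, 0.5, size=10_000)   # nominal temperatures, in limits
recent = rng.normal(22.5, 0.5, size=100)       # still within limits, but new
print(is_novel(history, recent))               # True: worth an engineer's look
```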

Authors: Donati, Alessandro (1); Martinez Heras, Jose Antonio (2); Policella, Nicola (2); Moreira Da Silva, Jose Fernando (2); Fratini, Simone (2); Boumghar, Redouane (1); Neves Madeira, Rui Nuno (1); Ntagiou, Evridiki (3)
Organisations: 1: ESA, Germany; 2: Solenix, CH; 3: University of Surrey, UK
12:00 - 12:15
Spatio-temporal Modeling And Monitoring Of Extreme Weather Events And Conditions (ID: 139)
Presenting: Möller, Markus
(PDF )

The monitoring of extreme weather conditions and events is crucial to adapting measures for farmers, supporting decision making and refining soil policies, especially in the context of climate change. A precondition for effective monitoring is the availability of indices representing the spatio-temporal dynamics of influencing factors like precipitation, temperature or soil coverage. Against this background, we introduce the core algorithms of the „Extreme weather Monitoring and Risk Assessment“ tool EMRA, which enables dynamic geodata integration and the spatial and temporal identification of extreme weather events and conditions in Germany. Using the example of the county of Uckermark, situated in north-eastern Germany, and the crop type winter wheat, we show process chains for the derivation of dynamic weather and soil erosion indices. The algorithms couple phenological information, satellite imagery as well as daily data sets of precipitation and temperature (Gerstmann et al. 2016, Möller et al. 2017, Möller et al. 2018). The resulting database allows both (1) the localization of hot-spot parcels, which show a potentially high risk of soil erosion, and (2) the identification of extreme weather conditions like drought during sensitive growing periods of winter wheat. The modeling results act as background information for a practice-oriented online consulting tool for farmers and agricultural consultants. In doing so, the modeling results are both validated and further trained. References: Gerstmann, H., Doktor, D., Glässer, C., Möller, M., 2016. PHASE: A geostatistical model for the kriging-based spatial prediction of crop phenology using public phenological and climatological observations. Computers and Electronics in Agriculture 127, 726-738. Möller, M., Doms, J., Gerstmann, H., Feike, T., 2018. A framework for standardized weather index calculation in Germany. Theoretical and Applied Climatology, in press. Möller, M., Gerstmann, H., Gao, F., Dahms, T. C., Förster, M., 2017. Coupling of phenological information and simulated vegetation index time series: Limitations and potentials for the assessment and monitoring of soil erosion risk. CATENA 150, 192-205.
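
As a hedged illustration of the index idea, not the EMRA algorithms: a toy drought-type index counting dry days inside an assumed sensitive phenological window of winter wheat, computed from a synthetic daily precipitation series.

```python
# Toy weather-index sketch under stated assumptions, not the EMRA method:
# count dry days within a crop's sensitive phenological window.
import numpy as np

def dry_day_index(precip_mm, window, threshold_mm=1.0):
    """Count days with precipitation below threshold inside the window."""
    start, end = window
    return int((precip_mm[start:end] < threshold_mm).sum())

daily_precip = np.random.default_rng(2).gamma(0.4, 4.0, size=365)  # synthetic year
sensitive_window = (110, 150)   # hypothetical window (day-of-year) for winter wheat
print(dry_day_index(daily_precip, sensitive_window))
```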

Authors: Möller, Markus (1); Krengel, Sandra (1); Deumlich, Detlef (2); Lessing, Rolf (3); Golla, Burkhard (1)
Organisations: 1: The Julius Kuhn Institute (JKI), Germany; 2: Leibniz Centre for Agricultural Landscape Research (ZALF), Germany; 3: DELPHI InformationsMusterManagement GmbH (DELPHI IMM)

Visualisation and Science Communication
14:00 - 15:30
Chairs: Rene Schulte - Valorem, Robert Meisner - ESA

14:00 - 14:20
Beam me up, Scotty! Teleporting people and objects via 3D holographic livestreaming. (ID: 305)
Keynote: Schulte, Rene
(PDF )

Space travel and colonization present not only an engineering challenge, but a social communication and collaboration conundrum. Helping friends, families, and colleagues on Earth feel connected with their counterparts in space will be increasingly difficult with current-generation video conferencing technology. How can people and objects be magically teleported in 3D to other realities, to enable richer and more immersive communication and collaboration throughout the galaxy and beyond? What was previously science fiction is here today. In this session, Rene will talk about the aforementioned communication challenges, and our solution for immersive telepresence and collaboration, even when participants are worlds apart. Rene will be showcasing Valorem’s breakthrough 3D holographic livestreaming solution, dubbed “HoloBeam”. See how HoloBeam can capture and teleport people and objects from the physical world to the latest virtual, augmented, and Mixed Reality device hardware such as Microsoft HoloLens, with low latency and minimal bandwidth requirements.

Authors: Schulte, Rene
Organisations: Valorem
14:20 - 14:35
SAMI: High Resolution 3D Visualisation of ESA Earth Observation Satellite Missions (ID: 178)
Presenting: Pinol Sole, Montserrat
(PDF )

SAMI (SAtellite MIssion Editor & Player) is a software application for the visualization of high-resolution 3D satellite mission scenarios, distributed by the ESA-ESTEC EOP System Support Division to users who are part of the ESA Earth Observation Earth Explorer and Copernicus satellite community. SAMI is a freely available application that displays stunning high-resolution 3D and 2D scenarios of ESA Earth Observation satellites. SAMI is a response to the need to visualize ESA EO satellite mission scenarios in high resolution, including realistic satellite elements, for example orbit tracks, ground-tracks and footprints of the instruments on board. The software also highlights when the satellite enters and exits the area of visibility of the ground stations. It is also possible to trigger animations with the deployment sequence of solar arrays and antennas and to schedule thruster firing events. The time window in the application can be configured as real time or as simulated time (scene set in the past or in the future). In addition, an endless-loop simulation mode is available, with the objective of replaying a given sequence. With the editing capabilities of SAMI, the user can drive the various camera views (camera attached to the Earth or to the satellite), change the global Earth map images used as layer texture and enable or disable objects in the scene, running standalone animations to display scenes involving the ESA EO satellites, e.g. for public relations purposes. The SAMI embedded capability to export image snapshots or HD video can be exploited to share media content and enhance the demonstration of mission concepts. Another, more specific use case for this application is the playback of a scenario within a given time window to observe a particular satellite geometry, e.g. to inspect solar illumination on satellite parts, which is possible thanks to the realistic Sun illumination and shadow casting. The missions currently supported are Sentinel-1A/1B, Sentinel-2A/2B, Sentinel-3A/3B, Sentinel-5P, SWARM, CryoSat, SMOS, EarthCARE and Aeolus. The capability to seamlessly display several satellites simultaneously is one of the strongest features of SAMI. The coherence and accuracy of the orbital and geometrical calculations within the SAMI application is ensured by the use of the embedded Earth Observation CFI Software libraries (EOCFI SW). The libraries are used to obtain the satellite position, orbit ground-track, attitude and swath footprint. The application runs on desktop platforms (Mac OS X, Windows) and mobile platforms (iOS based, e.g. iPad).

Authors: Pinol Sole, Montserrat; Zundo, Michele
Organisations: ESA/ESTEC, Netherlands, The
14:35 - 14:50
High dimensional data visualization through Virtual Reality (ID: 218)
Presenting: Laurencich, Bruno

The analysis of high-dimensional datasets has become common practice in several fields of applied science. The tools for storing and processing such information have developed enormously in the last years. On the other hand, the tools for intuitively visualizing this kind of data have not evolved much beyond traditional two-dimensional techniques. In this talk we will review the state of the art of the virtual reality frameworks that are opening the possibility of a new kind of data visualization. We will put our focus on their capacity for emulating high-dimensional spaces, allowing the user to achieve new degrees of freedom in the exploration of complex datasets. The techniques described in this talk are completely open source and WebGL compatible, meaning that they can potentially be deployed on any web server and accessed from an average smartphone or tablet without the need for dedicated software or hardware.

Authors: Laurencich, Bruno
Organisations: Chordata, Italy
14:50 - 15:05
EarthStartsBeating, Communicate Science through Blog – Message in a bottle! (ID: 154)
Presenting: Piro, Alessandro

The project EarthStartsBeating (https://earthstartsbeating.com/) presented here is a science outreach website active since 2015. EarthStartsBeating's mission is to reach and stimulate the far-end users, i.e. European citizens: not the experts or the scientists, but the audience of Space and Earth Observation (EO) enthusiasts. In order to improve the communication of science to the general public, the blog promotes the use of significant Earth observation images, spanning from geological and atmospheric phenomena to changes due to human presence throughout the years. EO images of natural (or anthropic) phenomena following the most significant events occurring throughout the Earth are exploited, mainly from the Copernicus Sentinel-1, Sentinel-2, and Sentinel-3 satellites. This presentation will explore blogs as a means to communicate with the general public. In particular, the innovation of EarthStartsBeating is that it combines classical communication methods such as text with advanced design and visualization techniques. An example is provided by the creation of interactive maps of specific world regions able to show the temporal variation of a set of variables, such as winds, land surface temperature, and vegetation index, directly derived from satellite measurements. With such a tool, readers are able to easily visualize the status of the above-cited variables over the area of interest in different time periods. Blogging about eye-catching EO images gives the chance to get readers fascinated by environmental issues and curiosities; with fascination may come interest, followed by understanding and awareness of Earth observation. To increase the visibility of our website, published articles are regularly shared via the EarthStartsBeating Facebook, Twitter, and Instagram profiles.

Authors: Piro, Alessandro; Tarchini, Salvatore; Cadau, Enrico Giuseppe; Cerreti, Fiammetta; Iannone, Rosario Quirino; Marinò, Fernando; Palumbo, Giovanna
Organisations: Earth Starts Beating Team
15:05 - 15:20
EO Time Series Viewer - A QGIS Plugin to explore Earth Observation Time Series Data (ID: 136)
Presenting: Jakimow, Benjamin

Satellite missions like the Sentinels, Landsat or Pléiades provide Earth observation (EO) data with repeated and global coverage. These time series allow the identification and characterization of land cover and land use (LCLU) change, such as deforestation, burning, agricultural intensification, urbanization or long-term gradual changes of bio-physical properties. The simultaneous visualization of the different spatial, spectral and temporal domains in such time series helps to better understand change processes and is an essential step to identify and describe reference areas for model calibration and validation in many studies. Recently, several software tools have been developed to support the visualization of, and feature extraction from, time series data. Unfortunately, these are often bound to specific sensors, selected data sources, or specialized scientific workflows. They can usually not be utilized in flexible ways, and especially the simultaneous and consistent visualization of data from different sensors is often not foreseen. Our major goals were to overcome these limitations by (i) combining proven visualization concepts from professional remote sensing software and geographic information systems, (ii) providing maximum flexibility in terms of supported data sources, (iii) minimizing the user interactions required to focus on specific visualizations and to interpret the spatial, spectral and temporal domains simultaneously, and (iv) easing the derivation of reference information. Addressing users with different backgrounds and levels of experience, we developed the EO Time Series Viewer to visualize and label dense multi-sensor time series data in QGIS. Programmed in Python, it makes use of the QGIS 3, Qt5, GDAL 2.2 and PyQtGraph APIs to realize a simultaneous visualization of spatial maps and spectral profiles (2D/3D) and to extract labeled features as vector data and spectral libraries. We will demonstrate the EO Time Series Viewer along with a real use case where a multi-sensor time series consisting of Landsat 7, Landsat 8, RapidEye and Pléiades observations is loaded and interactively explored to extract reference areas and exemplary spectral profiles for a study site in the Brazilian Amazon. Keywords—Time Series, Validation, Landsat, Sentinel-2, QGIS, Python, Open Source
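
A minimal sketch of the underlying mechanics, not the plugin's code: reading the same pixel from a stack of co-registered scenes with GDAL to assemble a temporal profile, as the viewer does interactively (the file names are hypothetical):

```python
# Minimal sketch, assuming co-registered scenes: collect one pixel's spectrum
# from each date to build the kind of temporal profile the viewer plots.
from osgeo import gdal
import numpy as np

scene_paths = ["landsat8_20170101.tif", "rapideye_20170114.tif"]  # hypothetical files

def pixel_profile(paths, col, row):
    """Return a list of per-date spectra for one pixel location."""
    profile = []
    for path in paths:
        ds = gdal.Open(path)
        if ds is None:
            raise FileNotFoundError(path)
        # ReadAsArray(xoff, yoff, xsize, ysize) -> (bands, 1, 1) for multiband files
        spectrum = ds.ReadAsArray(col, row, 1, 1)
        profile.append(np.atleast_1d(spectrum.squeeze()))
    return profile

print(pixel_profile(scene_paths, 100, 200))
```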

Authors: Jakimow, Benjamin; van der Linden, Sebastian; Thiel, Fabian; Hostert, Patrick
Organisations: Geography Department, Humboldt-Universität zu Berlin, Berlin, Germany
15:20 - 15:35
Visualisation and Analysis of Climate Data in the ESA CCI Climate Analysis Tooling Environment (CATE) (ID: 169)
Presenting: Brockmann, Carsten

Climate Change is impacting our lives, tomorrow as well as today. Understanding Climate Change requires data, people, and enabling technologies. The European Space Agency (ESA) is running the Climate Change Initiative (CCI), a 140M€ programme, to provide an adequate, comprehensive, and timely response to the extremely challenging set of requirements for long-term satellite-based products for climate. One element of this programme is the CCI Toolbox, CATE, which is an enabling technology connecting scientists, decision makers and the knowledgeable public to climate data. CATE is a software environment for ingesting, operating on and visually analysing all ESA CCI datasets as well as other climate data from various sources. The toolbox works by mashing data into a common data model and letting users visualise the results of their investigations in various ways for understanding, knowledge exchange and idea sharing. CATE is open source, and its backend operating on the data is fully implemented in Python to allow users to modify and extend it. The graphical user interface (GUI) frontend uses web technologies (TypeScript, HTML5) with powerful visualisation capabilities (React, Blueprint). The GUI is designed as a native desktop application, using Electron technology for desktop operating system integration. This brings full computing power to CATE operations. A Python (RESTful) web server provides the CCI Toolbox’ WebAPI service to the GUI. The split of backend and frontend, with communication via a web service, makes CATE ready for deployment in cloud environments. In addition to the GUI, CATE provides two additional user interfaces for programmatic application, namely a command line interface (CLI) and an application programming interface (API). Workflows can be developed interactively in the GUI and executed for automation, large batch processing and near-real-time processing in these interfaces. All three user interfaces communicate with the Python core via its WebAPI. This design allows for later extensions towards a web application with possibly multiple remote WebAPI services. But CATE is more than the software: it also fosters collaboration between climate scientists through its collaboration tools and the climatetoolbox.io presence. CCI is a long-term programme assuring the sustainability of CATE, and in the roadmap section our plans for future releases are presented.
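
Purely as an illustration of the shared-WebAPI design described above (the endpoint, port and response shape below are hypothetical, not CATE's actual routes), any client could talk to such a local RESTful service like this:

```python
# Hypothetical illustration of the GUI/CLI/API pattern: all clients query the
# same local RESTful WebAPI service. Endpoint names are assumptions, not CATE's.
import requests

BASE = "http://localhost:9090"  # hypothetical local WebAPI service

resp = requests.get(f"{BASE}/datasets", params={"query": "sea surface temperature"})
resp.raise_for_status()
for entry in resp.json():       # assumed: a JSON list of dataset descriptors
    print(entry)
```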

Authors: Brockmann, Carsten (1); Fomferra, Norman (1); Zühlke, Marco (1); Smollich, Susan (1); Corlyon, Anna (2); Bernat, Chris (2); Gailis, Jänis (3); Hollmann, Rainer (4); Priemer, Vivien (4); Paul, Frank (5); Pierson, Kevin (6); Pechorro, Ed (7)
Organisations: 1: Brockmann Consult, Germany; 2: Telespazio Vega, UK; 3: s&t, Norway; 4: DWD, Germany; 5: University Zurich, CHE; 6: University Reading, UK; 7: ESA ECSAT, UK

Citizen Science
16:00 - 17:30
Chairs: Margaret Gold - European Citizen Science Association (ECSA), Diego Fernandez Prieto - ESA- ESRIN

16:00 - 16:20
The Landscape of Citizen Observatories across the EU (ID: 299)
Keynote: Gold, Margaret
(PDF )

Citizens' Observatories are defined as community-based environmental monitoring and information systems. They build on innovative and novel Earth Observation applications embedded in portable or mobile personal devices. This means that citizens can help and be engaged in observing our environment (EASME, 2016). Amongst the benefits of Citizen Observatories are that citizens’ observations, data and information can be used to complement authoritative, traditional in-situ and remote sensing Earth Observation data sources in a number of areas such as climate change, sustainable development, air monitoring, flood and drought monitoring, and land cover or land-use change (GEO, 2017); they provide new data sources for policy-making (Schade et al., 2017); and they can result in increased citizen participation in environmental management and governance at a large scale, for example public participation in the implementation of the European Flood Directive (Wehn et al., 2015). As a result, in the EU, efforts have been channeled into developing the concept of Citizen Observatories, and there are several currently in operation (e.g. Ground Truth 2.0, GROW, LandSense, Scent) that are intended to complement the EU’s Earth Observation monitoring framework, vastly increasing available in-situ or ground-based information. With the increasing prevalence of Citizen Observatories globally, there have been calls for a more integrated approach to handling their complexities, with a view to providing a stable, reliable and scalable Citizens’ Observatory programme (Liu et al., 2014). Answering this challenge in the European context, the Horizon 2020-funded project WeObserve aims to improve coordination between existing Citizen Observatories and related European activities, while tackling three key challenges that inhibit the mainstreaming of citizen science: awareness, acceptability, and sustainability. Systematically tackling these challenges first requires aggregating, building and strengthening the Citizen Observatory knowledge base. In this talk, I will present the outcomes of the first initiative to strengthen this knowledge base within the WeObserve project: a map of the EU landscape of existing Citizen Observatories and their associated networks, key stakeholders, and insights into the development, operation and challenges facing Citizen Observatories in Europe.

Authors: Gold, Margaret
Organisations: European Citizen Science Association (ECSA), United Kingdom
16:20 - 16:35
Engaging citizens in science and policy: lessons from the Ground Truth 2.0 citizen observatories (ID: 128)
Presenting: Moreno, Laura
(PDF )

The exponential rise of citizen science initiatives is often welcomed as a solution for data scarcity, for ground-truthing EO data and calibrating models, and for new forms of participation in Open Science as well as in environmental management, decision making and policy. Yet in order for citizen science initiatives to deliver on these promises, citizens (and other stakeholders) need to engage in such efforts – in the short and in the long run. Ground Truth 2.0 is developing a novel methodology for co-designing citizen observatories by involving the three main stakeholders that any citizen science project relies on: citizens, scientists and decision makers. Having these stakeholders interact right from the start of the citizen science initiative allows the creation of strong communities that work jointly and with common goals while developing and sustaining each observatory. These co-designed citizen observatories become more efficient, with stronger influence on decision making, as a result of the cooperative co-creation process. Ground Truth 2.0 has set up and is validating six co-designed citizen observatories in real-world conditions, in four European and two African demonstration cases. The project is proving that such observatories are technologically feasible, can be implemented sustainably, and have many societal and economic benefits. They are based on a common social innovation approach and rely on existing enabling technologies. The six demo cases are thematically diverse and located in different parts of the world. The Zambia CO deals with sustainable community-based natural resources management, the Kenya CO is focused on balancing livelihoods and biodiversity conservation, the Sweden CO is monitoring water quality management, the Spain CO is collecting evidence of climate change, the Netherlands CO is helping climate-proof water management, and the Belgium CO is monitoring the environmental quality of life in urban areas. In all COs, the overarching objective is to empower the role of citizens in planning, decision making and governance for improved management of the respective local environmental issues. The project aims to integrate the observatories into the Global Earth Observation System of Systems (GEOSS) (by applying a set of standards) and, ultimately, to devise a concept that permits global market uptake as well as the sustainability of the observatories. Along the way, the project is generating a rich set of experiences and lessons learned on how to reap the opportunities of citizen science and how to address the challenges.

Authors: Wehn, Uta (1); Masó, Joan (2); Pelloquin, Camille (3); Vranckx, Stijn (4); Giesen, Rianne (5); Sichilongo, Mwape (6); Cerratto-Pargman, Tessy (7); van der Kwast, Hans (1); Anema, Kim (1); Moreno, Laura (3)
Organisations: 1: IHE Delft, Netherlands, The; 2: CREAF; 3: Starlab; 4: VITO; 5: Hydrologic; 6: WWF Zambia; 7: Stockholm University
16:35 - 16:50
Complete and operational system of marine litter monitoring using EO, COPERNICUS models, and citizen/participative science (ID: 231)
Presenting: Mangin, Antoine
(PDF )

Macro-waste discharge at sea is one – if not the one – of the most important environmental issues of the present decade for the marine environment. The ocean is the ultimate unfortunate receptacle for all anthropic garbage and as such is the subject of an increasing number of environmental regulations (at local, regional and transnational levels) as well as private and associative initiatives. The discharge of macro-waste at sea is assumed to reach, in tons, the amount of living (fish) resources in the sea by the very near horizon of 2050. Seventy percent of this garbage is assumed to finish its ‘life’ on the sea floor, while 15% is floating and the last 15% reaches and pollutes coasts. Actions to be undertaken urgently (such as monitoring and forecasting) deal with many aspects of this threat: the control and reporting of macro-waste release locations and sequences, the transportation of marine litter, its sinking in the oceans and its sedimentation at coasts. For surface pollution, Earth Observation is going to play a key role in supporting these environmental and citizen concerns, but it cannot do it alone. In the frame of the COPERNICUS programme's entry into force, we will present a complete system of monitoring and reporting for the NRT monitoring and short-term forecasting of marine litter drift, using a combined exploitation of Earth Observation, hydrodynamic modelling and participative science. Earth Observation exploitation is mainly based on Sentinel-1 and 2 acquisitions; to that purpose, specific algorithms have been adapted and tuned to the detection and reporting of marine litter. Drift modelling relies on COPERNICUS marine current modelling at regional scale. Citizen observations, already in place with voluntary observers (e.g. in the Mediterranean), complete and, to some extent, validate the system. An example of a complete regional deployment, supported by the COPERNICUS uptake programme and other funding, will be presented.
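
A toy sketch of the drift-forecast component, under stated assumptions (synthetic current fields, a uniform grid, simple Euler stepping), not the operational system:

```python
# Toy drift forecast: advect floating-litter particles through a gridded
# current field with explicit Euler steps. Fields and grid are synthetic.
import numpy as np

def advect(positions, u, v, dt_s, dx_m):
    """One Euler step: positions in grid cells, u/v currents in m/s."""
    cols = positions[:, 0].astype(int)
    rows = positions[:, 1].astype(int)
    step = np.column_stack([u[rows, cols], v[rows, cols]]) * dt_s / dx_m
    return np.clip(positions + step, 0.0, 99.0)   # stay inside the toy domain

rng = np.random.default_rng(3)
u = rng.normal(0.1, 0.05, size=(100, 100))   # eastward current (m/s), toy field
v = rng.normal(0.0, 0.05, size=(100, 100))   # northward current (m/s), toy field
particles = rng.uniform(10, 90, size=(50, 2))
for _ in range(24):                          # 24 hourly forecast steps
    particles = advect(particles, u, v, dt_s=3600, dx_m=1000)
```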

Authors: Mangin, Antoine (1,2,3); Martin-Lauzer, Francois-Regis (2); Fanton d'Andon, Odile (3)
Organisations: 1: ACRI, France; 2: ARGANS, UK; 3: ACRI-ST, France
16:50 - 17:05
The LandSense Engagement Platform: Connecting citizens with earth observation data to promote environmental monitoring (ID: 135)
Presenting: Moorthy, Inian
(PDF )

The Horizon 2020 project, LandSense, is building a modern citizen observatory for Land Use & Land Cover (LULC) monitoring, by connecting citizens with Earth Observation (EO) data to transform current approaches to environmental decision making. Citizen Observatories are community-driven mechanisms to complement existing environmental monitoring systems and can be fostered through EO-based mobile and web applications, allowing citizens to not only play a key role in LULC monitoring, but also to be directly involved in the co-creation of such solutions. A critical component within the project is the LandSense Engagement Platform, a service platform comprised of highly marketable EO-based solutions that contribute to the transfer, assessment, valuation, uptake and exploitation of LULC data and related results. The platform engages citizens to view, analyze and share data collected from different citizen science campaigns and create their own maps, individually and collaboratively. In addition, citizens can participate in ongoing demonstration pilots using their own devices (e.g. mobile phones and tablets), through interactive reporting and gaming applications, as well as launch their own campaigns. This interaction is achieved by bringing together and extending various key pieces of technology like Geo-Wiki, LACO-Wiki, Geopedia, SentinelHub and the Earth Observation Data Centre. Furthermore, a key pillar of the platform is the LandSense Federation, which supports users authenticating from a variety of login providers using social media (i.e. Facebook and Google) and some 2500 academic institutions globally via eduGAIN. Such a federated approach will promote the awareness, outreach, uptake and ultimately the science of citizen science. Services and solutions from the LandSense Engagement Platform are currently deployed through a series of citizen science campaigns in Vienna, Toulouse, Amsterdam, Serbia, and Spain, covering topics such as urban greenspaces, agricultural management and bird habitat/biodiversity monitoring. The presentation will not only showcase the results from these campaigns, but also highlight how one can link to the platform to exploit its EO services and launch one's own citizen science campaigns.

Authors: Moorthy, Inian (1); See, Linda (1); Batič, Matej (2); Matheus, Andreas (3); Milčinski, Grega (2); Fritz, Steffen (1)
Organisations: 1: International Institute for Applied Systems Analysis, Laxenburg, Austria; 2: Sinergise, Ljubljana, Slovenia; 3: Secure Dimensions GmbH, Germany
17:05 - 17:20
Improving Crisis Event Management through EO and Citizens’ Voluntary Engagement (ID: 118)
Presenting: Duro, Refiz
(PDF )

Managing natural or human-made crisis events involves making decisions based on information gathered directly from the field via reports from responders, in combination with technologies delivering images and other satellite data. For efficient crisis and disaster management (CDM), this information needs to be available in a timely manner and be as accurate as possible, meaning that current state-of-the-art technologies need to be intelligently selected and combined to meet such requirements. We present such an approach, in which semi-automated analytics using a combination of very high-resolution (VHR) Earth Observation (EO) imagery, crowdsourcing technologies and crisis mapping components are applied to improve and optimize the efficiency of crisis management before, during and after disaster events [1, 2]. VHR imagery provides swath widths that cover large spatial areas with a pixel resolution at the sub-meter level; however, critical pieces of information will be lacking due to, e.g., viewing angles and missing ground truth information. Crowdsourcing and crowdtasking activities specifically [3, 4], in which volunteers/citizens are assigned tasks to execute (e.g., provide photos from a specific location, confirm/reject information from EO), as well as community outreach and participatory mapping technologies, help to bypass these limitations. Confirmation of damages, improved quality of information, and accuracy are all benefits of combining satellite and crowdsourcing/tasking activities in CDM. The approach involving the mentioned technologies will be implemented in an earthquake scenario during the Taiwanese National Disaster Prevention Day (in September). Future steps involve adjustments and implementations in other scientific and business domains and applications as well, where ground-truth data is critical in assisting remote sensing assessments (e.g., land use, urban heat islands, water quality, etc.) [5]. [1] http://quinjunsat.info [2] M. Gähler, “Remote Sensing for Natural or Man-made Disasters and Environmental Changes,” 2016. [3] G. Neubauer et al., “Crowdtasking – A New Concept for Volunteer Management in Disaster Relief,” in: Environmental Software Systems. Fostering Information Sharing, 2013. [4] D. Auferbauer, “Centralized Crowdsourcing in Disaster Management: Findings and Implications,” 2017. [5] S. Fritz, C. Fonte, and L. See, “The Role of Citizen Science in Earth Observation,” Remote Sens., vol. 9, no. 4, p. 357, Apr. 2017

Authors: Duro, Refiz (1); Klug, Christoph (2); Sturm, Kevin (2); Chuang, Kuo-Yu (3); Kutschera, Peter (1); Schimak, Gerald (1); Auferbauer, Daniel (1); Sippl, Sebastian (1); Ruggenthaler, Christoph (1)
Organisations: 1: AIT Austrian Institute of Technology GmbH; 2: GeoVille Information Systems GmbH; 3: GeoThings Inc.

Women in Science@Phiweek
09:00 - 10:30
Chairs: Jennifer Adams - ESA- ESRIN, Sara Aparício - European Space Agency

09:00 - 09:10
Women in Science@Phiweek Introduction (ID: 377)
Presenting: Adams, Jennifer

Diversity in science and high-tech is a key issue. This salon will address it and highlight how, despite being highly under-represented in the field, women are beating the odds. In an informal atmosphere with refreshments, the salon will showcase how women are inspiring the space field and discuss how to stimulate more diversity in ESA.

Authors: Adams, Jennifer
Organisations: ESA- ESRIN, Italy
09:10 - 09:30
Breaking The Glass Ceiling for Women in the World Of Government and Commercial Space (ID: 397)
Keynote: Lukaszczyk, Agnieszka

The space sector has traditionally been seen as a domain dominated by men. This doesn’t mean that women have not greatly contributed to the space endeavor; however, the recognition of their efforts and their leadership opportunities have been quite limited throughout the space age. Conditions have been improving, but there is still a long way to go. Although organizations such as the United Nations, the Space Generation Advisory Council and Women in Aerospace have played an important role in promoting women and young professionals in the space sector, many women still experience the effect of the glass ceiling at various steps of their career. As a woman in the space sector who, for many years, has worked with young space professionals throughout the globe, I would like to share my experience as well as lessons learned, in order to provide some food for thought with regard to breaking the glass ceiling for women in the world of government and commercial space.

Authors: Lukaszczyk, Agnieszka
Organisations: Planet, Germany
09:30 - 09:45
Multimedia exhibition « Space Girls, Space Women : Space as seen by Women » (ID: 304)
Presenting: Coliolo, Fiorella

Produced by Sipa Press, the multimedia exhibition Space Girls Space Women is a tribute to the women who are today at the heart of the space adventure. A team of female reporters met them around the world, from the Atacama Desert to Munich, from Moscow to Bangalore. The photo exhibition, translated into several languages, is regularly enriched with new profiles of women and comes with video testimonies and a multimedia app. This project is supported by ASI, CNES, ESA, Universcience, la Cité de l’Espace, NEREUS, the GSA and WIA-E (more info: www.spacewomen.org). Based on that experience, this panel presentation will be an opportunity to discuss with the audience – and with the testimony of scientific women – a new multimedia exhibition focusing on Women in EO. That new project aims to: attract the young to STEM studies; support women in leadership positions; show the different opportunities of space careers in EO; make the general public more familiar with EO topics, from services to applications; and raise awareness about European EO – and space in general – programmes and activities.

Authors: Coliolo, Fiorella
Organisations: Astronomer, curator & co-producer, Exoworld - Sipa Press, Italy
09:45 - 10:00
Why not defy statistics working on deep learning? (ID: 414)
Presenting: Ruiloba Quecedo, Rosario

The latest statistics still show that women are under-represented in the sciences, including the space domain, the only exception being the medical field. In 2015 in France, less than 30% of students in engineering schools were women (1), and these figures decreased further in 2018 (2). Concerning management positions, women are gradually gaining representation on Executive Committees (ECs) in Fortune Global 100 companies, but are still a small minority (3). Women hold 15% of EC positions in Europe (4), and only 24% of senior roles globally are held by women; nevertheless, in Europe 46% of the workforce are women (5). The International Business Report (IBR) Women in Business 2018 (6) concludes: « The percentage of businesses around the world with at least one woman in senior management has increased significantly, rising from 66% to 75% in the last year. But at the same time the proportion of senior roles held by women has marginally declined. » These percentages are 73% and 27% in the EU, and 79% and 33% in France. Pay gaps, stereotypes and difficulties in implementing laws limit the progress of women's representation in management roles. Nevertheless, some experiences defy the statistics. The story of one of these experiences – fortunately, not a singular one (7) – will be presented. Three women, each working in space activities for at least 15 years, have taken three senior roles in a recently created company: a managing director, a senior project manager and a business developer. They are accompanied by men and women not scared by their gender, their 8 children or their family constraints, but confident in their skills and values. This is just one example to share, showing how gender does not matter and should not be a drag on career development and project achievement. This business adventure has just started. The team will develop its activities in Deep Learning, whose implementation for EO issues remains a challenge. Deep Learning methods provide a powerful analysis tool for scientists in the domain of climate change observation and natural process understanding. They open the possibility of a systematic analysis of available HR data, providing useful information for administration and general-public applications (traffic and infrastructure management, economic development in local regions, etc.). But these algorithms face several challenges: the need for combined expertise in methodology (AI) and thematic applications; in terms of implementation, the need for large amounts of ground truth (training data) for model learning; and finally, the need to overcome the common reticence of a scientific community attached to the physical understanding of the data and to strong requirements on data quality (L1, L2 products). The difficulties and possibilities opened by these methods will be presented and discussed. 1 https://www.orientation-education.com/article/infographie-les-femmes-sont-plus-diplomees-que-les-hommes 2 http://focuscampus.blog.lemonde.fr/2018/03/10/ingenieur-au-feminin-mais-quattendent-donc-les-filles/ 3 20-first’s 2018 Global Gender Balance Scorecard, https://20-first.com/thinking/ 4 https://20-first.com/wp-content/uploads/2018/10/2018-Scorecard_France.pdf 5 “Employment by Sex, Age and Economic Activity (From 2008 Onwards, NACE Rev. 2)”, http://appsso.eurostat.ec.europa.eu/nui/show.do?dataset=lfsq_egan2&lang=en 6 Women in Business: Beyond Policy to Progress (2018), https://www.grantthornton.global/en/insights/articles/women-in-business-2018-report-page/ 7 https://www.researchgate.net/publication/286754729_Women_On_French_Corporate_Board_Of_Directors_How_Do_They_Differ_From_Their_Male_Counterparts

Authors: Ruiloba Quecedo, Rosario; Audenino, Pauline; Fernandez-Martin, Christine
Organisations: Agenium Space, France
10:00 - 10:15
LiveEO – Delivering Answers From Above (ID: 294)
Presenting: Scholz, Nastasja Anais

Ongoing changes in the global climate system are expected to bring increases in the rate and severity of storms and floods, as well as a rise in the number of landslide events. Europe, with its dense network of railways, oil & gas pipelines and electricity grids, is becoming more and more vulnerable to natural and climate hazards. In Germany, which has a total network length of more than 500,000 km, 50% of the train tracks go through forested areas; additional tracks pass by single, wind- and weather-exposed trees. €125 million are spent each year on vegetation management, yet train collisions with trees still reach several hundred incidents every year. This number is likely to increase with the observed and predicted changes in climate patterns. To monitor the extensive grids and properly assess the possibility of future damage, fast, frequent and thorough analyses are needed. LiveEO, a young company that combines the expertise of aerospace and business engineers, computer scientists, geoinformatics specialists and geoscientists, draws from the vast potential of an increasing amount and diversity of Earth observation data to support the inspection, monitoring and maintenance of Europe’s networks. LiveEO delivers mission-specific and mission-critical information in real time. This is achieved by the application of an automated and mission-optimized combination of freely available Copernicus optical and synthetic aperture radar satellite imagery with target-specific commercial Earth observation and unmanned aerial vehicle (UAV) data. A modular approach based on scalable cloud solutions allows for a flexible and rapid response to specific customer needs and enables LiveEO to quickly link its services with those of its business partners. Adding real-time data on, e.g., traffic, weather or network usage, as well as ground-truth data on critical parameters, enables a machine learning-based classification of risk factors and the derivation of appropriate risk models. The fully automated analysis of different satellite and drone data makes LiveEO the first private company specialised in real-time Earth observation, replacing tedious, time- and cost-intensive monitoring of infrastructure grids with a more efficient, more accurate, more frequent, more economical and fully automated approach, while also minimizing human error. LiveEO will not only reduce damage along infrastructure grids but also deliver forecasts for areas at risk, in order to mitigate greater physical and economic damage. Always keeping an eye on technological developments, LiveEO will constantly improve and expand its services in order to address societal and environmental challenges.

Authors: Scholz, Nastasja Anais
Organisations: LiveEO, Germany
10:15 - 10:30
PyrSat – Prevention and response to wild fires with an intelligent Earth observation CubeSat (Women in Science) (ID: 374)
Presenting: Estébanez Camarena, Mónica

Forest fires are a pervasive and serious problem worldwide. Besides loss of life and extensive environmental damage, fires also result in substantial economic losses, not to mention property damage, injuries, displacements and hardships experienced by the affected citizens. Missions such as MODIS and SPOT VEGETATION have proven multispectral and hyperspectral Earth observation to be of great use for fire-related applications, providing rich information in a wide range of the electromagnetic spectrum. At the same time, CubeSats are starting to be used in numerous low-cost Earth observation applications. However, the usual size of hyperspectral sensors, together with the vast amount of information to be downloaded and the downlink limitations of nanosatellites, normally restricts this capability to larger and more costly satellites. Furthermore, the large data volumes require high-performance antennae on the ground, and highly skilled image processing experts are required to process the images in order to extract useful information products for end users. All these requirements limit the reach of the technology to a reduced number of users. This project proposes a hyperspectral 3U CubeSat space mission for low-cost, direct-to-ground applications. The main novelties of the proposed mission are the full use of low-cost, commercially available COTS components for the CubeSat subsystems, the use of open-source tools and readily available single-board computing platforms, and the on-board autonomous generation of the mission's final data product. The proposed satellite can be built for under $100,000. The hyperspectral images will be autonomously pre-processed and classified on board using Machine Learning algorithms. The final product will be compressed and georeferenced vegetation fire-risk and burnt-area maps, delivered directly to users on the ground. These maps will take the form of a GIS layer in order to be directly integrated with other geographic information. This can be of special interest for specific key locations such as hospitals, schools or airports, or important corridors, such as railways, major roads or power lines. Used in combination with other services such as the European Forest Fire Information System (EFFIS) or the Advanced Fire Information System (AFIS), the system could considerably reduce the extent and consequences of forest fires.
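
A hedged sketch of the on-board classification step, not the PyrSat flight software; cube dimensions, classes and training data below are stand-ins:

```python
# Hedged sketch: apply a lightweight, pre-trained classifier pixel by pixel
# to a hyperspectral cube and emit a compact fire-risk class map.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

bands, h, w = 40, 128, 128                      # hypothetical cube dimensions
cube = np.random.default_rng(4).uniform(0, 1, size=(bands, h, w))

# Stand-in training data; in practice the model would be fitted on the ground
# and only the trained classifier uploaded to the satellite.
X_train = np.random.default_rng(5).uniform(0, 1, size=(1000, bands))
y_train = np.random.default_rng(6).integers(0, 3, size=1000)  # 0 low / 1 med / 2 high
clf = RandomForestClassifier(n_estimators=20).fit(X_train, y_train)

pixels = cube.reshape(bands, -1).T              # (h*w, bands)
risk_map = clf.predict(pixels).reshape(h, w).astype(np.uint8)  # compact product
```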

Authors: Estébanez Camarena, Mónica
Organisations: University of Cape Town, South Africa
10:30 - 10:45
Assemble the right crew for your mission: How to reduce workplace bias to find the right people (ID: 417)
Presenting: Sudbury, Hayley

We can't always see the skills people bring to the table. How can we transform who we see and what we do in the workplace to select the right people for projects, promotions and other processes? Hayley walks through how everyday technology can help reduce biases and blind spots in the workplace to assemble diverse teams.

Authors: Sudbury, Hayley
Organisations: WERKIN, United Kingdom
10:45 - 11:00
Data Science in Earth Observation (ID: 416)
Presenting: Zhu, Xiaoxiang

Geoinformation derived from Earth observation satellite data is indispensable for many scientific, governmental and planning tasks. Geoscience, environmental sciences, cartography, resource management, civil security, disaster relief, as well as planning and decision support are just a few examples. Furthermore, Earth observation has irreversibly arrived in the Big Data era, e.g. with ESA’s Sentinel satellites and with the blooming of NewSpace companies. This requires not only new technological approaches to manage and process large amounts of data, but also new analysis methods. Explorative signal processing and machine learning algorithms, such as compressive sensing and deep learning, have been shown to significantly improve information retrieval from remote sensing data, and consequently lead to breakthroughs in geoscientific and environmental research. In particular, by fusing petabytes of EO data, from satellite to social media, with sophisticated data science algorithms, it is now possible to tackle unprecedented, large-scale, influential challenges, such as the mapping of global urbanization — one of the most important megatrends of global change.

Authors: Zhu, Xiaoxiang
Organisations: DLR & TUM, Germany

FDL Europe ESA AI4EO Accelerator
14:00 - 15:30

14:00 - 14:15
AI4EO: Towards a Mission Control for Planet Earth (ID: 335)
Presenting: Parr, James

The advent of spacecraft systems such as Sentinel-2 and Planet's Dove constellation allows us, in theory, to understand the changing nature of the Earth as never before. However, the bottleneck remains the ability to quickly make sense of this new data availability, both in terms of refresh rates and heterogeneity, particularly when paired with ground data (such as mobile phone, drone and social media data). How can these awesome new capabilities come together to provide a useful way of observing our entire planet dynamically? One notion is the concept of a ‘Mission Control’ for Planet Earth, where AI workflows automate the task of large-scale multispectral data fusion and change detection. A ‘Mission Control’ for serving the world's poorest is thus enabled by AI workflows automating the task of identifying informal structures using spectral data from ESA's Sentinel spacecraft and other EO assets. This solution requires a number of innovations in transfer learning, to account for the changing structural styles in regions around the world, as well as innovations in AI workflows. In this session we will workshop the possibility of a Mission Control for Earth.

Authors: Parr, James
Organisations: FDL Europe, United Kingdom
14:15 - 14:30
FDL Europe AI4EO: Disaster Response (ID: 371)
Presenting: Parr, James William

Disaster events such as earthquakes, hurricanes, and floods cause loss of human lives and create substantial economic damage. Lack of information about affected communities and the level of damage restricts first-responder efforts and hinders efficient response coordination by authorities. The team developed a novel approach to performing rapid segmentation of flooded buildings by fusing multiresolution, multisensor, and multitemporal satellite imagery in a convolutional neural network (CNN). This method significantly expedites the generation of satellite imagery-based flood maps, which are crucial for first responders and local authorities in the early stages of flood events. By incorporating multitemporal satellite imagery, rapid and accurate post-disaster damage assessments can be obtained, helping governments to better coordinate medium- and long-term financial assistance programs for affected areas. The method consists of multiple streams of encoder-decoder architectures that extract temporal information from medium-resolution images and spatial information from high-resolution images before fusing the resulting representations into a single medium-resolution segmentation map of flooded buildings. The method compares favorably with (and exceeds) state-of-the-art models for building footprint segmentation, as well as alternative fusion approaches for segmentation of flooded buildings; moreover, it performs well using only freely available medium-resolution data instead of the significantly more detailed (and expensive) very-high-resolution data used in previous methods.

Authors: Parr, James William; Bischke, Ben; Rudner, Tim; Pelich, Ramona; Fil, Jakub; Russwurm, Mark
Organisations: FDL Europe, United Kingdom
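
The multi-stream fusion idea can be made concrete with a small sketch. The following is a minimal, assumed architecture (not the team's exact network): two encoder-decoder streams, one for a medium-resolution multitemporal stack and one for a high-resolution image, fused into a single medium-resolution segmentation map; shapes and channel counts are illustrative.

```python
from tensorflow.keras import layers, Model

def encoder_decoder(x, filters=32):
    """A deliberately tiny encoder-decoder with one skip connection."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    skip = x
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(filters * 2, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(2)(x)
    return layers.Concatenate()([x, skip])

# Stream 1: a medium-resolution multitemporal stack (e.g. 10 dates as channels).
med_in = layers.Input((64, 64, 10))
med_feat = encoder_decoder(med_in)

# Stream 2: a high-resolution image, pooled down onto the medium-resolution grid.
hi_in = layers.Input((256, 256, 3))
hi_feat = layers.AveragePooling2D(4)(encoder_decoder(hi_in))   # 256 -> 64

fused = layers.Concatenate()([med_feat, hi_feat])
out = layers.Conv2D(1, 1, activation="sigmoid")(fused)   # flooded-building mask

model = Model([med_in, hi_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```
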
14:30 - 14:45
FDL Europe AI4EO: Informal Settlements (ID: 372)
Presenting: Parr, James William

One-third of the world's urban population lives in informal settlements. People living in these areas often have no security of tenure, often lack basic services and city infrastructure, and their housing does not often comply with planning and building regulations. Yet quite often, the location and size of these settlements are simply unknown. Detecting and mapping informal settlements touches on several of the United Nations Sustainable Development Goals, because informal settlements are home to the most socially and economically vulnerable people on the planet. Thus, understanding where these settlements are is of paramount importance to both government and non-governmental organisations (NGOs), such as the United Nations Children's Fund (UNICEF), who can use this information to deliver effective social and economic aid. However, data regarding informal and formal settlements is largely unavailable and, where available, often incomplete, owing to the cost and complexity of gathering it on a large scale. The team developed an effective end-to-end framework that detects and maps the locations of informal settlements using only freely available Sentinel-2 satellite imagery with noisy annotations. This is in contrast to previous studies that only use costly very-high-resolution (VHR) satellite and aerial imagery. The research also demonstrated a deep learning approach to detect informal settlements with VHR imagery for comparative purposes. In addition, it was shown how AI approaches can detect informal settlements by combining domain knowledge and machine learning techniques to build a classifier that looks for known roofing materials used in informal settlements.

Authors: Parr, James William; Gram-Hansen, Bradley; Helber, Patrick; Azam, Fazia; Varatharajan, Indhu
Organisations: FDL Europe, United Kingdom
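
As a minimal sketch of the "domain knowledge plus machine learning" idea (illustrative only, not the team's classifier): compute spectral indices that respond to vegetation, built-up surfaces and roofing materials from Sentinel-2 bands, then train a supervised classifier on (noisy) labels.

```python
# Illustrative sketch: spectral indices as domain knowledge, RF as the learner.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(s2):
    """s2: dict of Sentinel-2 surface-reflectance band arrays, same shape."""
    ndvi = (s2["B08"] - s2["B04"]) / (s2["B08"] + s2["B04"] + 1e-6)   # vegetation
    ndbi = (s2["B11"] - s2["B08"]) / (s2["B11"] + s2["B08"] + 1e-6)   # built-up
    roof = s2["B11"] / (s2["B02"] + 1e-6)  # crude stand-in for a roofing signature
    return np.stack([ndvi, ndbi, roof], axis=-1).reshape(-1, 3)

# Placeholder data; real X/y would come from imagery and rasterised annotations.
rng = np.random.default_rng(0)
bands = {b: rng.random((100, 100)) for b in ("B02", "B04", "B08", "B11")}
X = features(bands)
y = rng.integers(0, 2, X.shape[0])             # noisy settlement / non-settlement

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
settlement_mask = clf.predict(X).reshape(100, 100)
```
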

AI4EO R&I Session (summary )
16:00 - 17:30
Chairs: Sveinung Loekken - ESA- ESRIN, Pierre-Philippe Mathieu - ESA- ESRIN


Hands-on Delay Doppler Altimetry Studio
09:00 - 10:30

Delay Doppler Altimetry Studio – Where you can customise your own data processing (ID: 296)
Presenting: Cotton, David

The new technique of Delay Doppler altimetry opens up a new range of exciting options for bespoke processing, selecting different processing options according to the surface or problem being studied. In the DeDop project, isardSAT and Brockmann Consult, together with a group of scientists in a larger consortium, have developed an interactive tool that allows the user to select input data, choose and run processing options, and immediately query and view the results. This introduces a new paradigm in altimeter processing, allowing a much more direct and immediate interaction with the processor. Previously, the approach was to provide a single product, produced by a selected developer, with periodic updates implemented over a long time scale. The DeDop tool comprises both a command-line interface, the DeDop Shell, and a graphical user interface, the DeDop Studio. The tool's primary goal is to be easy to configure and run and to provide a number of analysis functions to inspect and compare the L1B results. The tool's target users are community scientists wishing to learn, modify or extend the DeDop processor configuration and/or code, and then use the tool for comparisons between outputs of the varying configurations generated by DeDop. The tool comprises two components: DeDop Studio and DeDop Core. DeDop Core consists of the DeDop processor, the DeDop Shell, and the DeDop web API. With the DeDop Shell, users can perform all operations from a command-line interface. With DeDop Studio, users can perform the same operations as in the DeDop Shell (modifying and writing configurations, reading L1A data, etc.), with the Studio ultimately invoking the DeDop processor in DeDop Core via the web API. For this “hackathon” we invite participants to use the DeDop tool on a number of different scientific problems, processing over ice, inland waters, icebergs and open sea, and testing the impact of different processing approaches in terms of their ability to retrieve the desired parameters. Because of the quick response and easy configurability of the processor, we hope to introduce the fun of altimetry processing to a new group of young scientists and technicians, perhaps not experts in programming or altimeter processing, but interested in learning some new skills. And, if you want to try something completely different, we offer DeDopFX: an experimental (read: fun) tool which converts satellite altimeter measurement data into sound. Currently it can transform the L1B data from the SRAL sensor mounted on the ESA Sentinel-3 satellite into audio samples. DeDopFX can play the L1B NetCDF output files from the DeDop processor or the SRAL sample files from the Sentinel-3A Altimetry Test Data Set.

Authors: Cotton, David (1); Roca, Mònica (2); Fomferra, Norman (3); Permana, Hans (3); Pattle, Mark (2)
Organisations: 1: SatOC Ltd, United Kingdom; 2: isardSAT Ltd, UK; 3: Brockmann Consult, Germany
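
For participants who want to look at the processor output outside the Studio, a minimal sketch along the following lines inspects a DeDop L1B NetCDF file; the waveform variable name is an assumption, so list the file's variables (as the first line inside the block does) to find the actual names in your product.

```python
# Assumed variable name below: check the printed keys for the actual names.
import netCDF4
import matplotlib.pyplot as plt

with netCDF4.Dataset("dedop_output_L1B.nc") as nc:
    print(list(nc.variables))                          # discover what is inside
    wf = nc.variables["i2q2_meas_ku_l1b_echo_sar_ku"][:]  # assumed waveform field

plt.imshow(wf, aspect="auto", origin="lower")          # radargram view
plt.xlabel("range bin"); plt.ylabel("record")
plt.title("DeDop L1B waveforms")
plt.show()
```
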

Demo: Linked EO Data
11:00 - 12:30

Hands-on: Enabling Downstream Service Providers using Linked-Open Data (LOD) and Distributed EO value-chains (ID: 338)
Presenting: Venus, Valentijn

This session will explore how to use distributed data access in Web-semantically enriched processing workflows [https://analytics.ramani.ujuizi.com] to facilitate access to and exploitation of multivariate EO data sets and to publish results to native mobile and web applications. Organisers: Valentijn Venus (RAMANI B.V.), Sam Ubels (RAMANI B.V.)

Authors: Venus, Valentijn
Organisations: RAMANI B.V., Netherlands, The

Amazon Web Services for Earth
14:00 - 15:30

Earth on AWS (ID: 365)
Presenting: Flasher, Joe

Enterprises, non-profits, and startups around the globe are using the cloud to accelerate innovation in geospatial workflows in order to respond to natural disasters, power precision agriculture, plan city infrastructure, provide weather forecasts and serve myriad other purposes. This session, powered by Amazon Web Services, will include presentations and discussions from experts covering how the scale and performance of AWS, coupled with petabytes of data staged for analysis in Amazon Simple Storage Service (Amazon S3), allow for an unprecedented opportunity to drive geospatial workflows. Presentations from Sinergise, FrontierSI, Element 84, the Pangeo project, EOX, UK Met Office, e-GEOS, FAO, Development Seed and Zooniverse.

Authors: Flasher, Joe
Organisations: Amazon Web Services
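
As a minimal sketch of the access pattern discussed (assuming the public requester-pays sentinel-s2-l1c bucket and its tile-prefix layout; consult the Earth on AWS registry for current details), one Sentinel-2 band can be streamed directly from Amazon S3 without downloading the whole product:

```python
# Requires AWS credentials; the bucket is requester-pays, so you are billed
# for the transfer. The object key below is illustrative.
import rasterio
from rasterio.windows import Window

path = "s3://sentinel-s2-l1c/tiles/32/T/QM/2018/11/12/0/B04.jp2"

with rasterio.Env(AWS_REQUEST_PAYER="requester"):
    with rasterio.open(path) as src:
        print(src.crs, src.width, src.height)
        red = src.read(1, window=Window(0, 0, 1024, 1024))  # stream one chunk

print("mean reflectance (DN):", red.mean())
```
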

Amazon Web Services for Earth
16:00 - 17:30

Earth on AWS (ID: 406)
Presenting: Flasher, Joe

Enterprises, non-profits, and startups around the globe are using the cloud to accelerate innovation in geospatial workflows in order to respond to natural disasters, power precision agriculture, plan city infrastructure, provide weather forecasts and serve myriad other purposes. This session, powered by Amazon Web Services, will include presentations and discussions from experts covering how the scale and performance of AWS, coupled with petabytes of data staged for analysis in Amazon Simple Storage Service (Amazon S3), allow for an unprecedented opportunity to drive geospatial workflows. Presentations from Sinergise, FrontierSI, Element 84, the Pangeo project, EOX, UK Met Office, e-GEOS, FAO, Development Seed and Zooniverse.

Authors: Flasher, Joe
Organisations: Amazon Web Services, United States of America

DIAS sobloo Hands-on
14:00 - 15:30

14:00 - 15:30
DIAS sobloo Showcase (ID: 390)
Presenting: Avargues, Christophe

Beyond the data: creative grounds. This interactive demonstration will show how to get the most benefit from the sobloo infrastructure, environment and toolbox to create thematic services and extract added-value information based on Copernicus data. Through different use cases, our team of experts will show you how easy it is to create, process and develop using our secure cloud environment. The session will also be a unique opportunity to discover our initiatives related to the development of thematic services. We will showcase a first example with the sobloo crop profiles monitoring service, which will provide time series of satellite-derived vegetation maps for the EU Common Agricultural Policy (CAP).

Authors: Avargues, Christophe
Organisations: Airbus Defence and Space, France

DIAS CREO Hands-on
16:00 - 17:30

Developing Copernicus based geoanalytical services in CREODIAS with Hexagon Smart M.App technology (ID: 332)
Presenting: Zotti, Massimo

The recent launch of Data and Information Access Services (DIAS) platforms at the end of June 2018, providing unlimited, free access to Copernicus data and information access services, made it easier for users to develop Copernicus-based applications and services that provide the added value of combining EO technologies with other data sources, across different market segments. CREODIAS was one of the four industry consortia awarded by ESA to develop DIAS platforms. The CREODIAS consortium is led by the Polish company Creotech Instruments and also includes CloudFerro, WIZIPISI (Wroclaw Institute of Spatial Information and Artificial Intelligence), Sinergise, Geomatis, and Eversis. CREODIAS operates a large cloud IT infrastructure, provided by CloudFerro, optimized to browse, search, deliver and process large amounts of EO data. The storage capacity includes up to 30 PB for EO open data, supplemented on demand by other complementary data sets. This vast repository will be co-located with a dedicated IaaS cloud modular infrastructure for the platform's users, allowing customized processing activities to be established in close proximity to the stored data. CREODIAS storage is synchronized with the main ESA repositories, so the data acquired by the Copernicus Hub and contributing missions becomes available within a few hours after its publication by ESA. In order to provide state-of-the-art technologies that facilitate the development of end-user applications and services, the CREODIAS consortium has established a close cooperation with Hexagon Geospatial to deploy M.App Enterprise and other M.App Portfolio products, such as M.App X, from the CREODIAS front-office. This cooperation opens the possibility for CREODIAS users to create value-added EO-based information services built on Hexagon's M.App Portfolio technology. M.App Enterprise complements the CREODIAS platform, providing companies looking to create innovative applications on top of Copernicus data with a user-friendly, low-code development environment for building scalable and lightweight vertical applications, coined by Hexagon as “Hexagon Smart M.Apps”, which apply sophisticated geospatial analytics and tailored workflows to multi-source content within an intuitive and dynamic user experience. Planetek Italia is the first company taking advantage of this optimized environment, deploying Hexagon Smart M.Apps based on Rheticus® services from the CREODIAS platform. Rheticus® is a collection of applications designed by Planetek Italia that provides subscription-based monitoring services, transforming changes detected on the Earth's surface into analytical information to drive timely decisions. Leveraging Planetek's remote sensing expertise and Hexagon's platform capabilities, the delivery of Rheticus monitoring services as Hexagon Smart M.Apps provides dynamic mapping and in-depth geospatial analytical capabilities, offering timely insights on infrastructure stability and Earth surface displacement to subscribing organizations. The satellite data captured by the Copernicus Sentinel satellites form the basis of the monitoring services provided through Rheticus. The main applications of these services are the monitoring of urban dynamics and land use changes, Earth surface movements (landslides and subsidence), stability of infrastructures, areas under construction and new infrastructures, areas affected by forest fires, marine water quality and aquaculture.
During the workshop, users will be guided through the creation of different web applications for the processing of Sentinel-2 data using the capabilities of the Hexagon M.App Portfolio available on CREODIAS, specifically: segmentation of Sentinel-2 data using OpenStreetMap data (first day); and classification of Sentinel-2 time series using machine learning algorithms (second day). For the hands-on activity users should bring and use their own computers, or they can simply follow the demonstration.

Authors: Zotti, Massimo (1); Fernandes, Joao P (2); Myslakowski, Krzysztof (3); Maldera, Giuseppe (1); Drimaco, Daniela (1)
Organisations: 1: Planetek Italia s.r.l., Italy; 2: Hexagon Geospatial; 3: Creotech

Digital Poster - Exhibition - Drink
18:00 - 19:00

Developing A Processing Chain For Long Time Series Sentinel-1 Interferometric Coherence Generation For Wide Area Landcover Change Mapping (ID: 167)
Presenting: Wheeler, James Edward Maxwell

We will present the processing chain developed at Leicester for the ESA Fire Climate Change Initiative (CCI) Small Fires Database (SFD). Our work involved deriving burned area estimates from Sentinel-1A image pair coherence generated for a full year (2016), covering the land of Northern Africa from latitude 20° N to the equator and longitude 20° W to 45° E, separated into five-degree by five-degree tiles. The S1 constellation, with its interferometry-supporting configuration, offers unprecedented temporal resolution for the generation of coherence images (12-day repeat visit with one sensor, 6-day with both sensors, and potentially more frequent analysis of ascending and descending orbit direction data, although it should be made clear that only image pairs in the same orbital configuration will produce meaningful coherence images). The advantage of S1 coherence over S1 backscatter, from a landcover change mapping perspective, lies in the absence of image speckle, as well as in the clear downward trends in coherence generated from image pairs spanning a disturbance event. In the Fire CCI SFD project, initial results suggest that S1 backscatter models have produced more reliable landcover change products in more densely vegetated areas, while S1 coherence models are more reliable in sparsely vegetated areas. This suggests that the two datasets are complementary. We will describe the processing issues and solutions relating to S1 data download, management, command-line processing and classification using a High Performance Computing (HPC) cluster at the University of Leicester. The processing chain is scalable both up and down depending on the user's study site requirements. All processing described was completed using open source or free software in the UNIX environment, with code developed in Python and shell languages, and including the command-line tools from the Sentinel Application Platform (SNAP).

Authors: Wheeler, James Edward Maxwell; Tansey, Kevin
Organisations: University of Leicester, United Kingdom
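
A minimal sketch of the scripted SNAP usage described (illustrative, not the Leicester chain itself): drive SNAP's command-line Graph Processing Tool (gpt) from Python for each 12-day image pair, with the coherence graph defined in an XML file whose placeholders are filled per pair.

```python
# coherence_graph.xml is assumed to define the usual S1 coherence operators
# (TOPSAR-Split, Apply-Orbit-File, Back-Geocoding, Coherence, TOPSAR-Deburst)
# with ${master}/${slave}/${output} variables filled via gpt's -P flags.
import subprocess
from pathlib import Path

pairs = [("S1A_IW_SLC_20160101.zip", "S1A_IW_SLC_20160113.zip")]  # illustrative

for master, slave in pairs:
    out = Path("coherence") / (Path(master).stem + "_coh.dim")
    subprocess.run(["gpt", "coherence_graph.xml",
                    f"-Pmaster={master}", f"-Pslave={slave}", f"-Poutput={out}"],
                   check=True)      # in practice, one pair per HPC batch job
```
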
Artificial Intelligence meets Computational Sensing: Beyond Earth Observation Data Science (ID: 197)
Presenting: Faur, Daniela

Multispectral and microwave Earth Observation (EO) sensors are unceasingly streaming giga-samples per second, which must be analyzed to extract semantics or physical parameters, thus providing the means to understand Earth phenomena globally and over long time spans. An important particularity of EO data that should be considered is their "instrument" nature: in addition to spatial information, they sense physical parameters. Artificial Intelligence (AI), i.e. machine and deep learning, aims at enlarging today's EO data analysis methodology, introducing automated models and methods for physically meaningful feature extraction to enable high-accuracy characterization of any structure in large volumes of EO data. AI for EO is advancing the paradigms of stochastic and Bayesian inference and information theory, evolving towards the methods of deep learning. Since the data sets are an organic part of the learning process, EO dataset biases pose new challenges. Therefore, new solutions have to arise for multi-source generalization, for very specific EO cases such as multispectral or SAR observation with a large variability of imaging parameters and semantic content. Furthermore, these solutions have to address the critical aspects of very limited and highly complex training data sets, aiming to learn jointly, benefiting from the amount of available data, based on cognitive primitives for grasping the behavior of the observed objects or processes. EO, moreover, demands a more advanced paradigm: bringing the AI to the sensor. The sensor is the source of the Big Data, and the methods of computational imaging enable the optimization of new EO paradigms for direct information sensing. Among the most advanced methods in computational sensing one can include synthetic aperture, coded aperture, compressive sensing, data compression, ghost imaging and quantum sensing. The trend is to bring out EO systems enclosing sensor intelligence, that is, to renew the EO sensor along a complete processing chain from data to information, from information to knowledge, and from knowledge to value.

Authors: Datcu, Mihai (1,2); Vaduva, Corina (1); Faur, Daniela (1); Coltuc, Daniela (1); Anghel, Andrei (1); Cacoveanu, Remus (1); Sacaleanu, Dragos (1); Tache, Ioan (1)
Organisations: 1: CEOSpaceTech - Research Centre for Spatial Information, Politehnica University of Bucharest, Romania; 2: DLR, German Aerospace Centre
Crowdsourced and Satellite Images for Damage Assessment: a Test Case on Hurricane Harvey, USA, Summer 2017 (ID: 145)
Presenting: Dell'Acqua, Fabio

Crowdsourcing has been proposed on several occasions as a tool to provide a valuable complement to spaceborne data in various types of applications involving geospatial data [1][2]. Crowdsourcing for damage assessment is also a popular application where "citizen sensors" can support the institutional coverage provided by spaceborne sensors [3]. In this context, we give our modest contribution by analysing the case of Hurricane Harvey, which swept across Barbados, Saint Vincent and the Grenadines, the Mexican area of Yucatan, and the US states of Texas and Louisiana, between 17 August and 1 September 2017. Harvey was the direct cause of 68 casualties in the US, ranking second only after Sandy (2012); damage estimates range in the hundreds of billions of euros. DigitalGlobe, a well-known satellite data provider, offers a catalogue of data related to natural disasters under their Open Data Program, including Hurricane Harvey. Pre- and post-event RGB images in .tiff format from the WorldView-2 sensor are distributed together with a vector map where pointers highlight areas of Texas where damage was recorded. Remarkably, some of the worst-hit areas, like the city of Victoria, Texas, were imaged soon after the event with acceptably low cloud cover. This permitted meaningful comparison between pre- and post-event data, and detection of damaged buildings by visual inspection. At the same time, the Twitter archive was scanned for tweets related to Victoria, TX and Hurricane Harvey. 53 tweets with pictures attached, sent between 26 August and 5 September, were identified by keyword search. Of these, just 1 was geo-located; of the others, 18 were localized based on the content of the image itself, e.g. by searching online for terms appearing on signs and then confirming the location of the identified businesses through pre-event street-level images from other sources (e.g. Google Street View). Overall, about 36% of images could be assigned a location. Results from spaceborne data and tweet pictures confirmed each other, leading us to believe that a deep-learning-based automated system for damage assessment, based on tweet images and the ad-hoc deep learning framework described in [4], could well complement a satellite-based damage mapping system. The system in [4] is being furthered under a European Space Agency (ESA) Kick-Start Activity (KSA - EMITS reference AO8872). [1] Yifang, B., Gong, P., & Gini, C. (2015). Global land cover mapping using Earth observation satellite data: Recent progresses and challenges. ISPRS Journal of Photogrammetry and Remote Sensing, 103(1), 1-6. [2] Dell'Acqua, F., & De Vecchi, D. (2017). Potentials of Active and Passive Geospatial Crowdsourcing in Complementing Sentinel Data and Supporting Copernicus Service Portfolio. Proceedings of the IEEE, 105(10), 1913-1925. [3] Yuan, F., & Liu, R. (2018). Feasibility study of using crowdsourcing to identify critical affected areas for rapid damage assessment: Hurricane Matthew case study. International Journal of Disaster Risk Reduction. [4] Iannelli, G. C., & Dell'Acqua, F. (2017). Extensive Exposure Mapping in Urban Areas through Deep Analysis of Street-Level Pictures for Floor Count Determination. Urban Science, 1(2), 16.

Authors: Albanesi, Erica (1); De Vecchi, Daniele (2); Dell'Acqua, Fabio (1,2)
Organisations: 1: University of Pavia, Italy; 2: Ticinum Aerospace s.r.l., Italy
Enhancing User Interaction with Forest Open Data and Earth Observations through a Prototype Web Dashboard (ID: 166)
Presenting: Ronchetti, Giulia

Forest areas cover more than 40% of the Earth's land surface (Science, 2017), contribute substantially to climate stabilization and to the regulation of the water and carbon cycles, and provide habitat to thousands of life forms. For all these reasons, forest ecosystems represent an important natural resource to protect and preserve through sustainable management. The ongoing widespread availability of remote sensing data and technologies makes it possible to develop new approaches and solutions for forestry mapping, monitoring and management. Giving continuity to the Forestry Thematic Exploitation Platform (Forestry TEP), developed in a project commissioned by the European Space Agency (ESA), the proposed Web Dashboard is a web-based application and platform for developing extensible, customizable, and distributed forest information systems. The dashboard embeds a selected ecosystem of Geographic Information System (GIS) and web technologies allowing the discovery, analysis and sharing of Earth Observations (EO) and Open GeoData (e.g. OpenStreetMap and open forest data). Access to and visualization of both internal and external datasets are provided through 2D and 3D web mapping interfaces, while data processing capabilities are supported in the backend by the Sentinel Application Platform (SNAP) and FOSS-based parallel modules. Users are free to access resources on online Forest Dashboard instances to process data for their locations of interest. They can also produce maps and web content to be integrated into their personal apps, as well as enrich the platform with their personal processing modules. Dissemination and product sharing are also considered: users can export snapshots of their maps and share them directly through social media. The real innovation, with respect to the existing Forestry TEP, is the introduction within the dashboard of dedicated modules for simulations of forest exploitation impacts, employing both analytics and data hosted on the platform or accessed from remote sources. The possibility to integrate all Open GeoData, not necessarily EO, facilitates end users in their analysis and simulations, as well as in developing management strategies. 3D visualization, dynamic and customizable graphics, a user-friendly interface, and interaction with social networks complete and enrich the platform, making it appealing to end users.

Authors: Ronchetti, Giulia (1); Prestifilippo, Gabriele (2); Oxoli, Daniele (1)
Organisations: 1: Politecnico di Milano, Italy; 2: GISdevio srl
CoastVal: Ocean Colour Validation Activities in Coastal Waters (ID: 362)
Presenting: McGlynn, Sinead

While much work has been done in Case I open ocean waters to optimise observations for validation and vicarious calibration of satellite ocean colour, similar work in the coastal zone requires a modified approach due to the more complex environment. With the challenges of the coastal environment in mind, the CoastVal platform has been designed by TechWorks Marine to explore the performance and suitability of the buoy platform to make in situ optical measurements. Optical and other environmental sensors were deployed on the buoy as part of the system, with data transmitted to the TechWorks Marine “CoastEye” data platform for inspection, processing, analysis and visualisation. Significant work has been performed to experimentally measure and account for a range of effects, from tidal currents to buoy and instrument self-shading. Preliminary analysis indicates an excellent matchup between the in-situ CoastVal data and the Sentinel-3 data. The in situ data processing chain for the CoastVal system and associated uncertainties for the observed water-leaving radiances will be presented.

Authors: McGlynn, Sinead; O'Kelly, Charlotte; Moore, Karl; Dobrzanski, Jarek
Organisations: TechWorks Marine, Ireland
Experiences in using Deep Learning methods for Earth Observation (ID: 177)
Presenting: Neagul, Marian

Remote sensing image segmentation plays an important role in Earth Observation, and a wide range of methods have been tested and proposed for it. In this study we focus our attention on deep learning network architectures for semantic image segmentation. Most solutions sidestep the preprocessing of images and deal mostly with network topologies and their hyperparameters. In our work, we provide a suite of tools tailored for Earth Observation data: tools that aim to provide the required services for supporting easy experimentation and integration of various deep learning models, preprocessing techniques and model ensemble methods. We employ our tools and state-of-the-art machine learning models for large-scale image segmentation tasks like building footprint detection and road extraction. Our work was motivated by the Urban 3D [1] and SpaceNet RoadDetector [2] challenges, particularly by the need to provide a reusable and extensible machine learning framework. While the Urban 3D challenge sought an algorithm that provides reliable, automatic building footprint labeling based on orthorectified color satellite imagery and 3D height data, the SpaceNet RoadDetector challenge was focused on automated methods for extracting routable road networks from high-resolution satellite imagery. In our study we focus on extending and evaluating state-of-the-art deep learning models (such as U-Net, SegNet, Xception, DeepLabv3+) for Earth Observation tasks. We integrate machine learning and computer vision tools like TensorFlow, Keras, OpenCV and scikit-learn with modern Earth Observation tools like RasterIO (for raster and elevation model handling) and Fiona and Shapely (for vector data handling). Furthermore, we evaluate our tools and models against both the data provided by the aforementioned competitions and reference data sets like the ISPRS 2D Semantic Labeling - Vaihingen benchmark [3]. We conclude by outlining the biggest challenges for successfully deploying deep learning techniques for Earth Observation tasks and discuss future research directions. [1] https://goo.gl/T7Yshz [2] https://goo.gl/wN7dCk [3] https://goo.gl/ohQXgB

Authors: Selea, Teodora (1,2); Neagul, Marian (1,2); Iuhasz, Gabriel (1,2)
Organisations: 1: West University of Timisoara, Romania; 2: Institute e-Austria Timisoara, Romania
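
One of the EO-specific preprocessing services such a toolkit needs is raster tiling. A minimal sketch (illustrative, not the authors' code) using RasterIO windows, so the full scene never has to sit in memory:

```python
# Illustrative tiling generator; windowed reads keep memory use bounded.
import rasterio
from rasterio.windows import Window

def tiles(path, size=256):
    """Yield (window, array) tiles of shape (bands, size, size)."""
    with rasterio.open(path) as src:
        for row in range(0, src.height - size + 1, size):
            for col in range(0, src.width - size + 1, size):
                w = Window(col, row, size, size)
                yield w, src.read(window=w)

for window, patch in tiles("ortho_scene.tif"):     # placeholder scene
    pass   # e.g. run model prediction on `patch`, or write it out as a chip
```
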
Open Source DHuS: a collaborative Earth Observation data access and dissemination framework (ID: 215)
Presenting: Tona, Calogera

Keywords: Open Source DataHubSystem (OS DHuS), data access, dissemination, Copernicus, ESA, EO, Earth Science (ES), OWC (Open Web Components). The aim of this work is to present a quick overview of the Open Source DataHubSystem, an open, free and collaborative framework created to support ESA Copernicus Sentinel data access. Copernicus is a space programme of the European Union. Through the Copernicus Services it offers full, free and open access to data, models and forecasts related to the monitoring of our environment. The Copernicus Programme plays a key role in ensuring independent access for Europe to strategic geospatial information, provisioning free data and emphasizing the need to make that data more applicable to non-space users, creating new opportunities and new challenges. In this perspective the OS DHuS can meet the needs of the Earth Observation and, more generally, Earth Science communities. It can be easily extended; in fact, it has a modular structure with a central core and different modules, allowing the creation of add-ons to disseminate generic Earth observation data products. In its lifecycle the OS DHuS has gone through several phases: initially it focused on Sentinel product data access; it was later extended to handle Landsat-8, Pléiades and COSMO-SkyMed data products; and it has now been enriched with other powerful modules to manage data products from other sources, such as the Copernicus Marine Environment Monitoring Service (CMEMS), as well as value-added products. Moreover, in order to guarantee easy and user-friendly data access, a new web client has been created. The innovative web client, based on Open Web Components, provides a configurable and extensible way to manage data access and customize it. It is based on the Web Components standard with the Polymer implementation. OS DHuS has important and challenging milestones in its future: to give the user community the possibility to create new add-ons, extending the target audience to those coming from outside the space sector, and to investigate processing and exploitation services within the same framework.

Authors: Tona, Calogera; Bua, Raffaele
Organisations: Serco Italy S.P.A., Italy
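
Any DHuS deployment exposes the same OpenSearch /search interface, so products can be discovered programmatically. A minimal sketch (shown against the Copernicus Open Access Hub endpoint; credentials are required, and the JSON layout may vary between DHuS versions):

```python
import requests

url = "https://scihub.copernicus.eu/dhus/search"
query = 'platformname:Sentinel-2 AND footprint:"Intersects(41.8, 12.6)"'

resp = requests.get(url, params={"q": query, "rows": 10, "format": "json"},
                    auth=("username", "password"))   # your DHuS account
resp.raise_for_status()

entries = resp.json()["feed"]["entry"]               # a list when several match
for product in entries:
    print(product["title"])
```
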
River Discharge Assessment Through The Use Of Optical And Altimetry Satellite Data (ID: 210)
Presenting: Tarpanelli, Angelica

River discharge is the variable of interest for climate studies and for many scientific and operational applications related to water resources management and flood risk mitigation. Notwithstanding its importance, the collection, archiving and distribution of river discharge data is globally scarce and limited, and the currently operating network is inadequate in many parts of the world and still declining. Satellite sensors are considered a new source for monitoring river discharge, thanks to the repeated, uniform and global measurements available over a long time span from the large number of satellites launched during the last twenty years. The integration of satellite and in situ data is the only solution to the still-open issue of monitoring freshwater. Among the range of satellites currently available, altimeters and optical sensors are the instruments most used for hydrological purposes, specifically for the estimation of river discharge. In this respect, recent advances in radar altimetry technology have provided important information for the water level monitoring of rivers. Moreover, the multi-mission approach, i.e. interpolating different altimetry river crossings, has the potential to overcome the limitations due to spatial-temporal sampling. Alternatively, optical sensors, even if less used, have high potential for evaluating river discharge variations, thanks to their frequent revisit time and large spatial coverage. Attempts to merge the two sensor types, optical and altimeter, have also been investigated to improve the evaluation of river discharge. In this study, we illustrate the potential of different satellite sensors to provide good estimates of river discharge. We focus on the optical (near-infrared) and thermal bands of different satellite sensors (MODIS) and particularly on derived products such as reflectance, emissivity and land surface temperature. The performances are compared with well-known altimetry missions (Envisat/RA-2, Jason-2/Poseidon-3 and SARAL/AltiKa) for estimating river discharge variation in Nigeria and Italy. Moreover, preliminary results are provided for the integrated use of the Sentinel-3 Ku/C Radar Altimeter (SRAL) and the Ocean and Land Colour Instrument (OLCI), both on board Sentinel-3, the satellite recently launched by ESA for the Copernicus Programme. The results confirm the capability of the integration of different satellite sensors to provide good estimates of river discharge and encourage the use of all Sentinel-3 sensors in synergy, with the advantage that they are collocated on the same platform. Further tests should be carried out over more study areas to confirm the advantages and identify the limitations of the procedure for different climate regions and morphological characteristics of the rivers. In order to implement the approach at large scale and assess freshwater availability from space, computational power and large storage are necessary. The new Φ-lab of the European Space Agency represents a good environment in which to explore these possibilities, ensure robustness, and enlarge the scope of the tests to validate the approach.

Authors: Tarpanelli, Angelica (1); Brocca, Luca (1); Benveniste, Jérôme (2)
Organisations: 1: National Research Council, Research Institute for Geo-Hydrological Protection, Perugia, Italy; 2: European Space Agency, Centre for Earth Observation (ESA-ESRIN), Frascati, Italy
INSPIRE for Copernicus Data on Ozone and UV (The AURORA project) (ID: 209)
Presenting: Dekavalla, Maria

The Earth's atmospheric composition plays a key role in life. Atmospheric data are important for weather and climatic applications. In the lower stratosphere, at heights of 10-35 km where the ozone layer is most dense, ozone protects humans against UV radiation (≤ 320 nm). This ozone layer is disappearing, despite the CFC ban made under the Montreal Protocol, and in fact ozone is disappearing over the densely populated mid-latitudes and tropics (ETH Globe, 1/2018). Thus, data and information related to ozone concentrations and UV radiation have multiple effects on health and the economy. The AURORA project aims to provide information and data on worldwide ozone concentration and UV radiation as compiled from the synergistic use of Sentinel-4 and Sentinel-5 satellite data. It is desirable that these data and information be provided in a comprehensive and robust manner that ensures data interoperability and ease of use, and that is compatible with the main data themes defined by the INSPIRE Directive. INSPIRE data specifications and technical guidelines define a concrete set of instructions and provide an effective way to combine spatial data and services coming from different sources across Europe. Currently INSPIRE specifications cover 34 spatial data themes, mainly associated with vector data, whilst very few raster datasets have been defined. This presentation provides an overview of the state of the art in satellite-derived atmospheric data standardization and sharing techniques. These observations are interwoven with the current best practices of INSPIRE on raster data standardization and, as such, are aimed at fostering initial requirements for sharing AURORA datasets in an INSPIRE-compliant way.

Authors: Argyridis, Argyros (1); Bonazountas, Marc (1); Cortesi, Ugo (2); Dekavalla, Maria (1)
Organisations: 1: EPSILON, Greece; 2: CNR
Leveraging Sentinel-2 data to complement ground-based cloud cover statistics (ID: 208)
Presenting: Dell'Acqua, Fabio

Among the possibilities opened by the wide availability of open data granted by the Copernicus system, a notable one is that of checking and assessing from space the statistics of weather features that have traditionally been recorded by ground stations. In this work, we used open datasets and the power of geospatial cloud computing to collect data and elaborate statistics on a set of test sites, comparing official statistics of cloud cover with the actual cloudy periods observed in multispectral Sentinel-2 data. We set up four test sites: one city in a far northern European location, one in a hot desert area, one in a tropical area, and finally one in a temperate climate zone. We activated an account on the ESA RUS (Research and User Support) service for the selection and processing of Sentinel data, and one on Google Earth Engine for the selection and processing of Landsat data. Tailored code was developed to automatically select data and extract clouds, and cloud cover statistics were computed for each site over different geographical extents around the pinpointed location. Typical behaviours were analysed, also in comparison with a climatic zone map. Results will be presented at the conference and discussed in light of the intended goal of this work. The purpose is not to raise doubts about the validity of ground-based measurements, but rather to assess whether spaceborne data can be used as a valid replacement where ground stations may not be installed, such as in remote locations whose statistics are nonetheless significant for climate studies. Future developments will include investigating possible fuzzy definitions of cloud cover and the introduction of multi-level statistics, where the binary splitting into cloudy/non-cloudy classes will be replaced by a "degree of cloudiness" for each image, with statistics adjusted accordingly. This work is being carried out as a group exercise within a Remote Sensing course at the University of Pavia, which has recently been selected as a new FabSpace under the H2020 "FabSpace 2.0" project of the European Union. A "Space Communication and Sensing" graduate track is currently active within the Engineering Faculty, and the aim of these exercises is to show the benefits of Earth Observation and encourage public involvement in spaceborne monitoring of the terrestrial environment.

Authors: Bresciani, Laura (1); Curti, Alberto (1); Di Lorenzo, Benedetta (1); Modica, Camilla (1); Dell'Acqua, Fabio (1,2)
Organisations: 1: University of Pavia; 2: Pavia FabSpace 2.0
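
A minimal sketch of the kind of cloud-cover statistic described, using the Google Earth Engine Python API and the per-scene Sentinel-2 metadata field CLOUDY_PIXEL_PERCENTAGE over an illustrative site:

```python
import ee
ee.Initialize()

site = ee.Geometry.Point([12.49, 41.89]).buffer(10000)   # illustrative location

coll = (ee.ImageCollection("COPERNICUS/S2")
        .filterBounds(site)
        .filterDate("2017-01-01", "2018-01-01"))

print("scenes:", coll.size().getInfo())
print("mean cloud cover (%):",
      coll.aggregate_mean("CLOUDY_PIXEL_PERCENTAGE").getInfo())
```
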
River Ice Monitoring Service for Poland based on radar satellite data (ID: 205)
Presenting: Weintrit, Beata

River icing is typical for some regions of Europe (including Poland) in the winter season and can last for up to a few months. Satellite-based monitoring of river ice is intended to support currently used in-situ techniques and provide up-to-date information on river sections affected by ice to water management authorities. Such regular analysis can support the planning of ice-breaking operations, the prediction of floods and the validation of ongoing ice-breaking actions. The proposed ice monitoring service is aimed at providing regular information on ice phenomena on the main rivers in Poland. The detection of ice is based on Sentinel-1 mission data, including both the S-1A and S-1B microwave satellites. The use of Sentinel radar sensors makes it possible to collect images independently of weather conditions (excluding intensive storms and snowfall) every 2-7 days and to process them in near real time. The spatial resolution of Sentinel-1 imagery, 5x20 m in Interferometric Wide Swath mode, gives the ability to effectively detect ice on rivers over 60 metres wide. As a result of the conducted research, a fully automatic service providing river ice coverage classification with 90% accuracy was developed. The River Ice Monitoring Service provides regular, spatially continuous information about ice events during the winter season on rivers that meet the width criterion. The authorized user has access to the classifications of images from the various available dates. Data are presented in the service as a spatial layer representing several types of ice (fractured ice, ice cover, snow cover) or water cover. The service can also generate statistical reports for 1 km sections of the river through a web service, similar to previously used reports. The service was developed within the Earth Observation for Eastern Partnership project, financed and supported by ESA.

Authors: Weintrit, Beata; Kubicki, Michał; Krawczak, Ewelina; Kaniecki, Maciej; Jędryka, Marcin
Organisations: Astri Polska Sp. z o.o., Poland
Quantification of irrigation from space: a new data-intensive approach exploiting high-resolution Sentinel-1 and -2 observations (ID: 204)
Presenting: Brocca, Luca

Irrigation is the greatest human intervention in the water cycle. In a changing climate and with a growing population, the use of irrigation water is expected to increase significantly worldwide. Notwithstanding its huge importance, we currently have no method for obtaining information on the amount of water used for irrigation over large areas. In this study, we developed a new approach exploiting high-resolution microwave and optical satellite observations for assessing the amount of water applied for irrigation. On the one hand, we adapted the SM2RAIN algorithm (Brocca et al., 2014), which is based on the inversion of the soil water balance equation, to estimate the amount of water entering the soil, and hence irrigation, from the knowledge of soil moisture obtained from microwave data. On the other hand, optical and visible sensors, characterized by high spatial resolution, have been shown to be very useful for detecting irrigated areas. The synergy between microwave and optical observations allows us to obtain high-spatial-resolution (10 m) irrigation information (where and how much) at a daily time scale. This target can finally be achieved thanks to the launch of the Sentinel-1 and Sentinel-2 satellites, which meet the spatial-temporal requirements of this approach. Moreover, the method is highly suitable for application to the very recent and future small-satellite SAR (Synthetic Aperture Radar) and optical missions, as they are characterized by a short revisit time (hourly), a critical requirement for obtaining accurate irrigation estimates. Firstly, through synthetic experiments, we tested the method's reliability and accuracy in controlled conditions by considering different configurations of: 1) measurement accuracy, 2) temporal resolution, and 3) climate. Secondly, we applied the proposed approach to high-resolution soil moisture observations from Sentinel-1 at a test site in Italy. The method has shown potential in quantifying irrigation water from Sentinel-1, with good agreement with the actual irrigation observations available at the test site. Additionally, Sentinel-2 observations have been found to be useful for detecting irrigated fields in the same study area. The implementation of the approach over large regions requires the storage and processing of large satellite datasets (Big Data). Therefore, the recent collaborative approaches developed in the Earth System Science community that exploit new open tools and virtual laboratories should be considered. REFERENCES: Brocca, L., Ciabatta, L., Massari, C., Moramarco, T., Hahn, S., Hasenauer, S., Kidd, R., Dorigo, W., Wagner, W., Levizzani, V. (2014). Soil as a natural rain gauge: estimating global rainfall from satellite soil moisture data. Journal of Geophysical Research, 119(9), 5128-5141, doi:10.1002/2014JD021489.

Authors: Brocca, Luca; Tarpanelli, Angelica; Filippucci, Paolo; Massari, Christian
Organisations: National Research Council of Italy, Italy
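
The inversion the approach builds on can be sketched numerically. Following the cited SM2RAIN formulation (Brocca et al., 2014), the water input to the soil over an interval is estimated as Z* dS + a S^b dt, and irrigation is what remains after subtracting gauged rainfall; all parameter values below are illustrative, not calibrated.

```python
import numpy as np

def water_input(s, dt=1.0, Z=80.0, a=5.0, b=2.5):
    """s: relative soil moisture [0..1] sampled every dt days.
    Returns estimated water input (mm) per interval: Z*dS + a*S**b*dt."""
    ds = np.diff(s)                        # soil moisture change per interval
    drainage = a * s[:-1] ** b * dt        # drainage losses during infiltration
    return np.clip(Z * ds + drainage, 0.0, None)

# Synthetic example: irrigation = estimated total input minus gauged rainfall.
s = np.array([0.30, 0.45, 0.42, 0.40, 0.55, 0.52])   # e.g. Sentinel-1 derived
rain = np.array([12.0, 0.0, 0.0, 0.0, 0.0])          # mm per interval
irrigation = np.clip(water_input(s) - rain, 0.0, None)
print(irrigation)
```
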
Machine Learning for the classification of sea-ice and inland waters in Sentinel-1 (ID: 203)
Presenting: Fabry, Pierre

This work deals with the classification of sea ice in Sentinel-1 images and the production of water masks from Sentinel-1 images. The two scenes are similar in some respects, with the more complex cases arising for inland waters in the vicinity of man-made structures. One of the main issues in using SAR images for sea ice classification is the backscattering variation due to the local incidence angle, and different normalization methods have been proposed, including linear, iterative and class-based normalization. In this research we propose a multi-scale classification applied independently in different incidence-angle blocks. A Support Vector Machine (SVM) is used as the classifier of Sentinel-1 SAR images and is tuned to be relaxed about noise and misclassified training data. Operational egg-shape ice charts are used to validate the performance of this method. The work is performed with open source tools, mainly the ESA-funded SNAP toolbox and the widely used open source Python language. It also relies on the RUS (Research and User Support for Sentinel Core Products) service, which provides a scalable platform in a cloud environment to facilitate the uptake of Copernicus data. This research is being performed in the frame of the Cryo-SEANICE project funded by ESA.

Authors: Fabry, Pierre; Zohary, Moein; Bercher, Nicolas
Organisations: ALONG-TRACK SAS, France
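
A minimal sketch of the classification strategy described (synthetic data, illustrative parameters): train a noise-tolerant SVM separately within each incidence-angle block, so that backscatter variation with incidence angle does not bias the classes.

```python
import numpy as np
from sklearn.svm import SVC

def classify_by_incidence(sigma0, inc, labels, edges=(29, 34, 39, 46)):
    """sigma0/inc/labels: flat per-pixel arrays; edges: block limits in degrees."""
    pred = np.full(sigma0.shape, -1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (inc >= lo) & (inc < hi)
        if np.unique(labels[m]).size < 2:
            continue                        # need both classes to train
        # A low C relaxes the margin, tolerating noisy or mislabelled samples.
        svm = SVC(kernel="rbf", C=0.5).fit(sigma0[m].reshape(-1, 1), labels[m])
        pred[m] = svm.predict(sigma0[m].reshape(-1, 1))
    return pred

rng = np.random.default_rng(1)
sigma0 = rng.normal(-15, 3, 5000)          # synthetic backscatter (dB)
inc = rng.uniform(29, 46, 5000)            # synthetic incidence angles
y = (sigma0 > -15).astype(int)             # placeholder ice / water labels
print(classify_by_incidence(sigma0, inc, y)[:10])
```
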
Interactive Virtual Research Environments for Vegetation Phenology based on Modern Open Source Web Frameworks (ID: 200)
Presenting: Eberle, Jonas

Within the research project "PhaenOPT", open Earth Observation (EO) datasets as well as in-situ vegetation phenology observations are used to analyze the suitability of EO-derived phenology information. Although all of the datasets used are open data, different file formats and access services need to be understood in order to work with the data. To enable users, such as local state agencies, to use and benefit from these datasets, a Python library automating access, visualization, and processing has been developed and integrated into different web-based open source frameworks. Several Jupyter Notebooks with interactive widgets have been developed and combined with the open source Django web framework for data management. Both tools simplify the discovery, access, visualization, and processing of the vegetation phenology data. Based on EO datasets, time series for different areas can be extracted, and vegetation phenology measures can be calculated and compared with in-situ measurements within the Jupyter Notebooks. The open source Python web framework Django is used to manage the content of the virtual research environment and acts as a middleware between users and external web services, such as services for data discovery and access. Geospatial maps and time series generated within a Jupyter Notebook can be published through Django and the GeoServer REST API as OGC-compliant web services to be used in other web-based portals, such as the operational vegetation phenology portal of the Thuringian state agency for environment and geology, a partner in the project "PhaenOPT".

Authors: Eberle, Jonas; Schmullius, Christiane
Organisations: Friedrich Schiller University Jena, Germany
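
The interactive-notebook pattern described can be sketched in a few lines with ipywidgets; `load_ndvi` below is a hypothetical stand-in for the project's data-access library.

```python
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact

def load_ndvi(site):
    """Placeholder for the project's access layer: return (day-of-year, NDVI)."""
    t = np.arange(0, 365, 8)
    return t, 0.5 + 0.3 * np.sin(2 * np.pi * (t - 120) / 365)

@interact(site=["Site A", "Site B", "Site C"])   # dropdown widget in Jupyter
def plot_site(site):
    t, ndvi = load_ndvi(site)
    plt.plot(t, ndvi)
    plt.ylim(0, 1); plt.xlabel("day of year"); plt.ylabel("NDVI"); plt.title(site)
    plt.show()
```
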
Observing Cooperation between humans and Planet Earth (ID: 196)
Presenting: Merletti De Palo, Alessandro

Contemporary satellite imaging availability could provide useful information for mankind to observe how much the human social ecosystem is cooperating with the Planet Earth ecosystem as a whole. Following the cooperation model in Merletti De Palo et al., 2015 and the Cooperation Context Index presented in Macao at the PLS17 conference (Merletti De Palo et al., 2017), we propose a collaboration to develop a "Humans-Earth Cooperation Index by country" based on the following seven indicators: 1. Diversity (of the land types in the country); 2. Understanding (number of observatories, satellites and research institutes observing the planet in the country); 3. Freedom (possibility for the planet to develop naturally, with no artificial human activity or non-natural barriers in the country); 4. Transparency (amount of relevant data gathered through time, and access to relevant data, by country); 5. Care (number of experts and related research and courses about how to observe Earth in order to take care of its natural ecosystem, e.g. pollution rates, by country); 6. Trust (predictability of natural and non-natural disasters or alterations of the planet's ecosystem, by country); 7. Equivalence (number of green areas by country, and percentage of green areas and industrial areas in cities per country). The index could be highly useful for establishing a true dialogue with our planet and starting international cooperation across the worldwide human social ecosystem, in order to respect our Planet Earth ecosystem and avoid altering it.

Authors: Merletti De Palo, Alessandro
Organisations: Cooperacy, Italy
Remote Sensing and Synthetic Aperture Radar Imagery for Erosion Studies in Albania (ID: 172)
Presenting: Frasheri, Neki

We use Sentinel and Landsat images to identify mountainous areas with ground erosion due to meteorological factors. Building on previous work presented in ESA activities, in which, using Envisat and Sentinel differential interferograms, we identified mountainous areas in Albania with significant fringes, in particular the hilly areas of Torrovica and Vau Dejes in the Pre-Adriatic Depression flatlands in the northwest of the country, as well as the top of the Mountain with Holes east of the capital city Tirana. The fringes are persistent in 3-month timebase interferograms and always appear in the higher parts of the relief, while they are missing in valleys. Airborne and satellite (Landsat) imagery shows an absence of vegetation in such areas. A field survey confirmed the presence of gradual erosion due to meteorological factors. Radar satellite imagery was processed using the Sentinel toolbox and the ESA Cloud Toolbox. Landsat images were processed using the general-purpose image processing software GIMP. The free imagery and computing capacities offered by ESA, and the processing methodology, are suitable for use in the framework of citizen science.

Authors: Frasheri, Neki; Beqiraj, Gudar; Bushati, Salvatore
Organisations: Academy of Sciences of Albania, Albania
Potential of Sentinel-2 Spectral Bands for Agriculture Mapping (ID: 195)
Presenting: Belgiu, Mariana

Satellite image time series (SITS) are increasingly used to characterize the status and dynamics of crops cultivated in different agricultural regions across the globe. Multiple spectral variables, indices and phenological variables extracted from these data contribute to a richer variable pool that permits efficient crop detection. Two important questions must be addressed when using SITS for crop classification: 1) given a large number of cloud-free images, which are the optimal images to include in the SITS? and 2) which are the optimal spectral-temporal variables to consider for the classification? Selecting relevant variables for agriculture mapping from SITS allows the development of computationally efficient classification models that achieve satisfactory results with the smallest possible set of variables. Finding the optimal feature set also contributes to the development of operational solutions for agriculture mapping. The overall goal of our study is to systematically investigate the efficiency of feature space subsets identified by three feature selection methods for Random Forest (RF)-based crop mapping from Sentinel-2 SITS. The following feature selection methods are evaluated: 1) a backward feature selection method which relies on the mean-decrease-accuracy (MDA) variable importance measure implemented in the RF classifier; 2) a backward feature reduction method proposed by Díaz-Uriarte and De Andres (2006); and 3) a conditional variable importance for RF method (Strobl et al. 2008). The first two methods rely on CART trees built from bootstrap samples drawn with replacement from the original sample set to evaluate the importance of variables for classifying the crops under investigation, whereas the conditional variable importance method uses classification trees based on the conditional inference framework, created through subsampling without replacement, to assess variable importance. In this study, we focus solely on the importance of spectral indices computed from Sentinel-2 SITS for agriculture mapping, since previous studies reported that they performed better than spectral bands. A total of sixteen spectral indices for each image available in two study areas, situated in Romania and Italy, are calculated and used for the classification process. Preliminary results showed that a small subset of features identified by the evaluated feature reduction methods is sufficient to achieve satisfactory classification results. Furthermore, spectral indices computed using red-edge bands and shortwave infrared bands proved to be crucial for crop identification in the two study areas under investigation. References: Díaz-Uriarte, R., & De Andres, S.A. (2006). Gene selection and classification of microarray data using random forest. BMC Bioinformatics, 7, 3. Strobl, C., Boulesteix, A.-L., Kneib, T., Augustin, T., & Zeileis, A. (2008). Conditional variable importance for random forests. BMC Bioinformatics, 9, 307.

Authors: Belgiu, Mariana (1); Csillik, Ovidiu (2)
Organisations: 1: University of Twente, Faculty of Geo-Information Science and Earth Observation (ITC); 2: University of Salzburg, Department of Geoinformatics – Z_GIS
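
The first of the evaluated methods, MDA-based backward selection, can be sketched with scikit-learn (approximating MDA with permutation importance; the data here is a synthetic placeholder):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.random((500, 16))                    # 16 spectral indices per sample
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)    # here only two indices matter
names = [f"index_{i}" for i in range(16)]

while X.shape[1] > 2:                        # drop the least important per round
    rf = RandomForestClassifier(n_estimators=200, oob_score=True).fit(X, y)
    imp = permutation_importance(rf, X, y, n_repeats=5, random_state=0)
    worst = int(np.argmin(imp.importances_mean))
    print(f"dropping {names[worst]} (OOB accuracy {rf.oob_score_:.3f})")
    X = np.delete(X, worst, axis=1)
    del names[worst]

print("retained:", names)
```
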
Land use/Land cover change detection using Landsat and Sentinel-2A satellite imagery (case study: west regions of Urmia Lake, Iran) (ID: 193)
Presenting: Eishoeei, Edith

In the context of remote sensing, change detection refers to the process of identifying differences in the state of land features or phenomena by observing them at different times. Digital change detection approaches may be broadly characterized by the data transformation procedure and the data analysis techniques used to determine the areas of significant change. In the past decade, in addition to a significant decrease in rainfall and successive droughts, an increase in development plans and the exploitation of water was also seen, and these factors have led to a reduction in the inflow to Lake Urmia. For this reason, investigating the hydrological behaviour and land use change of this area is very important. For LU/LC change detection, the watersheds on the west side of the Lake Urmia basin were chosen. In this study, Landsat and Sentinel-2A satellite images from 2000 to 2017 were selected, and the analysis was carried out at five-year intervals. Land use and land cover were detected and classified into 10 classes. The classification method used was the maximum likelihood classification tool in the ENVI 5.3 software. Different transformations were applied to the images as pre-processing, in order to enhance the correctness and resolution of the classified maps: Principal Component Analysis (PCA), ND visualization, Tasseled Cap, vegetation indices and the NDVI index. These methods were applied to find the best band combination in both satellite image sets, in order to obtain the best land use classification and high-resolution classified maps. The best band combination derived from the satellites is bands B7, B4 and B2 for Landsat data and B3, B7 and B10 for Sentinel-2A data. The classification method that gave the most accurate results was the maximum likelihood algorithm. The kappa coefficient and overall accuracy for the Landsat and Sentinel-2A data show high accuracy in the land cover and land use classification. Results showed that between 2000 and 2005 most changes occurred in irrigated farmlands and orchards (75% of changes); between 2005 and 2010 most changes occurred in dry farming lands (73%); and between 2010 and 2017 most changes again occurred in irrigated farmlands and orchards (90%). Keywords: LU/LC change detection, Sentinel-2A, Landsat, Urmia Lake, MLC algorithm.

Authors: Miryaghoubzadeh, Mirhassan; Eishoeei, Edith
Organisations: Urmia University, Iran, Islamic Republic of
Validation of SMOS L1C and L2 Products over the West and South West of Iran (ID: 192)
Presenting: Jamei, Mozhdeh

Soil moisture is a principal component of the Earth's climate system and hydrological cycle, playing a major role in weather prediction, extreme event monitoring, hydrological modeling and water resources management. ESA's SMOS (Soil Moisture and Ocean Salinity) satellite, launched in November 2009, provides global surface soil moisture maps with a target accuracy of 0.04 m3m−3 over land surfaces. The SMOS soil moisture retrieval algorithm processes SMOS multi-angular brightness temperatures (Level 1C products) into soil moisture data (Level 2 SM products). This algorithm is based on the comparison between observed SMOS brightness temperatures and brightness temperatures simulated with the L-MEB radiative transfer model. The main objectives of this study are: the evaluation of SMOS brightness temperatures (TBSMOS) from the MIR_SCLF1C products against brightness temperatures simulated with the L-MEB model (TBL-MEB) at agrometeorological stations, and the validation of SMOS soil moisture (SMSMOS) data from the MIR_SMUDP2 products against in-situ measurements (SMInSitu) in the west and south-west of Iran. We therefore developed an evaluation model for the SMOS brightness temperature data and a validation model for SMOS soil moisture. Statistical metrics such as Root Mean Square Error (RMSE), centered RMSE (cRMSE), bias, correlation coefficient and standard deviation, together with Taylor diagrams, were used to evaluate the results of the models. The evaluation results show good correlation between TBSMOS and TBL-MEB, and according to the RMSE and cRMSE results the TBSMOS data have acceptable accuracy at most stations. The validation analysis indicates that the SMSMOS data correlate very well with SMInSitu at the study stations and come very close to the SMOS accuracy target (RMSE = 0.04 m3m−3). The bias errors show that the SMSMOS data tend to underestimate soil moisture over most of the stations. These findings reveal that SMOS soil moisture data of reasonable quality can be used for soil moisture monitoring in the studied areas.
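The statistical metrics named above are standard; a minimal sketch (assuming paired NumPy arrays of satellite and in-situ soil moisture in m3/m3) makes the definitions concrete, in particular the centered RMSE as the bias-removed RMSE:

    import numpy as np

    def validation_metrics(sat, insitu):
        diff = sat - insitu
        bias = diff.mean()
        rmse = np.sqrt((diff ** 2).mean())
        crmse = np.sqrt(((diff - bias) ** 2).mean())  # centered (unbiased) RMSE
        r = np.corrcoef(sat, insitu)[0, 1]
        return {"bias": bias, "RMSE": rmse, "cRMSE": crmse, "R": r}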

Authors: Jamei, Mozhdeh (1,2); Mousavi Baygi, Mohammad (1); Alizadeh, Amin (1); Irannejad, Parviz (3)
Organisations: 1: Ferdowsi University of Mashhad, Iran; 2: Khouzestan Water and Power Authority,Iran; 3: Tehran University , Tehran, Iran
Create Custom Machine Learning Models with Sentinel Data (ID: 191)
Presenting: Bollinger, Andrew David

Machine learning is a powerful method for deriving insights from satellite data. We're building tools to make it easier to use Sentinel data when developing machine learning algorithms. Label Maker is an open source Python library we developed to create custom machine-learning-ready training data for the most popular machine learning frameworks, including Keras, TensorFlow, and MXNet. Supervised machine learning methods require two things: satellite imagery and ground-truth labels. There are a few existing standard datasets for this purpose, but they use very high resolution (VHR) imagery and are focused on identifying smaller features like cars and buildings. If you're looking to develop a land-use classification model using Sentinel-2 imagery, or a feature detector for electricity infrastructure with Sentinel-1 radar data, you'll need to put together a new custom dataset. With Label Maker, developers can input parameters to define a training area and imagery source, and it will automatically create the necessary label and imagery pairs. It creates the class labels using OpenStreetMap features, and the imagery can be obtained from locally downloaded files, cloud services like Sentinel Hub, or directly from cloud storage if the data is stored in an accessible format like Cloud Optimized GeoTIFFs. I'll walk through a full example of a machine learning pipeline using Label Maker with Copernicus products: - Create a custom dataset for land-use classification with Sentinel and OpenStreetMap data - Quickly train a small neural network to classify land use within a satellite imagery tile - Show how this network can be deployed as a cloud-based pipeline for fast predictions over large areas - Show how the same network can be run in-browser for serverless machine learning applications - Finally, we'll share the necessary code and methods for recreating this methodology for different applications. All of the code in the demonstration will be open-source and documented to aid in reproduction.
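As a rough illustration of the "small neural network" step (generic Keras code, not Development Seed's; tile size and class count are placeholder assumptions), a tiny CNN classifying 256x256 RGB tiles could look like:

    from tensorflow.keras import layers, models

    def build_tile_classifier(n_classes, tile_size=256):
        model = models.Sequential([
            layers.Input((tile_size, tile_size, 3)),
            layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.GlobalAveragePooling2D(),
            layers.Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="categorical_crossentropy",
                      metrics=["accuracy"])
        return model

Such a model trains on the tile/label pairs that Label Maker packages, with each OpenStreetMap-derived class as one softmax output.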

Authors: Bollinger, Andrew David
Organisations: Development Seed, United States of America
System for Interferometric Exploitation of Phase-Normalized Sentinel-1 IWS Bursts (ID: 190)
Presenting: Lazecky, Milan

Sentinel-1 radar measurements currently play a key role in displacement monitoring through interferometric data exploitation. This work presents a specific solution for pre-processing Sentinel-1 IWS SAR data to maximize the effectiveness of further multitemporal interferometry processing and other complex analyses, including multitemporal polarimetry. The system is intended to secure national-scale continuous observation of land change characteristics using novel methodologies. Currently, interferometric techniques are implemented with a view to automatized identification of potential structure stability issues and geologic surface influences (e.g. tectonics, landslides). Structures for polarimetric analyses are being developed to address forestry needs such as deforestation or wind calamity mapping. The system is to provide a progressive link to a flood safety system for Czech regional risk management, Floreon+, aiming towards flood prediction in the future (realistic if a nearly hourly frequency of soil moisture maps can be generated from one of the future L-band missions). The vitality of the system lies in the specific management of Sentinel-1 data, which are decomposed into SLC bursts and calibrated after interferometrically mandatory corrections such as EAP or ESD. A phase signature of topography (based on a DEM) is removed from the SLC data before storing them to a database as calibrated (coregistered and topography-phase-free) SLC-C bursts. This simplifies further multitemporal analyses, vastly reducing the computational needs related to the generation of differential interferograms. A database of SAR data at this or a higher processing level forms a basis for advanced data exploitation techniques following the interferometric signal, supporting the development of novel approaches using e.g. convolutional neural networks. The system is based on open-source projects (e.g. ISCE, GDAL, STAMPS, Parallel, Octave, doris, MySQL etc.) fused with our own processing solutions (e.g. for multitemporal interferometry and post-processing), prepared to run on High Performance Computing systems, as incubated at IT4Innovations, the Czech national supercomputing research center. Non-interferometric analyses, such as a polarimetry workflow focused on application specifics, are planned using the ESA SNAP tool. The system is capable of nation-wide monitoring. Practical application to early warning can be achieved by implementing algorithms that continuously assess time series for reliable identification of stability-threatening displacements. Case studies already approached with the system involve mining-induced or groundwater-related subsidence (Czech Republic, Poland, Spain), landslides (e.g. in the Kyrgyz Republic) and displacements of several structures in the Czech Republic and Spain (bridges, dams, subsiding buildings).
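The computational saving from storing topography-phase-free SLC-C bursts can be sketched as follows (an illustration under the stated assumptions, not the system's code): once the DEM phase is removed, a differential interferogram reduces to a conjugate product plus multilooking:

    import numpy as np

    def differential_interferogram(slc1, slc2, looks=(4, 20)):
        """slc1, slc2: coregistered, topography-corrected complex bursts."""
        ifg = slc1 * np.conj(slc2)           # differential phase directly
        ra, rr = looks
        rows = (ifg.shape[0] // ra) * ra
        cols = (ifg.shape[1] // rr) * rr
        ml = ifg[:rows, :cols].reshape(rows // ra, ra,
                                       cols // rr, rr).mean(axis=(1, 3))
        return np.angle(ml)                  # multilooked phase in radians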

Authors: Lazecky, Milan (1); Hlavacova, Ivana (2); Svaton, Vaclav (1); Sustr, Zdenek (3); Martinovic, Jan (1); Bakon, Matus (4); Agram, Piyush (6); Hatton, Emma (5)
Organisations: 1: IT4Innovations, VSB-TU Ostrava, Czech Republic; 2: Gisat Ltd Prague, Czech Republic; 3: CESNET Prague, Czech Republic; 4: insar.sk, Slovakia; 5: University of Leeds, United Kingdom; 6: NASA JPL, United States
An interactive visual analytics tool for EO data future exploitation (eVADE) (ID: 188)
Presenting: Faur, Daniela

This paper proposes a tool that aims to provide an innovative and insightful way of exploring Earth observation data content beyond visualization, through a visual analytics process. The considered framework combines machine learning and visualization techniques, empowered through human interaction, to gain knowledge from the data. The eVADE tool leverages methodologies developed in the fields of information retrieval, data mining and knowledge representation. The addition of a visual analytics component will increase users' capability to understand and extract meaningful semantic clusters, together with quantitative measurements, presented in a suggestive visual way.

Authors: Faur, Daniela (1); Stoica, Adrian (2); Mougnaud, Philippe (3)
Organisations: 1: CEOSpaceTech - Politehnica University of Bucharest, Romania; 2: Terrasigna, Bucharest, Romania; 3: European Space Agency, Esrin, Frascati, Italy
Artificial Intelligence for EO Big Data Analysis in the Framework of CAP Management (ID: 187)
Presenting: Drimaco, Daniela

In the framework of CAP management, the LPIS update and the OTSC are two primary requirements of the national Paying Agencies for reducing their efforts and operational costs. The LPIS represents the basic cartographic information for the farmers, who each year have to declare the crops cultivated in their agricultural parcels; wrong declarations mean bigger efforts and costs in the subsequent control phase. Likewise, the OTSC are currently carried out through manual photointerpretation procedures and field visits requiring big efforts. In this context, we are testing Artificial Intelligence techniques applied to EO big data analysis (applied to Copernicus data sets and other open EO and non-EO data) to create automatic tools able to support Paying Agencies in their operational activities. The experiments are being implemented on the Descartes Labs Platform, which already hosts the Sentinel data sets and is also able to support the ingestion of supplementary commercial imagery, ancillary data about ground truth/historical records and other needed input data (i.e. records of what crop or change occurred in the past). With the assistance of the Descartes Labs science team and the collaboration of the Physics Department of the University of Bari and CNR ISSIA, many tests are currently under development, starting from the algorithms already available in the DL platform for agriculture applications and designing novel approaches ranging from traditional methods (spectral analysis/linear regression/clustering) to deep learning methods, specifically designed for the CAP requirements. Novelty detection algorithms are being tested on Sentinel time series in order to identify macro-changes in croplands (i.e. arable versus artificial cover and transitions between different arable land types). The idea behind the proposed approach is that change detection can be ideally formalized as an outlier detection problem. We intend to investigate patterns describing the intensity of each image region in the available bands and to train supervised models to reveal those regions whose behaviour is anomalous. Different state-of-the-art algorithms are being used, such as Support Vector Machines. Machine learning models trained with farmers' declarations and results from previous checks are being tested to exploit the different phenological cycles as a key discriminant factor that can be extracted from Sentinel time series. The main advantage of a novelty detection approach is that it can be suitably adopted not only to identify changes but also to assign them to different classes, according to the particular pattern they show when deviating from "normalcy".
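As a sketch of the outlier formulation (an assumption about the general approach, not Planetek's implementation), a one-class SVM can be fitted on per-parcel index trajectories from validated seasons and used to flag anomalous parcels in the season under control:

    import numpy as np
    from sklearn.svm import OneClassSVM

    def flag_anomalous_parcels(past_series, current_series, nu=0.05):
        """past_series: (n_parcels, n_dates) index values from validated seasons;
        current_series: same shape for the season under control."""
        model = OneClassSVM(kernel="rbf", nu=nu, gamma="scale")
        model.fit(past_series)
        return model.predict(current_series) == -1   # True = anomalous parcel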

Authors: La Mantia, Claudio; Drimaco, Daniela
Organisations: Planetek Italia s.r.l., Italy
Ocean Feature Extraction using Deep Learning and Big Data (ID: 185)
Presenting: Hashemi, Mohammad

In this study, we classify Sentinel-1 wave mode imagery into four different classes (open ocean, land, sea ice and ships) using deep learning methods. In addition, we used a large dataset of Sentinel-1 wave mode images to extract six important ocean features (10-metre wind speed, significant height of combined wind waves and swell, significant height of wind waves, wind sea peak period, significant height of swell waves and wind sea peak period) using Convolutional Neural Networks (CNNs). Weather data acquired from the German Meteorological Office (DWD) were used as ground truth. We then compared the results with NOAA buoy measurements and state-of-the-art methods, and we also compared the buoy measurements and the DWD weather data in a completely independent correlation coefficient study. For the first time, this study will help to extract precise ocean features for different ocean vessels and sea states, using big data methods on the complex data of Synthetic Aperture Radar imagery.
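A minimal sketch (not the authors' architecture) of CNN regression of sea-state parameters from single-channel SAR wave-mode images: a convolutional trunk with a linear six-output head, one output per retrieved variable; image size and layer widths are assumptions:

    from tensorflow.keras import layers, models

    def build_sea_state_regressor(img_size=256, n_outputs=6):
        model = models.Sequential([
            layers.Input((img_size, img_size, 1)),   # single-channel SAR image
            layers.Conv2D(32, 5, strides=2, activation="relu"),
            layers.Conv2D(64, 3, strides=2, activation="relu"),
            layers.Conv2D(128, 3, strides=2, activation="relu"),
            layers.GlobalAveragePooling2D(),
            layers.Dense(n_outputs),                 # linear outputs (regression)
        ])
        model.compile(optimizer="adam", loss="mse")
        return model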

Authors: Hashemi, Mohammad (1); Rabus, Bernhard (1); Lehner, Susanne (2)
Organisations: 1: Simon Fraser University, Canada; 2: German Aerospace Center (DLR)
A Feedback On Classic Deep Learning Pipelines For Some Earth Observation Use cases (ID: 182)
Presenting: De Vieilleville, François

Over the last 10 years, deep learning has brought many changes to vision and recognition tasks and seems to offer many opportunities for Earth observation value-added products. In this talk we offer feedback reviewing the pros and cons of classic deep learning pipelines applied to some Earth observation use cases on very high resolution imaging. More precisely, for each considered pipeline we discuss the problems of dataset constitution, network training and evaluation. The considered pipelines cover three cases. The first is the classification pipeline, which consists in assigning images to a predefined set of classes; the tested networks are from the VGG and ResNet families. The second performs segmentation, which labels each pixel of the image; for this one, only the U-Net network was considered. The last pipeline deals with the detection task, which aims at finding the bounding boxes of the objects of interest in input images; the selected network for this case is from the Faster R-CNN family. In the reviewed use cases, we only address binary problems, which greatly simplifies our data sets. However, as we were training the networks from scratch, a simple strategy for dataset constitution and iterative training was required and will be discussed. Regarding our tasks, we will also discuss how the inputs and outputs of the various pipelines can help one another, though they are not fully complementary. Finally, all these problems are considered on optical passive imagery for plane detection at airports, crater detection in Mars images, and building detection. We conclude with results obtained on various imagery and possible solutions to exploit the trained networks on other types of sensors.
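For the classification pipeline, one common starting point (a generic sketch, not the authors' code) is to take a ResNet from torchvision and replace its head with a two-class output for binary problems like those above; here it is trained from scratch, as in the work described:

    import torch.nn as nn
    from torchvision import models

    def build_binary_resnet():
        net = models.resnet18(pretrained=False)      # trained from scratch
        net.fc = nn.Linear(net.fc.in_features, 2)    # target / background
        return net

The segmentation and detection pipelines follow the same pattern with U-Net and Faster R-CNN backbones respectively.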

Authors: De Vieilleville, François (1); Ristorcelli, Thomas (1); Bosch, Sébastien (1); May, Stéphane (2)
Organisations: 1: MAGELLIUM, France; 2: CNES, Toulouse, France
An Open-Source 3D Radiative Transfer Model For The Earth Observation Community (ID: 181)
Presenting: Leroy, Vincent

One tool common to many Earth observation activities is radiometry. Complementing it, radiative transfer models provide computational prediction of radiometric quantities for various purposes, e.g. instrument calibration or radiometric product validation. The Earth observation community is split into multiple subcommunities, each of which has specific requirements for radiative transfer models. Therefore, each subcommunity saw its reference radiative transfer model emerge, and these radiative transfer models are not designed to be easily extended beyond their original application field. This becomes problematic when users need features developed by another subcommunity, e.g. when land specialists need to include atmospheric effects in their model. In such cases, the new features are incorporated into the code and validated at the cost of a long study, even though they are already state of the art in another subcommunity. Existing reference Earth observation radiative transfer models also have varied distribution and licensing policies, which can drastically limit the potential number of contributors and testers. This directly limits the rate at which the software can be improved and fixed. If code is supposed to target a wide audience, it requires a larger pool of contributors to implement new features or fix bugs more quickly and effectively. This development workforce can come either from a team internal to the entity managing the development of the software, or from a wider contributor community thanks to facilitated access to the code (e.g. open-source licensing). The latter case is well adapted to scientific communities, which rely on collaboration to speed up progress. In this talk, we present the fundamental principles of a new general-purpose 3D radiative transfer model currently under development, designed to support calibration/validation activities. Thanks to careful software design and the use of modern scientific programming technologies, this RTM will be easy to maintain and extend. An advanced multilayer user interface guarantees total, fine-grain access for developers and power-users, and a smooth learning curve for other users. Rigorous development practices help increase the reliability of the code through extensive testing and documentation. The enforcement of an open-source development model will allow community users to report bugs and contribute to the development of the model with bugfixes, new radiative interaction models or scenes. The 3D capabilities of the model should help go beyond the accuracy limits reached with 1D models when used to support calibration/validation activities on Sentinel-2 and Sentinel-3 data.

Authors: Leroy, Vincent; Govaerts, Yves
Organisations: Rayference, Belgium
Open-Source Earth Observation Research Framework for Python (ID: 179)
Presenting: Aleksandrov, Matej

In Earth observation (EO), even simple projects usually involve many data manipulation and processing challenges. The venture typically starts with data collection and preparation, continues with a series of use-case-specific processing tasks and ends with visualization of the results. For someone starting in EO, implementing these steps can often be quite time consuming, and the quickest implementations are usually not the most elegant, adaptable to changes, or reusable for the next project. As a solution we present the open-source Python packages sentinelhub and eo-learn. Having started as side products of the research team at Sinergise, their purpose is to contain the essential tools a researcher requires, whether for playing around with data or for serious analysis and large-scale machine learning. The package sentinelhub is designed for gathering, managing and serving satellite imagery to the user. It allows easy access to many satellite data sources by interacting with data providers such as the Sentinel Hub OGC services and Amazon Web Services, and it can retrieve supplementary data from Geopedia services. With a collection of supporting tools, it can serve as the first step of any EO project, intuitively taking care of getting just the data that is needed, in an output aligned with the forthcoming analysis. The eo-learn package is a satellite data processing framework. It employs multi-temporal data containers, processing tasks and processing workflows. These are complemented by data verification, monitoring and logging tools, enabling the user to keep a good overview of the project. The package is especially useful as a machine learning pipeline and for working with large quantities of data. It also implements some of the most commonly used tasks such as image co-registration, segmentation, cloud masking, etc. The eo-learn package relies on the sentinelhub package for the EO and supplementary data collection. In the presentation we will show the main functionalities of both packages, present their role in some of our existing projects, and show how the two packages could be used by a wide range of the EO audience.
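A minimal data-gathering sketch, assuming the 2018-era sentinelhub WmsRequest interface (the layer name and instance ID come from the user's own Sentinel Hub configuration and are placeholders here):

    from sentinelhub import WmsRequest, BBox, CRS

    bbox = BBox(bbox=[14.00, 45.85, 14.20, 46.00], crs=CRS.WGS84)
    request = WmsRequest(layer='TRUE_COLOR',        # user-configured layer
                         bbox=bbox,
                         time='2018-06-01',
                         width=512,
                         instance_id='<your-instance-id>')
    images = request.get_data()                     # list of numpy arrays

In eo-learn, such a request typically becomes the first task of a workflow that fills a multi-temporal data container and hands it on to co-registration, masking or feature-extraction tasks.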

Authors: Aleksandrov, Matej; Sovdat, Blaž; Zupanc, Anže; Peressutti, Devis; Batič, Matej; Močnik, Rok; Kadunc, Miha; Milčinski, Grega
Organisations: Sinergise, Slovenia
A set of Software Tools supporting EO Satellites for Instrument Calibration and Validation (ID: 176)
Presenting: Pinol Sole, Montserrat

In preparation for future instrument calibration and validation activities of the EarthCARE mission, there has been growing interest in understanding the possibilities of combining data from different types of products acquired over the same geographical area within a given period of time between observations, as well as in assessing the overpass opportunities over calibration ground sites. The combined use of the tools distributed by the ESA EOP System Support Division made feasible the preliminary analysis to obtain the revisit time between observations of the same area by different satellite instruments within a limited time period, and their geographical distribution. The coverage of ground-site (e.g. radar) locations and pre-defined areas of interest for ground campaigns could also be determined, leading to the study of potential measures to maximise the number of observable sites. Finally, the multi-satellite instrument overlap dynamics was animated to facilitate the visualisation of the resulting patterns between EarthCARE and other LEO satellites. This set of software tools, including pre-defined configurations of ESA Earth Observation missions, is freely available to users who are part of the ESA Earth Observation Earth Explorer and Copernicus satellites community. The InstrCollocation tool [REF 1] provides a mechanism to identify instrument collocation opportunities for different types of instruments (optical, radar, altimeters) available on different satellites observing the same area within a limited time interval. Knowledge of the instrument overlap patterns and timing benefits users involved in instrument calibration and product validation activities. The ZoneOverPass tool [REF 1] allows users to obtain overpass tables of a given satellite ground-track or instrument swath over an area of interest or ground site. Finding opportunities for observations over a given area may be useful to search relevant time-tagged products or to plan future on-ground campaigns. The desktop application ESOV NG [REF 2] can also be used to analyse coverage over regions of interest and ground sites, also calculating the overpass times. Constraints relative to instrument operations, e.g. active only within a certain range of sun zenith angles (day operations), can also be applied. The SAMI tool [REF 3] has an embedded capability to export image snapshots or HD video, which can be used to share media content and enhance the demonstration of mission concepts. The coherence and accuracy of the orbital and geometrical calculations within the set of ESA EOP System Support Division tools is ensured by the use of the embedded Earth Observation CFI Software libraries (EOCFI SW). The libraries are used to obtain the orbit ground-track, instrument swath, and the times of passes over a selected area of interest or ground site. The use of common interfaces (orbit files, swath files, zone database files, SCF segment export format from ESOV NG, KML Google Earth) is a key point to facilitate sharing the input data and comparing the output results across the various software applications. The set of tools is multi-platform, available for Mac OS X, Windows and Linux.

Authors: Pinol Sole, Montserrat; Zundo, Michele
Organisations: ESA/ESTEC, Netherlands, The
Land-cover Classification Results And Lessons Learnt From The Round Robin Consultation Within The ESA SEOM SInCohMap Project (ID: 241)
Presenting: Vicente-Guijalba, Fernando

The Sentinel-1 (S-1) mission defines a whole new playground to explore the limits and potential of diverse technologies for generating updated and precise global land-cover maps. The availability of frequent and global satellite data promotes the development of alternative approaches for land cover mapping, a field where mostly optical, and also radiometric, data have established their predominance. In this regard, the ESA SEOM SInCohMap project aims to develop, analyse and validate novel methodologies for land cover and vegetation mapping by using time series of Sentinel-1 data and by exploiting the temporal evolution of the interferometric coherence. The project aims to quantify the impacts and benefits of using Sentinel-1 InSAR (Interferometric Synthetic Aperture Radar) coherence data relative to traditional land cover and vegetation mapping approaches such as those using optical data (especially Sentinel-2) and SAR (Synthetic Aperture Radar) intensity. In general, interferometric coherence is affected by a combination of terms derived from the system, the observation geometry and the properties of the observed scene. In previous studies, coherence has already proven to be a good parameter for inferring land cover. Within the framework of the ESA SEOM SInCohMap project, a Round Robin consultation was devised with the objective of performing a valuable comparison between classification strategies exploiting interferometric coherence data. Seven teams composed of Earth Observation experts were involved in the consultation process over a total of six months. To ensure that the consultation provided scientific analyses, free and open access to the pre-processed interferometric data was required. An intensive InSAR stack consisting of two years of data at three study areas in Europe, with a diverse range of land covers and vegetation, was provided to the consultation participants. In particular, the interferometric coherence data for each site were organised into 5-dimensional datacubes (2 spatial dimensions, 2 temporal dimensions and the polarimetric channel), providing a simple access interface to a very complex data structure using standard protocols. Moreover, during the consultation a collaborative cloud processing environment was exploited to reduce the resources required on the participants' side. Along with an overview of the obtained results, this work will present the experience with the infrastructure during the consultation process. To summarize, this SInCohMap round robin setup boosted collaboration between the participants and also allowed the consortium to attract people and teams from outside the project. Ultimately, this has resulted in a larger number of experiments and methodologies for the same data, ensuring direct comparison of the obtained outcomes.
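For context, the quantity organised in those datacubes can be sketched as a boxcar coherence estimate between two coregistered SLC images (a generic illustration, not the project's processor):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def coherence(slc1, slc2, win=5):
        """Boxcar interferometric coherence of two coregistered complex SLCs."""
        cross = slc1 * np.conj(slc2)
        num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
        den = np.sqrt(uniform_filter(np.abs(slc1) ** 2, win)
                      * uniform_filter(np.abs(slc2) ** 2, win))
        return np.abs(num) / np.maximum(den, 1e-10)   # values in [0, 1]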

Authors: Vicente-Guijalba, Fernando (1); Jacob, Alexander (2); Notarnicola, Claudia (2); Mestre-Quereda, Alejandro (3); Lopez-Sanchez, Juan M. (3); Lopez-Martinez, Carlos (4); Ziolkowski, Dariusz (5); Dabrowska, Katarzyna (5); Bochenek, Zbigniew (5); Pottier, Eric (6); Mallorqui, Jordi J. (7); Lavalle, Marco (8); Duro, Javier (1); Antropov, Oleg (9); Suresh, Gopika (10); Engdahl, Marcus (11)
Organisations: 1: Dares Technology, Barcelona (Spain); 2: Institute for Earth Observation, Eurac, Bolzano (Italy); 3: IUII, University of Alicante, Alicante (Spain); 4: Luxembourg Institute of Science and Technology, Belvaux (Luxembourg); 5: Institute of Geodesy and Cartography, Warsaw (Poland); 6: I.E.T.R, Universite de Rennes 1, Rennes (France); 7: CommSensLab, UPC (Spain); 8: JPL, California Institute of Technology (USA); 9: Aalto University, Helsinki (Finland); 10: Federal Agency for Cartography and Geodesy, Frankfurt (Germany); 11: ESA-ESRIN, Frascati (Italy)

Future EO (Part5)
09:00 - 10:45
Chair: Marcello Maranesi - Phiunet

09:00 - 09:20
EO technology at ESA: Processes, achievements and future trends (ID: 392)
Keynote: Rosello, Josep
(PDF )

This presentation will introduce the processes and programmes (e.g. DPTDE, GSTP, EOEP-5) for the development of space technology in ESA. Special focus will be paid to technologies specific to Earth Observation (EO), for both upstream and downstream sectors, including achievements and challenges found during their development. All this will be supported with examples, including cases for on-board payloads and platform equipment, as well as on-ground big data acquisition and processing chains. Finally, general technology trends (e.g. digitalization, standardization, miniaturization, …) and EO trends (e.g. higher spatial-temporal-radiometric resolution, constellations, Artificial Intelligence for big data, …), complemented with a summary of plans to address them, will also be provided.

Authors: Rosello, Josep
Organisations: ESA-ESTEC, Netherlands, The
09:20 - 09:35
NASA's Advanced Information Systems Technology (AIST) Program - Fueling Innovation (ID: 373)
Presenting: Little, Michael Merle
(PDF )

One element of NASA's Earth Science Technology Office is the Advanced Information Systems Technology (AIST) Program, which funds information technology development for use in the 5-20 year timeframe. The needs are identified in conjunction with the NASA research and applied sciences communities and through discussions with other forward-looking organizations, such as ESA. AIST projects must have a US lead, but collaborations with non-US organizations are encouraged. The AIST Program focuses on two major thrusts and a small collection of concept studies, and solicits both evolutionary and disruptive development projects. One thrust seeks to leverage the emergence of smallsats to create constellations of instruments to examine phenomena that could not be studied before using conventional single-satellite instruments. The second thrust develops re-usable tools to support scientific investigations through interaction with data. The study projects formulate relevant theory and bleeding-edge concepts, or articulate need statements more clearly for highly advanced work. This talk will describe the research and development funded by the AIST Program.

Authors: Little, Michael Merle; Le Moigne-Stewart, Jacqueline J.; Babu, Sachidananda R.
Organisations: NASA, United States of America
09:35 - 09:50
Game Changer Technologies for Optical Systems and Disruptive Innovations for Remote Sensing Systems (ID: 324)
Presenting: Zuccaro Marchi, Alessandro
(PDF )

The European Space Agency is leading several R&D activities in the field of compact multispectral and hyperspectral instrumentation. These activities encompass technology development of novel optical designs, materials and processes, including also the engineering of detectors, EEE components and dedicated data processing, to achieve innovative and cost-effective solutions. By combining these developments it is possible to produce a portfolio of innovative multi/hyperspectral instruments covering a broad range of applications, spanning from high spatial resolution to large swath width and from cubesat to minisatellite format. These instruments can be equipped with a powerful on-board data processor for real-time generation of L2 data. This capability opens new ways of using the space asset, providing the user with turnkey solutions and fast response to their specific needs. Furthermore, these instruments have been conceived with flexibility and interoperability to target a large number of applications. The paper provides an overview of the technology developments, the status of the instruments manufactured so far, their performance and their expected applications. The paper concludes with a discussion of which on-going developments will introduce new types of services that may change the rules of the game in remote sensing applications and which may disrupt the remote sensing market.

Authors: Zuccaro Marchi, Alessandro; Maresi, Luca
Organisations: ESA/ESTEC, the Netherlands
09:50 - 10:05
COBRA: a Demonstrator for Responsive Operations as a Service (ID: 334)
Presenting: Greenland, Steve
(PDF )

Craft Prospect has been developing a Mission Operations Services aligned framework and reference architecture for onboard autonomy, in addition to modular products as enabling technologies. We propose a consortium to deliver a LEO demonstrator for autonomy utilising these developments together with industry and academic partners. Examples of work done to date include an evaluation of current off-the-shelf software for CubeSats against an MO-aligned framework for autonomy, the evaluation of different autonomy-enabling algorithms against different applications, and the production of a prototype Forwards Looking Imager using deep learning, able to provide real-time feature tracking up to 180 s ahead of a LEO satellite for < 2 W. For proof of concept, the prototype has initially been trained for real-time cloud detection and classification, looking 1 min ahead of the satellite to enable responsive decision making for Earth observation and telecommunication applications. The design has been miniaturised and modularised to allow accommodation on small and nanosatellite systems. Flight-representative and heritage components have been selected for prototyping. This presentation provides an overview of both our work and products, and outlines a proposal and opportunity for future collaboration.

Authors: Greenland, Steve; Ireland, Murray; Karagiannakis, Phil; Rumsey, Clare
Organisations: Craft Prospect, United Kingdom
10:05 - 10:20
LEO-Based Hybrid RF-Optical Data Relay Network Architectures (ID: 336)
Presenting: Al Husseini, Abdul Mohsen Zuheir A.
(PDF )

Remote sensing satellites are proliferating at an astonishing rate, taking advantage of standardized commercial "off the shelf" parts and frequent, affordable access to space. However, these satellites have considerable downlink limitations, which force operators to choose between severe under-utilization and lower data generation rates - often at the expense of image quality. Laser downlink has the potential to address the downlink shortfall but, due to the current state of the technology, the burdensome pointing requirements to operate it successfully and the lack of optical ground infrastructure, it will be years before the majority of remote sensing satellites can take advantage of the technology. Fortunately, small satellites dedicated to data relay are positioned to take advantage of laser downlink on a much faster time frame, and can leverage this advantage to drastically improve the downlink capabilities of satellites that rely on radio frequency (RF) downlink systems, including satellites that are already in orbit. As individual relay satellites are expanded to operational networks, the capabilities grow in kind, enabling near-real-time communication, intelligent routing of information and dynamic tasking of satellites using any format of downlink technology.

Authors: Al Husseini, Abdul Mohsen Zuheir A.; Helms, Tristan; Nevius, Dan; Oliveira, Justin
Organisations: Analytical Space, Inc., United States of America
10:20 - 10:35
POLIS - Polar Orbit thermaL Infrared Satellite (ID: 383)
Presenting: Papadeas, Pierros
(PDF )

POLIS (Polar Orbit thermaL Infrared Satellite) is an in-orbit demonstration (IOD) mission for a constellation of low-cost cubesats monitoring the Earth in LWIR (10μm and 12μm), supplying global, high resolution and near-real-time temperature distribution maps of urban areas. Within the scope of this proposal we detail the proposed IOD mission to validate and verify the technology stack, configuration, modes of operation, data flows and applications. The aspiration of the POLIS mission is to launch a completely open source (software, hardware, ground segment, data flow) satellite constellation using commercial off-the-shelf components for the science payload, validating in-orbit re-configurability of the science payload to meet a variety of science requirements (namely different LWIR measurements). For the mass, dimensions and envelope of the spacecraft, POLIS IOD follows the specifications for a 3U cubesat as published in the CubeSat Design Specification (Revision 13). The key feature of the structural design is that it is reconfigurable in order to meet the needs of different sun-synchronous orbits. A large array of solar panels (four deployable and one fixed, 85x285 mm each) covers the high energy requirements and provides shading and heat shielding to the payload. The payload itself is mounted in a deployable compartment that can be arranged in different configurations, achieving maximum Earth coverage, sun coverage and shading for a given orbit. The primary Science Unit is a Thermal Infrared (TIR) imaging component comprising four (4) imaging sensors capable of acquiring images in the 8-14 μm wavelength range, filters on the imaging sensors to acquire narrow-band (~0.5 μm bandwidth) images, lenses for the imaging sensors (35mm f/1) and a computing module to orchestrate the acquisition of the images, pre-process them (for deblurring and super-resolution enhancement where applicable) and store them. Raw pixel resolution at 450 km is ~220 m. POLIS will be inserted into an SSO orbit at 450 km altitude, allowing the capture of thermal imagery over a selected subset of major cities with a population of over 1 million. Being an IOD mission, and due to the selected orbit, the temporal resolution varies from 11 to 32 hours per city. Nonetheless, the acquired data are highly usable for global Earth temperature monitoring, and even the single-satellite deployment still provides valuable input for Urban Heat Island monitoring. Most importantly, this mission lays the foundations for a higher-temporal-resolution mission utilizing a constellation of 24 satellites, allowing an unprecedented 1-hour temporal resolution for equatorial cities and even higher temporal resolution at higher latitudes. The combination of high spatial and temporal resolution in TIR will allow immediate exploitation of the datasets for heat-health applications, civil protection and energy. The POLIS mission will use two distinct frequency bands: a UHF uplink and downlink frequency for basic telemetry beaconing and Tele-Command and Control, and an S-band high-rate Science Data Channel. For Command and Control (TC&C), POLIS will fully implement the ECSS-E-ST-70-41C Telemetry and Telecommand Packet Utilization Standard, building on the heritage of UPSat (also developed by the Libre Space Foundation), which implemented ECSS-E-ST-70-41C as an open source embedded library written in C. Thus the POLIS spacecraft will be compatible with existing popular TC&C ground segments implementing the same standards. On every orbit, the POLIS SU will be activated five times (re-configurable) using all 4 imaging sensors. The resulting data have a total size of ~14 MiB/orbit. The Science Data Channel is designed to transmit these data in a single pass to a ground station. The modulation scheme and coding rate of CCSDS 131.2-B-1 (QPSK, Turbo 1⁄2) are used. Assuming a frame error probability of 6% and including all pilots and framing overhead, the transfer time of the science data is about 2.9 minutes, well within ground station coverage margins. The POLIS ground segment will rely on the SatNOGS project, a global network of networked and distributed ground stations run as an open source project by the Libre Space Foundation. SatNOGS is able to supply TC&C (CCSDS compatible, with remote capabilities) and high-data-rate downlink services through its deployed and operational ground stations around the globe (currently 12 in the production network and 15+ in development). Science data downlink operations are automated, optimized and scheduled by the SatNOGS Network. Local SatNOGS clients (running on site at a ground station) acquire the data according to Network scheduling and post them in raw bitstream mode back to the Network. Conversion from raw bitstream to L0 happens as data is transferred automatically from the Network to the SatNOGS DB. Once the science data are safely stored in the DB, they become available as L0 data through an open API, ready to be consumed by L1+ applications within less than 5 minutes of the downlink operation. Given the locations of the deployed SatNOGS ground stations, POLIS will have at least one downlink opportunity per orbit. The design, development, integration and operation of POLIS will closely follow the Tailored ECSS Engineering Standards for In-Orbit Demonstration CubeSat Projects and the Product and Quality Assurance Requirements for In-Orbit Demonstration CubeSat Projects as published by ESA. POLIS will also be compliant with the ESA Space Debris Mitigation Compliance Verification Guidelines.
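The quoted ~2.9 minute transfer time can be sanity-checked with a back-of-the-envelope calculation; the net link rate and overhead factor below are assumed figures chosen for illustration, not mission specifications:

    payload_bits = 14 * 1024 ** 2 * 8     # ~14 MiB of science data per orbit
    overhead = 1.15                       # assumed pilots + framing overhead
    frame_error_rate = 0.06               # quoted frame error probability
    effective_bits = payload_bits * overhead / (1 - frame_error_rate)
    link_rate_bps = 850e3                 # assumed net coded S-band rate
    print(effective_bits / link_rate_bps / 60)   # ~2.8 minutes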

Authors: Papadeas, Pierros (1); Papamatthaiou, Matthaios (1); Daradimos, Ilias (1); Keramitsoglou, Iphigenia (2)
Organisations: 1: Libre Space Foundation, Greece; 2: National Observatory of Athens, Greece

New Space Economy (Part1)
11:00 - 12:30
Chair: Mónica Miguel-Lago - EARSC

11:00 - 11:20
Making a profitable Commercial Business with Earth Observation (via Webcast) (ID: 331)
Keynote: Johnson, Candace Maria
(video )

Until now, Earth Observation has been very much a non-profit, NGO and government oriented sector. Now that big business has understood that it must have knowledge of the Earth for its own activities, this sector is set to grow exponentially. Be it forestry, our oceans, pipelines, precision agriculture or urban heating, Earth Observation is poised to become the biggest market for satellite applications, surpassing telecommunications, Internet, etc. The need to know the status of our Earth and its near-space environment is paramount for our future as a civilisation. We need to identify those who are polluting our Earth and our near-space environment. Today, the tools to do this are in our hands. Let us all come together to create a safe Earth and near-space environment for humanity.

Authors: Johnson, Candace Maria
Organisations: Serial Satellite Entrepreneur and Angel Investor, Luxembourg
11:20 - 11:35
SAP HANA Spatial Services – Enabling Digital transformation with EO (ID: 366)
Presenting: Gildhoff, Hinnerk
(PDF )

Becoming a data-driven intelligent enterprise is essential to succeed in the age of digital disruption. Organizations must be able to gather intelligent insights from all sources of data: internal or external, structured or unstructured. Big data and advanced analytics enable us to gain new insights from this data in real time, transforming existing business models and processes. Combining the wealth of data provided by the European Space Agency's Copernicus Earth Observation Program with the power of SAP HANA Spatial Services, organizations can now gain insights on the environment that enable new business opportunities across industries. In this session you will hear from SAP's top experts how this data is helping reduce risks in the insurance industry, optimize supply chains, optimize utility network operations, enable precision farming and provide many other new services to citizens, consumers and customers. Learn about the tools used by both large enterprises and start-ups to create these new applications and services, and gain an understanding of the future evolution of spatial analysis and the opportunities it opens when combined with other disruptive technologies such as the internet of things, drones and computer vision.

Authors: Gildhoff, Hinnerk
Organisations: SAP SE
11:35 - 11:50
The New Space Economy Powered By Venture Capitalists: An Italian VC Fund For Space (ID: 278)
Presenting: Scatena, Lorenzo
(PDF )

Outer space is where globalisation is turned into common fundamental values: scientific, technological, humanistic, societal, diplomatic and, last but not least, economic. Space creates economic and social development and is one of the fastest growing sectors in the world. The space sector promotes technology transfer by its very nature: terrestrial technologies can be re-combined to improve the performance of space technologies, and non-space sectors can benefit from disruptive space technologies, facilitating the digitalization of traditional industries. However, an excellent technology transfer model needs to be backed by risk capital financing. 2017 was a record year for the space industry by number of investments, number of VC investors and number of new privately funded companies. In Italy too, the use of public finance, or private funds actually financed by public bodies, is starting to be joined by private equity funding. The virtuous circle constituted by the initiative of the E. Amaldi Foundation is based on a Space Venture Capital Fund which aims to channel private resources into a sector with high growth potential. The E. Amaldi Foundation, co-creator of innovative forms of financing in the field of technology transfer, promotes and collaborates in the implementation of the brand-new Italian Space Fund. The objective of the Fund is to maximize the economic returns of space start-ups and innovative SMEs operating in the aerospace sector, realising profits that have so far remained hidden. The Fund addresses SMEs and start-ups operating in the space and enabling-technology sectors through minority equity investments (seed capital and venture capital), and sets itself up as an accelerator of innovation and technology transfer. The Fund invests in both the downstream (software) and upstream (hardware) sectors, and in technologies using satellite data. This paper analyses the state of the art of venture capital investment in Italy and presents the Space Fund opportunities, with the objective of further boosting the Italian response to the New Space Economy.

Authors: Scatena, Lorenzo
Organisations: Fondazione E. Amaldi, Italy
11:50 - 12:05
The booster factors of the SpaceStream Paradigm (ID: 189)
Presenting: Abbattista, Cristoforo
(PDF )

The satellite mission scenario, from Earth Observation to science to planetary rovers, is changing enormously, both in the number of deployed assets and in the acquisition capability of the embarked sensors. Sensors are evolving towards ever-greater acquisition capabilities in terms of bands (hyperspectral in optical and full polarization in SAR), resolutions and duty cycles. The result is a considerably increased data volume to manage on board and to download to ground stations. Thus, we cannot continue to rely on the classical sharp separation between UpStream and DownStream; we need to rethink the complete satellite value chain in terms of the new SpaceStream Paradigm, whose highest value is the capability to enforce an "application oriented" approach to space system architectures, including uncertainty, flexibility and risk issues. To pursue this approach we need to innovate the operational concepts of a satellite mission, transforming the space-ground communication link into a full-duplex channel, together with inter-satellite communication, where requests for tasks, data, processing, fusion and information flow in the continuous SpaceStream to deliver timely value to end users. The key technologies supporting the SpaceStream involve the full stack of On Board Payload Data Processing (OBPDP), one of the core businesses of Planetek Italia, going beyond the simple tasks of compression, binning, checking and reduction. New technologies like Artificial Intelligence, blockchain, low-power GPUs, compressive sensing and computational imaging can give a considerable boost to this paradigm shift. At Planetek Italia we are experimenting with how AI techniques based, for example, on machine learning, DNNs and novelty detection can improve the capabilities of a space asset, not only satellites but also planetary rovers. By using such technologies the asset can accomplish a generic task autonomously, like detecting fires, floods, or any other anomalous or entirely unforeseen event, and react to it with consequent tasks. For example, if a fire is detected, the satellite could retask its activities to follow the fire and send an alarm to the ground, where the small amount of information can even be sent over different communication bands. Moreover, the satellite can ask other satellites to follow the fire after its passage, or can request other data/information to improve its knowledge, completely in line with the Virtual Sensor approach. This means the tasking telecommand needs to be improved to insert the coded algorithm into the payload of the TC, an approach we call "tasking 4 processing". That can be done both from the satellite and from the ground segment, where users are not interested in the science data but in understanding whether a phenomenon is occurring or not. In this exchange of information and data between different actors, even those not belonging to the same organization, we need to protect against data spoofing and to certify requests for services. We accomplish this with another innovative technology, the blockchain, from which we get not only the certification of each processing step, but also the capability to activate steps by means of "smart contracts", through which different machines can interoperate autonomously and certify it.

Authors: Abbattista, Cristoforo; Drimaco, Daniela; Amoruso, Leonardo
Organisations: Planetek Italia s.r.l., Italy
12:05 - 12:20
Capabilities and Challenges of New AI methods (ID: 325)
Presenting: De Vieilleville, François
(PDF )

New EO platforms, the huge data volumes made available by the Sentinel missions, and new technologies for data processing and analysis open new opportunities and challenges for EO added-value applications and ground segment design. New artificial intelligence (AI) algorithms and know-how exploiting huge data volumes provide a solid basis for solutions in data analysis and information extraction. These new methods can be quickly developed on top of open source tools and provide an alternative that can replace existing methods of image analysis and feature extraction, as well as ad-hoc methodologies based on a physical understanding of the data. These AI methods, considered as black boxes, represent a very promising alternative to existing methods of physical inversion and prediction, but also for classification or object detection. They are a revolution with respect to current research on image and data processing, since the quality of their results has been proven in different domains, replacing the usual methods of data analysis. The exploitation of Sentinel data by these new AI methods, such as deep learning, given their revisit time and coverage, jointly with the archives of past missions, should provide new useful analyses for scientists in the domains of climate change observation and natural process understanding. They are also good candidates for simplifying, and probably replacing, the existing algorithms of L1 data processing in future ground segments. In the domain of high resolution data, AI opens the possibility of a systematic analysis of available data, providing useful information for administration and general-public applications (traffic and infrastructure management, economic development in local regions, etc.). These algorithms face several challenges: the need for combined expertise in methodology (AI) and thematic applications; in terms of implementation, the need for substantial ground truth (training data) for model learning; and finally, the need to overcome the common reticence of the scientific community attached to the physical understanding of the data and the strong requirements on data quality (L1, L2 products). These aspects will require the support of the EO community, scientists and industry to share existing validated databases for learning. They will require references and expertise in the different application domains for evaluating the results of these alternatives for the future ground segments (data quality expertise), and a scientific agreement on their use in inversion problems. The technologies and computing capabilities of new platforms provide the ideal environment for these new methodologies and should be the context for collaborative scientific and technological work allowing the consolidation of these methods.

Authors: Ruiloba, Rosario; De Vieilleville, François
Organisations: AGENIUM Space

New Space Economy (Part2)
14:00 - 15:30
Chair: Philippe Lattes - Aerospace Valley

14:00 - 14:15
Transilvania Digital Innovation Hubs (ID: 315)
Presenting: Muntean, Bianca
(PDF )

ARIES Transilvania was established in 2004 in the North-Western region of Romania and is currently one of the strongest industry-driven IT associations in this region of Europe. IT is the engine industry of the Romanian economy, and clusters play an important role in creating synergies and supporting economic development across all sectors. We want to showcase the impact of our cluster activity in establishing the strongest DIH in Romania: the Digital Innovation Hub in Transylvania, which supports innovative projects and takes an Industry 4.0 approach. ARIES Transilvania follows the Quadruple Helix model and provides and supports open innovation within the ecosystem (academia, public authorities, research institutes, private companies). This comes as a natural development, given that for the past 5 years ARIES Transilvania has contributed heavily to the creation and support of the regional innovation ecosystem in Transylvania, Romania, gathering a wide range of digital competences through its members and collaborating closely with the other clusters of the Northern Transylvanian Clusters Consortium (energy efficiency, furniture, creative industries, agriculture, new materials), in which iTech Transilvania, the cluster developed by ARIES Transilvania in 2013, participates and holds the coordination lead this year. ARIES Transilvania focuses on building and supporting efficient technological development and on providing expertise and technology to industry, mainly to companies from different sectors of activity, clusters, academia and local public authorities, in order to increase the competitiveness of the regional economy. Given the dynamics of the IT industry, ARIES T, through its involvement in the Digital Innovation Hub, adds value by interlinking the IT companies of the region with companies active in other fields as well as non-IT companies. Currently, ARIES T is focusing on building the offer (on behalf of its members) for cross-sectorial cooperation with stakeholders on digital tools for the agricultural sector, robotics and artificial intelligence, creative industries and smart city development. The association actively represents the IT sector through its over 80 members, which together have over 10,000 employees and a turnover of more than 200 million Euro. Cluj-Napoca is the main IT business centre in the country after Bucharest. The number of IT companies in Cluj-Napoca increased by over 75% over a period of six years, well above the national trend, according to the study developed by the iTech Transilvania Cluster and ARIES Transilvania. Data from a 2016 report show that there are over 1,235 IT companies in Cluj, an increase of 705 compared to 2011; estimates suggest there will be over 2,000 IT companies in Cluj in 2020. The IT sector in Cluj generated annual revenue of over 583 million Euros (2016), which represented 14% of the revenue generated in IT at national level. Estimates by IT sector representatives say that Cluj will reach over 850 million Euros of IT revenue in 2020. IT is one of the main economic sectors for Cluj, which thrives on academic graduates and young people who have chosen to re-qualify in IT due to the high offer of jobs and projects. Furthermore, ARIES Transilvania represents a bridge at local, regional and national level and has strong links and direct connections with the actors of Transylvania's ecosystem, such as start-up communities, local public authorities, technology companies, tourism associations, universities and other clusters. By providing skills in the technology area, we maximize the potential of the innovation ecosystem existing in Cluj-Napoca, and we contribute to regional development through the cooperation of our member companies, public administration, universities and start-ups from our region, with strong citizen involvement in our most important activities.

Authors: Muntean, Bianca
Organisations: ARIES Transilvania, Romania
14:15 - 14:30
MapTiler and OpenMapTiles: Vector Maps for EO (ID: 329)
Presenting: Pridal, Petr
(PDF )

Learn how vector tiles and maps powered by OpenStreetMap and Copernicus satellite data are in many cases replacing the Google Maps API. This innovative project from a small European team has taken the world by storm. We use ESA data in many places, from beautiful satellite imagery base maps enriched with high-resolution aerial photos to vectorized land cover. Building an attractive visualization for your own geodata and showing the results of Earth observation analysis is now easier. Want to host custom data with us for your customers? Or make next-generation websites and mobile apps with dynamic zooming? Use our open-source technology and affordable hosting infrastructure! Our company story is also exciting: from historical scanned maps to large-scale cluster data processing, global server infrastructure, and space applications.

Authors: Pridal, Petr
Organisations: Klokan Technologies GmbH, Switzerland
14:30 - 14:45
How Spatial Data Science Is Changing The World of Data (ID: 293)
Presenting: de la Torre, Javier

Humans generate increasingly massive amounts of location data every day. Everything from how cities operate to the next phase of self-driving cars will be shaped by how we use this location data to solve problems and accelerate technology innovation. Open-source geospatial technology and the spatial analysis processes being developed and implemented by data scientists are paving the way for the next phase of understanding and accessibility. In this presentation we will explore some real-world implementations of open-source geospatial technologies for solving urban, environmental, and business problems. We will learn how data scientists have been catapulted to the forefront of spatial problem solving, and how new technologies are making advanced processes and analysis accessible to much broader audiences.

Authors: de la Torre, Javier
Organisations: CARTO, United States of America
14:45 - 15:00
AI in Space (ID: 352)
Presenting: Prasolov, Maxim
(PDF )

Authors: Prasolov, Maxim
Organisations: Neuromation
15:00 - 15:15
Open Innovation and AI research at ESA's Advanced Concepts Team (ID: 418)
Presenting: Summerer, Leopold
(PDF )

This talk will present recent research projects of ESA's internal research think-tank, the Advanced Concepts Team, in the field of artificial intelligence, covering topics including neurocontrollers, swarm intelligence, biomimetic actuation and sensing, vision, evolution and smart search. It will furthermore present the kelvins.esa.int open competition website and give an outlook on the Open Space Innovation Platform, a new common ESA portal for soliciting and maturing novel ideas.

Authors: Summerer, Leopold; Izzo, Dario
Organisations: ESA
15:15 - 15:30
Emerging Semantic Web Technologies for EO Value-chains; Enabling Downstream Service Providers using Linked-Open Data (LOD) (ID: 322)
Presenting: Mil, Albert
(PDF )

Data access is key for any downstream service developer. However, even if access is simple, deep knowledge of the underlying physics of EO data processing is often needed to actually exploit the data and add value. We provide software services which not only enable downstream developers to transform Copernicus and other gridded data sources into linked data, but also inter-link (spatial) knowledge from other sources and living textbooks. By extending existing data cataloguing functionality with Semantic Web standards, we provide a framework for explicitly describing the data models implicit in EO and other gridded data. This not only enhances the opportunities for downstream display and manipulation of data but, more importantly, provides a framework in which multiple metadata standards can co-exist and be inter-linked. This is a key step in creating interoperability and compliance with (meta)data standards. Finally, a semantic EO workflow is presented in which authors of analytical functions can make key aspects of their contributions explicit (e.g. the terms and conditions of the algorithms used, or sub-sections of key workflows). Although still at an early stage of development, we hope this lays the technological foundation for a world where thematic expertise and data are both acknowledged and rewarded through smart contracting, such that transactions are initiated automatically.
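
To illustrate the general idea of describing gridded EO data as linked data, here is a minimal sketch using the rdflib Python library; the vocabulary, dataset URIs and property names are illustrative assumptions, not the RAMANI software or its data model.

    # Minimal sketch (not the RAMANI software): expressing EO coverage metadata
    # as linked data with rdflib. Vocabulary terms and URIs are illustrative.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS, RDF, XSD

    EX = Namespace("http://example.org/eo/")  # hypothetical vocabulary
    g = Graph()
    g.bind("dcterms", DCTERMS)
    g.bind("ex", EX)

    coverage = URIRef("http://example.org/data/S2_NDVI_2018_06")
    g.add((coverage, RDF.type, EX.GriddedCoverage))
    g.add((coverage, DCTERMS.title, Literal("Sentinel-2 NDVI mosaic, June 2018")))
    g.add((coverage, DCTERMS.license, URIRef("https://creativecommons.org/licenses/by/4.0/")))
    g.add((coverage, EX.derivedFrom, URIRef("http://example.org/data/S2_L2A")))  # provenance link
    g.add((coverage, EX.spatialResolution, Literal(10, datatype=XSD.integer)))   # metres

    print(g.serialize(format="turtle"))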

Authors: Venus, Valentijn; Mil, Albert
Organisations: RAMANI B.V., Netherlands, The

New Space Economy (part3)
16:00 - 17:30
Chairs: Philippe Lattes - Aerospace Valley, Rory Donnelly - EARSC

16:00 - 16:15
Building the company without having an Exit in mind (ID: 275)
Presenting: Milcinski, Grega
(PDF )

Sentinel Hub was born as a project within an existing company with an established business model, building large-scale geospatial applications for governments in Europe and Africa. We therefore did not face some of the existential problems that other startups have. One important issue was common, though: how to find the resources to build a new idea. Perhaps this was even more challenging for us, as we had to pull resources from our own money-making and perpetually "just before deadline" critical projects and assign them to a cool new idea for which it was uncertain whether it would ever generate income. We also had to change methodology from a "services" business to a "product" one, invent a business model, find out whether there actually is a market, learn about marketing and, last but not least, actually build something. There was, however, one major difference from many of today's startups: at practically no point on our path have we seriously considered an "exit strategy". We simply wanted to make our cool new idea come true, and once it was somehow working, we wanted it to actually be used. This made our journey perhaps a bit less fun and the product probably a bit better. This will be the story of our early steps three years ago up to now, with some outlook on the future.

Authors: Milcinski, Grega
Organisations: Sinergise, Slovenia
16:15 - 16:30
Unlocking The Power Of Deep Tech Globally (ID: 343)
Presenting: de la Tour, Arnaud

Cutting-edge technologies are no longer just software-based, and despite the vast amount of groundbreaking deep-tech research happening across the globe, the resources needed to support these deep-tech innovations and bring them to market are severely limited, particularly outside the United States. At Hello Tomorrow we are on a mission to leverage the power of deep technologies to tackle the world's toughest challenges. We do this by fostering a global ecosystem in which deep technologies can thrive, accelerating projects through events and competitions, and pursuing research on technological trends.

Authors: Pedroza, Sarah; de la Tour, Arnaud
Organisations: Hello Tomorrow, France
16:30 - 16:45
Ticinum Aerospace Builds on Satellite and Ground Data to Slash Risk Uncertainty (ID: 137)
Presenting: Dell'Acqua, Fabio
(PDF )

In the “Big Data” age, the enormous amount of data created every day, including “space data”, is a resource for several applications. The core business of Ticinum Aerospace (TA) [1], a spin-off company of the University of Pavia, Italy, is to provide risk-related services based on the analysis of remotely sensed data by means of innovative algorithms. The company was founded in 2014 as an “innovative startup” under Italian law, with a plan to tackle uncertainty in risk assessment and agriculture by developing two innovative services: CountFloors and Saturnalia. CountFloors addresses the exposure factor of risk computation in urban areas on a large scale. Risk modelers can narrow down uncertainty in exposure estimation when buildings are described by several risk-relevant features, such as number of floors, occupancy type, age, shape regularity, materials, etc. Whereas satellite images are used for thematically rough classification of human settlements, street-level pictures are largely under-exploited, even though they allow zooming deeply into building features. Several companies are collecting these in-situ data globally, and their amount increases practically every day. TA has developed a breakthrough framework combining satellite and street-level pictures to map exposure proxies [2]. The implemented system automatically retrieves satellite and in-situ pictures, extracts risk-relevant features, and then attaches feature ‘tags’ (such as number of floors or occupancy type) to each polygon in a GIS layer. The European Space Agency is in the process of awarding an ESA Kick-Start Activity project to assess the commercial viability of this service under the “Space for Municipalities” call [3]. Saturnalia [4], instead, provides early knowledge about the quality of fine wine traded from an investment perspective. Saturnalia offers quantitative and objective prediction of wine quality before bottling, based on satellite and meteorological data acquired over vineyards. Quality proxies are issued before production, based on the history of all the parameters collected during the current and past growing seasons. The user can access wine quality estimates preceding the release of reviews. These insights support better-informed decisions and offer the chance to save money by reducing the uncertainty in discounted advance purchases. Saturnalia includes a tool for automatic retrieval and analysis of satellite and meteorological data over the areas of interest, which are then used by the prediction block. Saturnalia was sparked by a winning idea at the “Space App Camp” contest organised by the European Space Agency (ESA) in Frascati, Italy, in September 2016. The idea won the "Big Data, Big Business" challenge by CGI at the 2017 edition of the Copernicus Masters initiative, and is currently being furthered under the ESA Kick-Start Activity funding scheme, “Food Security” call (contract No. 4000122245/17/NL/NR). REFERENCES [1] Ticinum Aerospace web site: http://www.ticinumaerospace.com/ [2] Iannelli, G. C., & Dell’Acqua, F. (2017). Extensive Exposure Mapping in Urban Areas through Deep Analysis of Street-Level Pictures for Floor Count Determination. Urban Science, 1(2), 16. [3] ESA AO8872, publicly available description of the call at https://business.esa.int/funding/invitation-to-tender/space-for-municipalities [4] Saturnalia project web site: http://saturnalia.tech/

Authors: Iannelli, Gianni Cristian (1); De Vecchi, Daniele (1); Dell'Acqua, Fabio (1,2); Lisini, Gianni (1); Galeazzo, Daniel Aurelio (1)
Organisations: 1: Ticinum Aerospace s.r.l., Italy; 2: Università di Pavia, Italy
16:45 - 17:00
FabSpace 2.0: Stimulating Geodata-Driven Innovation (ID: 202)
Presenting: Del Frate, Fabio
(PDF )

The geodata world is going through an important stage at the European level, with substantial efforts allocated to making data accessible to all. The FabSpace 2.0 project has made this its mission in a project funded by the European Commission under the Horizon 2020 programme. The aim is to get civil society (students, researchers, technicians, organisations, public authorities) involved by bringing the culture of satellite data to sectors normally focused on other issues. After more than two years, the project has already achieved several results: courses, events, challenges and free training have been organised in each of the 6 participating countries (France, Belgium, Germany, Greece, Italy, Poland). Moreover, one-stop shops for information have been opened which, in some cases, received almost 300 visitors per year. Starting from an initial 6 FabSpaces, the network expanded during the project with 14 new FabSpaces located in various countries worldwide. Both the initial and the new partner universities are centres of excellence in research in the field of geomatics and space-based information. They not only offer highly qualified human capital likely to generate innovation, but also provide open access to data generated in previous research work. The FabSpace 2.0 project can thus be a particularly relevant opportunity for research teams to make a step forward towards Science 2.0. The founding FabSpaces and the new FabSpaces are part of the international FabSpace 2.0 network that will be launched at the beginning of 2019; its legal status will be defined according to the results of the report on European and non-European initiatives with which FabSpace 2.0 can create synergies. Given the particular added value of Earth Observation data and Satellite Navigation services in countries with less ground infrastructure (i.e. developing countries), specific attention will also be given to developing countries, where business markets are expected to grow substantially. In the end, the network will be extended worldwide. In the context of the project, use cases have been produced in various fields of application, which will also be presented.

Authors: Del Frate, Fabio (1); Mothe, Josiane (2); Barbier, Christian (3); Becker, Matthias (4); Olszewski, Robert (5); Soudris, Dimitrios (6)
Organisations: 1: University of Rome "Tor Vergata" - Italy; 2: University of Toulouse, IRIT-CNRS UMR5505, Toulouse – France; 3: University of Liège - Belgium; 4: Darmstadt Technical University – Germany; 5: Warsaw University of Technology – Poland; 6: Institute of Communications and Computer Systems – Greece
17:00 - 17:15
SpaceWave: internationalisation of European SMEs for the Blue Growth market (ID: 310)
Presenting: Coliolo, Fiorella
(PDF )

The Aerospace Technological District (DTA) of the Apulia Region (Italy), together with three other clusters, Aerospace Valley, Pôle Mer Méditerranée and Marine South East Limited, is bringing together their respective expertise in Earth Observation and Blue Growth to build the basis of a European Strategic Cluster Partnership called SpaceWave. SpaceWave, co-funded by the COSME programme of the European Union, aims to support European SMEs with a specific internationalisation plan to accelerate both the global deployment of EO technologies in Blue Growth and the economic growth of European SMEs. In this presentation we will show the first results of the project and interactively discuss challenges and opportunities for extending SpaceWave to additional clusters across the EU to benefit a larger group of SMEs.

Authors: Coliolo, Fiorella (1); Zilli, Antonio (1); Iasillo, Daniela (2)
Organisations: 1: Apulian Aerospace Cluster; 2: Planetek Italia s.r.l., Italy
17:15 - 17:30
Future Cities: Using Data & Innovation to Drive Systemic Change in Urban Spaces (ID: 407)
Presenting: Sadiku, Lejla
(PDF )

UNDP's innovation work shows that the most interesting and novel insights and experiences in policy work arise at the intersection of industries, sectors and perspectives. Similarly, in the emerging work on cities, we are seeing a set of experiments that sit at the intersection of new approaches and technologies and are able to unlock new value that disrupts existing sectoral and other silos. These could either reinforce existing inequalities and growth patterns or be leveraged for more inclusive and participatory development. Since cities are lead markets, due to their rapid growth and economic prowess, UNDP's innovation efforts in the City Experimentation Fund focus on how horizon technologies and data innovation can provide real-time and comprehensive sense-making for life in cities and drive a new range of solutions that are intersectional by design. In this process, EO is integral to understanding the city and to generating evidence-based decisions. At Φ-week, we will launch our City Experimentation Fund and outline the engagement strategy, as well as the broad areas in which we will be experimenting, including art, science & technology, new food/urban agriculture, and new infrastructures.

Authors: Sadiku, Lejla; Pawelke, Andreas; Vasilescu, Dumitru
Organisations: UNDP, Regional Bureau for Europe and Central Asia

EO Open Science

Open Data, Tools and Virtual Labs
09:00 - 10:45
Chairs: Miguel D Mahecha - Max Planck Institute for Biogeochemistry, Espen Volden - ESA- ESRIN

09:00 - 09:15
SNAP as collaborative research and exchange platform (ID: 140)
Presenting: Peters, Marco
(PDF )

The SentiNel Application Platform, SNAP, has established itself as the first-choice tool for new as well as experienced users who want to work with ESA's Sentinel, ENVISAT and Earth Explorer data, and to combine them with other Earth Observation data. SNAP has more than 20,000 user installations and a very active forum with more than 3,800 registered users. Today the most common usage of SNAP is via its desktop application. However, SNAP is more than that, including powerful server-side processing, batch-mode and scripting capabilities, and in combination with the STEP website it offers a suite of collaboration tools enabling knowledge exchange and the sharing of results. The support of the Sentinel-3 validation activities is an excellent example of how SNAP validation tools, developed by the community and aiming at ground segment improvement, foster the improvement of Copernicus products. Over the next two years, the SNAP roadmap foresees further evolution of such community functions. “Sharing of resources” is a paradigm for the development, and this addresses not only resources in the technical sense of distributed computing and cloud exploitation, but also human resources, through means to showcase technologies and applications and to share ideas. The overarching idea is to bring data applications to life, for the benefit of environment and society. In this presentation we will demonstrate, with end-to-end examples, how SNAP can be used optimally today, in a typical distributed network of researchers, to develop ideas and share data, code and results. We will show how SNAP integrates with the ESA TEP and the Proba-V MEP. Also, the Copernicus DIAS will offer SNAP as a standard tool for the development of front-office services. We will present the roadmap for the next two years as a starting point for discussion and guidance for refining the foreseen evolution.
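
The batch-mode and scripting capabilities mentioned above can be illustrated with SNAP's Python bindings ("snappy"). A minimal sketch follows; the input file name and subset polygon are hypothetical, and assume snappy is configured for the local SNAP installation.

    # Minimal sketch of SNAP scripting via its Python bindings ("snappy");
    # the input file name and subset region are hypothetical.
    from snappy import GPF, HashMap, ProductIO

    # Read a Sentinel product (any format SNAP understands)
    product = ProductIO.readProduct("S2A_MSIL2A_20180612_example.zip")

    # Run a SNAP operator in batch mode: spatial subset via the GPF framework
    params = HashMap()
    params.put("geoRegion",
               "POLYGON((12.5 41.7, 12.8 41.7, 12.8 41.9, 12.5 41.9, 12.5 41.7))")
    subset = GPF.createProduct("Subset", params, product)

    # Write the result; the same pattern scales to server-side batch processing
    ProductIO.writeProduct(subset, "subset_output", "BEAM-DIMAP")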

Authors: Peters, Marco (1); Fomferra, Norman (1); Barrilero, Omar (2); Cara, Cosmin (3); Veci, Luis (4); Engdahl, Marcus (5)
Organisations: 1: Brockmann Consult, Germany; 2: C-S SI; 3: C-S Romania; 4: Array/Skywatch; 5: ESA
09:15 - 09:30
SARbian – The free and open SAR operating system (ID: 113)
Presenting: Eckardt, Robert
(PDF )

Processing Synthetic Aperture Radar (SAR) data is a complex task which requires a deep understanding of the interaction of microwaves with the Earth's surface, as well as a basic understanding of radar satellite technology. In addition, knowledge about the availability and use of SAR processing software is necessary to succeed in the interpretation of SAR data. In the framework of the SAR-EDU project, we developed the SARbian operating system (OS) to make SAR software more accessible to the end user by removing the obstacles of the installation process. SARbian assembles several free and open SAR software tools into a Debian-Linux-based operating system which can be used either as a live OS or as a virtual machine within any other OS. One main application field of SARbian will be the EO capacity-building sector. SARbian can be deployed as a standalone plug-and-play OS on, e.g., flash drives or DVDs, making SAR software accessible in regions without fast internet connections and to users without prior skills in building and installing Linux software. URL: https://eo-college.org/sarbian/

Authors: Eckardt, Robert; Cremer, Felix; Glaser, Felix; Schmullius, Christiane
Organisations: Friedrich-Schiller Universität Jena, Germany
09:30 - 09:45
EO data processing in QGIS with a python API (ID: 171)
Presenting: Rabe, Andreas
(PDF )

Free and open-source software (FOSS) is becoming more and more important for the processing of Earth observation (EO) data. The reasons for this development are manifold, yet the availability of programming languages such as R, Python or Julia plays a major role in this context. These languages are easy to learn and are used by many scientists who in the past may not have programmed for their analyses. At the same time, the languages include very powerful and problem-oriented packages, enabling implementations even by relatively inexperienced programmers. In the general context of FOSS for analysing geodata, the release of QGIS version 3 marks a milestone. Its suite of available plug-ins and its interfaces to Python make it a very powerful environment for EO data analysis. Nevertheless, the built-in Python integration of frameworks like GDAL, Qt, and especially NumPy and SciPy does not yet fully cover the analysis of EO image data. For example, the application of machine learning approaches like Random Forests or Support Vector Machines for regression as an easy-to-use QGIS processing algorithm is still missing. We present packages for integrating the functionality of scikit-learn, the most advanced package for machine learning in Python, into the QGIS processing framework. These can be used by programmers to call a variety of machine learning methods on geodata and to develop their own QGIS processing algorithms with user-friendly, customized widgets to collect the parameters required to run their algorithms. The full potential of this integration framework is illustrated with the example of the EnMAP-Box 3.0. The EnMAP-Box is designed to process imaging spectroscopy data and is particularly developed to handle data from the upcoming EnMAP (Environmental Mapping and Analysis Program) sensor. It serves as a platform for sharing and distributing algorithms and methods among scientists and potential end-users. The two main goals of the development are to provide (i) state-of-the-art applications for the processing of high-dimensional spectral and temporal remote sensing data and (ii) a graphical user interface (GUI) that enhances the GIS-oriented visualization capabilities of QGIS with applications for the visualization and exploration of multi-band remote sensing data.
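
As an illustration of the core pattern behind such an integration (a simplified stand-in, not the EnMAP-Box code itself), the following sketch fits a scikit-learn Random Forest regressor on image pixels; array shapes and data are placeholders, where in QGIS the rasters would come from the processing framework.

    # Simplified stand-in for the described scikit-learn integration: fit a
    # Random Forest regressor on image pixels. Data are random placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    bands, height, width = 10, 200, 200
    image = np.random.rand(bands, height, width)   # placeholder EO image cube
    reference = np.random.rand(height, width)      # placeholder target variable

    # Reshape (bands, y, x) -> (pixels, bands) as scikit-learn expects
    X = image.reshape(bands, -1).T
    y = reference.ravel()

    model = RandomForestRegressor(n_estimators=100).fit(X, y)

    # Predict back onto the image grid
    prediction = model.predict(X).reshape(height, width)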

Authors: Rabe, Andreas; Jakimow, Benjamin; Thiel, Fabian; van der Linden, Sebastian
Organisations: Humboldt-Universität zu Berlin, Germany
09:45 - 10:00
Enabling research, applications, and collaboration on global high-resolution mosaics from Sentinel-2 via the Copernicus S2GM Service (ID: 175)
Presenting: Brandt, Gunnar
(PDF )

The Copernicus programme has substantially increased the quality and availability of Earth Observation (EO) data for a diverse user base and a variety of applications. One of the latest additions to the programme's services, the Sentinel-2 Global Mosaic (S2GM), part of the Global Land Component of the Copernicus Land Service, explicitly addresses a growing need for simple access to Sentinel-2 surface reflectance (L2A) products and their convenient handling. To this end, the S2GM service provides on-the-fly generation of mosaics from Sentinel-2 surface reflectance spectra as Analysis Ready Data (ARD), which substantially lowers the barrier to their use also by non-experts, because the mosaics are calibrated, well-described products in common, easy-to-handle formats. S2GM mosaics cover time spans from one day to a full year and comprise the best representative spectra for the considered periods. The service offers various options to tailor mosaics in terms of format, spatial extent, and spatial and temporal resolution through an interactive web frontend powered by highly scalable cloud resources, which ensure a good user experience and fast access to products regardless of their temporal and spatial extent. S2GM users thus save substantial effort on their side: the service provides resources for data access, storage, pre-processing and quality control, and users benefit from well-documented and scientifically validated mosaic processing. Users can hence focus on the analysis and processing they are interested in. Users with high processing and data coverage needs may even forego operating their own infrastructure and instead implement further processing of the mosaic products using cloud resources co-located with the S2GM data, albeit at additional cost. We present here the overall system approach (frontend and backend technology), the interactive user functionalities, and the scientific content of the products generated from Sentinel-2. Two algorithms are used for the generation of mosaics, Medoid and Short-term Composite, and we illustrate their characteristics and performance. In addition, mosaics from different geographical locations and temporal periods are shown, and their suitability for typical applications in environmental monitoring is discussed. We also explain how to become an S2GM user and illustrate typical use cases for the S2GM service, including the extraction of time series at arbitrary locations, subscription to user-defined mosaics in freely defined areas of interest, and access to mosaics in areas of high relevance for specific applications like forest monitoring in the UNFCCC REDD+ regions.
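
For readers unfamiliar with medoid compositing, one of the two algorithms named above, here is a simplified per-pixel sketch in NumPy. It assumes random placeholder data, no cloud masking and plain Euclidean distance in reflectance space, unlike the operational S2GM processor.

    # Sketch of per-pixel medoid compositing: for each pixel, pick the
    # observation minimising the summed distance to all other observations.
    import numpy as np

    def medoid_composite(stack):
        """stack: (time, bands, y, x) -> (bands, y, x) medoid composite."""
        t, b, h, w = stack.shape
        obs = stack.reshape(t, b, -1)                   # (time, bands, pixels)
        # Pairwise distances between observations at each pixel
        diff = obs[:, None, :, :] - obs[None, :, :, :]  # (t, t, bands, pixels)
        dist = np.sqrt((diff ** 2).sum(axis=2))         # (t, t, pixels)
        # Medoid index per pixel
        best = dist.sum(axis=1).argmin(axis=0)          # (pixels,)
        composite = obs[best, :, np.arange(h * w)]      # (pixels, bands)
        return composite.T.reshape(b, h, w)

    composite = medoid_composite(np.random.rand(8, 4, 64, 64))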

Authors: Brandt, Gunnar (1); Kirches, Grit (1); Peters, Marco (1); Brockmann, Carsten (1); Milčinski, Grega (2); Kolarič, Primož (2); Riffler, Michael (3)
Organisations: 1: Brockmann Consult GmbH, Germany; 2: Sinergise Laboratory for Geographical Information Systems, Ltd.; 3: GeoVille Information Systems GmbH
10:00 - 10:15
Products of the Soil Composite Mapping Processor (SCMaP) – A novel approach for mapping soil development (ID: 148)
Presenting: Heiden, Uta
(PDF )

To achieve sustainable food security, the health and productivity of soils are the ultimate requirements. At DLR, a fully automated approach has been designed and implemented that allows large-scale topsoil analysis. The so-called Soil Composite Mapping Processor (SCMaP) generates several products, such as cloud-free multispectral soil reflectance composites, the distribution of exposed soils, the duration of soil exposure, soil use intensity and others. Each of the products is available in 5-year time steps, allowing the analysis of long-term soil developments. The underlying database consists of multispectral Landsat imagery from 1984 to 2014. Technically, the processor is applicable to areas with agricultural activity; several regions have been processed so far, such as Germany, Alberta (Canada) and Spain. The objective of this talk is (1) to provide a short overview of the status of the technology, (2) to show the SCMaP products and derived products and their potential for further soil analyses, and (3) to discuss future developments.
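
To convey the compositing idea (a toy sketch, not DLR's actual processor), the following NumPy fragment keeps only observations that look like exposed soil and averages their reflectance per pixel; a simple NDVI threshold stands in for SCMaP's real soil index, and all data are random placeholders.

    # Illustrative sketch of soil-reflectance compositing (not SCMaP itself).
    import numpy as np

    stack_red = np.random.rand(30, 128, 128)   # (time, y, x) red reflectance
    stack_nir = np.random.rand(30, 128, 128)   # (time, y, x) NIR reflectance

    ndvi = (stack_nir - stack_red) / (stack_nir + stack_red + 1e-9)
    bare = ndvi < 0.25                         # assumed bare-soil criterion

    # Per-pixel mean reflectance over bare-soil observations only
    soil_red = np.where(bare, stack_red, np.nan)
    composite = np.nanmean(soil_red, axis=0)   # soil reflectance composite
    exposure = bare.sum(axis=0)                # how often soil was exposed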

Authors: Heiden, Uta (1); Zepp, Simone (1); Pinnel, Nicole (1); Jilge, Marianne (1); Zeidler, Julian (1); Rogge, Derek (2)
Organisations: 1: DLR Oberpfaffenhofen, Germany; 2: Hyperspectral Imaging

Data Cube
11:00 - 12:30
Chairs: Carsten Brockmann - Brockmann Consult, Philippe Mougnaud - ESA

11:00 - 11:20
Scalable Spatio-Temporal Analysis through Open Standards: The European Datacube Engine (ID: 102)
Keynote: Baumann, Peter
(PDF )

Datacubes form an enabling paradigm for serving massive spatio-temporal Earth data in an analysis-ready way by combining individual files into single, homogenized objects for easy access, extraction, analysis, and fusion: "one cube says more than a million images". In common terms, the goal is to allow users to "ask any question, any time, on any size", thereby enabling them to "build their own product on the go". Today, large-scale datacubes are becoming reality: for server-side evaluation of datacube requests, a bundle of enabling techniques is known which can massively speed up response times, including adaptive partitioning, parallel and distributed processing, dynamic orchestration of heterogeneous hardware, and even federations of data centers. Known datacube services exceed 600 TB, and datacube analytics queries have been split across 1,000+ cloud nodes. Intercontinental datacube fusion has been accomplished between ECMWF/UK and NCI Australia, as well as between ESA and NASA. From a standards perspective, datacubes belong to the family of coverages, as per ISO and OGC; the coverage data model is represented by the OGC Coverage Implementation Schema (CIS), the service model by the OGC Web Coverage Service (WCS) together with the OGC Web Coverage Processing Service (WCPS), OGC's geo datacube query language. Additionally, ISO is finalizing application-independent query support for massive multi-dimensional arrays in SQL. In our talk we present the concept of queryable datacubes, the standards that play a role, and the interoperability successes and open issues, based on our work on the European Datacube engine, rasdaman, which is powering today's largest operational datacube services.
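
To give a flavour of the WCPS query language referred to above, the sketch below sends a datacube query over plain HTTP; the endpoint URL, coverage name and axis labels are illustrative assumptions, not taken from the abstract.

    # Sending a hypothetical WCPS datacube query over HTTP with requests.
    import requests

    endpoint = "https://ows.example.org/rasdaman/ows"   # hypothetical server
    query = """
    for $c in (S2_NDVI_Cube)
    return encode(
      $c[ansi("2018-06-01":"2018-06-30"), Lat(41.5:42.0), Long(12.0:13.0)],
      "image/tiff")
    """

    response = requests.get(endpoint, params={
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",
        "query": query,
    })
    open("ndvi_june_2018.tif", "wb").write(response.content)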

Authors: Baumann, Peter
Organisations: Jacobs University | rasdaman GmbH, Germany
11:20 - 11:35
The Earth System Data Lab: A light-weight data cube approach (ID: 115)
Presenting: Mahecha, Miguel D
(PDF )

Scientists today are confronted with a plethora of Earth observations (EO) that are available through a multitude of data platforms, monitoring initiatives, and model-data integration approaches. To tap the full potential of jointly exploiting these datasets, the scientist has to deal with different data formats, retrieval methods, data usage policies, inconsistent resolutions and formatting, as well as getting access to suitable computation and data storage facilities. This can be time-consuming and impede scientific progress. Our Earth System Data Lab is a novel infrastructure for scientists. It not only provides a set of harmonized data cubes, but also a strong data analytics toolkit, developed in tandem with access to computational and visualization facilities, to jointly and interactively analyze the datasets. The data cube consists of a selection of diverse Earth observation datasets, including multiple atmospheric variables, land-atmosphere fluxes and biophysical parameters, and is currently being extended to ocean variables. These datasets are analysis-ready, harmonized on different temporal and spatial resolutions to allow rapid exploitation. The core data analytics toolkit allows users proficient in a language of scientific computing like Python or Julia to apply their own analysis methods on the multivariate data cube. Implemented examples include nonlinear time series analysis tools, dimensionality reduction approaches, and anomaly detectors, among others. To showcase the potential of the approach we will present scientific case studies demonstrating the full potential of the infrastructure.
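
As an indication of the kind of user-side analysis such a toolkit enables (a generic sketch with xarray, not the ESDL API itself; the Zarr store and variable name are hypothetical), consider a simple per-month anomaly detector:

    # Sketch of user analysis on a harmonized data cube with xarray.
    import xarray as xr

    cube = xr.open_zarr("esdl_cube.zarr")          # hypothetical cube store
    lst = cube["land_surface_temperature"]         # hypothetical variable

    # Deseasonalize: subtract the mean seasonal cycle per calendar month
    climatology = lst.groupby("time.month").mean("time")
    anomaly = lst.groupby("time.month") - climatology

    # Simple anomaly detector: flag values more than 3 sigma from normal
    flagged = abs(anomaly) > 3 * anomaly.std("time")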

Authors: Mahecha, Miguel D (1); Gans, Fabian (1); Brandt, Gunnar (2); Fomferra, Norman (2); Permana, Hans (2); Brockmann, Carsten (2); Reichstein, Markus (1)
Organisations: 1: Max Planck Institute for Biogeochemistry, Germany; 2: Brockmann Consult GmbH
11:35 - 11:50
CUBEO: a scalable pre-processing and Data Cube platform for Geoinformation application services (ID: 170)
Presenting: Corsi, Marco
(PDF )

The Big Data revolution in satellite Earth Observation started when large EO data archives were released publicly. However, at that time the full potential to turn data into information was heavily limited by the unavailability of sufficient ICT resources to handle full mission archives for data analytics, multi-temporal analysis, etc. The Big EO Data revolution continued with the progressive enlargement of commercial and free-and-open satellite data archives with improving spatial/spectral resolution and revisit capabilities. The latest step (chronologically) in this sense is the major contribution of the Copernicus programme through the Sentinels. In parallel, the value of satellite images has been slowly shifting from the traditional use (single-image analysis to derive cartographic layers) towards innovative data exploitation strategies based on Big Data paradigms such as large-scale multi-temporal analysis, rapid detection of objects with frequent revisits, and fusion of EO and non-EO data. The new Geoinformation paradigm is now based on the intimate combination of satellite time series, scalable ICT (e.g. cloud) and EO data processing and analysis capabilities to deliver actionable information (e.g. analytics, reports, maps). In this context, modern Geoinformation service providers cannot avoid adopting novel technologies such as Data Cubes deployed in highly scalable cloud environments to build effective service chains in line with vertical market expectations, which are more and more demanding in terms of spatio-temporal coverage, delivery time and accuracy. CUBEO is e-GEOS's scalable pre-processing and Data Cube platform for Geoinformation application services, developed in cooperation with MEEO and exploiting MEEO's well-recognized technology and experience in EO data access platforms. CUBEO allows its users to build on-demand Data Cubes from multi-source optical (e.g. Sentinel-2, Landsat-8, MODIS) and SAR (e.g. Sentinel-1, COSMO-SkyMed) data over any Area of Interest and any Time of Interest, worldwide, in just a few steps. The main concept behind CUBEO is to provide expert and non-expert users with professional EO Data Preparation Pipelines (based on free software such as SNAP as well as on e-GEOS proprietary solutions) capable of pre-processing raw data (i.e. Level 1 optical/SAR data) to generate Analysis Ready Data (i.e. Level 2 / Level 3 products) accessible through Data Cubes (i.e. extended WCS interfaces) to enable further data analysis operations (e.g. classification, anomaly detection, extraction of descriptive/predictive analytics). CUBEO has been developed using a cloud-oriented approach (containers, process orchestration, autoscaling) to maximize the horizontal and vertical scalability of the Data Preparation Pipelines and of the WCS data access endpoints, so as to cope with on-demand Data Cube requests varying largely in size. CUBEO is currently deployed on AWS, with minor migration effort required to deploy it in a different cloud environment (e.g. Copernicus DIAS). CUBEO has already supported large-scale geoinformation analysis tasks such as, for example, the analysis of MODIS 2000-2018 time series at continental scale, as well as the preparation and analysis of combined Sentinel-1/Sentinel-2/Landsat-8 yearly time series over more than 2 million km2 for agricultural monitoring purposes.
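
As an illustration of consuming such an extended WCS interface (a generic sketch, not CUBEO's actual endpoint; the service URL and coverage identifier are assumptions), one might request an Area/Time-of-Interest subset with OWSLib:

    # Pulling an AOI/TOI subset from a WCS 2.0 datacube endpoint with OWSLib.
    from owslib.wcs import WebCoverageService

    wcs = WebCoverageService("https://cubeo.example.org/wcs", version="2.0.1")

    response = wcs.getCoverage(
        identifier=["S1_backscatter_ARD"],          # hypothetical coverage
        format="image/tiff",
        subsets=[("Lat", 41.0, 42.0),
                 ("Long", 12.0, 13.0),
                 ("ansi", "2018-01-01", "2018-12-31")],
    )
    open("aoi_toi_subset.tif", "wb").write(response.read())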

Authors: Corsi, Marco (1); Grandoni, Domenico (1); Biscardi, Mariano Alfonso (1); Volpe, Fabio (1); Pistillo, Pasquale (1); Mantovani, Simone (2); Cavicchi, Mario (2); Ferraresi, Sergio (2); Barboni, Damiano (2)
Organisations: 1: e-GEOS, Italy; 2: MEEO, Italy
11:50 - 12:05
Datacube Services on a Satellite: the ORBiDANSe Project (ID: 103)
Presenting: Baumann, Peter
(PDF )

Project ORBiDANSe (Orbital Big Data Analytics Service) is driving the "ship code to data" paradigm to the extreme: it turns a Cubesat into an online Web data service for real-time EO acquisition, processing, and retrieval, based on the ISO SQL/MDA (Multi-Dimensional Arrays) standard under adoption. Images are acquired by the on-board camera and geo-referenced via GPS. Access is provided via a high-level array query language allowing ad-hoc processing and filtering on spatio-temporal datacubes, similar to what standard SQL accomplishes on tuple sets. On board, such queries are evaluated by the rasdaman Big Array Data Analytics engine. Among others, it supports spatio-temporal queries, hence is truly multi-dimensional. The configuration can be updated and reconfigured in-flight, although emphasis will be put on automatic optimization, including acquisition planning based on incoming queries. In a direct scenario, targeted subsetting/processing of imagery can be downlinked directly to the requesting client, effectively turning the satellite into an image database. In a federated scenario, a client may submit a complex decision support query to a data center; the rasdaman instance there finds out that data are missing, spawns a sub-request to the Cubesat, and merges its locally computed results with the Cubesat response into the final result sent back to the user. As rasdaman is already cloud-parallelized, queries can be distributed automatically between ground and space instances. The overall goal of the project, which is conducted jointly by Jacobs University and rasdaman GmbH, is to achieve a quantum leap in EO service quality, data availability, and service integration. This project is partially funded by the German BMBF (Ministry of Education and Research).

Authors: Baumann, Peter
Organisations: Jacobs University | rasdaman GmbH, Germany

New Education
14:00 - 15:30
Chairs: Parya Pasha Zadeh - ITC, University of Twente, Robert Eckardt - Friedrich-Schiller Universität Jena

14:00 - 14:20
Challenging Education, Challenging the Education! (ID: 330)
Keynote: Pasha Zadeh, Parya
(PDF )

We are experiencing an era in which science and technology are growing faster than ever before, and data volumes are exploding! This is no different for Earth Observation (EO) science and technology, where data are being captured more frequently, more accurately, and in more detail than ever. More importantly, spatial data are increasingly becoming publicly available for little or no cost. Worldwide, projects and programmes are being funded to mainstream the use and uptake of EO in various fields, from food and water security to sustainable urban development. But are we moving fast enough with education to cope with this pace? With EO data and technology becoming widespread, the need for education is increasing, and providing efficient instruction in a diverse manner becomes fundamental. We are no longer educating only graduate and undergraduate students on our campus. The profile of those who seek education in these domains is also evolving rapidly. There is a growing number of students who come directly from industry, as the need for EO data and products increases across an ever wider spectrum of economic activities. Among the different student profiles, one can think of a data scientist who lacks field expertise in the EO domain, a high-level manager at an international funding agency who is curious about the potential of this technology, or an EO professional working on project implementation in the field who needs to fine-tune and refresh his/her existing knowledge. Regardless of status or knowledge, every individual has a different study aptitude and achieves optimal learning in a different manner. A global learner might learn concepts and their relationships by using a graph representation of topics, whereas a sequential learner would prefer to follow a series of recorded videos or read the ordered chapters of a book. Nowadays anyone is able to follow an online course, varying in level from the principles of remote sensing to advanced analytics using machine learning, but providing the optimal learning experience remains a challenge for educators. New approaches to teaching are being explored in educational institutes in order to tackle these challenges. In this talk, we will take a panoramic view of new approaches and trends in the context of international higher education institutes from the perspective of both teachers and students. It is guaranteed that you will leave the room with some answers, and certainly with more questions!

Authors: Pasha Zadeh, Parya
Organisations: ITC, University of Twente, Netherlands, The
14:20 - 14:35
Understanding EO, MOOC-by-MOOC (ID: 119)
Presenting: Hodam, Henryk
(PDF )

"To know that we know what we know, and to know that we do not know what we do not know, that is true knowledge." Nicolaus Copernicus might have explained the definition of propaedeutic learning very well and in a funny manner: the acquisition of knowledge by applying scientific methods while being aware of epistemological problems. Nowadays, the bird’s eye perspective of satellites enables humankind to explore the spatial patterns on our Earth, detached from the limited scope of the human eye. High-technology sensors extend the scope of perception to the global and the invisible. Following his heritage and especially his name, the European science community experiences a revolution in terms of data access and analyses. However, remote sensing data and image processing techniques provide more than “just” the chance to monitor our environment and secure societal benefits. Earth observation links the fascination of aesthetic imagery, technological progress, and changing perspectives. Hence, it is predestined to act as a learning instrument, mediating questions and problems of the STEM curricular. The presentation demonstrates how remote sensing can act as glue to link science and school education in an interactive, intermedia, and interdisciplinary manner. It is shown how children are taught curricular topics and introduced to the world of data behind fancy-colored satellite images at once. Currently, the development paradigm focuses on Massive Open Online Courses (MOOC), structured in miniature format for school purposes. Mini-MOOCs provide the advantage of addressing different types of learner and learning situations. Additionally, they can be used for accumulative lessons or just for one specific topic. Thematically, the Mini-MOOCs deal with the observation and analysis of the global change in terms of climate, water, and land cover. Hence, it provides the chance to simultaneously address high school seniors and grown-up novices. Methods and techniques of remote sensing are initiating the media preparation of curricular topics of STEM classes like Geography, Physics, Mathematics, and Biology. Thus, the multi-level aspects of global change can be explained in an illustrative approach and basic knowledge of satellite image interpretation sustainably taught. It will be concluded, how further mediation techniques might foster methodological and action-oriented competences. Accordingly, it will be shown how augmented reality and immersive education can encourage pupils to measure and analyze the processes and patterns in a globalizing world.

Authors: Rienow, Andreas; Hodam, Henryk; Lindner, Claudia; Ortwein, Annette; Schultz, Johannes; Selg, Fabian; Jürgens, Carsten
Organisations: Ruhr-University Bochum, Germany
14:35 - 14:50
Educating to Earth Sciences and observation through cooperation and gamification (ID: 198)
Presenting: Merletti De Palo, Alessandro
(PDF )

Cooperation and gamification in education have been improving results in learning and practicing the sciences, allowing even complex concepts and methodologies to be acquired with high levels of efficacy in disseminating scientific knowledge. Following our hands-on experience with AIR3 Associazione Italiana Registi in the communication field and Progetto Parco del Benessere in the medical field, we propose an improved model for teaching Earth sciences that involves cooperation and an optional layer of gamification, useful for the ad hoc dissemination of Earth-related sciences.

Authors: Merletti De Palo, Alessandro
Organisations: Cooperacy, Italy
14:50 - 15:05
Which geospatial sector skills will be demanded for the future? Experiences based from EO4GEO, Sector Skills Alliance project (ID: 244)
Presenting: Miguel-Lago, Mónica
(PDF )

The Earth Observation and Geoinformation sector is of strategic importance, with great potential to support many European, national and sub-national policy domains. However, the uptake of existing data and services is still sub-optimal, and their integration in value-added services for government, business and citizens could be improved. Several studies (P. van der Heiden (2015) / Small Businesses, Job creation and growth, OECD) reveal that the lack of specialized technical and scientific skills impedes this uptake by private companies and other actors. Moreover, there is a gap between the offerings of academic and vocational education and training at both universities and private institutions, and the specifics of what is needed to make this uptake happen fluently. The sector is rapidly changing, and the needs of industry are constantly evolving. Several trends have emerged that could transform the market landscape: the proliferation of small satellites, the emergence of new start-ups leveraging data cubes and artificial intelligence to extract insights from huge amounts of satellite imagery, and more. To that end, the Erasmus+ project EO4GEO (www.eo4geo.eu) aims to define a long-term and sustainable strategy to fill the gap between the supply of and demand for space/geospatial education and training, taking into account the current and expected technological and non-technological developments in this sector. Advances in technology are changing the nature of geo-information and the skills needed to support the sector: which skills and profiles are demanded now, and which will be the skills of future professionals? EO4GEO promotes the new action launched by the European Commission, the "Blueprint for Sectoral Cooperation on Skills", to support its implementation in the Earth Observation sector. The Blueprint is one of the ten actions identified in the New Skills Agenda, which is designed to improve the quality and relevance of skills to meet the needs of a rapidly changing labour market. Disruptive technologies make us realize that our education and training systems will increasingly need to develop innovative, entrepreneurial and flexible mind-sets in their graduates. Technological changes are expected to create an increasing demand for workplace learning that complements formal skills training. Building on the lessons learned during this first period of the project, the skills strategy will draft the basis for responding to rapid changes in industry and employment. It will respond to the need to offer skill sets that correspond to what the market currently requires for the development and use of Copernicus-based products and services to take place effectively.

Authors: Miguel-Lago, Mónica (1); Vandenbroucke, Danny (2); Lang, Stefan (3); Carbonaro, Milva (4)
Organisations: 1: EARSC, Belgium; 2: KU Leuven, Belgium; 3: Universität Salzburg, Z-GIS, Austria; 4: GISIG, Italy
15:05 - 15:20
EO College - The Earth Observation Education platform (ID: 104)
Presenting: Eckardt, Robert
(PDF )

The increasing availability of EO data and corresponding analysis tools fosters a massive demand for educational offers. Alongside on-site training efforts, the supply of web-based education solutions becomes more and more important. The EO College is an attempt to provide both a central platform to host EO education material and educational content that enables users to make use of (freely available) EO data, based on the scientific research of the eLearning community and modern web technologies. As one of the first efforts after the launch of the EO College, the massive open online course (MOOC) ‘Echoes in Space – Introduction to Radar Remote Sensing’ was developed and deployed on behalf of ESA. The lessons learnt and findings from the development of the platform and the MOOC will be presented in this contribution.

Authors: Eckardt, Robert (1,2); Eberle, Jonas (1); Pathe, Carsten (1,2); Urbazaev, Mikhail (1); Ismail, Baris (1); Thiel, Christian (1); Schmullius, Christiane (1)
Organisations: 1: Friedrich-Schiller Universität Jena, Germany; 2: Earth Observation Services Jena

Research Infrastructures & Platforms (part1)
16:00 - 17:30
Chairs: Guenther Landgraf - ESA- ESRIN, Felix Bachofer - German Aerospace Center - DLR

16:00 - 16:20
European Ground-based Research Infrastructures Building Future Earth Observation Capabilities (ID: 333)
Keynote: Sorvari, Sanna
(PDF )

Ground-based Research Infrastructures (RIs) of the environment domain are crucial pillars for environmental and Earth-system scientists in their quest to understand and interpret the complex Earth system, and in general to provide knowledge for solving various environmental and societal challenges. Since 2008 the environmental research infrastructures have been collaborating, and currently the ENVRI cluster contains 27 European-wide ground-based research infrastructures and EO networks covering the four main subdomains of the Earth system (Atmosphere, Marine, Solid Earth, and Biodiversity/Terrestrial Ecosystems). The ENVRI cluster is among the largest producers and providers of environmental research data in Europe collected from in-situ/ground-based observing systems. Environmental RIs thus form an important cluster of data providers in the field of Earth observation, contributing to global observing systems and generating relevant information for Europe and worldwide. The demand for Earth-system observation data is rapidly increasing, but the tools to manage, document, provide, find, access, and use such data are still underdeveloped owing to the combination of data complexity and data volumes. ENVRI is building new Earth observation capabilities in the frame of FAIR (Findable, Accessible, Interoperable and Re-usable). The ENVRI-FAIR goal is reached by: (1) well-defined community policies and standards for all steps of the data life cycle, aligned with the wider European policies as well as with international developments; (2) sustainable, transparent and auditable data services at each participating RI, for each step of the data life cycle, compliant with the FAIR principles; (3) a focus on the implementation of prototypes for testing pre-production services at each RI, with the catalogue of prepared services defined for each RI independently, depending on its maturity; (4) exposure of the complete set of thematic data services and tools provided by the ENVRI cluster under the European Open Science Cloud catalogue of services. As the EO framework is important for ENVRI, we want to build a system in which policies, standards, protocols and technical solutions are developed in close collaboration with other EO communities and service providers, namely the satellite communities and the Copernicus and GEO activities. In the presentation, we will introduce the ENVRI cluster and show how ENVRI can enhance the development of a seamless EO system and services.

Authors: Sorvari, Sanna (1); Petzold, Andreas (2); Asmi, Ari (3); Kutsch, Werner (4); Laj, Paolo (5)
Organisations: 1: Finnish Meteorological Institute, Finland; 2: Forschungszentrum Jülich; 3: University of Helsinki; 4: ICOS ERIC, Integrated Carbon Observation System European Research Infrastructure Consortium; 5: CNRS – Centre National de la Recherche Scientifique
16:20 - 16:35
Enhancing Tsunami Early Warning System With New Implementation Of Copernicus Sentinel 3 Mission (ID: 173)
Presenting: Castro de Lera, Mario
(PDF )

This paper proposes an early tsunami warning system able to capture the leading wave of a tsunami from accurate remote sensing measurements, using the first operational implementation of the Sentinels Collaborative Ground Segment for the Copernicus Sentinel-3 satellites. The 2004 Indian Ocean and the 2011 Japan tsunamis caused major loss of life and economic upheaval in many areas. To ensure early detection of tsunamis, the Global Sea Level Observing System (GLOSS) provides sea-level monitoring through the global tide gauge network, the GLOSS Core Network (GCN), complemented by NOAA's Deep-ocean Assessment and Reporting of Tsunami (DART®) stations. These networks acquire the critical data for real-time forecasts from static instruments located on islands or island groups at intervals not closer than 500 km, and along continental coasts at intervals generally not less than 1000 km. Large open-ocean areas are still outside these monitoring systems. In the past, satellites have observed and measured major tsunami events in the open ocean; the first likely positive identification, of the 1992 Nicaraguan tsunami, was made using ERS-1 and TOPEX/Poseidon altimetry data. However, signal noise, the geometry of the satellite track, the insufficient number of available altimeters, and data latency were the major reasons why the implementation of a tsunami detection system based on satellite data was previously discarded. Now the Sentinel-3 SRAL instrument, with a 1-s noise level on the altimeter range of less than 1 cm in SAR mode for a typical 2-m significant wave height (SWH), is a great improvement over previous missions, such as the 3-cm 1-s noise level of the ERS-1 altimeter at similar SWH. Sentinel-3A and 3B have a revisit time of 27 days, compared to 35 days for previous missions, with a sub-cycle of 4 days and only 52 km of ground-track separation at the Equator. The Collaborative Ground Segment for Sentinel-3 can provide regional quasi-real-time data acquisition. The use of existing south-pole stations for local data dumps will complement the current north-pole Svalbard data acquisition. With on-site data processing, a tsunami perturbation of the sea level could be detected within one hour of sensing. The integration of Jason-2 and -3, CryoSat-2 and AltiKa altimeter data will enhance data availability and track-geometry diversity, increasing the probability of early tsunami detection. The proposed implementation will increase early detection through high coverage of open-ocean regions currently not monitored by existing tsunami warning systems. It will use more advanced altimeters than were available in the past, reducing signal noise and measurement error. The number of altimeters in flight with different reference orbits will extend the ground-track geometry, increasing the early detection probability. Finally, the use of the Sentinels Collaborative Local Stations significantly reduces data latency.
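
The detection principle can be sketched as follows; this is an illustrative toy example on synthetic along-track data, with assumed thresholds, not the proposed operational chain: remove the large-scale sea-surface signal with a median filter and flag residuals that exceed a threshold.

    # Toy sketch of leading-wave detection in along-track altimeter data.
    import numpy as np
    from scipy.signal import medfilt

    rng = np.random.default_rng(0)
    ssh = rng.normal(0.0, 0.01, 2000)   # 1-cm noise, per the SAR-mode figure
    ssh[1200:1260] += 0.08              # synthetic ~8 cm leading wave

    background = medfilt(ssh, kernel_size=301)  # large-scale ocean signal
    residual = ssh - background

    alarm = np.abs(residual) > 0.05     # assumed 5 cm detection threshold
    print("tsunami-like anomaly:", alarm.any())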

Authors: Castro de Lera, Mario; Ruiz Sánchez, Pablo
Organisations: Deep Blue Globe UG (haftungsbeschränkt)
16:35 - 16:50
French Research Infrastructure "Earth System" (ID: 255)
Presenting: Moreno, Richard
(PDF )

The mission of the Earth System Research Infrastructure is to:
- federate the French scientific data and services centres (AERIS, ODATIS, ForM@Ter, THEIA, ...), specialized respectively in Atmosphere, Ocean, Solid Earth and Land surfaces;
- develop coordinated access portals for data, products, services and expertise, giving access to space and in-situ data and facilitating access to data processing and dissemination services (in-situ, airborne and space data);
- promote integrated and interdisciplinary research to understand the processes associated with the Earth system and global changes;
- develop European and international partnerships.
The Earth System RI is positioned over the entire data management cycle (field measurements and satellites), from production (in synergy with other RIs) to dissemination, and feeds national, European and international databases and arrangements (Copernicus, GEOSS, ...). One example of a project that will be integrated within the RI Earth System is the Space Climate Observatory (SCO). The project gathers 34 partners; its Executive Board is composed of CNRS, CNES, IFREMER, IGN, IRD, IRSTEA, Météo-France and MESRI. Objectives and orientations of the project:
- development of an information system allowing storage, processing, analytics and access via user interfaces offering services with high added value (cloud, big data, AI, ...);
- development of a distributed architecture allowing the networking of data and service infrastructures, access to high-performance computing resources (HPC, ...) and a cloud service;
- implementation of the FAIR principles;
- involvement in GO FAIR, EOSC, Copernicus/DIAS, CEOS and GEO projects and initiatives from a strong national base;
- provision of high-level interfaces to users to facilitate data discovery, transparent storage, processing, and access to resources.

Authors: Moreno, Richard (1); Huynh, Frédéric (2); Papineau, Nicole (3); Diament, Michel (4); Baghdadi, Nicolas (5); Maudire, Gilbert (6)
Organisations: 1: CNES, France; 2: IRD, France; 3: IPSL, France; 4: IPGP, France; 5: IRSTEA, France; 6: IFREMER
16:50 - 17:05
Participatory Earth Observation Research in the Alps, The Sentinel Alpine Observatory (ID: 163)
Presenting: Jacob, Alexander
(PDF )

The Alps are amongst the most vulnerable and dynamic regions in Europe with respect to natural hazards, impacts of climate change and pressure on sensitive ecosystems. Processes such as snow melt and its impact on run-off, or drought damage to forest ecosystems, need to be monitored to better understand the dynamics of the systems as well as to support efficient and sustainable management of natural resources. While in-situ observatories offer precise monitoring information for well-defined locations, monitoring data covering full areas in a consistent and transnational approach are rare. The Sentinel Alpine Observatory (SAO) is an initiative of the Institute for Earth Observation at Eurac Research that was launched in March 2017 (http://sao.eurac.edu). It showcases the output of various research projects and activities and features a number of Earth observation products and services, mainly based on data from the Copernicus Sentinel programme (EC/ESA), for monitoring key environmental variables in South Tyrol and the European Alps. With the goal of rendering access to relevant Earth observation data as easy as possible for our own researchers and for collaborators outside the Sentinel Alpine Observatory and Eurac Research, we have developed an infrastructure and platform for collaborative research on the one hand and for sharing results with external non-EO experts on the other. This includes activities in how to organize and store the data (e.g. using data cubes), how to access and process the data (e.g. standardized metadata, API development and cluster computing), and how to distribute and visualize the resulting products (e.g. exposing data via OGC standards and interactive web platforms). The initiative follows an open and participatory approach, integrating with other platforms like the Earth Observation Data Center (EODC) in Vienna and Alpine-wide networks like the Virtual Alpine Observatory (VAO) and their AlpEnDAC infrastructure. We also have strong ties to local authorities and companies as users of our products for very concrete problem solving, like discharge forecasting due to snow melt or forest change mapping in the fragile alpine environment.

Authors: Jacob, Alexander; Zebisch, Marc; Notarnicola, Claudia; Sonnenschein, Ruth; Marin, Carlo; Monsorno, Roberto; Costa, Armin; Vianello, Andrea
Organisations: Eurac Research, Italy
17:05 - 17:20
FAO Open tools: Openforis and SEPAL (ID: 364)
Presenting: Jonckheere, Inge G.C.
(PDF )

For several years now, FAO Forestry, with funding from Norway, has been developing the System for Earth Observations, Data Access, Processing & Analysis for Land Monitoring (SEPAL). SEPAL is a cloud-based computing platform that facilitates countries' access to earth observation data as well as processing of that data. An easy-to-use platform, SEPAL allows countries to overcome processing issues related to poor internet connections, low computing power and several other barriers to satellite data access and use that developing countries still face in their efforts to monitor their forest areas. SEPAL has been developed, tested, refined and applied in consultation with several Global Forest Observations Initiative (GFOI) partners and countries. The development of SEPAL is now sufficiently advanced to allow its broader uptake and application by GFOI partners. It has proven useful in increasing the use, accuracy and transparency of developing countries' reporting in the UNFCCC context. The platform runs a suite of open-source modules developed in-house, Openforis, which are presented.

Authors: Jonckheere, Inge G.C.
Organisations: FAO of the UN, Italy

Workshop on Blockchain (Part1)
09:00 - 11:00
Chairs: Sveinung Loekken - ESA- ESRIN, Anna Burzykowska - ESA ESRIN, Andreas Vollrath - ESA- ESRIN

09:00 - 09:15
Workshop Blockchain4EO Welcome Introduction (ID: 384)
Presenting: Loekken, Sveinung

This workshop will address the use of Distributed Ledger Technologies (DLT) such as blockchain with Earth Observation (EO). It will provide an overview of how DLT can be used to foster the use of EO and how EO can help in supporting DLTs. The aims of the workshop are to (i) agree on an agenda for concerted European action to explore the uptake of DLT solutions in the EO sector, (ii) make recommendations for ESA programmes and provide inputs to other national and European programmes, (iii) federate the community and (iv) develop an ambitious roadmap.

Authors: Vollrath, Andreas; Burzykowska, Anna; Mathieu, Pierre-Philippe; Loekken, Sveinung
Organisations: ESA- ESRIN, Italy
09:15 - 09:35
Space-based “Digital Twin” of Earth Brings Affordable EO Insights to the Other Seven Billion of Us (ID: 389)
Keynote: Stöcker, Carsten

Low-cost “nanosats” and reusable launchers are remaking the satellite business, making everything from remote monitoring of crops to broadband access to remote villages cheaper and more accessible. But all these changes pale beside the new services, business models and markets made possible by adding blockchain to the mix. By providing low cost, assured trust in the integrity of data and transactions, blockchain can make it dramatically easier to trust, own, share and sell services from this exploding new sensing and communication infrastructure. This confluence of new technologies could create a sharing economy in space that allows the “other seven billion” residents of Earth who are not employed by large corporations or government to access a “digital twin” of Earth to create a more humane and just world. But complex technical, legal, political and regulatory challenges stand in the way. So does the need to overcome the suspicions of those who feel left behind by lofty political and technical initiatives proposed by technical “elites”. A substantial and active portion of the citizenry will likely look askance at ever more satellites tracking their movements or property and sharing that information in unknown and possibly sinister ways. How governments and business manage questions such as security, privacy and ownership will determine whether a sharing economy in space enables more just, equitable and sustainable societies or fuels ever more paralyzing levels of suspicion, division and resentment.

Authors: Stöcker, Carsten
Organisations: Spherity GmbH, Germany
09:35 - 09:50
Evolving EO Data Trading by means of the BlockChain technology (ID: 240)
Presenting: Abbattista, Cristoforo

The EO market is expanding rapidly and, more and more, EO-derived products and services will become part of new value chains different from their original ones. This means that each single step of the EO services value chain could be part of, and integrated into, different market sectors. Moreover, we now need to look at EO in a world of pervasive computing, where storage and processing power are everywhere, like background microwave radiation. These insights require a deep exploration of the new processing capabilities, taking into account security and emerging trading opportunities. Since the first cyber-attack on an archive containing EO data, it has been clear that satellite data can not only be stolen but also, even worse, be replaced by fake copies that invalidate all the information to be extracted from them. Cyber-crimes can actually affect any processing step of any value chain. Moreover, when dealing with satellite imagery, we have to cope with very large datasets and lots of metadata. Proprietary algorithms elaborate those datasets to generate value-added products, which EO companies and institutions deliver further along the value chain to the final users. Data integrity, value chain participants' identity and reputation, data freshness, processing reliability and cyber security in general are the most important concerns to settle in order to stimulate downstream industry growth and overall service quality and reliability. Planetek Italia identified a set of technologies and practices (known worldwide as blockchain) as the proper solution to guarantee Identity (the source and destination endpoints of the data), Integrity (the data are not counterfeit), Freshness (data are processed and the time relation between original and resulting data is stored and certified), overall data & transaction Security, and Ubiquity (as a positive phenomenon). Planetek is actively investigating 1) the engineering of an efficient, high-performance, blockchain-based distributed processing engine for EO data (large imagery datasets and metadata); 2) how to represent, sign and store proprietary algorithms in the ledger (intellectual property preservation, quality parameters and measurements, etc.); 3) how to create a safe network of peers, including those known as "miners"; 4) how to integrate blockchain technology with current EO platforms, like the DIAS; 5) how to select the best suited key determination algorithms to enforce distributed consensus. Moreover, Planetek's objective is to create interest, involve stakeholders and deliver technologies that allow value chain participants to join a peer-to-peer network with the aim of trading specific EO content by using a customized cryptocurrency and dedicated, movable, signed smart contracts.
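To make the Identity/Integrity/Freshness triple concrete, here is a minimal sketch of a tamper-evident ledger entry for one EO product, assuming a SHA-256 digest per product; the field names and file path are illustrative, not Planetek's actual design.

```python
# Minimal sketch: record who produced an EO product, a digest of its content,
# and when, so integrity, identity and freshness can later be verified.
import hashlib
import time

def make_ledger_entry(product_path: str, producer_id: str) -> dict:
    """Tamper-evident record for one EO product."""
    with open(product_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "producer": producer_id,   # identity of the value-chain participant
        "sha256": digest,          # integrity: the data are not counterfeit
        "timestamp": time.time(),  # freshness: time relation of the product
    }

def verify(product_path: str, entry: dict) -> bool:
    """Re-hash the product and compare with the recorded digest."""
    with open(product_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == entry["sha256"]

# entry = make_ledger_entry("S2A_tile.tif", "node-01")  # hypothetical inputs
# assert verify("S2A_tile.tif", entry)
```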

Authors: Drimaco, Daniela; Abbattista, Cristoforo; Amoruso, Leonardo; Iacobellis, Michele
Organisations: Planetek Italia s.r.l., Italy
09:50 - 10:05
NGOs and Satellite Imagery - Blockchain Use Case (ID: 272)
Presenting: Keenan, Robert

The thesis driving this project is that blockchain technology can make previously purchased commercial satellite imagery more accessible to non-governmental organizations and align the incentives of commercial satellite providers and data consumers. Large philanthropic organizations and small non-profits alike purchase and use satellite imagery to unlock solutions that support their missions. These commercial purchases come with a use license that in most cases allows further distribution to non-profits and organizations aligned with the mission of the original purchase. However, given the complexity and risk of poorly managing these resources, further distribution of the data seldom occurs. Our proposal is to load a subset of licensed commercial imagery onto the Radiant.Earth platform and, through the use of blockchain and other services, allow these data to be discovered and shared with organizations in compliance with the original license terms. We believe this will grow the Earth observation commercial imagery market, enable higher impact for NGOs, extend the buying power of donor dollars, and create new uses for this data that we have yet to imagine. For this to happen, geospatial providers will need the ability to validate that only legitimate NGOs are accessing data, and also to track that data usage at a detailed level to analyze use patterns and application development. Blockchain technology is a recent phenomenon most often associated with the originating cryptocurrency Bitcoin. The underlying technology has many applications and has often been described as a "trust machine". The distributed ledger design and cryptographic mechanisms ensure that neither the identity of the transacting parties nor the authenticity of any transaction can be disputed. Ethereum was built to create fully programmable blockchain applications beyond cryptocurrency. It builds on the concepts of cryptographic security, decentralization and immutability, and adds the capability for decentralized smart contracts that enable business logic to be programmed into the blockchain application. It also enables private, permissioned applications with a limited group of participants or transaction rights. Once proven, this concept could be scaled as a production system to thousands of NGO users. The system could permission a multitude of different user types to use granted or purchased tokens for access to the data of any EO organization. Along with allowing access to data providers' imagery assets, this token could be used to incentivize users to upload crowdsourced data (such as cell phone pictures, IoT sensor information, drone imagery, etc.), or even to spur collaboration across many NGO organizations.

Authors: Keenan, Robert (1); Miglarese, Anne (2); Marchal, Emmanuel (1); Page, Corbin (1)
Organisations: 1: ConsenSys, United States of America; 2: Radiant.Earth
10:05 - 10:20
KSI blockchain for EO data integrity (ID: 396)
Presenting: Sisask, Andreas

The volume of data in the EO archives is huge and it keeps growing. More and more important decisions are made based on these data for scientific and business purposes. Mission control systems and data processing facilities, which play a key role in the collection of these data, have become very complex. At the same time, the situation in cyberspace is getting worse: it is impossible to guarantee the security of even a simple system over a long period of time. Sooner or later a vulnerability will be exploited. While preventive measures for preserving the integrity of data are necessary, it is impossible to tell whether they have failed. In this presentation we will look at how the KSI blockchain can be used to fix that problem and allow us to verify both the integrity and the time of all EO data.
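The hash-tree aggregation at the heart of keyless-signature schemes can be illustrated in a few lines. The toy below computes a single Merkle root for a batch of granules, which is the only value that would need public anchoring per period; it is a sketch of the idea, not Guardtime's KSI implementation.

```python
# Toy Merkle-tree aggregation: many EO granules hash up to one root;
# unchanged data reproduces the root, any tampering changes it.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

granules = [b"granule_1", b"granule_2", b"granule_3"]   # stand-ins for files
root = merkle_root(granules)               # anchor this single value publicly
assert merkle_root(granules) == root       # later verification of integrity
```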

Authors: Sisask, Andreas
Organisations: Guardtime, Estonia
10:20 - 10:35
Onboard AI for Nanosat Cluster: Distributed computing power in space with permanent Earth observation and onboard image analysis (ID: 318)
Presenting: Prasolov, Maxim

The current state of the art in computer vision, and in AI in general, has made lightweight models readily available and able to run on constrained hardware. We propose to equip 1-10 kg satellites with sufficient computational power to support an AI node able to run pretrained computer vision models based on state-of-the-art deep learning architectures. An AI node will analyze images onboard the satellite and transmit only the processing results to Earth, alleviating the need to transfer heavy images and reducing data exchange dramatically. This will also reduce the requirements for powerful communication solutions on board.
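A back-of-envelope sketch of the downlink saving this implies, with a stub standing in for the pretrained lightweight model; the tile size, bands and numbers are illustrative.

```python
# Instead of downlinking a full image tile, send only a label and confidence.
import numpy as np

def onboard_classify(tile: np.ndarray) -> tuple[str, float]:
    """Stub for a pretrained lightweight model on constrained hardware."""
    greenness = float(tile.mean())        # placeholder statistic, not a CNN
    return ("vegetation", 0.93) if greenness > 0.4 else ("bare_soil", 0.88)

tile = np.random.rand(512, 512, 4).astype(np.float32)   # 4-band image tile
label, conf = onboard_classify(tile)

raw_bytes = tile.nbytes                   # ~4 MB for this tile
result_bytes = len(label) + 4             # label + one float32 confidence
print(f"{label} ({conf:.2f}): downlink reduced ~{raw_bytes // result_bytes}x")
```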

Authors: Prasolov, Maxim
Organisations: Neuromation, Estonia
10:35 - 10:50
How Blockchain-based Geo Smart Contracts Fuel the IoT (ID: 403)
Presenting: Moradi, Yashar

Having the means to monitor the Earth's surface on a regular basis and to combine this data with other geocontext data will affect almost every aspect of human life. Geospatial services increasingly affect how humans manage their businesses and private affairs. The next important step for humankind will be to enable machines to communicate with each other in the exchange of geospatial information, allowing automated transactions involving micro-geoservices. The Internet of Things, combined with Big Data and Artificial Intelligence, will be driven by geocontext. Geodata and geoservices will play a dominant role in the swiftly expanding fields of machine-to-machine (M2M) communication, autonomous driving, AI and IoT. These are just a few of the emerging technologies poised to be enhanced by blockchain technology. To take advantage of opportunities arising in these two important and fast-growing markets, cloudeo initiated the CBN Foundation, a nonprofit foundation designated to guide the CBN community. The Foundation is a separate entity from cloudeo, and both cooperate on a full arm's-length basis. Established in 2017, the CBN Foundation (CEVEN Blockchain Network) will incentivize geodata, software and analytics suppliers to make their products and services accessible to the CBN community. Geodata will typically be processed in a decentralized manner involving modules and processing capacities of diverse participants. Data will be managed, matched with other data, analyzed, and packaged into value-added geodata and ready-to-use geoservices. The capabilities of the smart contracts will become more advanced over time. Ultimately smart contracts will replace standard orders and provide a transparent mechanism to control complex relationships between data providers, value-adding contributors, and providers of geoservices. Tokens are both an incentive and a reward for suppliers of geodata, analytics and geoservices to participate in the ecosystem. Allowing and enabling technical, legal, and commercial transactions to occur within one automated step is essential to reduce operational costs; this allows the CBN to generate new micro-geoservices feasibly, which is a necessity for all IoT applications and for enabling the crowdsourcing of geodata. The token fundamentally simplifies all processes in the value-add chain. Providers of geoservices can offer their assets, based on tokens, to the participants in the community. Participants can consume those services for their own purposes; alternatively, they can use the resources to produce new geodata and analytics and offer those to the community through APIs. Using CEVEN tokens makes this process easy and can lead to rapid growth of available geoservices. CBN consumers can be individuals as well as SMEs, large corporations, and governmental bodies.

Authors: Moradi, Yashar
Organisations: cloudeo, Germany
10:50 - 11:00
IoT over Satellite: possible application of blockchain technologies (ID: 401)
Presenting: Merialdo, Matteo

In the context of a preliminary security study for an IoT-over-satellite system with severe power/bandwidth constraints on the terminal side, we conducted an analysis of the application of blockchain technologies to the system. The main goal was to understand the business, security and technical benefits. Different application scenarios have been considered, among them: 1. blockchain applied to the entire IoT ecosystem, where each IoT device is somehow involved in the distributed ledger; 2. blockchain applied only to the ground station network. The power and bandwidth constraints on the IoT devices restricted the focus to the second option. A preliminary draft architecture has been developed, considering the use of Hyperledger as the main blockchain technology. Based on the draft architecture, a proof-of-concept software application was developed in order to explore the actual maturity of the technology and measure some of the possible benefits. As a conclusion of the study, advantages from security, business and technical perspectives have been identified.

Authors: Merialdo, Matteo
Organisations: RHEA, Belgium

Workshop on Blockchain (Part2)
11:15 - 12:00
Chairs: Sveinung Loekken - ESA- ESRIN, Anna Burzykowska - ESA ESRIN, Andreas Vollrath - ESA- ESRIN

11:15 - 11:30
Data, AI, And Tokens: Ocean Protocol (ID: 316)
Presenting: Enevoldsen, John

Nowadays the quantity and relevance of data are as crucial to AI models as the algorithms, if not more so. This leads to data being siloed by large companies, as they increase their value by hoarding data. However, this can impede efforts by AI startups to solve global challenges (in environment, healthcare, transportation, etc.). Ocean Protocol aims at creating an ecosystem for data services where, by using decentralization technologies and token incentives, we can bridge the gap between data haves and have-nots. This talk will cover the power of AI, the issues surrounding its development, how token engineering can solve trust issues, and how a future of AI services can shape the way we approach problems.

Authors: Enevoldsen, John
Organisations: Ocean Protocol, Germany
11:30 - 11:45
Sensors, automation, and oracles in blockchain platforms (ID: 405)
Presenting: Botsford, August

ChromaWay has been involved in a number of projects that have discussed or involved integrating some kind of sensor, or "machine signatory", with a blockchain. Land registration projects often target some kind of advanced GIS or surveying technology in order to mitigate the expense of traditional surveying, especially in the context of the developing world. We are also developing a platform for green finance, the Green Assets Wallet, which aims to facilitate the validation of green projects and reporting on their impact. We will discuss these current projects led by ChromaWay and the risks associated with them, the need for networked sensors to interface with blockchain platforms, and where we think remote sensing platforms can fit.

Authors: Botsford, August
Organisations: ChromaWay, Sweden
11:45 - 12:00
Satellite Imagery and Blockchain Technologies to upscale Natural Conservation Programmes (ID: 283)
Presenting: Marke, Alastair Hubert Nathan

Climate change advances much faster than humans can complete the cumbersome procedures and paperwork that are an integral part of every natural conservation programme. REDD+, for example, cannot be deemed truly successful in mitigating global climate change unless the regime is administered smartly and quickly enough to be scalable. A new paradigm to make REDD+ smarter and faster would be to feed satellite imagery (which proves efforts against deforestation) into an Ethereum-based smart contract, a key application of blockchain technology, to trigger payments to local stewards in the spirit of results-based climate finance. A similar model could be deployed in other natural conservation projects to reduce administration costs while increasing speed. A smarter and faster REDD+ can create a ripple effect in the generation of new asset classes for the global transition towards a low-carbon economy.
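The payment-trigger logic can be sketched in a few lines. The function below is plain Python standing in for what an Ethereum contract would encode in Solidity; the baseline, shortfall rule and amounts are hypothetical, and the observed cover would come from classified satellite imagery.

```python
# Results-based payment sketch: release escrowed funds to a local steward
# when satellite-derived forest cover meets an agreed baseline.

def redd_payout(baseline_cover: float, observed_cover: float,
                escrow_eur: float) -> float:
    """Release escrow if observed forest cover meets the baseline."""
    if observed_cover >= baseline_cover:
        return escrow_eur                  # full results-based payment
    shortfall = (baseline_cover - observed_cover) / baseline_cover
    return max(0.0, escrow_eur * (1.0 - shortfall))   # pro-rata reduction

# observed_cover would be derived from, e.g., classified Sentinel-2 imagery
print(redd_payout(baseline_cover=0.80, observed_cover=0.82, escrow_eur=10_000))
```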

Authors: Marke, Alastair Hubert Nathan
Organisations: Blockchain Climate Institute; Blockchain Commission for Sustainable Development

Workshop on Blockchain (Part3)
13:30 - 14:30
Chairs: Sveinung Loekken - ESA- ESRIN, Anna Burzykowska - ESA ESRIN, Andreas Vollrath - ESA- ESRIN

13:30 - 13:50
Distributed Ledger Technology in Monitoring and Analysis of Food Safety and Food Sustainability Data (ID: 327)
Keynote: Leveille, Genevieve

Author: G. Leveille, AgriLedger Founder, Vice Chair of the techUK Distributed Ledger Technology (DLT) Group (St Helier, Jersey; genevieve.leveille@0tentic8.com). Scientific domains: Life Sciences, Earth Sciences, Technology.

Idea description: Management of the Earth's fragile food supply chains and of food waste are challenges growing in scope and importance, as the human population is predicted to reach 8.5 billion by 2030 [1], even though the growth rate is declining. More than one in seven people today still lack sufficient protein in their diets. The urgent requirement to reduce inefficiency in our food system and its impact on the environment is coupled with growing competition for land, water and energy [2]. While these are enormous challenges, it has become possible to monitor and gather vital datasets to inform better decision making in food supply chains. Based on observation of an AgriLedger-SourceTrace-Kloudin implementation project underway in Heilongjiang, China [3], Distributed Ledger Technology (DLT) is a viable monitoring and traceability tool, with Internet Protocol version 6 (IPv6) applied for monitoring and Artificial Intelligence (AI) / Machine Learning (ML) for data analysis. Blockchain and the Directed Acyclic Graph (DAG) are examples of DLT fast emerging as tools that can replace typical centralized data storage systems and capture digital identity and real-time data in an immutable, secure and decentralized record. Smart contracts containing code and terms that self-execute at set stages in a food supply chain render each transaction in the chain traceable, transparent and irreversible. The distributed ledger technology implemented by AgriLedger-SourceTrace provides a comprehensive and powerful set of tools for establishing food safety and food sustainability, including financial services for farmers and end-to-end traceability. Through AI and ML, it is possible to build models that analyse bigger and more complex data and deliver results faster with a higher level of accuracy. As these models unearth insights about the data gathered at each stage of a food supply chain, better decisions can be made with less human intervention and higher efficiency. The community and consensus mechanisms inherent in DLT open opportunities for input and contribution by all stakeholders in the food supply chain (for example farmers, citizen scientists, distributors, logistics providers, markets and customers), making it possible to discover, test and implement smarter food system solutions much sooner. As local farming communities provide ground-truth data in situ through the AgriLedger and SourceTrace mobile agriculture applications [3], the use of resources can be monitored and made more efficient, aligning better with the Earth's capacity to replace those resources. Together with monitoring and uniquely better tools for analysing global datasets from ESA satellites, the introduction of DLT can support more efficient management of our food system.

Objectives: Gaining new perspectives on the food supply value chains on Earth by establishing relevant monitoring and traceability data storehouses can help us understand and improve global crop yield and minimise waste, which is vital for sustainable development in food safety. Making comprehensive, analysis-ready data easily accessible to an active and connected network of food producers and processors, qualified food and nutrition scientists, and citizen scientists will create a global digital laboratory and a useful management tool for making responsible decisions about micro and macro food supply chains, waste and consumption. By implementing AgriLedger-SourceTrace DLT solutions for food safety and food sustainability as part of ESA's overall strategy for innovation, information, inspiration and interaction within the community, there is also the potential to access a diversity of services, including financial resources, to support developments in earth observation.

Requirements: Data gathering in crop fields will require remote sensory nodes and connected mobile devices for collecting ground-truth crop yield, food process and food supply chain data through the AgriLedger and SourceTrace mobile applications. Mobile network accessibility is favoured for real-time data collection, while some ground-truth data can be collected offline. This allows continual member access to the digital ecosystem supporting digital identity, the reduction of food waste and contamination, and the flow of knowledge within the Earth's food supply chains.

Role of ESA, AgriLedger, SourceTrace and Kloudin: Together with citizens on the ground and the ESA network of earth observers contributing to food supply chain data assembly and analysis, the AgriLedger, SourceTrace and Kloudin teams can contribute analysis-ready data, accessible via mobile devices, to improve crop yields and food supply chain efficiency through challenge-led implementation. Citizens at each stage of the supply chain may assist in deploying sensory nodes and monitoring equipment and in making sure instruments are working, alongside scientists performing focused analyses in food production and processing.

Impact and benefits: Combining the efficient data monitoring and gathering of ESA's earth observation programmes with the DLT-based AgriLedger-SourceTrace-Kloudin mobile applications for food safety and food sustainability can transform the limitations of current food traceability and monitoring systems. The planning and harvesting of food becomes more effective. Supported by the immutable trustworthiness of DLT-based records, each item can be traced from source to consumption, and decentralized ledgers can be used by various teams for uniquely better record keeping. The efficiency of mobile applications combined with AI/ML in recording ground-truth crop yield and other significant data can massively reduce food waste as all information becomes accessible and transparent on the distributed ledger. This emerging technology also makes it possible to ensure that human consumption and diet protect and respect our planet's biodiversity and ecosystems [4], while also serving to equalize nutrition provision in a challenging international context in which industry borders are fading [5] while stricter controls are applied to cross-border interactions. Supporting developments in food safety and food sustainability at a planetary level, the AgriLedger network, together with important local DLT and agricultural partners in various countries, including Shenzhen Kloudin Co. Ltd (of the Suntron Group) and the Harbin Jixiang Agricultural Planting Development Co. Ltd in China, integrates the process of growing food with immutable record keeping and the provision of analysis-ready data. There are no existing solutions tackling the problems AgriLedger-SourceTrace-Kloudin aims to solve using DLT: the transformation of data into usable information and the integration of as many perspectives as possible to understand and manage the world's food system. Collaboration between ESA and AgriLedger-SourceTrace-Kloudin means the community can access a wealth of first-hand knowledge of crop cultivation, processing and food supply chains, combined with sophisticated data obtained by the earth observation network, to inform food safety and sustainability on our fragile home planet.

References:
[1] Roser, M., Ortiz-Ospina, E. (2018). World Population Growth, Our World in Data [online]. Available at https://ourworldindata.org/world-population-growth [13-07-2018].
[2] Godfray, H.C.J., et al. (2010). Food Security: The Challenge of Feeding 9 Billion People, Science, 327(5967): 812-818.
[3] AgriLedger (www.agriledger.io) and SourceTrace (www.sourcetrace.com) have collaborated with Shenzhen Kloudin Co. Ltd (of the Suntron Group) and the Harbin Jixiang Agricultural Planting Development Co. Ltd to implement the AgriLedger-SourceTrace-Kloudin agriculture mobile software applications, which support sustainable agriculture and empower smallholder farmers to participate in the global market.
[4] Bonaccorsi, G. (2015). Food and Human Behaviour: Consumption, Waste and Sustainability, J. Public Health Research, 4(2): 606.
[5] Uyttendaele, M., Franz, E., Schluter, O. (2016). Food Safety, a Global Challenge, Int. J. Environ. Res. Public Health, 13(1): 67.

Authors: Leveille, Genevieve
Organisations: AgriLedger, United Kingdom
13:50 - 14:05
Blockchain Is not the Technology to create Sustainable Supply Chains, but Satellite Remote Sensing is (ID: 242)
Presenting: Kuilder, Ernst Thomas

Full article originally published at: https://medium.com/@kuilder/blockchain-is-not-the-technology-to-create-sustainable-supply-chains-but-satellite-remote-sensing-f1b61c07ed38 At every conference concerning world hunger, deforestation and climate change, satellite remote sensing has to compete with another cool piece of tech, grouped together with blockchain as a technology that will offer innovative solutions to these problems. Yet in my experience as a maker of technology, I believe that tracking commodities and forests by satellite is the only 'tool' that will play a major role in furthering sustainable supply chains. To understand why, we need to understand what blockchain really is: a system for electronic transactions without relying on trust, using proof-of-work to record a public history of transactions that quickly becomes computationally impractical for an attacker to change if honest nodes control a majority of CPU power. Which can be paraphrased as: a tool for recording transactions, which does not require trust, and which is impractical for dishonest entities to tamper with. We can apply this new information storage mechanism to supply chains or sustainable forest management, areas which already have some excellent tools for public transaction records: trase.earth shows how soy and other commodities move from deforested parts of Brazil to large corporations and consuming countries; globalforestwatch.org shows tree cover loss on a global scale. The tools exist, but the problem remains; clearly they are not as effective as we wish. Are these tools failing because of a lack of trust in the transaction records, and are dishonest entities tampering with those records? The answer is: no. What we need are people on the ground checking products at ports, borders, warehouses, grocery stores and farms, and a trustworthy institution making sure products without the right origin are refused entry into our grocery stores. The global supply chains of palm oil, wood, soy and many other commodities are a mess, not because the ledgers which store the information are being tampered with, but because information on, for instance, origin and the negative effects on local society and the global environment is not available or cannot be trusted. This is where the other 'tool' comes in: satellite remote sensing. In contrast to an innovative way to store information (blockchain), sensors orbiting the Earth form a readily available way to retrieve information. Satellite sensors pinpoint the location of crops, pick up on disappearing forests, detect peat on fire, and locate floods, abandoned fields and much more. A complete industry exists to analyze this data in a cost-effective way and to make the information publicly available by building or connecting to the right platforms, such as trase.earth and globalforestwatch.org. This is the data-driven revolution in supply chain sustainability, where more and more data from different sources are gathered, aggregated and cross-validated. Although this revolution is real and valuable, neither satellite remote sensing nor blockchain are technological innovations that will magically stop deforestation and make commodity production transparent and sustainable. Yet satellites, contrary to blockchain, provide the practical information that helps in creating the institutions we need.

Authors: Kuilder, Ernst Thomas
Organisations: Satelligence, Netherlands, The
14:05 - 14:20
Beyond The Hype: What Are Useful Links Between HLT And GSI For Smallholder Agriculture (ID: 303)
Presenting: Kruseman, Gideon

The hype surrounding blockchain technology has led to a scramble to use the technology for all kinds of purposes, whether appropriate or not. In this presentation, the use of hyper ledger technology (HLT) for transactions that are not strictly financial is presented, with a specific focus on smallholder agriculture in low- and middle-income countries. HLT is especially useful in circumstances where there are severe market failures regarding the validity and trustworthiness of information; it is not for nothing that HLT is sometimes called distributed truth. With respect to smallholder agriculture, HLT can help solve seemingly insuperable challenges facing agri-food system value chains. HLT can in principle address challenges related to issues such as counterfeit seed, and the provenance of commodities and their invisible traits, including invisible product quality traits as well as traits related to sustainability and socially responsible production. In some of these cases geospatial information can be a valuable asset for enhancing the transparency of agri-food systems.

Authors: Kruseman, Gideon (1,2)
Organisations: 1: CIMMYT, Mexico; 2: CGIAR Platform for Big data in Agriculture
14:20 - 14:30
HeraSpace, Future of EO data and the Blockchain (ID: 404)
Presenting: Durá Hurtado, Isaac

HeraSpace helps fishermen locate the most profitable and sustainable fishing grounds while optimizing operating budgets. Our proposed solution is based on EO data and the use of blockchain. Within our technical architecture, we are analyzing the use of blockchain in two different ways: by offering traceability within the seafood distribution chain, and by proposing an unhackable satellite logging system for allowed areas to guarantee that vessels' practices are aligned with regulations and ethical practices.

Authors: Durá Hurtado, Isaac
Organisations: HeraSpace, Spain

Workshop on Blockchain - Round Table
14:30 - 17:30
Chairs: Sveinung Loekken - ESA- ESRIN, Anna Burzykowska - ESA ESRIN, Andreas Vollrath - ESA- ESRIN

Workshop Blockchain4EO - Discussion (ID: 345)
Presenting: Loekken, Sveinung

Participants in the afternoon session are invited to share their own insights concerning the opportunities and challenges of DLT applications. In the round-table setup they will discuss the role of the public sector, NGOs, academia and industry in fostering and facilitating technology uptake, articulate recommendations, and define the steps necessary to develop partnerships across existing networks and projects to explore the potential of DLT for EO.

Authors: Loekken, Sveinung; Burzykowska, Anna; Vollrath, Andreas
Organisations: ESA- ESRIN, Italy

DIAS ONDA Hands-on
11:00 - 12:30

ONDA DIAS Showcase (ID: 402)
Presenting: Lo Zito, Fabio

ONDA is the Data and Information Access Service (DIAS) led by Serco Italia which, with the aim of facilitating, fostering and expanding the exploitation of Earth Observation satellite data and geospatial information, enables users to build and operate applications in the Cloud by providing Data, Services and Support. The ONDA paradigm is to bring people to the data, offering custom solutions to cover the needs of all prospective users, who range from the general public with minimal or no knowledge of Remote Sensing, through professionals and SMEs, up to public authorities, agencies and large enterprises. ONDA offers free and open access to a wealth of datasets from different sources, from the Copernicus Sentinels family to EO missions and the Copernicus Services projects, and provides easy-to-use resources for accessing, downloading and processing the data and information. In order to further benefit from the huge volume of geospatial information available today, the initial data offer will be progressively extended to include additional missions and Copernicus Services as well as in-situ measurements and other data deemed of interest. The ONDA platform also provides services that benefit from the performance of a market-leading Cloud environment. Our Cloud computing solutions are scalable, easy to set up and have predictable costs as well as optimised and consistent performance. In addition, all data, information, applications and transactions are securely protected. At any time users can upgrade or scale down the chosen configuration of their virtual platforms to tailor them to their needs and expertise. An innovative data access technology is also provided, which allows users to easily extract only the needed product information from the data. Other available ONDA services include customisable, managed support for expert users, to help design and build scalable and advanced systems for data processing and to provide solutions for integrating the user's own data. Our users will then have the option to publish their results and applications through the ONDA marketplace. With regard to user support, the ONDA Helpdesk is available to provide assistance and technical help, and users can also benefit from the information available on the ONDA web portal, including any published documentation. During the showcase, users will be shown practical demonstrations of how to access and use the ONDA catalogue and cloud resources, as well as a few applications developed using the ONDA platform.

Authors: Vingione, Guido; Tesseri, Andrea; Ranera, Franck; Lo Zito, Fabio; Iumiento, Mariano; Scarda, Barbara
Organisations: Serco Italia SpA, Italy

DIAS CREO Hands-on
14:30 - 15:30

Developing Copernicus based geoanalytical services in CREODIAS with Hexagon Smart M.App technology (2) (ID: 387)
Presenting: Zotti, Massimo

The recent launch of the Data and Information Access Services (DIAS) platforms at the end of June 2018, providing unlimited, free access to Copernicus data and information access services, has made it easier for users to develop Copernicus-based applications and services that add value by combining EO technologies with other data sources, across different market segments. CREODIAS is one of the four industry consortia awarded by ESA to develop DIAS platforms. The CREODIAS consortium is led by the Polish company Creotech Instruments and also includes CloudFerro, WIZIPISI (Wroclaw Institute of Spatial Information and Artificial Intelligence), Sinergise, Geomatis, and Eversis. CREODIAS operates a large cloud IT infrastructure, provided by CloudFerro, optimized to browse, search, deliver and process large amounts of EO data. The storage capacity includes up to 30 PB for EO open data, supplemented on demand by other complementary datasets. This vast repository is co-located with a dedicated IaaS cloud modular infrastructure for the platform's users, allowing customized processing activities to be established in close proximity to the stored data. CREODIAS storage is synchronized with the main ESA repositories, so data acquired by the Copernicus Hub and contributing missions becomes available within a few hours of its publication by ESA. In order to provide state-of-the-art technologies that facilitate the development of end-user applications and services, the CREODIAS consortium established a close cooperation with Hexagon Geospatial to deploy M.App Enterprise and other M.App Portfolio products, such as M.App X, from the CREODIAS front office. This cooperation opens the possibility for CREODIAS users to create value-added EO-based information services built on Hexagon's M.App Portfolio technology. M.App Enterprise complements the CREODIAS platform, providing companies looking to create innovative applications on top of Copernicus data with a user-friendly, low-code development environment for building scalable and lightweight vertical applications, coined by Hexagon as "Hexagon Smart M.Apps", which apply sophisticated geospatial analytics and tailored workflows to multi-source content within an intuitive and dynamic user experience. Planetek Italia is the first company taking advantage of this optimized environment, by deploying Hexagon Smart M.Apps based on Rheticus® services from the CREODIAS platform. Rheticus® is a collection of applications designed by Planetek Italia that provides subscription-based monitoring services, transforming changes detected on the Earth's surface into analytical information to drive timely decisions. Leveraging Planetek's remote sensing expertise and Hexagon's platform capabilities, the delivery of Rheticus monitoring services as Hexagon Smart M.Apps provides dynamic mapping and in-depth geospatial analytical capabilities, offering subscribing organizations timely insights on infrastructure stability and Earth surface displacement. The satellite data captured by the Copernicus Sentinel satellites are at the base of the monitoring services provided through Rheticus. The main applications of these services are the monitoring of urban dynamics and land use changes, Earth surface movements (landslides and subsidence), stability of infrastructures, areas under construction and new infrastructures, areas affected by forest fires, and marine water quality and aquaculture.
During the workshop, users will be guided through the creation of different web applications for processing Sentinel-2 data using the capabilities of the Hexagon M.App Portfolio available on CREODIAS, specifically:
- segmentation of Sentinel-2 data using OpenStreetMap data (first day);
- classification of Sentinel-2 time series using machine learning algorithms (second day).
For the hands-on activity users should bring and use their own computer, or they can simply follow the demonstration.
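For orientation, the second exercise might look roughly like the sketch below, assuming per-pixel NDVI time-series features and scikit-learn availability; the data here is synthetic, whereas on CREODIAS the features would be extracted from the Sentinel-2 archive.

```python
# Toy per-pixel classification of Sentinel-2-like time series with a
# random forest; labels and features are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels, n_dates = 1000, 12
X = rng.random((n_pixels, n_dates))             # NDVI time series per pixel
y = (X[:, 4:8].mean(axis=1) > 0.5).astype(int)  # toy labels: summer greenness

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```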

Authors: Maldera, Giuseppe (1); Zotti, Massimo (1); Joao P, Joao P (2); Myslakowski, Krzysztof (3); Drimaco, Daniela (1)
Organisations: 1: Planetek Italia s.r.l., Italy; 2: Hexagon Geospatial; 3: Creotech

ESA-NASA Web WorldWind – Hands-on Training Session
09:00 - 10:30

Web WorldWind Open Source Virtual Globe (ID: 408)
Presenting: Voumard, Yann

The Web WorldWind development team invites you to a hacking session that will get you started with visualising your EO data and service outputs on a 3D virtual Earth, putting them in context and making them easily explorable.

Authors: Voumard, Yann (1); Draghici, Florin (2); Ifrim, Claudia (3); Balhar, Jakub (4)
Organisations: 1: Solenix Deutschland GmbH, Germany; 2: Qualteh JR; 3: Terrasigna; 4: GISAT

Workshop Lego - EO ClimLab
11:00 - 12:30

11:00 - 12:30
EOClimLab Design Thinking Workshop with Lego Serious Play® (ID: 385)
Presenting: Pop, Sorin

Cities today evolve in all dimensions. In Europe, ensuring citizens' quality of life is a priority for most municipalities. Challenges arise as the population ages, people migrate to larger cities, and traffic and mobility issues need to be resolved. Climate change brings new risks while putting pressure on urban planners. The urban population demands better public transport and more green areas in cities where construction and land costs are rising, making efficient land management ever more important. A ray of sunshine comes from the use of renewable energy, innovative building materials and structures, and the development of green buildings and green neighborhoods. Lego Serious Play® is a hands- and mind-activating, thought-provoking methodology that engages participants in serious discussions through playful techniques: building 3D models of your thoughts with Lego®. The workshop is built on a tried and tested process of building, sharing and reflecting, creating an equal playing, thinking, sharing and learning ground for all participants. Through this process of building and sharing Lego models and their stories, insights, ideas and meaningful discussions emerge at the tables, addressing serious challenges. European city dynamics challenge local, regional and national authorities to constantly adapt central and local development strategies, design and apply normative and regulatory planning tools for managing urbanization, create smart mobility infrastructures, mitigate the impact on the environment and landscape, and find and adapt space for leisure and green areas. Urban planners confirm that remotely sensed data is seldom and insufficiently used, yet EO and in-situ data can provide urban experts with key indicators such as multitemporal urbanization monitoring, analysis of spatial structures and the urban fabric, population distribution, socio-economic analysis, urban climate, risk and vulnerability assessment, traffic, energy-relevant aspects, 3D models, and benchmarking. Using design thinking techniques, all age categories, from children to adults and from citizens to urban planners, experts and researchers, will be challenged to (re)design their city based on space technologies, as well as to design new space tools to improve urban life and urban management. The workshop invites participants into the wonderful world of Lego, using the Design Thinking Methodology to co-create the Future Smart City, able to respond to climate change and mitigate its effects. All this with support from Earth Observation, satellites, smart urban planning and co-creation. Concerning green areas, air quality, mobility or plain education, the new designs will be promoted to European cities.

Authors: Muntean, Bianca (1); Pop, Sorin (2)
Organisations: 1: Aries Transilvania, Romania; 2: Indeco Soft, Romania

Workshop on Quantum Computing for Earth Observation
14:00 - 15:40
Chairs: Mauro Paternostro - Queen's University, Chris Stewart - ESA- ESRIN

14:00 - 14:20
Secure Quantum Cloud Computing via Satellite (ID: 378)
Presenting: Kashefi, Elham

The recent interest in quantum technologies has brought forward a vision of a quantum internet that could implement a collection of known protocols for enhanced security or communication complexity. At the same time, the rapid development of quantum hardware has increased the computational capacity of the quantum servers that could be linked in such a communication network. This has raised the importance of privacy-preserving functionalities, such as the research developed around quantum computing on encrypted data. In this talk I review the state of the art and present how recent progress on quantum satellite communication opens a new horizon in this direction.

Authors: Kashefi, Elham
Organisations: Université Sorbonne, France
14:20 - 14:40
Quantum MW: Towards applications with a superconducting qubit based quantum computer (ID: 375)
Presenting: Filipp, Stefan

In recent years we have observed a rapid development of quantum technologies for the realization of quantum computers that promise to outperform conventional computers on certain types of problems. These include problems in optimization, machine learning, the solution of partial differential equations and finite element analysis, but also the computation of complex many-body physical systems such as molecules or condensed matter. Assisted by conventional computing systems, hybrid quantum-classical architectures may soon allow us to solve some of today's computational challenges. In this talk I will present the IBM Q quantum computing platform based on superconducting quantum circuits. On this platform we use variational algorithms that utilize the quantum processor to efficiently represent highly entangled quantum states. Such algorithms are best suited for near-term applications on non-error-corrected quantum hardware because they rely on only a small number of quantum operations and finish within the coherence time of the system. I will then showcase first quantum applications in the fields of quantum chemistry and machine learning, computing the energy spectra of small molecules and running classification protocols.
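The variational principle behind such hybrid quantum-classical algorithms can be illustrated classically: an outer classical optimizer tunes the parameter of a trial state to minimize the measured energy. The single-qubit toy below simulates the expectation value exactly, where a real run would evaluate it on the quantum processor; it is a sketch of the idea, not the IBM Q software stack.

```python
# Minimal variational loop: minimize <psi(theta)|H|psi(theta)> for H = Z.
import numpy as np

H = np.array([[1.0, 0.0], [0.0, -1.0]])          # Hamiltonian: Pauli Z

def energy(theta: float) -> float:
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # trial state
    return float(psi @ H @ psi)                   # expectation value

theta, lr, eps = 0.3, 0.2, 1e-4
for _ in range(200):                              # classical outer loop
    grad = (energy(theta + eps) - energy(theta - eps)) / (2 * eps)
    theta -= lr * grad                            # gradient descent step
print(f"variational energy: {energy(theta):.4f} (exact ground state: -1)")
```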

Authors: Filipp, Stefan
Organisations: IBM Research, Switzerland
14:40 - 15:00
Quantum MW: Neural networks discover quantum error correction strategies from scratch (ID: 391)
Presenting: Marquardt, Florian

Machine learning with artificial neural networks is revolutionizing science. The most advanced challenges require discovering answers autonomously. In the domain of reinforcement learning, control strategies are improved according to a reward function. The power of neural-network-based reinforcement learning has been highlighted by spectacular recent successes such as playing Go, but its benefits for physics are yet to be demonstrated. Here, we show how a network-based “agent” can discover complete quantum-error-correction strategies, protecting a collection of qubits against noise. These strategies require feedback adapted to measurement outcomes. Finding them from scratch without human guidance and tailored to different hardware resources is a formidable challenge due to the combinatorially large search space. To solve this challenge, we develop two ideas: two-stage learning with teacher and student networks and a reward quantifying the capability to recover the quantum information stored in a multiqubit system. Beyond its immediate impact on quantum computation, our work more generally demonstrates the promise of neural-network-based reinforcement learning in physics. Reference: Fösel, T., Tighineanu, P., Weiss, T., Marquardt, F. (2018). Reinforcement Learning with Neural Networks for Quantum Feedback, Phys. Rev. X 8, 031084.

Authors: Marquardt, Florian
Organisations: Universität Erlangen-Nürnberg & Max Planck Institute for the Science of Light, Germany
15:00 - 15:20
Quantum MW: Machine learning for processing and certification of photonic quantum information (ID: 376)
Presenting: Sciarrino, Fabio

Photonic technologies provide a promising platform for addressing, at a fundamental level, the connection between quantum information and machine learning. As a first step in this direction, we will address the design and implementation of protocols that apply classical machine learning methods to problems of quantum information theory: learning of quantum states [1] and quantum metrology [2]. We will then exploit machine learning as a tool to validate quantum devices such as Boson Samplers. Indeed, the difficulty of validating large-scale quantum devices poses a major challenge for any research program that aims to show quantum advantages over classical hardware. To address this problem, we propose a novel data-driven approach wherein models are trained to identify common pathologies using supervised and unsupervised machine learning [3,4]. Our results provide evidence of the efficacy and feasibility of this approach, paving the way for its adoption in large-scale implementations. [1] Rocchetto, A., Aaronson, S., Severini, S., Carvacho, G., Poderini, D., Agresti, I., Bentivegna, M., Sciarrino, F. Experimental learning of quantum states, arXiv:1712.00127. [2] Lumino, A., Polino, E., Rab, A.S., Milani, G., Spagnolo, N., Wiebe, N., Sciarrino, F. Experimental Phase Estimation Enhanced by Machine Learning, arXiv:1712.07570. [3] Agresti, I., Viggianiello, N., Flamini, F., Spagnolo, N., Crespi, A., Osellame, R., Wiebe, N., Sciarrino, F. Pattern recognition techniques for Boson Sampling validation, arXiv:1712.06863. [4] Giordani, T., Flamini, F., Pompili, M., Viggianiello, N., Spagnolo, N., Crespi, A., Osellame, R., Wiebe, N., Walschaers, M., Buchleitner, A., Sciarrino, F. Experimental statistical signature of many-body quantum interference, Nature Photonics (2018), doi:10.1038/s41566-018-0097-4.

Authors: Sciarrino, Fabio
Organisations: Sapienza Universita' di Roma, Italy
15:20 - 15:40
Quantum Computing for the optimization of Earth Observation Mission Design and Management (ID: 398)
Presenting: Picard, Mathieu

The Earth Observation (EO) market is undergoing significant transformations, paving the way for constellations of very-high-resolution satellites. These transformations are accelerated by several New Space initiatives from private actors and enabled by multiple converging factors: affordable access to space (thanks to reusable launch systems), low-cost recurring satellite platforms and instruments (relying on COTS hardware), and cheaper multi-sensor ground systems (leveraging AI and Big Data technologies). However, as the size of future EO systems increases, both mission design and mission management become more complex. In many situations, new computational solutions will be required to solve a range of discrete and continuous optimization problems and find optimal values for the design and decision variables that drive mission performance and mission cost. Quantum Computing (QC) is generating tremendous interest and gaining momentum through massive investments in research and technology development from both public and private actors in the US, Europe and China. Several quantum computers are already commercially available (including the D-Wave 2000Q Quantum Annealer and the IBM Q 20-qubit system), while many prototypes have been announced by Google, Intel, Rigetti and others. Combinatorial optimization is a major area of focus for the QC community, where researchers devise new quantum algorithms and hope to demonstrate an advantage or speedup over classical algorithms. Quantum Annealing and the Quantum Approximate Optimization Algorithm are good examples of metaheuristic algorithms that can be evaluated on current quantum computers or simulators. In this talk, we will focus on one problem relevant to the design of an EO system: Fault Tree Analysis (FTA), widely used to assess the Reliability, Availability, Maintainability and Safety of space systems. We will illustrate the mathematical and preprocessing steps necessary to derive a representation amenable to Quantum Annealers. We will then provide a preliminary comparative assessment of quantum versus classical performance and conclude with some insights on a promising hybrid classical/quantum approach to solving this problem.
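For readers unfamiliar with the target representation, the sketch below casts a tiny discrete problem as a QUBO, the form accepted by quantum annealers and QAOA. The matrix is illustrative, not the authors' fault-tree encoding, and it is solved here by exhaustive search, which an annealer would replace by sampling.

```python
# Toy QUBO: minimize x^T Q x over binary x. Diagonal terms reward selecting
# a variable; the off-diagonal term penalizes selecting x0 and x1 together.
import itertools
import numpy as np

Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -0.5,  0.0],
              [ 0.0,  0.0, -1.5]])

best = min(itertools.product([0, 1], repeat=3),
           key=lambda x: float(np.array(x) @ Q @ np.array(x)))
print("optimal assignment:", best)   # -> (1, 0, 1) for this Q, energy -2.5
```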

Authors: Picard, Mathieu (1); Botter, Thierry (2); Michaud, Vincent (1)
Organisations: 1: Airbus Defence and Space, France; 2: Airbus Corporate Technology Office, Germany

Copernicus Master Side Event
14:00 - 15:30
Chair: Thomas E. Beer - ESA

Where you win it, where you lose it: developing EO based apps (ID: 379)
Presenting: Beer, Thomas E.

Developing a successful app which is enriched with EO data and will generate income is not an easy undertaking. This side event session will demonstrate how technical and managerial hurdles can be overcome. Successful and less successful app developers will highlight the essentials of their long march towards a commercially viable app. The audience will learn where you win it, where you lose it, and which mistakes to avoid. Also present: the organiser of the ESRIN app camps (AZO Oberpfaffenhofen (DE)), the developer of the API used for the ESA app camps (Ramani B.V. (NL)), and a group of app developers fresh from the 2018 ESRIN app camp held in September 2018. This session is a must for all hopeful would-be app developers!

Authors: Beer, Thomas E.
Organisations: ESA, Italy

Digital Poster - Exhibition - Drink
17:30 - 19:00

The Complete Data Fusion for the improvement of Sentinel 4 and Sentinel 5 products (ID: 220)
Presenting: Zoppetti, Nicola

AURORA is a project financed by the European Commission in the framework of the Horizon 2020 programme. It concerns the sequential application of fusion and assimilation algorithms to simulated ozone profiles in different spectral bands, according to the specifications of the atmospheric Sentinels 4 and 5(p). The atmospheric Sentinels will provide an enormous amount of data with unprecedented spatial and temporal resolution. In this scenario, a central challenge is to enable a generic data user (for example, an assimilation system) to ingest such a large amount of data without loss of information. In this sense, an algorithm such as the Complete Data Fusion (CDF) is particularly interesting, as it is able to reduce, without loss of information, the data volume of input products that correspond to the same space and time location. The CDF accepts as input a generic number of Level 2 products retrieved with optimal estimation techniques. Each of these products is represented by a volume mixing ratio profile characterized by its covariance matrix, its averaging kernel matrix and the a priori information used in the retrieval. The output of the fusion is a single product that has the same structure and collects all the information of the input products. This work is divided into two parts. The first part, which considers the fusion of 1000 coincident pixels simulated with different errors, aims to show that the CDF is, to our knowledge, the only algorithm able to correctly combine the information of several coincident measurements into a single product while taking into account the a priori information. The same a priori information introduces a bias if, for example, the arithmetic average of the input profiles is taken. In the second part of the work, the products obtained by fusing not perfectly co-located simulated ozone profiles in the TIR and UV bands are analyzed. In particular, the characteristics of these products are compared with those of the original products that were fused; this comparison is aimed at highlighting the better data exploitation provided by the fusion. The second part also shows that the CDF can be applied with different coincidence grid cell sizes, for example to match the size of an assimilation grid, leading to different compression factors of the original Level 2 data volume. These results highlight the importance of the data fusion procedure in the management of large data volumes, such as those provided by the atmospheric Sentinels.
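A minimal numpy sketch of the optimal-estimation combination commonly underlying such fusion schemes, assuming Gaussian errors and using the a-priori-free products alpha_i = x_i - (I - A_i) x_a; details of the AURORA CDF implementation may differ from this simplified form.

```python
# Combine coincident retrieved profiles (x_i, averaging kernel A_i, noise
# covariance S_i) sharing one a priori (x_a, S_a) into a single product.
import numpy as np

def complete_data_fusion(profiles, kernels, noise_covs, x_a, S_a):
    n = x_a.size
    info = np.linalg.inv(S_a)                  # accumulated information matrix
    vec = info @ x_a
    for x_i, A_i, S_i in zip(profiles, kernels, noise_covs):
        alpha = x_i - (np.eye(n) - A_i) @ x_a  # remove the common a priori
        Si_inv = np.linalg.inv(S_i)
        info = info + A_i.T @ Si_inv @ A_i
        vec = vec + A_i.T @ Si_inv @ alpha
    S_f = np.linalg.inv(info)                  # fused covariance
    return S_f @ vec, S_f                      # fused profile, same structure

# two coincident 3-level profiles with identity kernels reduce to an
# inverse-variance weighted mean, softened by the loose a priori
x_a, S_a = np.zeros(3), 100.0 * np.eye(3)
x1, x2 = np.array([1.0, 2.0, 3.0]), np.array([1.2, 1.8, 3.1])
A, S = np.eye(3), 0.25 * np.eye(3)
x_f, S_f = complete_data_fusion([x1, x2], [A, A], [S, S], x_a, S_a)
print(x_f)   # close to the mean of x1 and x2
```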

Authors: Zoppetti, Nicola (1); Ceccherini, Simone (1); Carli, Bruno (1); Cortesi, Ugo (1); Del Bianco, Samuele (1); Gai, Marco (1); Tirelli, Cecilia (1); Barbara, Flavio (1); Dragani, Rossana (2); Kujanpää, Jukka (3); Tuinder, Olaf (4); Van Peet, Jacob (4); Van Der A, Ronald (4)
Organisations: 1: IFAC-CNR, “Nello Carrara” Institute of Applied Physics, Florence, Italy; 2: European Centre for Medium-Range Weather Forecasts, Shinfield Park, Reading, RG2 9AX, UK; 3: Finnish Meteorological Institute, Earth Observation Unit, P.O. Box 503, 00101 Helsinki, Finland; 4: Royal Netherlands Meteorological Institute, Utrechtseweg 297, 3731 GA De Bilt, The Netherlands
Aerostatic System For Early Detection Of Fires (ID: 261)
Presenting: Lipski, Stanisław

Early detection of a fire at the initial stage of development is of decisive importance for effective firefighting, safe evacuation of people and avoiding serious material damage. Elevating an automated TV and IR observation station on board a high-altitude aerostat further reduces the response time of the emergency services. The subject of the undertaking is the design and construction of a suitable aerostat set, the selection of on-board equipment, prototyping with new functional materials, and the implementation of appropriate software and a ground station for data transmission for use by anti-crisis services. The advantage of this type of system may be a lower cost of system devices, with a large range of continuous surveillance in comparison to ground systems. Additional system functions can be: automatic analysis of air transparency, self-diagnosis of the state of on-board systems, and dirt control and status monitoring of addressable sensors. These features make it easier to maintain system efficiency and significantly simplify servicing, which lowers the operating costs of the entire installation. The applicant is the Institute of Precision Mechanics (IMP), one of the oldest and best-known institutes in Poland. Directions of work at IMP: increasing the operational, fatigue, corrosion and tribological durability of machine and tool parts; increasing corrosion resistance, directed in particular at light metals and their alloys (titanium, aluminium, magnesium); increasing the technical and technological security of the country, developing material-saving and energy-efficient technologies together with devices for their implementation, and selecting modern technologies for various conditions of use, taking into account environmental protection and recycling requirements; developing and applying modern nanomaterials, with particular emphasis on graphene materials; and developing materials for medical tools and instruments as well as implants for medicine. IMP develops technologies for the production of functional coatings from new advanced materials. More on the applicant's website: http://imp.edu.pl/ Expectations of the applicant: identify, define and evaluate, technically and economically, potentially sustainable services based on HAPS supplemented by satellites; identify and reduce the technical and economic risks associated with the implementation of these services; consolidate user/customer requirements and establish contacts with relevant clients and other stakeholders for further engagement; and propose recommendations and define an action plan for the implementation and demonstration of services and the possible preparation of a demonstrator project.

Authors: Lipski, Stanisław
Organisations: Institute of Precision Mechanics, Poland
Maritime optimised weather routing services: Model vs Neural Network (ID: 184)
Presenting: Ruiz Sanchez, Pablo

Reducing fuel consumption is more important every day for the maritime industry. Weather routing services are currently offered to maritime operators as qualitative information which is manually processed and used to redefine maritime routes. Now, thanks to the availability of Copernicus data and services, it is possible to provide optimised weather routing services which, based on quantitative data, compute the optimal route from weather conditions and forecasts together with maritime traffic data. The required inputs are Level 1 and 2 products from Sentinel-3, Level 3 and 4 products provided by CMEMS, and supporting radar/optical images acquired by Sentinel-1 and -2 via ESA. This new service can increase fuel savings from the 5% of traditional weather routing services up to 15% with optimised weather routing services. The development of this solution has a proven potential market and has been identified as a key enabling technology for maritime autonomous navigation, leading to the acceptance of the company in ESA BIC Darmstadt to develop it further. Since 1 January 2018, the EU MRV (Monitoring, Reporting and Verification) regulation has been applicable, and a database including vessel routes together with their associated fuel consumption is being maintained. Additionally, the IMO (International Maritime Organization) agreed during the 70th session of the Marine Environment Protection Committee (MEPC 70) to develop a scheme to collect vessel fuel consumption worldwide and maintain a similar database. With the data collected by these two independent but similar systems, a wealth of data regarding fuel consumption, vessel routes and weather conditions will become available. The logical next step for the above-mentioned solution is to build an artificial neural network that, trained with these data through deep learning techniques, is able to provide the optimal route in terms of fuel consumption (or any other operational variable or associated metric). The data sources and inputs needed to develop such a neural network are exactly the same as those required to feed a model-based solution. Two different approaches to the same problem could then be developed and their performance compared. At first, the model-based solution is expected to be more accurate in predicting fuel consumption and providing optimal routes, while the performance of the deep-learning-based solution will improve with the amount of data available, since more and more data about maritime traffic, vessel routes and weather conditions are accumulated every day and used to train the neural network. Both kinds of solutions will be made available to maritime operators through web services and cloud platforms, with the Copernicus DIAS infrastructure being a perfect platform to host these services. The medium- and long-term plan is to stop improving the model-based solution, since the neural network's performance is expected to be much better once enough data has been collected to allow a correct learning process.
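
As a toy illustration of what the model-based branch of such a service computes (not the authors' implementation: the grid, the eight-neighbour moves and the weather-derived cost factor are all assumptions), the sketch below runs Dijkstra's algorithm over a gridded sea area in which each cell carries a fuel-cost multiplier derived from a weather forecast:

```python
# Minimal sketch: fuel-optimal route over a weather-cost grid (illustrative).
import heapq
import numpy as np

def optimal_route(fuel_factor, start, goal):
    """Dijkstra over a 2D grid; fuel_factor[i, j] scales the cost of
    entering cell (i, j), e.g. derived from wave and wind forecasts."""
    rows, cols = fuel_factor.shape
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        i, j = node
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1),
                       (1, 1), (1, -1), (-1, 1), (-1, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                step = (di * di + dj * dj) ** 0.5   # geodesic distances omitted
                nd = d + step * fuel_factor[ni, nj]
                if nd < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)], prev[(ni, nj)] = nd, node
                    heapq.heappush(queue, (nd, (ni, nj)))
    path, node = [goal], goal                       # backtrack goal -> start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Example: a uniform sea with a rough-weather region of tripled fuel cost.
weather = np.ones((100, 100))
weather[40:60, 30:70] = 3.0       # e.g. high significant wave height (CMEMS)
route = optimal_route(weather, start=(0, 0), goal=(99, 99))
```

A neural-network variant would instead learn the mapping from route and weather to fuel consumption from the MRV/IMO databases and search over candidate routes with that learned cost.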

Authors: Ruiz Sánchez, Pablo; Castro de Lera, Mario
Organisations: Deep Blue Globe UG (haftungsbeschränkt)
Change detection of built-up areas exploiting multiple classification approaches in VHR images - MW (for mini-workshop Future EO) (ID: 311)
Presenting: Taggio, Nicolò

Urban environments are complex and evolve quickly over time due to the frequent interaction between humans and the natural system. Timely and accurate change information in urban areas is essential for successful urban planning and management and crucial for decision-making related to sustainable development. Change detection of the urban environment is considered a mature research field, mainly because it has been extensively studied by multidisciplinary scientific teams. However, the change detection problem with multi-temporal Earth observation data still remains a challenge, mainly due to the continuously increasing needs of society and the evolving requirements of stakeholders. Concerning the change detection problem for built-up areas and the current availability of very high-resolution optical data, the present study analyses the benefit of a deep learning approach versus traditional supervised classification approaches. The main goal of this work is to identify changes in built-up areas starting from single classifications of two VHR satellite images acquired at different times. In the first stage, each image is classified using non-parametric classifiers such as Random Forest (RF), k-Nearest Neighbour (kNN) and Support Vector Machine (SVM), as well as a deep learning approach, using built-up training samples derived from the Italian Regional Cartography. For each single image, the classification result is a binary map containing built-up and non-built-up areas. In the second stage, the final built-up change map is extracted from the comparison of the binary maps, and a quality assessment is conducted in which the performances of the aforementioned classifiers are examined and compared. Based on experimental results in the area of Fiumicino Airport in Rome, the capabilities and potential use of the deep learning approach will be analysed in order to outline further improvements for change detection in built-up areas.
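
A minimal sketch of the two-stage scheme described above, using one of the named classifiers (Random Forest via scikit-learn); the arrays stand in for the two VHR acquisitions and for the training samples derived from the Italian Regional Cartography, and are placeholders only:

```python
# Stage 1: per-date built-up/non-built-up classification with a Random Forest.
# Stage 2: change map from the comparison of the two binary maps.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_builtup(image, train_pixels, train_labels):
    """image: (bands, rows, cols); train_labels: 1 = built-up, 0 = not."""
    bands, rows, cols = image.shape
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(train_pixels, train_labels)
    flat = image.reshape(bands, -1).T        # one feature vector per pixel
    return clf.predict(flat).reshape(rows, cols)

rng = np.random.default_rng(0)               # placeholder data
img_t1 = rng.random((4, 50, 50))
img_t2 = rng.random((4, 50, 50))
X_train = rng.random((100, 4))
y_train = rng.integers(0, 2, 100)

map_t1 = classify_builtup(img_t1, X_train, y_train)
map_t2 = classify_builtup(img_t2, X_train, y_train)
new_builtup = (map_t1 == 0) & (map_t2 == 1)  # change: newly built-up pixels
```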

Authors: Cilli, Roberto (1); Bellotti, Roberto (1,2,3); La Mantia, Claudio (4); Taggio, Nicolò (4); Karamvasis, Kleanthis (5)
Organisations: 1: Dipartimento Interateneo di Fisica "M. Merlin", Università degli studi di Bari "A. Moro", Bari, Italy; 2: Center of Innovative Technologies for Signal Detection and Processing (TIRES), Bari, Italy; 3: Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Bari, Italy; 4: Planetek Italia s.r.l., Italy; 5: Laboratory of Remote Sensing, National Technical University of Athens
Applications of the Sentinel Satellites for Spatial Assessments of Ecosystem Health and Resilience in Landscapes (ID: 292)
Presenting: Vagen, Tor-Gunnar

Ecosystem health as a concept considers social and ecological systems in an integrated way. It recognizes the need to understand not only ecological processes, but also how social systems and processes drive ecological change and are in turn impacted by such changes. However, assessing ecosystem health at scale or across large landscapes is complex and requires a combination of systematic field-based ecological and socioeconomic indicators, coupled with consistent methodologies for data collection and approaches for scaling such assessments. We apply the Land Degradation Surveillance Framework (LDSF), an approach for the collection of ecological variables in landscapes that implicitly recognizes scale dependencies and can provide consistent estimates of key indicators of ecosystem health and resilience in landscapes. We present a study where data from both Sentinel-1 and Sentinel-2 are used to conduct spatial assessments of indicators such as soil organic carbon (SOC), soil erosion prevalence and vegetation condition. These spatial assessments can in turn be integrated with socioeconomic survey data to assess ecosystem health and resilience, including impacts of land management on soil and land health. Further, efforts to restore degraded lands can be greatly enhanced by applying such spatial assessments to target specific restoration interventions, both spatially and contextually.

Authors: Vagen, Tor-Gunnar; Winowiecki, Leigh A.
Organisations: World Agroforestry Centre (ICRAF), Kenya
Introduction to Overt Space (ID: 276)
Presenting: Fernandez de la Pena, Carlos

Overt Space is an insights company based in Madrid, Spain. We empower global enterprises to make the most accurate decisions possible by providing insights rooted in an EO data and analytics platform created specifically for living resources. The global population is expected to reach 9.7 billion by 2050. Given limited arable land, world food production has to become 30% more efficient. We use satellites and machine learning to solve this problem. To do that we have integrated over 5 PB of optical and radar satellite imagery from a variety of satellites. To work with this massive dataset we built a cloud-based processing infrastructure capable of cataloguing, processing and delivering terabytes of pixels in minutes. To make diverse data usable for analysis, we have developed a sophisticated cross-calibration pipeline that creates a consistent fused dataset. We do this by: applying atmospheric corrections to recover the losses of light caused by passage through the atmosphere, and cross-calibrating other data towards Sentinel-2 (we believe it has become an industry standard and the most informative dataset available today). The core of the infrastructure is a processing pipeline based on microservices that supports traditional, machine learning and deep learning algorithms. With this infrastructure we deployed an artificial intelligence method for detecting clouds that does not require cirrus or thermal bands, so it can be used both for our data and for improving the Sentinel-2 cloud mask. Unfortunately, currently available satellite data do not provide sufficient imaging frequency and spectral resolution to solve global food production challenges. In the future we plan to launch our own satellites dedicated to agricultural monitoring. With sensors optimized for vegetation analysis with maximum signal-to-noise ratio, consistent cross-calibration and daily coverage of all arable land, we can detect crop development trends even in cloudy regions. Our processing algorithms are focused on extracting insights on world food production. For example, we can identify all the locations in the world where corn is growing. Artificial intelligence allows us to detect agricultural activity, identify crops and compare productivity with previous years at a significantly lower price at scale. This type of data is of interest to global stakeholders that need an objective global picture, such as national governments, commodity traders, banks and insurance companies. Additionally, it gives us knowledge of the key industry drivers or "features" that correlate with crop productivity and are of different natures: economic, climatic, social, etc. While so many other players create companies based on what was possible, what will garner attention or what might be the safest or easiest to deliver, we will always focus on what is needed, and in the process, right the wrongs of years of sub-par or overpromised products.

Authors: Fernandez de la Pena, Carlos; Lengold, Katerina; Kudriashova, Alexandra; Feyzkhanov, Rustem
Organisations: Overt Space, Spain
Using EO datasets to investigate the existence and physics of possible Lithosphere-Atmosphere-Ionosphere Coupling effects prior to large earthquakes (ID: 274)
Presenting: Arquero Campuzano, Saioa

Within the SAFE project, funded by ESA in 2015-2016, we started to analyse possible effects in the ionosphere due to seismic activity, known as Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) effects. During the project, we analysed 12 large and intermediate earthquakes, showing that the use of Swarm satellite data, integrated with ground observations (e.g. seismic, total electron content and ionosonde measurements), is very important for a better understanding of the preparatory phase of large earthquakes around the world. The positive results we obtained during the project convinced ESA to approve a one-year extension (e-SAFE), presently ongoing and ending in 2019. The e-SAFE project, together with an ASI-funded project, Limadou-Science, continues to show that there is a possible influence of lithospheric activity on the magnetic field and electron density measurements taken by the Swarm satellites, with great expectations of finding similar results with the CSES satellite mission. In this presentation, we will show how large satellite datasets (e.g. Swarm, ERA-Interim, MERRA-2, etc.) can help in the search for anomalous features in the atmosphere and ionosphere before the occurrence of significant-magnitude (M6+) earthquakes around the world. The advantage of EO satellite datasets with respect to ground observations is their global coverage, including inaccessible areas (e.g. oceans and deserts). We have developed different methods to investigate possible pre-earthquake effects in multiple datasets, taking their different natures into account through a multi-parametric approach. In this regard, some positive results have been published recently on the Nepal M7.8 2015, Amatrice-Norcia M6.5 2016, Ecuador M7.8 and Mexico M8.2 earthquakes (De Santis et al. 2017, Piscini et al. 2017, Akhoondzadeh et al. 2018 and Marchetti et al. 2018). The use of EO global datasets can contribute to the knowledge of the deep nature of the Earth system and its components, known as Geosystemics (De Santis et al. 2015). References: • Akhoondzadeh M., De Santis A., Marchetti D., Piscini A., Cianchini G. (2018). Multi precursors analysis associated with the powerful Ecuador (MW = 7.8) earthquake of 16 April 2016 using Swarm satellites data in conjunction with other multi-platform satellite and ground data. Advances in Space Research, 61, 1, 248-263. • De Santis A., De Franceschi G., Spogli L., Perrone L., Alfonsi L., Qamili E., Cianchini G. et al. (2015). Geospace Perturbations Induced by the Earth: The State of the Art and Future Trends. Physics and Chemistry of the Earth, Parts A/B/C, May. doi:10.1016/j.pce.2015.05.004. • De Santis A., Balasis G., Pavón-Carrasco F.J., Cianchini G., Mandea M. (2017). Potential earthquake precursory pattern from space: the 2015 Nepal event as seen by magnetic Swarm satellites. Earth and Planetary Science Letters, 461, 119-126. • Marchetti D., Akhoondzadeh M. (2018). Analysis of Swarm satellites data showing seismo-ionospheric anomalies around the time of the strong Mexico (Mw=8.2) earthquake of 08 September 2017. Advances in Space Research, in press. doi:10.1016/j.asr.2018.04.043. • Piscini A., De Santis A., Marchetti D., Cianchini G. (2017). A multi-parametric climatological approach to study the 2016 Amatrice-Norcia (Central Italy) earthquake preparatory phase. Pure and Applied Geophysics, 174, 10, 3673-3688.

Authors: De Santis, Angelo (1); Cesaroni, Claudio (1); Cianchini, Gianfranco (1); Di Giovambattista, Rita (1); Ippolito, Alessandro (1); Marchetti, Dedalo (1); Perrone, Loredana (1); Piscini, Alessandro (1); Spogli, Luca (1); Abbatista, Cristoforo (2); Carbone, Marianna (2); Amoruso, Leonardo (2); Santoro, Francesca (2); Arquero Campuzano, Saioa (1); D'Arcangelo, Serena (4); Poggio, Federica (3); Carducci, Andrea (3)
Organisations: 1: INGV, Italy; 2: Planetek Italia srl; 3: Università Gabriele D’Annunzio – Chieti, Italy; 4: Facultad Física (UCM), Avd. Complutense
The Mini-EUSO telescope for Earth observation from the ISS in the ultraviolet range (ID: 273)
Presenting: Conti, Livio

The Mini-EUSO instrument is a high-sensitivity, high-resolution compact telescope for night-time Earth observation in the ultraviolet range that will be installed within the next few months in the Russian Zvezda module, pointing to the Earth from a nadir-oriented window. Mini-EUSO aims at measuring atmospheric events, such as transient luminous events (TLEs) and meteors, as well as searching for strange quark matter and bioluminescence. The telescope includes a UV camera (with a spatial resolution of about 6 km and a temporal resolution of 2.5 microseconds) sensitive to the 300–400 nm wavelength range and an ancillary IR camera to complement studies of atmospheric phenomena. Mini-EUSO is one of the pathfinder experiments for the larger UV telescopes of the JEM-EUSO program, under development to study UHECRs from space by measuring the UV emissions produced in the Earth's atmosphere by cosmic-ray showers. In this framework, Mini-EUSO will also allow the detection of space debris, to verify the possibility of using an EUSO-class telescope in combination with a high-energy laser for space debris remediation. Mini-EUSO is a mission approved and selected by the Italian Space Agency (ASI) and by the Russian space agency Roscosmos under the name ”UV atmosphere”.

Authors: Conti, Livio (1,2); Bertaina, Mario (3); Casolino, Marco (2); Fornaro, Claudio (1,2); Picozza, Piergiorgio (4,2,1); Ricci, Marco (5)
Organisations: 1: Uninettuno University, Rome, Italy; 2: INFN, Sezione Roma Tor Vergata, Rome, Italy; 3: INFN, Sezione di Torino, Turin, Italy; 4: Università di Roma Tor Vergata, Rome, Italy; 5: INFN, LNF, Frascati (Roma), Italy
The CSES satellite mission for studying the near-Earth electromagnetic, plasma and particle environment (ID: 271)
Presenting: Conti, Livio

We present the CSES (China Seismo-Electromagnetic Satellite) mission, launched on February 2nd, 2018. CSES is a Chinese-Italian satellite dedicated to monitoring electromagnetic fields, plasma parameters and particle fluxes induced by natural sources and artificial emitters in near-Earth space. In particular, the mission aims to study the existence of possible (temporal and spatial) correlations between observations of iono-magnetospheric perturbations (including the precipitation of particles from the inner Van Allen belts) and the occurrence of seismic events. However, a careful analysis is needed in order to distinguish measurements possibly associated with earthquakes from the large background generated in the geomagnetic cavity by solar activity and tropospheric electromagnetic emissions. Data collected by the mission will also allow the study of solar-terrestrial interactions and phenomena of solar physics, namely Coronal Mass Ejections, solar flares and cosmic ray solar modulation. CSES is a three-axis stabilized satellite; the orbit is circular and Sun-synchronous, with an altitude of about 500 km, an inclination of about 98° and a descending node at 14:00 LT. The expected mission lifetime is 5 years. The CSES payload includes: two particle detectors, a Search-Coil Magnetometer, a High Precision Magnetometer (provided by Austria), an Electric Field Detector, a Plasma Analyzer, a Langmuir probe, a GNSS Occultation Receiver and a Tri-Band Beacon. Italy participates in the CSES mission with the LIMADOU Collaboration, which built the High Energy Particle Detector (HEPD), conceived to optimize the detection of energetic charged particles precipitating from the inner Van Allen belts; collaborated in developing and testing the Electric Field Detector; and participates in analyzing data from all payloads of the CSES mission. CSES is the first of a series of Chinese LEO satellites for Earth observation; the launch of the second mission (CSES-02), with a similar payload, is planned within two years on the same orbit, in order to monitor the same area on the ground with a revisit time of about half an hour.

Authors: Conti, Livio (1,2); Ambrosi, Giovanni (3); Battiston, Roberto (4,5); Contin, Andrea (6); De Santis, Angelo (7); De Santis, Cristian (2); Iuppa, Roberto (5); Osteria, Giuseppe (8); Picozza, Piergiorgio (9,2,1); Ricci, Marco (10); Sparvoli, Roberta (9,2); Ubertini, Pietro (11); Zoffoli, Simona (4)
Organisations: 1: Uninettuno University, C.so Vittorio Emanuele II, 39, 00186, Rome, Italy; 2: INFN - Sezione Roma 2, V. della Ricerca Scientifica 1, 00133, Rome, Italy; 3: INFN - Sezione of Perugia, V. A. Pascoli, 06123, Perugia, Italy; 4: Agenzia Spaziale Italiana, V. del Politecnico snc, 00133 Rome, Italy; 5: University of Trento and INFN - TIFPA, V. Sommarive 14, 38123 Povo (TN); 6: University of Bologna and INFN - Sezione of Bologna, V.le Berti Pichat 6/2, Bologna, Italy; 7: INGV, V. di Vigna Murata 605, 00143 Rome, Italy; 8: INFN - Sezione of Napoli, via Cintia, I-80126, Napoli, Italy; 9: University of Tor Vergata, V. della Ricerca Scientifica 1, 00133, Rome, Italy; 10: INFN - LNF, V. E. Fermi, 40, 00044 Frascati (RM), Italy; 11: INAF - IAPS, V. Fosso del Cavaliere 100, 00133, Rome, Italy
Postprocessing Methodology For Crop Classification Maps (ID: 267)
Presenting: Lavreniuk, Mykola

Obtaining accurate agricultural land use (crop) maps from satellite Earth observation, in particular from high-resolution data, is one of the most important tasks in remote sensing. Such maps provide basic information for many applications and are vital in remote sensing studies. In pixel-based classification maps there is always some noise (sometimes referred to as “salt-and-pepper”). To decrease the level of noise, different types of filters have been used at the post-classification stage. The most popular and effective post-processing filters are based either on moving windows that slide over the classified image and assign a new class to the central pixel of the given window according to certain rules, or on morphological filters that take into account the geometric structure and shape of “objects” in pixel-based thematic maps. The most challenging task in post-classification filtering is preserving edges and boundaries between different fields. Often these boundaries are narrow, and some traditional filters tend to treat them as noise and remove them. Therefore, in this paper we propose a novel object-based method for post-processing of crop classification maps that allows us to reduce noise in the maps and to increase their overall accuracy. The main idea of the object-based method is to treat each group of pixels with the same class value as a holistic object, in contrast to commonly used methods based on the principle of a moving window. This approach has been compared with traditional methods for noise filtering, both statistically, using the McNemar test, and visually, for the territory of the Kyiv region in 2017, and revealed its advantage in terms of accuracy. Its main advantage is the preservation of the shapes of the classified objects and the borders between them, which in turn prevents the disappearance of reliably classified objects of small size. The overall accuracy of the final classification map (94.2%) increased by 2.3% compared to the original map (91.9%) and by 0.6% compared to the improved voting method.
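
To make the contrast concrete, the sketch below places a classic moving-window majority filter next to a simple object-based cleaning step that treats each connected same-class pixel group as a whole; the minimum-object size and the neighbour-voting rule are illustrative assumptions, not the paper's exact algorithm:

```python
# Window-based vs object-based post-classification filtering (illustrative).
import numpy as np
from scipy import ndimage

def majority_filter(class_map, size=3):
    """Moving-window smoothing: each pixel takes the most frequent class in
    its neighbourhood (tends to erode narrow boundaries between fields)."""
    def mode(values):
        return np.bincount(values.astype(int)).argmax()
    return ndimage.generic_filter(class_map, mode, size=size)

def remove_small_objects(class_map, min_pixels=5):
    """Object-based cleaning: connected components smaller than min_pixels
    are dissolved into the dominant class of their immediate neighbourhood,
    so reliably classified small objects above the threshold survive."""
    cleaned = class_map.copy()
    for cls in np.unique(class_map):
        labels, n = ndimage.label(class_map == cls)
        sizes = ndimage.sum(class_map == cls, labels, range(1, n + 1))
        for obj_id in np.where(sizes < min_pixels)[0] + 1:
            mask = labels == obj_id
            ring = ndimage.binary_dilation(mask) & ~mask
            if ring.any():
                cleaned[mask] = np.bincount(cleaned[ring].astype(int)).argmax()
    return cleaned

noisy = np.random.default_rng(0).integers(0, 3, (60, 60))
smoothed = majority_filter(noisy)
object_cleaned = remove_small_objects(noisy)
```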

Authors: Lavreniuk, Mykola (1,2); Kussul, Nataliia (2); Vasiliev, Vladimir (1)
Organisations: 1: EOSDA, Ukraine; 2: Space Research Institute, Ukraine
Comparing TVDI classification to agroclimatic classification in Iran (ID: 266)
Presenting: Asadi Oskouei, Ebrahim

The Temperature Vegetation Dryness Index (TVDI) is one of the approaches to assessing dryness with remote sensing data, usually over a certain period of time, and tracking its results provides a tool for monitoring how dryness changes over the period of interest. The index is, however, rarely used as a classification instrument. The TVDI is based on vegetation index data (NDVI) and surface temperature (Ts) and can effectively show the spatial and temporal distribution of dryness. MODIS products are very suitable for wide-range measurements, long-term monitoring, and high-resolution drought assessment and monitoring. In this research, TVDI was calculated in 16-day steps for the years 2013 and 2016 using the MODIS (Moderate Resolution Imaging Spectroradiometer) Level 3 NDVI and Ts products for the entire territory of Iran. The country includes many different climatological classes, such as very hot dry deserts, humid forests and semi-arid zones. The final TVDI map was classified into 7 groups of equal width and compared with the agro-climatic classification published by the Iranian Meteorological Organization. The results show that the long-term TVDI map can reveal some boundaries of the different climate classes, but the main errors occur in high, cold mountainous areas and also in very humid forest areas. The best agreement between TVDI and the classification can be seen in desert areas. Another advantage of this method, in comparison with classical classification methods based on the interpolation of ground data, is its ability to reveal the microclimates that exist inside larger climate classes, although further work is needed to obtain more accurate results for sub-climates and microclimates.
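
The TVDI referred to here is commonly defined over the NDVI/Ts scatter ("triangle") of each scene; a sketch of the standard formulation (Sandholt et al., 2002), with generic notation rather than the coefficients fitted in this study:

```latex
% Standard TVDI definition (Sandholt et al., 2002): T_s is the observed land
% surface temperature, T_{s,min} the wet edge, and the dry edge
% T_{s,max} = a + b * NDVI is fitted to the NDVI/T_s scatter of each scene.
TVDI = \frac{T_s - T_{s,\min}}{T_{s,\max} - T_{s,\min}},
\qquad T_{s,\max} = a + b \cdot \mathrm{NDVI}
% TVDI ranges from 0 at the wet edge to 1 at the dry edge (maximum dryness).
```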

Authors: Asadi Oskouei, Ebrahim (1); Lopez-Baeza, Ernesto (2); Saboori Noghabi, Saeed (3)
Organisations: 1: Islamic Republic of Iran Meteorological Organization- Iran; 2: University of Valencia-Spain; 3: Shahrood University of Technology-Iran
Assessment of Sustainable Development Goals for Ukraine using NEXUS approach (ID: 265)
Presenting: Kussul, Nataliia

Aimed at reaching the Sustainable Development Goals (SDGs) adopted in the 2030 Agenda for Sustainable Development, the ERA-PLANET project implements the use of Earth observation data in environmental management tasks. In this paper we propose a methodology for calculating indicators of the Sustainable Development Goals within the GEOEssential project, which is part of the ERA-PLANET Horizon 2020 project. We consider three indicators: 15.1.1, forest area as a proportion of total land area; 15.3.1, proportion of land that is degraded over total land area; and 2.4.1, proportion of agricultural area under productive and sustainable agriculture. For this, we used remote sensing data, data from weather and climate models, and in-situ data. Accurate land cover maps are important for the precise assessment of land cover changes. To improve the resolution and quality of existing global land cover maps, we proposed our own deep learning methodology for producing country-level land cover maps. For calculating the essential variables that are vital for deriving the indicators, a NEXUS approach based on the idea of fusing the food, energy and water domains was applied. Long-term land cover change maps combined with land productivity maps are essential for determining environmental changes and estimating the consequences of anthropogenic activity. The JRC developed a methodology for Land Productivity Dynamics (LPD) estimation based on NDVI profiles derived from SPOT-VGT time series with coarse spatial resolution. Taking into account that our national land cover maps have much higher spatial resolution, it is necessary to provide land productivity maps at the same resolution. Thus, we estimated a productivity map for the territory of Ukraine for the years 2010-2014 based on NDVI profiles derived from Landsat images with a simplified methodology. Further, we are going to extend the JRC approach to Landsat and Sentinel-2 data. We also derive 15 essential variables for food, water and energy that can be used for monitoring the state of Ukrainian resources and for indicator calculation. The food essential variables will be used to build a new productivity map with the help of the classification map, replacing the productivity map based on the NDVI trend.

Authors: Kussul, Nataliia (1,2); Lavreniuk, Mykola (1,2); Shelestov, Andrii (1,2); Kolotii, Andrii (1,2); Shumilo, Leonid (1,2)
Organisations: 1: Space Research Institute, Ukraine; 2: National Technical University of Ukraine “Igor Sikorsky Kiev Polytechnic Institute”
Development of a workflow for the calculation of Sustainable Development Goal indicator 2.4.1 based on the Vlab platform (ID: 264)
Presenting: Shumilo, Leonid

The Ecopotential Vlab is a new, efficient and promising way for scientists to carry out research based on the large volumes of satellite and in-situ measurement data supplied by the GEOSS Platform. Using cloud computing resources, with direct access to data from the GEOSS Portal, the Vlab makes it possible to implement workflows that compute and monitor essential variables of water, food and energy and, accordingly, to calculate indicators of the Sustainable Development Goals for different countries around the world. During our study within the GEOEssential project of ERA-PLANET, we developed a workflow to calculate indicator 2.4.1, proportion of agricultural area under productive and sustainable agriculture, in the Vlab, using the JRC methodology to calculate an agricultural land productivity index. This workflow takes as input a time series of Landsat-8 or Sentinel-2 satellite images and a classification map of agricultural land. From the time series of satellite images, the trend of the NDVI index is calculated, which determines whether the land is productive, sustainable or unproductive. Using the classification map, the areas of all agricultural land and of productive agricultural land are calculated and, accordingly, indicator 2.4.1 is obtained as their ratio. In our study we use a classification map built with our land cover classification methods from high-spatial-resolution Sentinel-1 and Sentinel-2 satellite data and in-situ data, which gives better opportunities for estimating the area of agricultural land for Ukraine; however, global land cover products can also be used, for example ESA CCI-LC, which can be obtained together with satellite images from the GEOSS Portal. Our next goal is to implement our algorithm for the classification of satellite images in the Vlab, together with our NEXUS approach, which includes the biophysical plant growth model WOFOST and a statistical climate model. Working together, these enable us not only to calculate the essential variables for water, food and energy, but also to forecast them, while our model of food variables lets us improve the methodology for assessing the productivity of agricultural land and the algorithm for calculating indicator 2.4.1.
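
A stripped-down numerical sketch of the workflow's core steps (a simplification, not the full JRC LPD methodology): a per-pixel linear NDVI trend flags productive pixels, and the indicator is the ratio of productive cropland to all cropland; the array shapes, the zero slope threshold and the placeholder inputs are assumptions:

```python
# Per-pixel NDVI trend and indicator 2.4.1 as a cropland ratio (sketch).
import numpy as np

def ndvi_trend(ndvi_stack):
    """ndvi_stack: (time, rows, cols). Per-pixel slope of a least-squares
    linear fit of NDVI against acquisition index."""
    t, rows, cols = ndvi_stack.shape
    slopes = np.polyfit(np.arange(t), ndvi_stack.reshape(t, -1), deg=1)[0]
    return slopes.reshape(rows, cols)

def indicator_2_4_1(ndvi_stack, cropland_mask, slope_threshold=0.0):
    """Proportion of agricultural area under productive agriculture:
    cropland pixels with a non-negative NDVI trend over all cropland."""
    productive = (ndvi_trend(ndvi_stack) >= slope_threshold) & cropland_mask
    return productive.sum() / cropland_mask.sum()

# Placeholders standing in for a Landsat-8/Sentinel-2 NDVI time series and
# the national agricultural-land classification map.
rng = np.random.default_rng(1)
ndvi = rng.random((20, 100, 100))
cropland = rng.random((100, 100)) > 0.5
print(indicator_2_4_1(ndvi, cropland))
```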

Authors: Shumilo, Leonid; Lavreniuk, Mykola; Kussul, Nataliia
Organisations: NSAU-NASU Space Research Institute, Ukraine
Improving the Availability of Crop Information Using Google Street View (ID: 258)
Presenting: Maus, Victor

The availability of remote sensing data with higher spatiotemporal resolution, such as Sentinel and Landsat, has contributed to improving global land cover products. However, these products are not sufficient to analyse social and environmental impacts. For instance, quantifying and analysing the impacts of global supply chains, e.g. due to international trade, requires more detailed time series of global crop maps, including crop types. Data-driven methods, such as Support Vector Machines, Random Forest, and Deep Learning, have achieved high accuracy in crop mapping in several regions of the world. These methods could potentially be applied to classify larger areas using high-resolution satellite imagery. However, they require an enormous amount of training data to achieve high classification accuracy. In this work, we present a way to collect vast amounts of spatiotemporal crop reference data using Google Street View (GSV). Interacting with GSV’s features (e.g., moving and zooming), the user can identify different crop types and label and store them as samples. Combining the geolocation and date of the labelled GSV picture with the agricultural calendar, we can derive phenological cycles of the crop types, which are inputs to classification algorithms. Using our tool, one can quickly gather a significant amount of highly accurate field information to feed into machine-learning algorithms and extend the classification to larger areas. We performed a case study in Brazil, where we successfully collected reference data for several major crop types, including soybean, maize, cotton, and sugarcane. In future versions of the tool, we envisage integrating computer vision methods to help with the identification of crop types. We could also extend the spatial coverage of the application by incorporating other sources of street view pictures, such as Mapillary https://www.mapillary.com. The tool will be openly available and advertised in the GeoWiki news at https://www.geo-wiki.org.
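
The kind of record such a tool could produce can be sketched as follows; the field names, the region code and the calendar entries are invented for illustration and are not the tool's actual schema:

```python
# Illustrative crowdsourced crop sample: a labelled street-view picture
# joined with a (hypothetical) crop calendar to derive a phenology window.
from dataclasses import dataclass
from datetime import date

CROP_CALENDAR = {("BR-MT", "soybean"): (10, 2),   # sown Oct, harvested Feb
                 ("BR-MT", "maize"):   (1, 6)}    # hypothetical entries

@dataclass
class CropSample:
    lat: float
    lon: float
    picture_date: date
    region: str
    crop: str

    def phenology_window(self):
        """Months in which the labelled crop is expected in the field."""
        sow, harvest = CROP_CALENDAR[(self.region, self.crop)]
        months, m = [], sow
        while m != harvest:
            months.append(m)
            m = m % 12 + 1
        return months + [harvest]

sample = CropSample(-12.5, -55.7, date(2017, 12, 14), "BR-MT", "soybean")
print(sample.phenology_window())   # [10, 11, 12, 1, 2]
```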

Authors: Maus, Victor (1,2); See, Linda (1); Fritz, Steffen (1); Perger, Christoph (1); Victoria, Daniel de Castro (3); Laso Bayas, Juan-Carlos (1)
Organisations: 1: IIASA - International Institute for Applied Systems Analysis, Austria; 2: WU - Vienna University of Economics and Business, Austria; 3: EMBRAPA - Brazilian Agricultural Research Corporation, Brazil
Detecting Potential Rapeseed Zones With a Semi-Automated Classification Tool For Beekeepers (ID: 217)
Presenting: Smykała, Krzysztof

This paper gives an overview of the first step towards an automated classification process for entomophilous crops for farmers and migratory apiarists. In the described step, rapeseed fields are detected automatically in the area of interest. The result of the described subproject will be a semi-automatic tool allowing users to confirm the crop type of a given place. The output of the tool will be ground truth for the machine learning algorithms in the final, fully automatic classification tool. Machine learning (ML) and artificial intelligence (AI) have been common topics, especially over the last few years. Because of the amount of data gathered every day by satellite instruments (e.g. by the Sentinel-2 MultiSpectral Instrument (MSI)), they are also popular topics in Earth observation communities. These techniques (ML and AI) have to be used in the analysis of Earth observation data for a better understanding of the environment. The decreasing honeybee population is a well-known worldwide issue concerning scientists, beekeepers and farmers, as bees are a significant part of apiculture and agriculture. The EFSA (European Food Safety Authority) notes that beekeepers have been reporting unusual losses in honeybee populations over the last 10 to 15 years. The issue particularly affects Western European countries like Germany, France, the Netherlands, Switzerland, the UK, Spain and Belgium. Bees are responsible for pollinating around 100 crop species. As the FAO (Food and Agriculture Organisation) estimates, these species provide around 90% of food worldwide. Additionally, Greenpeace warns that pollinators have an impact on the better growth of about 4,000 vegetable species in Europe alone. Besides their invaluable contribution to biodiversity, pollinators underpin the rich European agri-food market. The annual cash value of pollinators in Europe is estimated at over 22 billion euros; worldwide it runs into hundreds of billions. Any beekeeper assistance programme provides uncountable benefits for humanity, and the described project is dedicated to supporting beekeepers and farmers. In our work we use changes in the normalised difference vegetation index (NDVI) and multi-temporal analysis of multispectral images taken between April and August to determine potential rapeseed zones (PRZ). Workshops and consultations with beekeepers and farmers have been held for a better understanding of the process and to meet industry requirements.
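
A minimal sketch of the multi-temporal NDVI screening described above; the NDVI thresholds and the April-May flowering / July-August harvest windows are assumptions for illustration, not the project's calibrated values:

```python
# Flag potential rapeseed zones (PRZ) from an April-August NDVI time series.
import numpy as np

def potential_rapeseed_zones(ndvi_stack, months):
    """ndvi_stack: (time, rows, cols) NDVI, e.g. from Sentinel-2 MSI;
    months: acquisition month of each layer. Flags pixels that are fully
    green by the April-May flowering window and harvested (NDVI drop)
    by July-August."""
    months = np.asarray(months)
    spring_peak = ndvi_stack[(months >= 4) & (months <= 5)].max(axis=0)
    summer_low = ndvi_stack[(months >= 7) & (months <= 8)].min(axis=0)
    return (spring_peak > 0.6) & (summer_low < 0.35)

rng = np.random.default_rng(2)                 # placeholder acquisitions
stack = rng.random((10, 80, 80))
acq_months = [4, 4, 5, 5, 6, 6, 7, 7, 8, 8]
prz_mask = potential_rapeseed_zones(stack, acq_months)
```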

Authors: Smykała, Krzysztof
Organisations: QZ Solutions, Poland
3D mapping of existing observing capabilities in the frame of GAIA-CLIM H2020 project (ID: 256)
Presenting: Madonna, Fabio

The aim of the Gap Analysis for Integrated Atmospheric ECV CLImate Monitoring (GAIA-CLIM) project is to improve our ability to use ground-based and sub-orbital observations to characterise satellite observations for a number of atmospheric Essential Climate Variables (ECVs). The key outcomes will be a "Virtual Observatory" (VO) facility of co-locations and their uncertainties, and a report on gaps in capabilities or understanding, which shall be used to inform subsequent Horizon 2020 activities. In particular, Work Package 1 (WP1) of the GAIA-CLIM project is devoted to the geographical mapping of existing non-satellite measurement capabilities for a number of ECVs in the atmospheric, oceanic and terrestrial domains. The work carried out within WP1 has provided users with an up-to-date geographical identification, at the European and global scales, of current surface-based, balloon-based and oceanic (float) observing capabilities, on an ECV-by-ECV basis, for several parameters which can also be obtained from space-based observations of past, present and planned satellite missions. Having alighted on a set of metadata schemas to follow, a consistent collection of discovery metadata has been assembled into a common structure and will be made available to users through the GAIA-CLIM VO in 2018. The metadata can be interactively visualized through a 3D graphical user interface. The metadataset includes 54 plausible networks and 2 permanent aircraft infrastructures for EO characterisation in the context of GAIA-CLIM, currently operating in different spatial domains and measuring different ECVs using one or more measurement techniques. Each classified network has in addition been assessed for suitability against metrological criteria, to identify those with a level of maturity which enables closure on a comparison with satellite measurements. The metadata GUI is based on Cesium, a free and open-source virtual globe library written in JavaScript. It allows users to apply different filters to the data displayed on the globe, selecting data per ECV, network, measurement type and level of maturity. Filtering is performed by querying a GeoServer web application through the WFS interface, on a data layer configured in our PostgreSQL database with the PostGIS extension; filters set in the GUI are expressed using ECQL (Extended Common Query Language). The GUI allows users to visualize in real time the current non-satellite observing capabilities along with the satellite platforms measuring the same ECVs. Satellite ground tracks and the footprints of the instruments on board can also be visualized. This work contributes to improving metadata and web map services and to facilitating users' experience in the spatio-temporal analysis of Earth observation data.
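
GeoServer's WFS interface accepts (E)CQL filters through its CQL_FILTER vendor parameter, so the kind of request issued behind the GUI can be sketched as below; the endpoint URL, layer name and attribute names are placeholders, not the project's actual schema:

```python
# Filtered WFS GetFeature request against a GeoServer instance (sketch).
import requests

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "gaiaclim:networks",       # hypothetical layer name
    "outputFormat": "application/json",
    "CQL_FILTER": "ecv = 'temperature' AND maturity >= 3",
}
response = requests.get("https://example.org/geoserver/wfs", params=params)
stations = response.json()["features"]      # GeoJSON features for the globe
```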

Authors: Madonna, Fabio; Tramutola, Emanuele; Di Filippo, Alessandro; Rosoldi, Marco; Amato, Francesco
Organisations: CNR-IMAA, Italy
Identification of Drought Stress Using TVDI in Mashhad Plain-Iran (ID: 254)
Presenting: Asadi Oskouei, Ebrahim

Remote sensing measurements are very useful for obtaining information about soil and vegetation. MODIS products are very suitable for wide-range measurements, long-term monitoring, and high-resolution drought assessment and monitoring. The TVDI index, which is based on vegetation index data (NDVI) and surface temperature (Ts), can effectively show the spatial and temporal distribution of dryness. In this research, the dryness index (TVDI) was calculated for drought evaluation in the Mashhad plain (northeastern Iran) over ten years (2003-2013, from the MOD13A2 and MOD11A2 products). The results showed that in the central, eastern and southeastern parts of the plain, at elevations of 850 to 1350 m, the dryness intensity and the coefficient of variation of TVDI are higher than in other parts, while the highlands (1350-3250 m) of the study area have the lowest TVDI values and show a stable trend over the ten-year period. In order to relate TVDI to drought stress, daily precipitation data from 42 weather stations were selected and the strength of the correlation between the two variables was investigated using the Pearson correlation test. Vegetation responds to precipitation with a delay, so Pearson linear correlations were computed between the TVDI drought index and cumulative rainfall within the image period and at lags of 16, 32, 48 and 64 days. The strongest correlations were found in the winter period within the image period (average intensity -0.590), in spring with a 16-day rainfall lag (-0.427), in autumn within the image period (-0.370), and in summer with a 64-day rainfall lag (-0.261). Based on the linear regression relationship, rainfall has a greater impact on TVDI in areas below 1350 m, while at the higher altitudes of this plain rainfall alone cannot strongly explain TVDI in a linear equation.

Authors: Saboori Noghabi, Saeed (1); Lopez-Baeza, Ernesto (2); Asadi Oskouei, Ebrahim (3)
Organisations: 1: Shahrood University of Technology-Iran; 2: University of Valencia-Spain; 3: Islamic Republic of Iran Meteorological Organization-Iran
Enhancing Remote Sensing Applications towards Exascale with the DEEP-EST Modular Supercomputer Architecture (ID: 252)
Presenting: Cavallaro, Gabriele

Due to the advancement of the latest generation of remote sensing instruments, a wealth of information is generated almost continuously, and at an increasing rate, at global scale. The sheer volume and variety of sensed data lead to a necessary re-definition of the challenges within the entire lifecycle of remote sensing data. Trends in parallel High-Performance Computing (HPC) architectures are constantly developing to tackle the growing demand of domain-specific applications for handling computationally intensive problems. In the context of large-scale remote sensing applications, where the interpretation of the data is not straightforward and near-real-time answers are required, HPC can overcome the limitations of serial algorithms. The Dynamic Exascale Entry Platform - Extreme Scale Technologies (DEEP-EST) project aims at delivering a pre-exascale platform based on a Modular Supercomputer Architecture (MSA) wherein each module has different characteristics. The MSA provides not only a standard CPU cluster module, but also a many-core Extreme Scale Booster (ESB), a Global Collective Engine (GCE) to speed up MPI collective operations in hardware, a Network Attached Memory (NAM) as a fast scratch file replacement, and a hardware-accelerated Data Analytics Module (DAM). As a partner in the DEEP-EST consortium, we aim at advancing machine learning in the remote sensing application domain towards exascale performance. Several of the innovative DEEP-EST modules are co-designed with particular methods such as the clustering algorithm Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and classification algorithms like Support Vector Machines (SVMs) and Convolutional Neural Networks (CNNs). We intend to present how the different phases of these algorithms (i.e., training, model generation and storage, testing, etc.) can be neatly distributed across the various cluster modules and thus leverage their unique functionality. The MSA will be used not only to improve the performance of these methods but also to serve as a blueprint for the next generation of exascale HPC systems.
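
The data-parallel pattern behind such a distribution can be illustrated with a toy MPI script (this is not DEEP-EST code: the data, the hyper-parameter grid and the module mapping are assumptions); each rank evaluates its own slice of an SVM parameter grid and rank 0 collects the results:

```python
# Toy data-parallel SVM hyper-parameter search with MPI.
# Run with e.g.: mpirun -n 4 python svm_search.py
from mpi4py import MPI
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Placeholder feature matrix, broadcast from rank 0 to all ranks.
data = None
if rank == 0:
    rng = np.random.default_rng(0)
    data = (rng.random((500, 10)), rng.integers(0, 2, 500))
X, y = comm.bcast(data, root=0)

# Each rank evaluates its own slice of the C grid.
grid = np.logspace(-2, 3, 6)
my_scores = [(C, cross_val_score(SVC(C=C), X, y, cv=3).mean())
             for C in grid[rank::size]]

# Gather partial results; rank 0 reports the best parameter.
all_scores = comm.gather(my_scores, root=0)
if rank == 0:
    best = max((s for part in all_scores for s in part), key=lambda t: t[1])
    print("best C:", best[0], "CV accuracy:", best[1])
```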

Authors: Cavallaro, Gabriele (1); Erlingsson, Ernir (2); Riedel, Morris (2); Neukirchen, Helmut (2)
Organisations: 1: Forschungszentrum Jülich, Germany; 2: University of Iceland
Fuzzy-Based Technology for Automatic Monitoring of Tree Loss and Gain in Forest Areas – Case Study Romania (ID: 251)
Presenting: Budileanu, Marius

This paper addresses a new optical technique, based on fuzzy and statistical analysis of Satellite Image Time Series (SITS), for extracting estimations of forest density dynamics – forest loss and forest gain. The proposed algorithm was implemented and validated in Romania, a country with 28.8% (FAO, 2015) of its territory covered by forest. Many areas surveyed through Earth observation data are covered by natural forests which serve as home for some of the most endangered European mammal species, such as the brown bear and the Eurasian lynx. In 1990 a new law on retrocession came into force, and by 2014 the state owned only 52.2 percent of the country’s forests. Private ownership of forests can translate into forest harvesting and land cover change. Using EO data from the Copernicus stream and the Landsat archive, we created a SITS covering the whole of Romania between 1983 and 2017. Over 2,500 satellite images were ingested in our analysis, from the early Landsat 4 and Landsat 5 to the present Landsat 8 and Sentinel-2. Due to the specificity of the fuzzy approach, we were able to use even scenes with a high degree of cloud and shadow contamination. The basic concept on which the solution is built is the finite probability of belonging to the forest class, which is calculated for each scene, at pixel level, through an unassisted machine learning process. We proved that the weighted average of this finite probability over a long enough period of time defines a good estimator of vegetation density (to be understood as the fraction of the surface covered by vegetation). The weighted average operation is quite complex, using as input, besides the time series of finite probability maps, ad hoc generated cloud and shadow masks, scene-dependent distinguishability coefficients for each channel, etc. To summarise, the fuzzy approach allows a series of partial, approximate results to be used to extract a full and precise conclusion. Several types of results can be obtained with the technology presented here: forest vegetation density maps and binary forest maps for a short interval of time (1 to 3 years), forest evolution maps (indicating the tendencies of tree loss and gain in forest areas over longer periods, of the order of decades) and deforestation-reforestation maps (accurate for long enough periods, decades). Validation and performance measurement were carried out for the deforestation-reforestation maps over large areas of Romanian forest, using statistically representative samples of points (pixels of 30x30 m spatial resolution) and historical satellite imagery. This type of map indicates which of the following 4 evolution scenarios applies to each pixel: never was forest, always was forest, is no longer forest, has become forest. The overall performance was measured at over 98% correct classification. The performance for the particular case of deforestation was 90.81% correct classification, with 9.19% false positives and 6.00% false negatives. Similar performances were obtained for reforestation.
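
The core aggregation can be sketched as a masked, weighted temporal average of the per-scene forest-probability maps; the uniform scene weights and the 0.5 threshold below are simplifications of the scene-dependent coefficients described above:

```python
# Weighted temporal average of per-scene forest probabilities (sketch).
import numpy as np

def vegetation_density(prob_stack, clear_mask, scene_weights=None):
    """prob_stack: (scenes, rows, cols) per-scene probability of 'forest';
    clear_mask: True where the pixel is cloud/shadow free;
    scene_weights: optional per-scene quality coefficients."""
    if scene_weights is None:
        scene_weights = np.ones(prob_stack.shape[0])
    w = clear_mask * scene_weights[:, None, None]
    with np.errstate(invalid="ignore", divide="ignore"):
        # Pixels never observed clearly come out as NaN.
        return (prob_stack * w).sum(axis=0) / w.sum(axis=0)

rng = np.random.default_rng(3)                 # placeholder inputs
probs = rng.random((30, 60, 60))
clear = rng.random((30, 60, 60)) > 0.4
density = vegetation_density(probs, clear)
forest_map = density > 0.5                     # binary forest map
```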

Authors: Budileanu, Marius; Cucu-Dumitrescu, Catalin
Organisations: Terrasigna, Romania
The Use Of Earth Observation Data In Assessing The Dynamics Of Land Fragmentation In Romania During The Past 30 Years (ID: 245)
Presenting: Copăcenaru, Olimpia

In Romania, land use and land cover have faced significant changes over time, driven by political, socio-economic, technological and natural factors. In the light of the Common Agricultural Policy (CAP), one of the obstacles to sustainable agricultural development is land fragmentation. This can be defined as the situation in which a single farm or ownership consists of numerous spatially separated plots. As in most of the ex-communist East European countries, the immediate result of land retrocession was the fragmentation of land into small plots worked by separate owners. The dominant problems associated with land fragmentation are the small size, irregular shape and dispersion of parcels; land fragmentation is therefore often considered an obstacle to improving agricultural productivity, as it prevents efficiency gains. Advances in Earth observation data have improved our capability to monitor land use and land cover changes over vast areas and to assess their spatiotemporal dynamics. The objective of this research is to perform an unprecedented survey of the evolution of Romanian agricultural landscapes during the past 30 years, following the fall of the communist regime, using an innovative technology that combines heterogeneous data sources: satellite, socio-economic, geomorphological and meteorological data. Previous similar research in Romania focused only on parts of the country, and its results are therefore not relevant at the national level. Our analysis targets a significant number of administrative units from the main agricultural basins and provides valuable insights into the regional differences and patterns of land fragmentation and, more recently, of the land consolidation process. Special attention has been paid to the influence of various political triggers related to land property rights, as well as the socio-economic indicators that drive or are driven by land fragmentation. Supervised land cover classifications performed at 5-year intervals (starting from 1990), together with specific segmentation techniques, both based on time series of high- and medium-resolution satellite imagery (Sentinel-2 and Landsat), represent a solid approach for quantifying these changes, allowing the computation of a broad range of land fragmentation indices. The main outcome of the research is a modern approach focused on the analysis of the agricultural landscape at the national scale, based on a very specific sampling system that integrates multiple data types. The outcomes of the project will provide a better knowledge base that can support the refinement of current policies, as well as a tool to investigate further changes in agricultural landscapes.
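
Among the fragmentation indices such classifications enable, some of the simplest can be computed directly from a binary arable-land raster; the sketch below (with placeholder data and an assumed 30 m pixel) derives the patch count, the mean patch size and the effective mesh size of Jaeger (2000):

```python
# Simple land-fragmentation indices from a classified raster (sketch).
import numpy as np
from scipy import ndimage

def fragmentation_indices(arable_mask, pixel_area_ha=0.09):
    """arable_mask: boolean raster of arable plots (0.09 ha = one 30 m pixel)."""
    labels, n_patches = ndimage.label(arable_mask)
    sizes = ndimage.sum(arable_mask, labels, range(1, n_patches + 1))
    areas_ha = sizes * pixel_area_ha
    landscape_ha = arable_mask.size * pixel_area_ha
    return {
        "n_patches": n_patches,
        "mean_patch_ha": areas_ha.mean(),
        # Effective mesh size: expected patch size around a random point.
        "effective_mesh_ha": (areas_ha ** 2).sum() / landscape_ha,
    }

mask_1990 = np.random.default_rng(4).random((200, 200)) > 0.6  # placeholder
print(fragmentation_indices(mask_1990))
```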

Authors: Copăcenaru, Olimpia; Flueraru, Cristian
Organisations: Terrasigna, Romania
Combining UAV and Sentinel-2 auxiliary data for forest growing stock volume estimation through hierarchical model-based inference (ID: 243)
Presenting: Puliti, Stefano

The increased availability of remotely sensed (RS) data at multiple levels of resolution, from coarse satellite imagery (e.g. Sentinel-2) to highly detailed 3D data from unmanned aerial vehicles (UAVs), offers new possibilities to estimate and map forest resources cost-effectively. The development of new statistical estimation frameworks allows the estimation of forest resource parameters using multiple sources of auxiliary RS data while ensuring rigorous reporting of the uncertainty of the resulting estimates. In this study, growing stock volume was estimated and its variance assessed using a combination of field, UAV and Sentinel-2 data in a hierarchical model-based (HMB) inferential framework. The main objective of this study was to compare the precision of the HMB estimates against three alternative cases, namely (1) model-based estimation using field data and wall-to-wall airborne laser scanning (ALS) data (MB-ALS), (2) model-based estimation using Sentinel-2 data (MB-S2), and (3) hybrid inference using field data and partial-coverage UAV data (HYB). Furthermore, the study investigated the possibility of reducing the number of UAV samples and its effect on the precision of the HMB and HYB estimates. The results indicated that the precision in terms of standard error (SE; m3 ha-1) of the proposed HMB case was of similar magnitude (7.7) to the MB-ALS (8.3) and HYB (8.1) cases. In contrast, the SE nearly doubled (13.1) for the MB-S2 case, where only Sentinel-2 multispectral data were used as auxiliary data. The results also revealed a greater decrease in precision for the HYB case compared to the HMB case when reducing the UAV sampling intensity. In particular, the precision of the HMB when including only 9% of the total number of UAV samples (55) was of similar magnitude to that of the HYB case with all the UAV samples. The findings of this study are encouraging for further application of the proposed HMB method, especially in light of the potential cost reductions due to the reduced need for UAV samples. A key advantage of the proposed methodology is that it does not assume probabilistic properties of any of the samples (the field data and the UAV data). It can therefore be adopted even when UAV data are acquired purposively and not according to probabilistic sampling designs. On the other hand, it relies on the assumption that the models connecting the different levels of data are correctly specified for the area of application, for the estimators to be approximately unbiased.

Authors: Puliti, Stefano (1); Saarela, Svetlana (2); Gobakken, Terje (3); Ståhl, Göran (2); Næsset, Erik (3)
Organisations: 1: Norwegian Insitute of Bioeconomy Research (NIBIO), Norway; 2: Swedish University of Agricultural Sciences (SLU), Sweden; 3: Norwegian University of Life Sciences (NMBU)
Identifying Forest/Nonforest “Hot Spots” in South-Central Siberia with Landsat Data (ID: 234)
Presenting: Parfenova, Elena I.

In south-central Siberia, January and July temperature and annual moisture index time series were analyzed for 1961-2010 based on observation data, showing that the January temperature increased by 1-2°C and the July temperature by 0.7-1.5°C over the last 50 years. The moisture trends were positive, supporting the forest portion of the forest-steppe. The goal of this work was to identify forest-to-nonforest change over the last 25 years in south-central Siberia from remotely sensed data. To identify forest/nonforest lands we used medium-resolution (30 m) Landsat imagery covering the southern regions of central Siberia. The overall temporal data coverage was from the late 1980s until 2015-2016, including the periods 1988-1992 and 2013-2016. The study area covered the central and southern areas of the Krasnoyarsk region and the Tyva and Khakassia Republics, from 50°N to 58°N and from 88°E to 99°E. In total, we used 79 clear and near-cloud-free scenes from Landsat 4, 5, 7 and 8 obtained between June and September. All scenes were mosaicked together to form a continuous image. To generate vegetation maps, we applied a supervised classification approach (maximum likelihood classification) to the three-layer tasseled cap file. Training sites for the classification procedure were selected using high-resolution Google Earth imagery together with ancillary data, including the GLC 2000 Land Cover Map. In total, about 90 samples were selected for algorithm training. We derived maps of forest/nonforest lands for two time periods, 1988-1992 and 2013-2016, and a difference map showing the areas of reforestation and forest loss. The total deforestation area was about 2.6 million hectares, or 5% of the study area; the reforestation area was 1.6 million hectares, or 3% of the study area. Deforestation occurred mostly in the Tyva Republic and in the Angara region. A comparison of the land cover change map with the locations of forest fires showed that most of the forest cover loss in these areas was caused by fires that occurred in 2002 in the Tyva Republic and in 2011 and 2012 in the Angara region. These forest-nonforest “hot spots” were found to match predicted “hot spots” of forest loss in a warming climate by 2020. Thus, climatic conditions would not be suitable for forest recovery over burnt/logged areas under a changing climate. The largest reforestation areas were located mainly in the northern and central regions and can be attributed to forest recovery on old logging sites.
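
For reference, supervised maximum likelihood classification with Gaussian class models, as used here, corresponds to quadratic discriminant analysis; a minimal sketch applying it to a stand-in three-layer tasseled-cap stack (all arrays below are placeholders, not the study's data):

```python
# Gaussian maximum likelihood classification of a tasseled-cap stack via QDA.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(5)
tc_stack = rng.random((3, 120, 120))     # brightness, greenness, wetness
X_train = rng.random((90, 3))            # ~90 training samples, as above
y_train = rng.integers(0, 2, 90)         # 1 = forest, 0 = nonforest

mlc = QuadraticDiscriminantAnalysis(priors=[0.5, 0.5])  # equal class priors
mlc.fit(X_train, y_train)
flat = tc_stack.reshape(3, -1).T
forest_map = mlc.predict(flat).reshape(120, 120)
```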

Authors: Shvetsov, Eugene G.; Parfenova, Elena I.; Tchebakova, Nadezhda M.
Organisations: Forest Institute of FRC KSC SB RAS, Russian Federation
Linked Data Infrastructure for Monitoring Large Hydro Power Reservoirs (ID: 230)
Presenting: Damova, Mariana

The exploitation of hydropower reservoirs requires monitoring based on heterogeneous datasets, such as spatial information, digital measurements and meteorological forecasts. As hydropower reservoirs are practically an integral part of their surrounding natural environment, their operations influence it; it is therefore important to monitor them within the overall setting of that environment. Periodically collected information from different sources - detailed data about the condition of the individual hydropower reservoirs, water economic data, meteorological data and forecasts, geographical information, and information about the surrounding natural environment - is to be used by water resources managers. An information system that successfully addresses the need for daily, effective monitoring of hydropower reservoirs and informed decision making for maintenance, routine exploitation and emergency situations requires the capability of federated, integrated representation of spatial and digital (symbolic) data, images and metadata, along with their easy linking in an open, easy to maintain, update and rely on infrastructure of interoperable heterogeneous data. Linked data technologies offer an optimal framework to deal with this issue, as they allow easy data integration, resource economy and seamless extensibility of the required information. We therefore adopt semantic and linked data technologies to obtain data interoperability between spatial information from GIS systems, remote sensing information, and symbolic and numerical data, e.g. meteorological data and proprietary measurement data, to create such an interoperable, open, easily extendable and maintainable infrastructure for the needs of hydropower reservoir exploitation and the management of their environment. The semantic infrastructure allows federated queries, the application of analytics and intelligence mixing numeric with symbolic reasoning, and exposure of the federated heterogeneous information in an easy to grasp manner. We demonstrate this on the example of the calculation of the water equivalent of the snow stock, a routine weekly task of water economic management in winter that is important for the management of the water reservoirs and for the monitoring of the river catchment, prediction of floods and other impacts. To calculate the water equivalent of the snow cover it is necessary to pull and analyze information from 6-7 sources that are numeric - e.g. meteorological data, sunshine intensity, precipitation, soil moisture, air temperature, snow coverage extent - some of which are best obtained from satellite data, and to apply formulas to calculate the water equivalent of the snow stock and to consider the potential harmful impact of its quantity on areas adjacent to the water reservoirs. We propose an ontology built on top of the INSPIRE Data Specification on Hydrography that includes semantic elements from GIS, satellite data and proprietary measurement data, and show how it can interoperate with neural network models, e.g. Recurrent Neural Networks and Convolutional Neural Networks, calculating the snow volume in a certain geographic area based on historic data and point measurements, so that forecasts for the water equivalent of the snow stock can be produced, displayed on a GIS system, and alerts or recommendations generated for the water resources managers. We show the advantages of the proposed approach with respect to non-semantic solutions and standalone GIS applications.
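
A federated SPARQL query is the natural mechanism for mixing reservoir data with, say, meteorological observations held in a second triple store. The Python sketch below uses SPARQLWrapper; the endpoint URLs, ontology prefix and property names are hypothetical stand-ins, not the project's actual ontology.

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Hypothetical endpoint hosting the reservoir ontology
    sparql = SPARQLWrapper("https://example.org/reservoirs/sparql")
    sparql.setQuery("""
    PREFIX hy: <https://example.org/ontology/hydro#>
    SELECT ?reservoir ?snowCover ?airTemp WHERE {
      ?reservoir a hy:HydroPowerReservoir ;
                 hy:snowCoverExtent ?snowCover .
      # Federated part: pull matching meteorological data from a second store
      SERVICE <https://example.org/meteo/sparql> {
        ?station hy:observesReservoir ?reservoir ;
                 hy:airTemperature ?airTemp .
      }
    }
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["reservoir"]["value"], row["snowCover"]["value"], row["airTemp"]["value"])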

Authors: Damova, Mariana; Stoyanov, Emil; Petrov, Martin
Organisations: Mozaika, Bulgaria
Exploiting Time on the Google Earth Engine: Cloud Detection of Landsat-8 Time Series (ID: 226)
Presenting: Gómez-Chova, Luis

The proliferation of Earth observation (EO) satellites such as Landsat-8 or Sentinel-2, together with their high spatial and spectral resolutions, has increased the amount of data to be processed exponentially. Quick access to distributed archives, high system automation, and parallel and scalable implementations are required in order to handle these huge geospatial data volumes. Furthermore, some of the preprocessing steps for images acquired by these sensors, such as cloud detection, must be carried out operationally within the standard instrument processing chain. Therefore, particularly demanding remote sensing algorithms would benefit from existing platforms that allow us to implement them on a distributed pool of cloud computing resources with direct access to the satellite data. In this context, the Google Earth Engine (GEE) platform has emerged as one of the most promising developments for Earth science data access and analysis. The Google cloud infrastructure contains satellite imagery from well-known missions, including Landsat, the Sentinels and MODIS among others. In addition, it provides programming tools to access, operate on and visualize such data in an easy and scalable manner. The convenience of the platform is therefore twofold: it grants access to a vast satellite imagery archive, without the hassle of downloading it, using the same syntax regardless of the satellite sensor; and the computational burden of training and testing the model is borne by the GEE parallel distributed platform, which automatically performs efficient memory management and distributes the processing over virtual machines, avoiding the concurrency problems that commonly arise in high-volume data processing. In this presentation, in order to illustrate the practical advantages of the GEE, we implement a multitemporal cloud detection algorithm for high-resolution satellite image time series from Landsat-8. The GEE infrastructure grants access to the full Landsat-8 catalog and to a parallel distributed computing platform, which are essential requirements for processing satellite image time series on an operational basis.
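
To make the multitemporal idea concrete, here is a minimal Earth Engine Python sketch that flags pixels much brighter than a per-pixel temporal background; the band selection, percentile and threshold are illustrative assumptions, not the algorithm presented in this work.

    import ee
    ee.Initialize()  # assumes prior authentication with ee.Authenticate()

    roi = ee.Geometry.Point(-0.37, 39.47)  # hypothetical area of interest
    col = (ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA')
           .filterBounds(roi)
           .filterDate('2017-01-01', '2017-12-31'))

    # Cloud-free background: per-pixel temporal low percentile of three visible bands
    background = col.select(['B2', 'B3', 'B4']).reduce(ee.Reducer.percentile([25]))

    def flag_clouds(img):
        # Pixels much brighter than their temporal background are likely cloudy
        diff = img.select(['B2', 'B3', 'B4']).subtract(background)
        score = diff.reduce(ee.Reducer.mean())
        return img.addBands(score.gt(0.15).rename('cloud'))  # threshold is illustrative

    masked = col.map(flag_clouds)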

Authors: Mateo-García, Gonzalo; Gómez-Chova, Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustau
Organisations: University of Valencia, Spain
Transferring Knowledge between EO Satellite Missions: Proba-V Cloud Detection through Deep Learning (ID: 225)
Presenting: Gómez-Chova, Luis

Accurate and automatic detection of clouds is a key issue for further exploiting the information from Earth observation (EO) satellite images. Without accurate cloud masking, undetected clouds are one of the most significant sources of error for a wide range of remote sensing applications. Cloud detection approaches are usually based on the assumption that clouds present some useful characteristics for their identification. Therefore, common approaches to cloud detection consist of applying thresholds over particular reflectance features, thermal bands, or atmospheric absorptions. However, most instruments onboard EO satellites have a limited number of spectral bands, which makes cloud detection particularly challenging. In this context, advanced machine learning methods have shown improved performance on this problem when they learn from enough labeled data. This presentation reviews methods applied to cloud detection, ranging from classical machine learning to advanced deep learning. In particular, we present the development and implementation of a cloud detection approach for Proba-V. The proposed cloud detection algorithms rely on advanced non-linear methods capable of fully exploiting the information of Proba-V in order to improve the cloud masking products. Classical artificial neural networks (NNs) are well-established machine learning methods in remote sensing, while deep convolutional neural networks (CNNs) have proven to be state-of-the-art methods for many image classification tasks and their use is rapidly increasing in remote sensing problems. These methods learn directly from available data, and thus the obtained performance depends on the quality of the employed training datasets. However, in cloud detection problems, simultaneous collocated information about the presence of clouds is usually not available or requires a great amount of manual labor. We propose to extend the applicability of these methods through transfer learning, i.e., to learn from the available cloud mask datasets built for other similar satellites, such as Landsat-8, and transfer this learning to solve the Proba-V cloud detection problem. CNNs are trained with Landsat images adapted to resemble Proba-V characteristics and tested on a large set of real Proba-V scenes. The effectiveness of the proposed methods is successfully illustrated using a large number of real Proba-V images from around the world, covering the four seasons. The developed models outperform the current operational Proba-V cloud detection without being trained on any real Proba-V data. Moreover, cloud detection accuracy can be further increased if the CNNs are fine-tuned using a limited amount of Proba-V data.
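
A minimal Keras sketch of this transfer-learning setup follows: a small fully convolutional network is pre-trained on Landsat-8 patches band-matched to Proba-V and then fine-tuned, at a lower learning rate, on a few labelled Proba-V scenes. The architecture, band count and training arrays are illustrative assumptions, not the networks used in the study.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_cloud_cnn(n_bands=4):
        # Fully convolutional net producing a per-pixel cloud probability
        inp = layers.Input(shape=(None, None, n_bands))
        x = layers.Conv2D(32, 3, padding='same', activation='relu')(inp)
        x = layers.Conv2D(32, 3, padding='same', activation='relu')(x)
        out = layers.Conv2D(1, 1, activation='sigmoid')(x)
        return models.Model(inp, out)

    model = build_cloud_cnn()
    model.compile(optimizer='adam', loss='binary_crossentropy')
    # Pre-train on Landsat-8 patches adapted to Proba-V (hypothetical arrays):
    # model.fit(X_landsat, y_landsat, epochs=10)
    # Fine-tune on a limited amount of Proba-V data at a lower learning rate:
    # model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss='binary_crossentropy')
    # model.fit(X_probav, y_probav, epochs=3)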

Authors: Gómez-Chova, Luis; Mateo-García, Gonzalo; Camps-Valls, Gustau
Organisations: University of Valencia, Spain
Preliminary Results of 2D/3D Industrial Mapping by Fusing Sentinel-2 Data and ALS Point Clouds (ID: 223)
Presenting: Charalampopoulou, Vasiliki

The technological development in the fields of aerial/space technology, computer vision, and image processing provides new tools and automated solutions for 2D/3D mapping and reconstruction. Research activities combining satellite remote sensing and Aerial Laser Scanning (ALS) have increased in recent years to exploit both the rich spectral information available from satellite sensors and the good geometric quality of ALS point clouds. This study discusses the complementary use of Sentinel-2 data and ALS point clouds (augmented with additional information such as intensity) for 2D/3D mapping of complex scenes such as industrial ones. The preliminary results illustrate the utility of such a multi-temporal approach as well as its potential for object detection and feature extraction processes.

Authors: Charalampopoulou, Vasiliki (Betty); Maltezos, Evangelos
Organisations: GEOSYSTEMS HELLAS SA, Greece
Airbus' Latest Innovations for Geo-Information Applications (ID: 221)
Presenting: Pentier, Martin

The Intelligence programme line of Airbus Defence and Space is the supplier of choice for commercial satellite imagery, C2ISR systems and related services. Airbus Defence and Space has unrivalled expertise in satellite imagery acquisition as well as data processing, fusion, dissemination and intelligence extraction. Based on these key assets and Cloud technology, Airbus Defence and Space is developing a comprehensive range of services offered under the umbrella of its OneAtlas brand. As developments accelerate, a large variety of services will be made available on the OneAtlas platform: users will have access to vast archives of data from a broad range of Earth observation assets (Pléiades, SPOT, TerraSAR-X, Sentinel, etc.) as well as unique image processing technologies (e.g. Pixel Factory, Overland™) and analytics. In addition to archived imagery, newly acquired images will be pushed directly to the cloud and delivered in multiple ways (streaming services, clip-and-ship, and APIs), depending on what is most convenient for each application. With these new services, Airbus Defence and Space enables the conversion of Earth observation imagery and data into actionable intelligence across a comprehensive range of markets: defence and security, agriculture and environment, location-based services and navigation, mapping, and aeronautics. In terms of missions, Airbus Defence and Space will start launching the Pléiades Neo constellation in 2020, providing a European source of 30 cm optical satellite data, and we are working on the next generation of a global SAR solution. Satellite imagery will be complemented by our High-Altitude Pseudo-Satellite (HAPS) platform: Zephyr, the first of its kind, will become operational around 2021 and offer persistent surveillance over entire regions at 10-20 cm resolution. The essence of our services offering is to seamlessly combine all Airbus, free and third-party data in order to provide data-intensive and data-agnostic applications. Our innovation and development roadmap includes Artificial Intelligence applications, e.g. current extensive experimentation with Deep Learning, which we are starting to transform into operational services, as well as Blockchain. We strongly support innovation activities around Europe in the form of hackathons, such as the Airbus GEO Challenge and the sponsorship of the Copernicus Masters. Airbus DS Intelligence is excited about the upcoming Φ-week and would be delighted to contribute to the panel discussions. We are ready to contribute to one or more of the following panels (with a preference for one of the FE topics): FE1 – Future EO Capabilities, addressing the latest developments in EO sensors, mission concepts and the new generation of data-intensive applications; FE2 – New Space Economy, addressing the emergence of new business models, new approaches, and new inventors in the digital data economy; OS1 – AI & Data Analytics in and for Science, exploring the transformational effects of AI and Data Analytics in EO research and Earth system science; OS2 – Exploitation Platforms for Collaborative Science, addressing the advancements and opportunities offered by the next Exploitation Platforms to advance EO research and promote collaborative approaches in Earth system science; OS3 – Data Cube, exploring new approaches for Big Data processing exploiting the latest advancements in Data Cube technologies.

Authors: Diesing, Franziska; Pentier, Martin
Organisations: Airbus Defence and Space, France
"Title Case" Exploring fishing sustainability with innovative multi-layered satellite data applications. (ID: 117)
Presenting: Durá Hurtado, Isaac

More than 10 years ago, ESA established the Technology Transfer Programme Office (TTPO), whose mission is to inspire and facilitate the use of space technology, systems and know-how for non-space applications. ESA has been very active promoting this programme in the entrepreneurial scene. For example, ESA sponsored the Junction hackathon in Helsinki in November 2016, with more than 1,400 competitors, and awarded one of the two main prizes to the application HeraSpace as the best idea for the Arctic. Today, just over one year later, HeraSpace is a Copernicus Accelerator mentee, was selected startup of the month in March 2018, and has been selected for incubation at the ESA BIC Madrid. HeraSpace helps fishermen locate the closest legal fishing grounds, optimizing operating budgets while reducing environmental impacts. The system supports healthy food production, income, employment, and sustainable fishing, as described in this Copernicus use case. HeraSpace dynamically predicts and updates fish distribution patterns. By combining Copernicus satellite data with actual fishing data, the selection of optimal fishing grounds can be drastically improved, as can the efficient routing of vessels to those locations. Particularly interesting are the features aimed at supporting sustainable exploitation of ocean resources, promoting circularity (circular economy) and low-carbon activities by seafood companies, and supporting current and anticipated environmental regulations from global governments and regional fisheries authorities. The key state-of-the-art space technology providing improved accuracy, increased temporal coverage and improved spatial resolution is the Copernicus Sentinel-3 A/B OLCI Level 2 instrument, a continuation of ENVISAT-MERIS. The technologies used are data from the Copernicus Sentinel-3 A/B OLCI, SRAL and SLSTR L2 instruments, the Sentinel-2 MSI instrument data products (10 m resolution), and the multi-mission CMEMS products. Sentinel-3A has already been calibrated and validated (ongoing) with in-situ devices; data from the second satellite, Sentinel-3B, will be available in the middle of 2018. HeraSpace also uses CMEMS multi-mission products, which are mostly mature already. The high-quality, near-real-time data retrieved from Sentinel-3 include variables like temperature, salinity, water depth, and dissolved oxygen levels. These data are supplemented with near-real-time data from the Sentinel-2 MSI from ESA and additional technologies from other government space agencies such as NASA. HeraSpace proposes a core machine learning algorithm that will “learn” and successfully apply satellite oceanography patterns to fish behaviour. The more the algorithm is applied in real scenarios with an integrated customer feedback loop, the more accurate it becomes. It helps fishermen by forecasting the closest legal fishing grounds, dynamically updating fish distribution patterns that may become outdated due to climate change, fishing pressure, migration patterns, or typical interannual variability. The algorithm model has already been selected and validated by the ESA Research Service Support, and corresponds to a supervised machine learning n-layered neural network model. HeraSpace is composed of an international and highly experienced team with decades of experience in software innovation and remote sensing for international fisheries,
which adds a strong dose of innovation, putting together a state-of-the-art tech stack built on Blockchain, which guarantees that the data flux forms a tamper-proof system and adds transparency about where the fish was caught. These data are correlated with data from an expert knowledge DB (fishing domain), the preferences of the user seafood company, and, of course, logic that checks every legal regulation coming from fishing authorities' datasets such as EMODnet. From the interaction of all the mentioned inputs, the HeraSpace algorithm builds the shortest possible route to fishing locations that comply with local fishing regulations and sustainability guidelines, ensuring the sustainability of the raw material (fish) while also avoiding administrative fines and improving the vessel's efficiency, drastically reducing operational costs. HeraSpace further envisions itself as a platform for innovations in fisheries management by offering the capacity to reduce the incidental bycatch of sensitive non-target species. For example, the HeraSpace technology should permit the targeting of a species like tuna or swordfish, but only in locations where interactions with sensitive species like turtles, sharks, marine mammals or others are unlikely. HeraSpace is a win-win innovative solution for the environment, administration, industry and the planet's population in the short and middle term, following the European Common Fisheries Policy and the Marine Strategy Framework Directive. The space tech stack is served by Blockchain technology, ensuring that the optimal sustainable routes cannot be tampered with by third parties, and protecting data reported by the captain in support of regulations combating illegal, unreported and unregulated (IUU) fishing. The HeraSpace team is analysing behavioural and biological patterns, chemical oceanography, bathymetry, temperature structure, oceanic circulation patterns, pelagic life, and other indicators to build appropriate correlations and design the logic to be followed by the algorithm. Once operational on the ESA cloud servers (Red Hat Enterprise Linux), HeraSpace will become a tool to harmonize the fishing industry by helping to reduce operational costs, document captures, boost sustainability, and maintain a healthy ocean ecosystem. On top of the CMEMS multi-mission data products, HeraSpace uses spatial data from the CODA Web Service, EumetCast, and the EUMETSAT Data Centre, depending on whether a permanent data flux, punctual downloads or historical data are needed. HeraSpace downloads spatial data using a customized dhusget.sh script from ESA, combined with regulatory data from EMODnet and potentially NOAA. The core system will include REST methods linking with the CODA, CMEMS and EMODnet download scripts. The retrieval of the data is based on OData REST queries. The OData Service Root URI for the CODA Web Service is https://coda.eumetsat.int/odata/v1 . The Service Roots for the main Resource Paths are /Products and /Collections , admitting specific query options: $format specifies the HTTP response format of the record, e.g.
XML or JSON; $filter specifies an expression or function that must evaluate to true for a record to be returned in the collection; $orderby determines what values are used to order a collection of records; $select specifies a subset of properties to return; $skip sets the number of records to skip before records in a collection are retrieved; $top determines the maximum number of records to return. The default response format is Atom [RFC4287], an XML-based document format that describes collections of related information known as “feeds”. Products can also be filtered by ingestionDate and evictionDate, and by UUID (Universally Unique Identifier) when filters are added in the query definition. HeraSpace's flux of spatial and non-spatial data will be transparent and tamper-proof thanks to our Hyperledger Blockchain tech disruption applied to industry sustainability.

Authors: Isaac Durá (HeraSpace's CEO & CTO); Richard Holmquist (HeraSpace's Marine Science Expert & Pelagic Concepts Owner)
Address for correspondence: Mr. Isaac Durá, HeraSpace LTD Managing Director, 20-22 Wenlock Road, London, N1 7GU, England. Email: isaacdura@heraspace.com

References
1. EUMETSAT EUM/OPS-SEN3/MAN/16/880763 v2 Draft, 24 January 2017
2. EUMETSAT https://www.eumetsat.int/website/home/News/DAT_3532805.html
3. IUCN Red List http://www.iucnredlist.org/
4. NOAA http://www.nmfs.noaa.gov/pr/species/fish/bluefintuna.htm
5. JSTOR http://www.jstor.org/stable/1540122
6. Konstantinos I. Stergiou, Editor https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3788109/
7. ABC Newspaper http://www.abc.es/natural-biodiversidad/20150603/abci-sobrepesca-peces-marinos-europeos-201506031319.html
8. WWF Panda http://wwf.panda.org/what_we_do/how_we_work/our_global_goals/oceans/
9. ESA Technology_Transfer_Programm
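
As a concrete illustration of the query options listed above, the following Python sketch requests the ten most recently ingested OLCI Level-2 products from the CODA OData service; the credentials and the exact filter expression are assumptions, while the service root and query options are those described in the abstract.

    import requests

    SERVICE_ROOT = "https://coda.eumetsat.int/odata/v1"

    params = {
        "$format": "json",                            # JSON instead of the Atom default
        "$filter": "substringof('OL_2_WFR', Name)",   # assumed filter on OLCI L2 products
        "$orderby": "IngestionDate desc",
        "$top": "10",
    }
    resp = requests.get(SERVICE_ROOT + "/Products", params=params,
                        auth=("username", "password"))  # hypothetical credentials
    resp.raise_for_status()
    for product in resp.json()["d"]["results"]:
        print(product["Name"], product["IngestionDate"])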

Authors: Durá Hurtado, Isaac; Holmquist, Richard
Organisations: HeraSpace, Spain

New Space Economy (Part 4)
09:00 - 10:30
Chair: Marcello Maranesi - Phiunet

09:00 - 09:15
Keeping Forests Healthy by Leveraging Deep Learning and Satellite Imagery (ID: 337)
Presenting: den Bakker, Indra

Forests are the most valuable asset of our planet. They produce the air we breathe and the products we use. Forests cover large areas and are often difficult to access, so we need a view from the sky. To protect rainforests against deforestation and to increase the health and productivity of production forests, 20tree.ai creates forest intelligence by combining deep learning and satellite imagery. How can we create actionable forest insights that are not visible to the human eye? How can we leverage the latest advancements in AI to discover new complex patterns?

Authors: den Bakker, Indra
Organisations: 20tree.ai
09:15 - 09:30
Innovation management – “Where technology can improve our quality of life: UNIQUE, SOUNIQUE” (ID: 180)
Presenting: Pelloquin, Camille

A high-quality environment can have a significant impact on the economic life of urban spaces and is therefore an essential part of any successful regeneration strategy. As cities increasingly compete with one another to attract investment, the presence of tranquil spaces, good parks, squares and other public spaces becomes a vital business and marketing tool. In cities, people consider scenes that include natural elements to be of higher visual quality than those with only man-made features. Higher property values, cleaner air, moderated storm runoff, reduced energy consumption, increased biodiversity, increased productivity, and improved health are some of the direct, tangible benefits and functions that urban nature and the urban soundscape provide. With that context in mind, Starlab went through an innovation development pathway to identify, define, develop, and validate potential services from Earth Observation data for specific markets within urban boundaries: municipalities, real estate, retail, and construction. A set of publicly funded projects supported the opportunity through feasibility studies and proof-of-market activities: ESA STREET HEALTH, InnovateUK UNIQUE, and, potentially, SOUNIQUE. The proof-of-market supported our activities of getting commitment from potential customers and, through the feasibility study, developing a Minimum Viable Product (MVP) corresponding to real user needs, for future customer prospection and service development. The initial idea came from a request by a particular customer, the parks and gardens department of the city of Barcelona, to monitor tree health from space, restricting tree inspection to unhealthy specimens. After a successful proof-of-concept in collaboration with the user, the ESA STREET HEALTH project supported the development of an MVP for forest health monitoring in urban areas. The proof-of-market revealed a low willingness to pay from such customers, the service being a low priority in their budget lines even if it might provide substantial benefits. These results helped us pivot to other markets, with the objective of exploiting the same technology. The InnovateUK UNIQUE project proposed principally to support UK real estate portals in extending their set of features to evaluate and present housing neighbourhoods to their own customers. Through API-based models and map tile generators, the urban green information has been integrated and provided in a user-friendly way to attract more customers looking for their location for life. The conclusions of the proof-of-market were positive and provided evidence of interest and direct benefits for real estate portals looking for a competitive edge. The potential future InnovateUK SOUNIQUE project goes further in evaluating soundscape and urban characterization technologies and their benefits for customers, to complete the portfolio of information integrated and delivered to real estate portals and to extend market prospection to construction companies required to follow the recent UK policy “Noise Policy Statement for England”. 3D urban characterization and noise maps from local sensors would be transferred from R&D to operations through an Open Innovation process.

Authors: Pelloquin, Camille (1); Alhaddad, Bahaaeddin (2); Prados, Jordi (1); Moreno, Laura (1)
Organisations: 1: Starlab Barcelona SL, Spain; 2: Starlab Limited, United Kingdom
09:30 - 09:45
NOVA HUB - the first innovation hub to turn space data into business growth (ID: 211)
Presenting: Baker, Aurélie

The Toulouse region and, more broadly, Occitanie are major players in the entire value chain of the space sector. With two global manufacturers that together account for between 30 and 40% of the satellite market, a strong presence of the French Space Agency (CNES), and a complete value chain, our territory is home to 50% of space jobs in France and a quarter at the European level. But, like the satellite manufacturing sector, the field of space applications faces very strong competition: the digital explosion opens vast areas of confluence between digital technologies and the data produced by space systems. This development is stimulated by the explosion of free data provided by European programmes such as Copernicus and Galileo. The development of these downstream services is expanding, with growth rates of 15% per year. The goal of the NOVA HUB initiative is to create a place and a service offer that will greatly enhance the development of high added-value services combining digital and space assets. This location will enable local businesses to develop new projects in collaboration with international customers, stimulate the creation of startups and services, and offer a concentration of expertise and technical resources unique in Europe. The HUB will also bring together the skills of industry players and offer them, with the NOVA HUB, a visibility tool that is resolutely turned towards Europe and the international market. The aim of the NOVA HUB is to attract end-users, start-ups and data providers and to accelerate their cooperation. The main objectives set for the HUB are: 1. create the first digital and space ecosystem in France and Europe; 2. generate new services and products, with quantified job creation targets; 3. become a reference for the development of applications, services and products based on spatial data.

Authors: Baker, Aurélie; Lattes, Philippe; Convers, David; Bardot, Christa; Emery, Joanna; NZehndong, Aude
Organisations: Aerospace Valley, France
09:45 - 10:00
Copernicus Masters - Minting new entrepreneurs with the help of satellite data (ID: 291)
Presenting: Beer, Thomas

ESA launched the Copernicus Masters in 2011 with the strong support of world-class partners. The organisation of this annual ideas competition was entrusted to AZO Oberpfaffenhofen (DE). The initial goal was to foster user uptake of the Copernicus programme. The Copernicus Masters looks for new applications of Earth observation (EO) addressing a wide range of application fields, including agriculture, environmental protection, transportation and urban management, just to name a few. EO and big data from space offer great potential for the creation of innovative products and services. With prize partners such as ESA, the European Commission (EC), the German Aerospace Center (DLR), CGI, Planet Labs Ltd, BayWa AG, Stevenson Astrosat Ltd., Airbus, Satellite Applications Catapult Ltd., and the German Federal Ministry of Transport and Digital Infrastructure (BMVI), the competition awards prizes to innovative solutions for business and society. With the expansion of the Copernicus Space Component every year, new prize categories enable solutions that tackle global challenges. Since 2011, more than 3,100 participants from over 38 countries have taken part, and the organisers have selected and rewarded over 70 winners with approximately 4.3 million euros.

Authors: Beer, Thomas
Organisations: ESA, Esrin, Italy
10:00 - 10:15
BioScope: Improving Environmental Impact of Agriculture With Satellites (ID: 288)
Presenting: van der Wal, Tamme

Globally, food systems are pressed to produce more, sustainably and nutritiously. Yet our food systems are falling far short of achieving this, and current food production has a large impact on the environment and climate. A systemic transformation is needed at an unprecedented speed and scale. The European Copernicus programme, alongside private initiatives, creates new capabilities to monitor agriculture from space. These space data have high relevance for precision farming and contribute to reducing inputs and increasing yields. Bioscope is a farmer-driven initiative to acquire, process and deliver actionable knowledge from space data. Crucial in delivery to farmers is that any service must comply with seasonal demand: data must be delivered at the right time with the right information. Bioscope has translated these user needs into specific data and system requirements and is now the first on the market with guaranteed data delivery. Bioscope will absorb current and future satellite data, drone and airborne data, as well as data from other sensor systems.

Authors: van der Wal, Tamme
Organisations: BIOSCOPE, The Netherlands
10:15 - 10:30
Wildfire AWARE (ID: 400)
Presenting: Jupp, Peter

Wildfire AWARE provides the tools to prevent wildfires before they happen. This talk demonstrates the tool in action, discusses its implementation, and explains how it utilises advanced machine learning techniques alongside hyper-local weather data. This will be followed by details on how the tool can be used directly in industry by both government and private organisations.

Authors: Jupp, Peter; Soomaney Vijaykumar, Vishal; Dolman, Flinn
Organisations: Complexiti - Wildfire AWARE

Φ-week Summary (summary )
12:30 - 12:40


EO Open Science

Research Infrastructures & Platforms (part2)
09:00 - 10:45
Chairs: Sanna Sorvari - Finnish Meteorological Institute, Marc Paganini - ESA- ESRIN

09:00 - 09:15
Near Real Time Fire Detection Service via the PROBA-V Mission Exploitation Platform (ID: 168)
Presenting: Arcorace, Mauro
(PDF )

In the framework of the PROBA-V Mission Exploitation Platform (PROBA-V MEP), the “Detection of fires and burned areas” research activity is part of the PROBA-V MEP Third Party Services, aimed at better facilitating the exploitation of PROBA-V data across the EO open science community. Progressive Systems carried out this research activity to support the participation of the Centre de Suivi Ecologique (CSE, Senegal) in the Monitoring for Environment and Security in Africa (MESA) project, through the development of a fire detection and burned area characterization service over the Economic Community of West African States (ECOWAS). The fire detection algorithm is based on a modified implementation of a temporal Kalman filter, capable of detecting hotspots in near real time from Meteosat Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI) geostationary multispectral data. The first component of the algorithm performs clear-air anomaly detection using multispectral Kalman features; the second component then classifies the identified anomalies as clouds or hotspots. The service takes as input SEVIRI multispectral data delivered in near real time by EumetCast every 15 minutes. Direct access to the PROBA-V archive of vegetation index and burned area products is provided via the PROBA-V MEP infrastructure. In order to initialize the algorithm over the entire domain, a background model was retrieved for each pixel and each considered channel to depict the daily radiance trend over time under nominal clear-air conditions. Average values were calculated, for each channel used by the Kalman filter, by exploiting EUMETSAT's Cloud Mask products to filter out anomalies from the SEVIRI measurements and consider only clear-sky conditions. The main outputs of the algorithm are fire detections given in tabular and vector formats, containing information such as fire ID, geolocation and confidence level, together with PROBA-V derived NDVI and NDWI index estimates. Moreover, the system can compute a “fire occurrence” product over a defined compositing period that complements the available PROBA-V Burnt Area product. The main code has been developed in Python, while wrapper scripts have been written in BASH. The service prototype has been deployed within a virtual machine equipped with 4 vCPUs and 8 GB RAM on the PROBA-V MEP; such resources are sufficient to guarantee near-real-time processing over the Western Africa area given the input product delivery every 15 minutes. First investigations of clear-sky classification of MSG scenes over the ECOWAS region have shown a strong correlation with EUMETSAT's Cloud Mask products. Furthermore, preliminary comparisons of fire detections with EUMETSAT's FRP products have shown fairly good agreement for hotspots with similar confidence levels. Fine tuning of the clear-sky and anomaly thresholds is required, together with a validation of fire detections against other products (e.g. MODIS FIRMS). Finally, further activities, such as a field validation campaign and CSE staff training, are planned to validate results and gather feedback from stakeholders.
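
A hedged sketch of the per-pixel temporal Kalman filter idea is given below: each pixel's clear-air brightness is tracked as a random walk, and observations whose innovation is anomalously large and positive are flagged as potential hotspots. The state model, noise values and threshold are assumptions for illustration, not the operational implementation.

    import numpy as np

    def kalman_hotspot_flags(bt_series, q=0.05, r=1.0, k=4.0):
        """bt_series: 1D array of brightness temperatures for one pixel (15-min slots)."""
        x, p = bt_series[0], 1.0        # state estimate (background) and its variance
        flags = np.zeros_like(bt_series, dtype=bool)
        for t, z in enumerate(bt_series[1:], start=1):
            p = p + q                    # predict: random-walk background model
            s = p + r                    # innovation variance
            innov = z - x
            if innov > k * np.sqrt(s):   # strong positive anomaly: potential hotspot
                flags[t] = True          # do not contaminate the background with it
                continue
            gain = p / s                 # Kalman gain
            x = x + gain * innov         # update background estimate
            p = (1.0 - gain) * p
        return flags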

Authors: Arcorace, Mauro (1); Milani, Luca (2); Cuccu, Roberto (1); Rivolta, Giancarlo (1); Delgado Blasco, José Manuel (1); Orrù, Carla (1)
Organisations: 1: Progressive Systems Srl, Italy; 2: Sapienza University of Rome, Italy
09:15 - 09:30
A Software Platform for Maritime Monitoring and Prompt Target Characterization (ID: 216)
Presenting: Reggiannini, Marco
(PDF )

The work described here concerns the development of a software platform dedicated to sea surveillance, capable of detecting and identifying illegal maritime traffic. The platform results from the cascaded implementation of several image processing algorithms that take as input Synthetic Aperture Radar (SAR) and optical imagery captured by satellite-borne sensors. Inspired by a computer vision approach, the platform consists of a pipeline of processing steps devoted to i) the detection of vessel targets in the input imagery, ii) the extraction of descriptive vessel features and, finally, iii) the estimation of the targets' kinematics by identifying and analyzing the wake patterns on the water surface. The first task in the processing chain is the identification of potential vessel targets in the input imagery. This is achieved by a dedicated detector based on a signal thresholding method. The threshold value is conceived as a spatially varying parameter, in order to adapt the algorithm's sensitivity to the non-stationary properties of the noise. A second step in the processing pipeline focuses on the analysis of the individual vessel images in order to perform morphological and radiometric measurements. At this stage, the signal is processed to refine the identification of the image pixels belonging to the vessel silhouette and to perform meaningful measurements on them. The extracted features provide quantitative attributes (length, width, pixel radiometry distribution) that can also be exploited to implement a classification module. The final stage of the processing pipeline concerns the analysis of the areas surrounding the vessel silhouette, where, in case of ship motion, a surface wake pattern is expected to be observed. Proper processing of these surface patterns allows us to estimate the route and, whenever the image resolution is high enough to observe the internal wake components, the ship's velocity. By integrating the information returned by the procedures described so far, a system for maritime surveillance and vessel traffic monitoring can be developed. Given the quantitative approach inspiring each link of the processing chain, this software platform represents a reliable tool that can be exploited by concerned decision makers in critical maritime contexts, such as illegal fishing, irregular migration and smuggling.
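
The spatially varying threshold can be sketched as a CFAR-like rule that compares each pixel against background statistics estimated in a sliding window, so that the detector's sensitivity adapts to non-stationary sea clutter. The window size and multiplier below are illustrative assumptions, not the platform's tuned parameters.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def detect_targets(sar_amplitude, window=51, k=5.0):
        # Local background mean and standard deviation in a sliding window
        mean = uniform_filter(sar_amplitude, size=window)
        mean_sq = uniform_filter(sar_amplitude**2, size=window)
        std = np.sqrt(np.maximum(mean_sq - mean**2, 0.0))
        # Spatially varying threshold: bright outliers against local clutter
        return sar_amplitude > mean + k * std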

Authors: Reggiannini, Marco (1); Righi, Marco (1); Tampucci, Marco (1); Bedini, Luigi (1); Di Paola, Claudio (2); Martinelli, Massimo (1); Mercurio, Costanzo (2); Salerno, Emanuele (1)
Organisations: 1: CNR, Italy; 2: Mapsat S.R.L.
09:30 - 09:45
Value-Added Information by Rheticus® to Create Dynamic Information Services and Geoanalytics on DIAS (ID: 186)
Presenting: Maldera, Giuseppe

The cloud-based platform developed by Planetek Italia, called Rheticus®, provides value-added data and application services based on open data, such as satellite images and geospatial, environmental and socio-cultural data available online. The main products and services already available on the platform are based on Sentinel-1, Sentinel-2 and Sentinel-3 satellite data. Thanks to these data, Rheticus® is capable of delivering continuous monitoring services of Earth-surface transformation phenomena, such as urban evolution, landslides, fires, or the quality of marine waters. Rheticus® works as a big hub that processes the data automatically to deliver geoinformation services ready to use in users' final applications. The information produced by Rheticus® can be visualized as simple dynamic maps, in continuous evolution, to be enjoyed through web interfaces. At the same time, this information can be the fuel for smart applications that provide knowledge as a service, in the form of geoanalytics, to users who are not accustomed to working with geospatial information. These applications can be implemented through smart platforms which simplify the creation of workflows and web interfaces with dynamic graphs and geospatial indicators, such as the M.App Suite of Hexagon Geospatial. Planetek has implemented Rheticus® in view of its future deployment on the DIAS, the Copernicus Data and Information Access Services. In an ideal chain of platforms and software, the DIAS offers the data storage and processing capabilities that Rheticus® uses to produce value-added information; this information may then be consumed in two different ways. On the one hand, Rheticus® offers subscriptions to continuous information services; on the other hand, the value-added information produced by Rheticus® is the fuel for applications that, thanks to the M.App Suite, can be easily realized by entrepreneurs, startups and innovators to provide timely solutions to the needs of different industries and markets. As an example, one of the services provided is Rheticus® Displacement, which offers - thanks to the processing of Sentinel-1 SAR data with interferometric algorithms - monthly monitoring of millimetric displacements of the ground surface, landslide areas, the stability of infrastructures, and subsidence due to groundwater withdrawal or recharge or to the excavation of mines and tunnels. Planetek has also created several Smart M.Apps, powered by Rheticus® Displacement, to provide actionable knowledge to decision makers through geo-analytics and dynamic indicators. Rheticus® Network Alert is a powerful tool to prevent and detect potential sewerage failures (water networks, district heating). Rheticus® Bridge Alert provides information about bridge and road stability, preventing possible collapse and interruption of transport routes. These two applications are examples of vertical applications for specific industries, providing knowledge-as-a-service in the form of geoanalytics.

Authors: Zotti, Massimo; Abbattista, Cristoforo; Drimaco, Daniela; Maldera, Giuseppe
Organisations: Planetek Italia s.r.l., Italy
09:45 - 10:00
Perceptive Sentinel - Big Data Knowledge Extraction and Re-creation Platform (ID: 159)
Presenting: Zupanc, Anze
(PDF )

Free and open access to high-temporal- and high-spatial-resolution Copernicus Earth observation (EO) data is becoming a major game-changer in the EO sector, delivering new opportunities and new challenges at the same time. How to ingest enormous amounts of data, how to unlock the hidden value of the data and how to deliver new value to the end-user community remain open questions to date. A consortium of six organizations - Sinergise, GeoVille, Magellium, Landbrug & Fodevarer, Jožef Stefan Institute and the Agricultural Institute of Slovenia - has joined forces to address this challenge by establishing the Perceptive Sentinel big data platform, which will deliver a cloud software solution using either commercial cloud infrastructure or one of the DIAS platforms. The Perceptive Sentinel platform aims to help newcomers enter the field of EO and remote sensing by reducing the complexity of the EO processing chain while at the same time providing high added value through newly developed services. The platform will, on the other hand, enable existing EO experts to transform their EO processing chains into EO value-added services and expose them to the end-user community. As a platform, Perceptive Sentinel will differ from most existing systems in that it will ensure autonomous and continuous operation and will essentially act as a big-data stream processing engine. Our ambition is to automate one of the most time-consuming tasks in data analytics - data pre-processing - which includes automatic data enrichment (calculation of different aggregates, etc.), fusion of other relevant environmental data sources (weather, land use, various other static model data or features extracted from high-resolution imagery, and other data) and feature vector generation. The usage of multiple heterogeneous data sources will open a window for improving the performance of state-of-the-art methods and introduce novel machine learning techniques for different modeling tasks. The project is funded by an H2020-EO-2017 grant (GA no. 776115) and will be completed in 2020. First implementations and results of Perceptive Sentinel will be demonstrated.
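
As an illustration of the kind of pre-processing the platform aims to automate, the sketch below turns a per-pixel NDVI time series into a fixed-length vector of temporal aggregates; the feature choice is an illustrative assumption, not the platform's actual enrichment chain.

    import numpy as np

    def temporal_features(ndvi):
        """ndvi: array of shape (n_times, height, width) with NaN for cloudy slots."""
        return np.stack([
            np.nanmean(ndvi, axis=0),                   # mean greenness
            np.nanstd(ndvi, axis=0),                    # seasonal variability
            np.nanmax(ndvi, axis=0),                    # peak of season
            np.nanargmax(ndvi, axis=0).astype(float),   # timing of the peak
        ], axis=-1)                                     # -> (height, width, n_features)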

Authors: Zupanc, Anze; Milcinski, Grega; Hafner, Janez
Organisations: Sinergise, Slovenia
10:00 - 10:15
The Urban TEP – Joint Analysis of Multi-Source Data for Innovative Urban Monitoring (ID: 120)
Presenting: Bachofer, Felix
(PDF )

Settlements and urban areas represent the cores of human activity. Urbanization and climate change, two of the most significant developments related to the human presence on the planet, challenge our environmental, societal and economic development. The availability of and access to accurate, detailed and up-to-date information will impact decision-making processes all over the world. The suite of Sentinel satellites, in combination with their free and open data policy, contributes to a spatially and temporally detailed monitoring of the Earth's surface. At the same time, a multitude of additional sources of open geo-data is available, e.g. from national or international statistics or land surveying offices, volunteered geographic information, or social media. However, the capability to effectively and efficiently access, process, and jointly analyze these mass data collections poses a key technical challenge. The Urban Thematic Exploitation Platform (U-TEP) is being developed to provide end-to-end, ready-to-use solutions for a broad spectrum of users (experts and non-experts) to extract the unique information and indicators required for urban management and sustainability. Key components of the system are an open, web-based portal connected to distributed high-level computing infrastructures, providing key functionalities for i) high-performance data access and processing, ii) modular and generic state-of-the-art pre-processing, analysis, and visualization, iii) customized development and sharing of algorithms, products and services, and iv) networking and communication. U-TEP aims at opening up new opportunities to facilitate effective and efficient urban management and the safeguarding of livable cities by systematically exploring the unique EO capabilities in Europe in combination with the big data perspective arising from the constantly growing sources of geo-data. The capabilities for participation and knowledge sharing using new media and ways of communication will help to boost interdisciplinary applications with an urban background. The services and functionalities are intended to enable any interested user to easily exploit and generate thematic information on the status and development of the environment based on EO data and technologies. The innovative character of the U-TEP platform in terms of available data and processing and analysis functionalities has already attracted a large and diverse user community (>300 institutions from >40 countries, i.a. from science, public institutions, NGOs and industry).

Authors: Bachofer, Felix (1); Esch, Thomas (1); Asamer, Hubert (1); Balhar, Jakub (2); Boettcher, Martin (3); Boissier, Enguerran (4); Hirner, Andreas (1); Mathot, Emmanuel (4); Marconcini, Mattia (1); Metz-Marconcini, Annekatrin (1); Permana, Hans (3); Soukup, Tomas (2); Uereyen, Soner (1); Svaton, Vaclav (5); Zeidler, Julian (1)
Organisations: 1: German Aerospace Center - DLR, Germany; 2: GISAT, Czech Republic; 3: Brockmann Consult, Germany; 4: Terradue, Italy; 5: IT4Innovations - Technical University of Ostrava, Czech Republic
10:15 - 10:30
MULTIPLY: Towards A Platform For The Retrieval Of Bio-Physical Parameters On User-Defined Spatial And Temporal Grids (ID: 174)
Presenting: Fincke, Tonio
(PDF )

The advent of the Sentinel missions has provided scientists with an unprecedented amount of Earth observation data. These are now accessible from dedicated EO data and infrastructure platforms such as the DIAS, national collaborative ground segments like CODE-DE, or private clouds. MULTIPLY is a software platform that lets users utilize this infrastructure to generate land bio-physical variables. Distinct EO sensors deliver (raw) data in the optical spectrum at high temporal resolution (Sentinel-3) and high spatial resolution (Sentinel-2), as well as in the SAR range (Sentinel-1). While each of the missions widens the possibilities for data evaluation and processing, each of the sensors also has its limitations. Moreover, data from these missions are provided on different grids, making pre-processing steps necessary if they are to be considered jointly. However, users often are not actually interested in any of these grids, but have their own definition of a spatial and temporal grid. Nor are they necessarily interested in the measured EO data themselves, but in variables derived from them. The MULTIPLY project aims at providing a software platform that can derive bio-physical parameters such as Leaf Area Index or fAPAR by accessing and combining data from different EO data sources in a seamless way. In particular, it will be able to combine data from different missions - primarily the Sentinels - to derive these parameters. This will be realized through the application of bio-optical models, aided by pre-existing knowledge on the parameters in question in the form of priors. The platform is designed as a software framework which ideally runs close to the data sources. It will allow users to plug in forward models and priors, thereby extending the capability of the platform and tailoring it to their needs. The most prominent feature, though, will be that users are free to define the spatial and temporal area and resolution on which the parameters are derived. The platform will further provide post-processing functionality to derive biodiversity and disturbance indicators from the parameters. A graphical user interface serves as the frontend: it connects the backend with multiple end-user devices and enables users to set up processing tasks, define parameters (down to a fine level of detail, if requested), start the tasks, and ultimately inspect the results. The platform is conceived to run both on local workstations and on cluster infrastructures. Users will be able to decide whether the platform shall consider only locally available data or also remote data (in which case they can register data stores). At the time of writing this abstract, the MULTIPLY platform is not in operational use, and the DIASes, the most promising backend EO infrastructure, are not yet available. It is planned, though, to have it up and running on an exploitation platform by the end of the project's running time (end of 2019). We furthermore expect that by the time the Φ-week takes place we will have some first experience from operational use, e.g. on a DIAS, that we can present.

Authors: Fincke, Tonio; Brockmann, Carsten
Organisations: Brockmann Consult GmbH, Germany

Research Infrastructures & Platforms (part3)
11:15 - 12:15
Chairs: Vasiliki Charalampopoulou - GEOSYSTEMS HELLAS SA, Guenther Landgraf - ESA- ESRIN

11:15 - 11:30
ESA’s Food Security Thematic Exploitation Platform “Supporting Sustainable Food Production from Space” (ID: 131)
Presenting: Muerth, Markus
(PDF )

The Food Security Thematic Exploitation Platform (FS-TEP) is the youngest of the seven TEPs and is developed in an agile mode in close coordination with its users. It provides a platform for the extraction of information from EO data for services in the food security sector, mainly in Europe and Africa, allowing both access to EO data and processing of these data sets. It thereby aims to foster smart, data-intensive agricultural and aquacultural applications in the scientific, private and public domains. The platform has been open to the public since March 2018 and is currently in its second phase of development. FS-TEP builds on a large and heterogeneous user community, spanning from agriculture to aquaculture, from small-scale farmers to the agricultural industry, from application developers and public science to the finance and insurance sectors, and from local and national administrations to international agencies. Service pilots demonstrate the platform's ability to support agriculture and aquaculture with tailored EO-based information services. The main point of access to the FS-TEP is the Open Expert Interface, which provides the main functionalities of the platform and access to a variety of EO data sets as well as supplemental data. Furthermore, FS-TEP allows the mobile visualization of crop parameters and the provision of customized products and services to selected users. The capabilities of the platform will be presented, together with the results from the first service pilot on EO-guided crop monitoring and management and the set-up of the second and third pilots, on micro-finance and insurance in Africa and on aquacultural management in Africa, respectively. The project team developing the FS-TEP and implementing pilot services over a 30-month period (started in April 2017) is led by Vista GmbH, Germany, supported by CGI Italy, VITO, Belgium, and Hatfield Consultants, Canada. It is funded by ESA under contract number 4000120074/17/I-EF.

Authors: Migdall, Silke (1); Muerth, Markus (1); Hodrius, Martina (1); Niggemann, Fabian (1); Bach, Heike (1); Harwood, Phillip (2); Colapicchioni, Andrea (2); Cuomo, Antonio (2); Gilliams, Sven (3); Goor, Erwin (3); Van Roey, Tom (3); Dean, Andy (4); Suwala, Jason (4); Volden, Espen (5); Amler, Esther (5); Mougnaud, Philippe (5); Alonso, Itziar (5)
Organisations: 1: VISTA GmbH, Germany; 2: CGI Italia S.r.I, Italy; 3: VITO, Belgium; 4: Hatfield Consultants, Canada; 5: ESA ESRIN, Italy
11:30 - 11:45
Federating the C-TEP with the DAME Platform using WPS (ID: 152)
Presenting: Amodio, Angelo
(PDF )

In recent years ESA has launched a set of Thematic Exploitation Platforms (TEPs), each tailored to users in a thematic domain. One of them is the Coastal Thematic Exploitation Platform (C-TEP), currently being developed by a team led by ACRI-ST that includes Planetek Italia. DAME (Data Intensive Technologies for Multi-mission Environments) is a platform developed by Planetek Italia in the frame of a GSTP ESA/ASI initiative, with the purpose of supporting the ASI and ESA vision on the impact and benefits that new IT technologies bring to Earth observation, in particular to ground segment architectures and their evolution into big data management infrastructures. C-TEP is a data exploitation platform combining 1) data access and exploitation tools for EO (in particular Copernicus), in-situ and model output data selected by a coastal users' community, 2) support services to develop and generate new information products for advanced users (“players”), and 3) management and scientific animation of the users' community. C-TEP supports the mother/child TEP concept, which consists of a federation between two different platforms (C-TEP being the mother platform), as follows: 1. child-TEP as data provider: products of the child-TEP are visible from the catalogue of the C-TEP and can be downloaded into the virtual environment of the C-TEP; 2. child-TEP as service provider: the processing services of the child-TEP are visible from the C-TEP catalogue. Execution requests from the C-TEP are forwarded to the child-TEP using the WPS protocol. Input products are then obtained from either of the two platforms as required. Results are uploaded from the child-TEP to the user basket on the C-TEP using the Secure File Transfer Protocol; 3. C-TEP as data and service provider: in this mode the child-TEP is seen by the C-TEP users as an application portal, having access to a subset of the products and services with a customized interface. C-TEP handles product access and processing requests (using the TEP APIs). Accounting and data management can be performed at user level (child-TEP users are registered as C-TEP users) or globally (the child-TEP is seen as one single user). The Mediterranean Pilot Case of the C-TEP project is an implementation of the federation described above: the C-TEP and DAME platforms work together, using both C-TEP processing and DAME catalogue and data provision capabilities. To this goal, we developed and physically deployed on the C-TEP a processor to compute several water quality parameters, exploiting Landsat multiband imagery. The processor is available in the DAME processing services portfolio, allowing its configuration by selecting the area of interest and optionally defining limits for the cloud cover percentage. The processor can be launched using the WPS protocol, which keeps DAME updated on its processing state. Since Landsat-8 data are not available on the C-TEP, the first step of the processing chain is the transfer of the Landsat-8 input data via SFTP from DAME to the C-TEP, making DAME also act as a data provider for the C-TEP. Once the processing is complete, the output product is transferred via SFTP from the C-TEP to DAME.
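
The mother/child interaction can be sketched with a standard WPS client such as OWSLib; the endpoint URL, process identifier and input names below are hypothetical stand-ins for the water quality processor described above, not the actual federation endpoints.

    from owslib.wps import WebProcessingService, monitorExecution

    # Hypothetical C-TEP WPS endpoint exposing the deployed processor
    wps = WebProcessingService('https://ctep.example.org/wps')
    execution = wps.execute(
        'water_quality_landsat8',            # hypothetical process identifier
        inputs=[('aoi', 'POLYGON((9 38, 18 38, 18 46, 9 46, 9 38))'),
                ('max_cloud_cover', '20')],  # hypothetical input names
    )
    monitorExecution(execution)              # poll the WPS status document
    print(execution.status, execution.processOutputs)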

Authors: Drimaco, Daniela (1); Ceriola, Giulio (1); Amodio, Angelo (1); Coletta, Francesco (1); Clerc, Sébastien (2); Tuohy, Eimear (3); Craciunescu, Vasile (4); Aspetsberger, Michael (7); Campbell, Gordon (5); Leone, Rosemarie (6); Mougnaud, Philippe (5)
Organisations: 1: Planetek Italia s.r.l., Italy; 2: ACRI-ST; 3: UCC; 4: Terrasigna; 5: ESA ESRIN; 6: ESA ESOC; 7: Catalysts
11:45 - 12:00
Generating InSAR products with COSMO-SkyMed and TerraSAR-X imagery in the Geohazards Exploitation Platform (GEP) to support the CEOS Recovery Observatory in Haiti (ID: 164)
Presenting: Cigna, Francesca
(PDF )

In this work, we present the first results of scientific experiments carried out by ASI in collaboration with DLR, ESA and CNES, with the objective of generating satellite Interferometric SAR (InSAR) ground motion and SAR change detection products for Haiti by processing X-band COSMO-SkyMed and TerraSAR-X data within ESA’s Geohazards Exploitation Platform (GEP). These activities feed into the four-year Recovery Observatory pilot project of the Committee on Earth Observation Satellites (CEOS) – Working Group on Disasters. The project was triggered by CEOS to address the needs of the Haitian community involved in recovery and rehabilitation after the impact of Hurricane Matthew in October 2016, and is led by the National Center for Geo-spatial Information (CNIGS) of Haiti with technical support from CNES [www.recovery-observatory.org]. Alongside satellite-based data and information useful in planning and monitoring recovery, the project is developing experimental science products. These aim to address specific user needs, e.g. ground instability assessment in urban areas and landslide deformation monitoring, for which site-specific tailoring is required to account for local land use and human activities, as well as the extent and dynamics of ongoing geological and anthropogenic change processes. To achieve this goal, the cross-agency research team of ASI, DLR, ESA and CNES, with support from ARGANS, Terradue and Athena Global, has started to focus on the development of a SAR-based workflow encompassing steps that range from satellite data acquisition to image processing in GEP and the generation of value-added geohazard products. The geographic targets of the experiments are the Haitian departments most affected by Hurricane Matthew, i.e. Grand’Anse, Sud and Nippes. A tailored SAR acquisition campaign in X-band has been tasked to create a digital record to support disaster recovery. Since the end of 2016, DLR has provided full coverage of the region of interest every ~4 months with a mosaic of 3 m resolution TerraSAR-X StripMap scenes in each orbit mode, ascending and descending. For three hotspots selected by the stakeholders, ASI has been acquiring since the end of 2017 a bespoke 1 m resolution COSMO-SkyMed SpotLight time series with a 16-day revisit, in both ascending and descending modes. The X-band SAR data are registered into ESA’s GEP, where hosted processing services based on conventional and advanced InSAR, such as Persistent Scatterers (PS) and Small BAseline Subset (SBAS), are available. These services are exploited to generate regional-scale change detection maps (e.g. coherence and ratio maps) and local-scale ground motion products based on X-band SAR data, extract deformation time series for the areas of interest, estimate the magnitude and extent of land instability, and identify its geological and/or anthropogenic causes. Interpretation of X-band InSAR products and analysis of geohazards are carried out in combination with high and very high resolution multispectral imagery of the SPOT and Pléiades constellations provided by CNES. The retrieval of detailed, multi-sensor and multi-scale geohazard information is crucial to support the Haitian end-users in their decision-making processes and recovery progress monitoring.
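The regional-scale change detection products mentioned above rest on interferometric coherence. The sketch below shows the standard windowed coherence estimator between two co-registered single-look complex (SLC) images; it is a generic textbook illustration, not the implementation of the GEP hosted services.

```python
# Windowed estimate of interferometric coherence between two co-registered
# SLC images -- the quantity behind coherence-based change detection maps.
# A generic textbook estimator, not the GEP processing code.
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(slc1: np.ndarray, slc2: np.ndarray, win: int = 5) -> np.ndarray:
    """|<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>) over a win x win moving window."""
    cross = slc1 * np.conj(slc2)
    # uniform_filter expects real arrays, so filter real/imag parts separately
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    p1 = uniform_filter(np.abs(slc1) ** 2, win)
    p2 = uniform_filter(np.abs(slc2) ** 2, win)
    return np.abs(num) / np.sqrt(p1 * p2 + 1e-12)  # epsilon avoids divide-by-zero

# Toy usage with synthetic data (real inputs would come from the SAR processor)
rng = np.random.default_rng(0)
s1 = rng.normal(size=(256, 256)) + 1j * rng.normal(size=(256, 256))
s2 = 0.8 * s1 + 0.2 * (rng.normal(size=s1.shape) + 1j * rng.normal(size=s1.shape))
gamma = coherence(s1, s2)
print(gamma.mean())  # values near 1 = stable ground; near 0 = change/decorrelation
```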

Authors: Cigna, Francesca (1); Tapete, Deodato (1); Danzeglocke, Jens (2); Bally, Philippe (3); Cuccu, Roberto (4,5); Papadopoulou, Theodora (6); Caumont, Hervé (7); Collet, Agwilh (8); de Boissezon, Hélène (8); Eddy, Andrew (9); Piard, Boby E. (10)
Organisations: 1: Italian Space Agency (ASI), Italy; 2: German Aerospace Center (DLR), Germany; 3: European Space Agency (ESA), Italy; 4: ESA Research and Service Support, Italy; 5: Progressive Systems Srl, Italy; 6: ARGANS Ltd, France; 7: Terradue Srl., Italy; 8: National Centre for Space Studies (CNES), France; 9: Athena Global; 10: National Center for Geo-spatial Information (CNIGS), Haiti
12:00 - 12:15
The Interactive Application Service - a virtual laboratory for the 21st century (ID: 122)
Presenting: Holter, Christoph
(PDF )

Earth-Observation satellites keep a watchful eye on our planet, 24 hours a day, 7 days a week. The data volume they collect grows steadily and reaches significant quantities. All that data, however, would have little value if left unprocessed. Over the past decades, processing, analysis and visualization toolboxes have been built to simplify working with the datasets, e.g. ESA’s SNAP toolboxes for the Sentinels, the Orfeo Toolbox, QGIS, SAGA and others. While rich in functionality, these toolboxes are desktop applications and, expert usage aside, require the data to be available locally for exploration. In an effort to bring both assets together, and to allow various datasets to be combined, the Interactive Application Service (IAS) has been built in the frame of the Coastal Thematic Exploitation Platform (C-TEP) project. It encapsulates traditional desktop applications as app containers and provides a simple web interface to launch them, with direct access to full-mission satellite archives. The app containers are not limited to desktop applications, but can also be web services, including the popular Jupyter Notebook and Jupyter Lab. All containers are isolated, which preserves the privacy of individual users – yet another challenge with a stock Jupyter installation. At the user’s discretion, however, running apps can be shared for joint experiments and pair programming. The IAS has been evaluated as a future component for ESA’s Phi-Lab. In this talk, we will outline the general architecture and show how apps are encapsulated, how datasets can be accessed, what benefits the service brings over classic environments, and how it catalyses science. As a practical example, we will show how Jupyter Lab can be used for training purposes. A set of tutorial applications of increasing complexity demonstrates how to use EO data for coastal applications. A first application provides a very simple algorithm to retrieve the chlorophyll-a concentration in the water from Sentinel-2 or Sentinel-3 images. The second notebook explains how to call the algorithm and visualize the results. Last, we show an example of a Web Processing Service (WPS) command called from Jupyter Lab to automate the generation of such products for a complete dataset.
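The "very simple algorithm" of the first tutorial is typical of blue/green band-ratio chlorophyll retrievals. The sketch below shows an OCx-style ratio applied to Sentinel-2 reflectance arrays; the polynomial coefficients and band choice are illustrative assumptions, not the notebook's actual algorithm.

```python
# Illustrative blue/green band-ratio retrieval of chlorophyll-a from
# Sentinel-2 water-leaving reflectance. The polynomial form follows the
# classic OCx family; the coefficients here are placeholders, not the
# values used in the C-TEP tutorial notebooks.
import numpy as np

# Placeholder coefficients of a 4th-order OCx-style polynomial (assumed)
A = [0.25, -2.5, 1.5, -0.5, -0.8]

def chl_a(blue: np.ndarray, green: np.ndarray) -> np.ndarray:
    """Chlorophyll-a (mg m^-3) from blue (B02) and green (B03) reflectance."""
    r = np.log10(np.clip(blue, 1e-6, None) / np.clip(green, 1e-6, None))
    log_chl = A[0] + A[1]*r + A[2]*r**2 + A[3]*r**3 + A[4]*r**4
    return 10.0 ** log_chl

# Toy usage: reflectance rasters would normally be read from a Sentinel-2
# L2A product (e.g. with rasterio); here we fabricate two small arrays.
blue = np.full((4, 4), 0.02)
green = np.full((4, 4), 0.03)
print(chl_a(blue, green))
```

Running the same function over every scene in an archive is exactly the kind of bulk task the talk's final example delegates to a WPS call from the notebook.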

Authors: Aspetsberger, Michael (1); Holter, Christoph (1); Wanzenböck, Moritz (1); Mücke, Werner (1); Saulquin, Bertrand (2); Clerc, Sébastien (2); Rebuffel, Manuel (2); Gilles, Nicolas (2); Bevy, Christophe (2)
Organisations: 1: Catalysts GmbH, Austria; 2: ACRI-ST, France

Workshop on Virtual Reality
09:00 - 10:30
Chairs: Chris Stewart - ESA- ESRIN, Sveinung Loekken - ESA- ESRIN

09:00 - 09:15
The Future Of VR Video And Applications Of VR In Science Communication (ID: 282)
Presenting: Day, Phil Edward

I'll be talking about the current technologies that are being released and moving VR further and further into the mainstream. The hardware I'll focus on in particular is the range of standalone portable headsets, such as the Vive Focus and Oculus Go, which I'll also be demonstrating. This will lead into the applications of 360° video distribution, how these devices are being used in training and science communication, and why VR video is one of the leading forces in this. In conclusion, I will talk about future developments beyond video: how desktop VR pushes beyond the portable headsets, and how VR headsets are being used in science and medicine to assist with diagnosis and data representation.

Authors: Day, Phil Edward
Organisations: Whirligig, United Kingdom
09:15 - 09:30
Look, holograms! A short introduction to Mixed Reality and HoloLens. (ID: 306)
Presenting: Schulte, Rene

Mixed Reality devices like the Microsoft HoloLens are turning science-fiction movie technology into reality and changing how users interact with computers. It's an amazing time to be alive and to shape that future. In this short session, Rene will introduce Mixed Reality and the HoloLens, and show a quick demo of some of the holographic apps he and his team are working on.

Authors: Schulte, Rene
Organisations: Valorem
09:30 - 09:45
A collaborative VR platform for training in industrial environment (ID: 308)
Presenting: Cuomo, Massimo

ACS has developed for an Oil & Gas company a collaborative training platform based on Virtual Reality. The system's objective is to support the training of personnel involved in emergency operations. It integrates an end-to-end scenario that simulates the actions that must be carried out in case of a fire in a refinery. In this virtual environment, many users interact with each other in order to tackle and resolve a specific incident. The platform actively involves the participants and makes it possible to evaluate their level of knowledge of emergency procedures. This technical solution offers several advantages: - collaborative training in a highly realistic VR environment (many users share the environment and interact with each other); - the emotional involvement of virtual reality (stereoscopic headsets, audio connection, interactive devices for manipulating virtual objects); - remote attendance (users can be geographically distributed, avoiding travel for training); - unified and centralised content management, which simplifies the addition of new PPE (personal protective equipment) and new operational procedures; - a recording capability for avatar actions, which enables the platform to be used for assessment and extends the concept of a training exam. Each user can move around the 3D model of the refinery and see the avatars of the others. Moreover, he/she can interact with 3D components such as hoses or valves. The following application sessions are available: - Tutorial: demonstration of the correct operational sequence using automatic 3D avatar animation; - Training: interactive simulation in which the users work to solve the incident; - Assessment: recording of an interactive session and subsequent replay, in order to observe each operator and generate the relevant evaluation reports. The system supports the following user roles: - Tutor: manages the simulation and follows the operators' actions; - Operator: runs the operational sequence; - Visitor: a passive attendee who can see, but not interact with, the simulation. The system is based on a cross-platform 3D engine and can be extended to all sectors where safe training is needed to familiarise operators with remote or dangerous operations.

Authors: Cuomo, Massimo; Tartaglia, Marco; Di Giammatteo, Ugo
Organisations: ACS/Exprivia, Italy
09:45 - 10:00
Mixed Reality for Earth Observation data MR-EO (ID: 363)
Presenting: Mantovani, Simone

A huge effort has been devoted in recent years to developing tools to exploit to the fullest the enormous amount of Earth Observation data generated by the new generation of satellite platforms. Nowadays, fast data access technologies such as datacubes offer services able to satisfy the needs of massive data exploitation tools (e.g. deep learning, artificial intelligence), as demonstrated by the recent Frontier Development Lab Europe initiative. Given the continuously increasing maturity of data access and exploitation tools, it is now time to improve the user experience, going beyond traditional web applications towards more immersive, interactive and collaborative technologies. Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) are the latest frontier of human-computer interaction, linking the real and digital worlds together. Using VR/AR/MR through a wearable device, the user gains a "super sight" based on enhanced-reality applications. In the wearable-tech scenario, the Microsoft HoloLens smart glasses are the benchmark, thanks to a highly effective immersive holographic experience without being tethered to a PC, phone or external cameras (a standalone device); they feature a powerful 3D sensor for accurate placement of 3D content (holograms) in a real environment and very friendly gesture control, making them well suited to geospatial data display. The present work aims to demonstrate, during the mini-workshop on Virtual Reality, an application of Mixed Reality to the EO domain, where real-time EO data access and exploitation is implemented within a 3D Mixed Reality environment. One or more users take part in the same data exploitation experience, using voice commands, hands and fingers to extract, visualise and manipulate data, trigger processing on massive data volumes and exploit the results. This tool has wide potential applications, from collaborative science environments to education, and finally the fast deployment and operation of emergency control rooms in crisis management scenarios.

Authors: Mantovani, Simone (1,2); Natali, Stefano (1,2); Borasio, Emauele (3); De Cosmo, Pietro Domenico (3)
Organisations: 1: MEEO, Italy; 2: SISTEMA, Austria; 3: weAR, Italy
10:00 - 10:15
The Future Of Simulation & Training: How XR Is Disrupting The Corporate Learning & Development Industry. (ID: 370)
Presenting: Cappannari, Lorenzo

Proven effective since the age of Confucius and Aristotle, it is only since the beginning of the last century that experiential learning became scalable, within military environments, thanks to the advent of computer-aided simulators. Today a new paradigm shift is under way, allowing simulation and digital learning to finally disrupt the corporate learning & development market. Corporations are forced by the unsettling pace of market innovation to continuously look for more effective ways of skilling and reskilling their employees; at the same time, under pressure to deliver financial results, top management continuously asks the layers below to find larger savings in non-sales-related functions. In this scenario, the impact of low-cost XR devices, the evolution of AI and the spread of high-speed connectivity are bound to completely reshape the way companies and institutions train their workforce. New remote collaboration platforms will emerge, and instructional designers will need to rethink their learning paradigms, taking into account the strong emotional impact of the new technology and the need to (finally) apply gamification techniques to their designs.

Authors: Cappannari, Lorenzo
Organisations: Anothereality, Italy
10:15 - 10:30
The ESA Φ-Lab VR installation (ID: 399)
Presenting: Sacramento, Paulo

ESA has recently deployed a Virtual Reality (VR) installation in the Φ-Lab as a way of enabling the visualisation of Earth Observation (EO) data in this novel and appealing manner, combined with the exploration of virtual worlds. Users wearing a VR headset can travel in a virtual Earth populated with several examples of EO products (e.g. snow cover, avalanche detection, soil subsidence from InSAR, flood monitoring) and point-cloud sites (e.g. a model of ESRIN, a tree plantation in Spain). Spot and surprise locations and environments are also available for outreach purposes. The system, currently in an embryonic phase, can be used to load and visualise the outputs of ESA EO projects, such as those resulting from the Thematic Exploitation Platforms (TEPs), and there are ongoing plans to consolidate and streamline it as a fixed feature of any occasion where ESA EO data visualisation can be enhanced with VR, integrating naturally with projects' workflows. The presentation briefly introduces the technology that supports the installation, both hardware and software, shows examples of available products, and ends with an outline of the future roadmap, which includes the viewing of hyperspectral data and other advanced visualisation applications.

Authors: Sacramento, Paulo (1); Caccia, Michele (2); Stodle, Daniel (3); Loekken, Sveinung (4)
Organisations: 1: Solenix c/o European Space Agency, Italy; 2: CGI c/o European Space Agency, Italy; 3: Norut Northern Research Institute, Norway; 4: European Space Agency, Italy

EO University Net
09:00 - 10:30

09:00 - 10:30
Φ-Unet and FabSpace evolution (ID: 386)
Presenting: Baker, Aurelile

FabSpace 2.0 is a European project financed by the European Commission since 2016. It is an open-innovation network for geodata-based innovation, leveraging space data in particular, within "Universities 2.0". The main objectives of the FabSpace 2.0 project are to: - set up and operate at each university a free-access place and service where students, researchers and external users can make use of a data platform to design and test their own applications; - train the users to improve their capacity to process space data and develop new applications; - foster the creation of new innovative solutions and support further business development. The project is now ending and, given the excellent results obtained, ESA has decided to take it over and bring it to the next level by creating Φ-Unet. Come and learn more about the five most promising ideas selected by FabSpace 2.0 and attend the launch of the International Φ-Unet Network.

Authors: Baker, Aurelile
Organisations: Aerospace Valley, France


This event is managed as a sustainable event according to ISO 20121