Explainable AI4EO Training and Benchmarks: data sets, methodologies and tools

A GRSL special stream – 2021
Editors:
· Mihai Datcu, Professor, German Aerospace Centre (DLR) and University Politehnica Bucharest (UPB) (Lead Editor)
· Marco Chini, Senior research and technology associate, Luxembourg Institute of Science and Technology (LIST) (Guest Editor)
· Fabio Pacifici, Principal R&D Scientist, MAXAR (Guest Editor)
· Xiuping Jia, Associate Professor, The University of New South Wales, Canberra, Australia (Guest Editor)
· Xiangrong Zhang, Professor, Xidian University, Xi’an, China (Guest Editor)

The volume and variety of valuable Earth Observation (EO) images, as well as of related non-EO data, are growing rapidly. Free and open data access is already widespread and has enormous scientific and socio-economic relevance. EO images are acquired by sensors on satellite, suborbital or airborne platforms. They extend observation beyond visual information, gathering physical parameters of the observed scenes across a broad electromagnetic spectrum. Multispectral, hyperspectral, synthetic aperture radar (SAR), altimeter, atmospheric and radiometer data jointly contribute to understanding the Earth and to assessing and forecasting phenomena in the domains of atmosphere, marine, land, climate, emergency and security. There is a huge demand for methods and tools to valorise these large amounts of EO data. In synergy with techniques based on physical principles and scientific interpretation, artificial intelligence (AI) is indubitably the field of science most expected to provide new solutions to the Big EO Data challenges. Together with methods of inference, machine learning, deep learning and prediction, benchmarking is itself part of the AI paradigm. Benchmarking is among the most influential instruments driving development and evolution in AI; however, its potential in EO still needs to be realised.

An important objective of this special issue is to define training and benchmark data sets, methodologies and tools for explainable AI, together with a benchmarking protocol of broad applicability in explainable AI4EO, delivering to the application domains practically usable reference data sets as well as the best methods, paradigms and tools. Benchmark/reference data sets should be provided with detailed annotations, benchmark performance criteria, an evaluation methodology and code for EO data semantics and bio-chemical-physical quantitative parameters.

Papers are solicited pertaining to the following topics:

  • Methodologies and algorithms for the creation of benchmark/reference EO data sets
  • Theoretical bounds and limits for EO physical parameter retrieval
  • Benchmarking for:
    • Multisensor EO algorithms and applications
    • Big EO data parameter retrieval, physical and quantitative query/search engines, data mining, visualization, etc.
  • EO image time series algorithms and applications
  • Joint fusion of EO and related heterogeneous data, e.g. GIS, maps, in-situ data, etc.
  • EO tools and systems at large scale
  • EO applications: Atmosphere, Marine, Land, Climate, Emergency, Security
  • Performance estimation methodologies
  • Database bias analysis and design methodologies
  • EO sensory and semantic gaps, taxonomies and ontologies
  • Artificial-intelligence-based benchmarking paradigms
  • Benchmarking of simulation- and model-based methods

Authors submitting to this special issue are required to provide reproducible benchmark/validation code and the benchmark data sets along with the manuscript. The benchmark code and data sets will be made available on the GRSS DASE website.

Schedule
Starting date for submission: 1 January 2021
Closing date for submission: 31 July 2021
Paper Submission Link: mc.manuscriptcentral.com/grsl
