Image Analysis and Data Fusion (IADF)

MISSION

The Image Analysis and Data Fusion Technical Committee (IADF TC) of the Geoscience and Remote Sensing Society serves as a global, multi-disciplinary, network for geospatial image analysis (e.g., machine learning, deep learning, image and signal processing, and big data) and data fusion (e.g., multi-sensor, multi-scale, and multi-temporal data integration). It aims at connecting people and resources, educating students and professionals, and promoting theoretical advances and best practices in image analysis and data fusion.

The TC comprises three working groups dedicated to distinct fields within the scope of image analysis and data fusion: WG-MIA (Machine/Deep Learning for Image Analysis), WG-ISP (Image and Signal Processing), and WG-BEN (Benchmarking).

Organization

The IADF Technical Committee encourages participation from all its members. The committee organization includes the Chair, two Co-Chairs, and three working groups led by working group leads.

IADF Technical Committee Chair

Prof. Claudio Persello
University of Twente
The Netherlands

IADF Technical Committee Co-Chair

Dr. Gemine Vivone
National Research Council
Italy

Dr. Saurabh Prasad
University of Houston
USA

Working Group Leads

WG on Machine/Deep Learning for Image Analysis (WG-MIA)

WG-MIA Lead

Dr. Dalton Lunga
Oak Ridge National Laboratory
USA

WG-MIA Co-Lead

Dr. Ujjwal Verma
Manipal Institute of Technology
India

Dr. Silvia Ullo
University of Sannio
Italy

Dr. Ronny Hänsch
German Aerospace Center (DLR)
Germany

Prof. Danfeng Hong
Chinese Academy of Sciences
China

WG on Image and Signal Processing (WG-ISP)

WG-ISP Lead

Dr. Gülşen Taşkın
Istanbul Technical University (VITO)
Turkey

WG-ISP Co-Lead

Dr. Stefan Auer
German Aerospace Center (DLR)
Germany

Dr. Loic Landrieu
LASTIG, IGN/ENSG, UGE
France

Dr. Lexie Yang
Oak Ridge National Laboratory
USA

WG on Benchmarking (WG-BEN)

WG-BEN Lead

Prof. Xian Sun
Chinese Academy of Sciences
China

WG-BEN Co-Lead

Seyed Ali Ahmadi
K. N. Toosi University of Technology
Iran

Dr. Francescopaolo Sica
Universität der Bundeswehr München
Germany

Dr. Yonghao Xu
Linköping University
Sweden

Valerio Marsocci
KU Leuven
Ghent, Belgium

 

News

Registration is now open for GRSS IADF School on Computer Vision for Earth Observation

We are thrilled to announce that registration for the 3rd Edition of the GRSS IADF School on Computer Vision for Earth Observation is now open. This highly anticipated event will take place at the University of Sannio in Benevento, Italy, from 11th to 13th September 2024. The school will feature a series of lectures on current methods for analyzing satellite images and the challenges they face. Each lecture will be followed by a practical session in which participants dive into the details of the techniques discussed, using commonly used programming languages (e.g., Python) and open-source software tools.

Applications close 31st May 2024.

More Details: iadf-school.org/

The EarthVision 2024 workshop will take place at the Computer Vision and Pattern Recognition (CVPR) 2024 Conference. We have awesome keynote speakers and a challenging contest! Don’t miss out on the latest advancements in computer vision and AI/ML for remote sensing and Earth observation. Find out more about EarthVision 2024.

Community Contributed Sessions at IGARSS 2024: The IADF TC is organizing the following Community Contributed Sessions at IGARSS 2024. More information: www.2024.ieeeigarss.org/community_contributed_sessions.php

  1. Title: Image Analysis and Data Fusion: The AI Era
    Organized by: IADF Chairs
  2. Title: IEEE GRSS Data Fusion Contest – Track 1, Track 2
    Organized by: IADF Chairs
  3. Title: Datasets and Benchmarking in Remote Sensing: Towards Large-Scale, Multi-Modality and Sustainability
    Organized by: WG-BEN
  4. Title: Sustainable Development Goals through Image Analysis and Data Fusion of Earth Observation Data
    Organized by: WG-MIA
  5. Title: Advances in Multimodal Remote Sensing Image Processing and Interpretation
    Organized by: WG-ISP


Machine Learning in Remote Sensing – Theory and Applications for Earth Observation Tutorial at IGARSS 2024

Workshop at ICLR 2024: Machine Learning for Remote Sensing ml-for-rs.github.io/iclr2024/

TC Newsletter

The committee distributes a monthly e-mail newsletter to all committee members covering recent advancements, datasets, and opportunities. If you are interested in receiving the newsletter, please join the TC. We also welcome your input on upcoming conference/workshop/journal deadlines, new datasets or challenges, and vacant positions in remote sensing and Earth observation.

Contests and Challenges

Current
Past
EOD: The Earth Observation Database

EOD provides an interactive and searchable catalog of public benchmark datasets for remote sensing and Earth observation, with the aim of supporting researchers in the fields of geoscience, remote sensing, and machine learning.

IADF School

The IADF School focuses on applying CV/ML methods to challenges in remote sensing. It offers a series of lectures on current methods for analyzing satellite images and the challenges they face.

Workshops

Current

Past

Special / Invited Sessions

Current

Community-Contributed Sessions

  • IGARSS 2024: “Image Analysis and Data Fusion: The AI Era”
  • IGARSS 2024: “Sustainable Development Goals through Image Analysis and Data Fusion of Earth Observation Data”
  • IGARSS 2024: “Advances in Multimodal Remote Sensing Image Processing and Interpretation”
  • IGARSS 2024: “Datasets and Benchmarking in Remote Sensing: Towards Large-Scale, Multi-Modality and Sustainability”

Past

  • IGARSS 2023: “Data Fusion: The AI Era”
  • IGARSS 2023: “Advances in Multimodal Remote Sensing Image Processing and Interpretation”
  • IGARSS 2023: “Sustainable Development Goals through Image Analysis and Data Fusion of Earth Observation Data”
  • IGARSS 2022: “Data Fusion: The AI Era”
  • IGARSS 2022: “Multi-resolution and Multimodal Remote Sensing Image Processing and Interpretation”
  • IGARSS 2022: “Embedding Ethics and Trustworthiness for Sustainable AI in Earth Sciences: Where do we begin?”
  • IGARSS 2021: “Data Fusion: The AI Era” (main IADF session)
  • IGARSS 2021: “IEEE GRSS Data Fusion Contest” (DFC21 session)
  • IGARSS 2021: “Machine Learning Datasets in Remote Sensing”
  • IGARSS 2021: “Multi-resolution and Multimodal Remote Sensing Image Processing and Interpretation”
Tutorials
Machine Learning in Remote Sensing – Theory and Applications for Earth Observation at IGARSS 2024
Special Issues / Streams

Current

Past

Webinars
Papers

IGARSS Papers

Developing

GeoAI: Best Practices and Design Considerations

Data and Algorithm Standard Evaluation (DASE)

The GRSS Data and Algorithm Standard Evaluation (DASE) website provides data sets and algorithm evaluation standards to support research, development, and testing of algorithms for remote sensing data analysis (e.g., machine/deep learning, image/signal processing).

 

Working Groups

To encourage the active participation of all TC members, the IADF organization comprises, in addition to the TC Chair and Co-Chairs, three working groups (WGs). These working groups focus on Machine/Deep Learning for Image Analysis (MIA), Image and Signal Processing (ISP), and Benchmarking (BEN). Each WG addresses a specific topic, provides input and feedback to the TC chairs, and organizes topic-related events (such as workshops, contests, tutorials, and invited sessions). Please find the WGs and their thematic scopes below. If you feel that certain research or application areas are within the scope of IADF but not well represented, feel free to propose additional WGs.

WG on Machine/Deep Learning for Image Analysis (WG-MIA)

The WG-MIA fosters theoretical and practical advancements in Machine Learning and Deep Learning (ML/DL) for the analysis of geospatial and remotely sensed images. Under the umbrella of the IADF TC, WG-MIA serves as a global network that promotes the development of ML/DL techniques and their application in the context of various geospatial domains. It aims at connecting engineers, scientists, teachers, and practitioners, promoting scientific/technical advancements and geospatial applications. To promote the societal impact of ML-based solutions for the analysis of geospatial data, we seek accountability, transparency, and explainability. We encourage the development of ethical, understandable, and trustworthy techniques. Current Activities: Organization of invited sessions at international conferences and special issues in international journals.

WG on Image and Signal Processing (WG-ISP)

The WG-ISP promotes advances in signal and image processing relying upon the use of remotely sensed data. It serves as a global, multi-disciplinary, network for both data fusion and image analysis supporting activities about several specific topics under the umbrella of the GRSS IADF TC. It aims at connecting people, supporting educational initiatives for both students and professionals, and promoting advances in signal processing for remotely sensed data. The WG-ISP oversees different topics, such as pansharpening, super-resolution, data fusion, segmentation/clustering, denoising, despeckling, image enhancement, image restoration, and many others. Current Activities: Organization of invited sessions at international conferences, special issues in international journals, and challenges and contests using remotely sensed data.

WG on Benchmarking (WG-BEN)

Datasets have always been important in methodological remote sensing research, serving as the backbone for the development and evaluation of new algorithms. In today’s era of big data and deep learning, they have become more important than ever: large, well-curated, and annotated datasets are crucial for training and validating state-of-the-art models that extract information from increasingly versatile multi-sensor remote sensing data. In addition, as scientists and engineers propose ever more new methods, the ability to compare them in a fair and transparent manner has grown in importance. The WG-BEN addresses these challenges and provides input with respect to evaluation methods, datasets, benchmarks, competitions, and tools for the creation of reference data. Furthermore, we contribute to evaluation sites and databases. Current Activities: Organization of invited sessions at IGARSS, contribution to an online database for datasets (DASE 2.0), and showcasing of selected public datasets in the monthly IADF Newsletter.

 

2025 IEEE GRSS Data Fusion Contest


All-Weather Land Cover and Building Damage Mapping
    


The Contest: Goals and Organization

With rapid advances in small Synthetic Aperture Radar (SAR) satellite technology, Earth Observation (EO) now provides submeter-resolution all-weather mapping with increasing temporal resolution. Optical data offer intuitive visuals and fine detail but are limited by weather and lighting conditions. In contrast, SAR can penetrate cloud cover and provide consistent imagery in adverse weather and at nighttime, enabling frequent monitoring of critical areas, which is valuable when disasters occur or environments change rapidly. Effectively exploiting the complementary properties of SAR and optical data to solve complex remote sensing image analysis problems remains a significant technical challenge.

The 2025 IEEE GRSS Data Fusion Contest, organized by the Image Analysis and Data Fusion Technical Committee, the University of Tokyo, RIKEN, and ETH Zurich, aims to foster the development of innovative solutions for all-weather land-cover and building damage mapping using multimodal SAR and optical EO data at submeter resolution. The contest comprises two tracks focusing on land cover types and building damage, respectively, and presents two main technical challenges: effective integration of multimodal data and handling of noisy labels.

Track 1: All-Weather Land Cover Mapping

Track 1 focuses on developing methods for land cover mapping in all weather conditions using SAR data. The training data consist of multimodal submeter-resolution optical and SAR images with 8-class land cover labels. These labels are pseudo-labels derived from the optical images using pre-trained models. During the evaluation phase, models will rely exclusively on SAR data to ensure they perform well in real-world, all-weather scenarios. The track aims to improve the accuracy of land cover mapping under varying environmental conditions, demonstrating the utility of SAR data for land cover monitoring. Performance will be evaluated using the mean intersection over union (mIoU) metric.

Track 2: All-Weather Building Damage Mapping

Track 2 aims to develop methods for assessing building damage using bi-temporal multimodal images. The training data contain optical images from before the disaster and SAR images from after the disaster, all at submeter resolution, labeled with four classes: background, intact, damaged, and destroyed buildings. Mapping building damage from multimodal image pairs presents unique challenges due to the different characteristics of optical and SAR imagery. During the evaluation phase, models will be applied to pre-disaster optical and post-disaster SAR image pairs to produce accurate assessments of the extent and severity of building damage, which are essential for effective disaster response and recovery planning. Performance will be evaluated using the mIoU metric.

Scientific papers describing the best entries will be included in the Technical Program of IGARSS 2025, presented in an invited session “IEEE GRSS Data Fusion Contest,” and published in the IGARSS 2025 Proceedings.

Competition Phases

The contest in both tracks will consist of two phases:

  • Phase 1 (Development phase): Participants are provided with training data and additional validation images (without any corresponding reference data) to train and validate their algorithms. Participants can submit prediction results for the validation set to the Codalab competition websites to get feedback on their performance. The performance of the best submission from each account will be displayed on the leaderboard. In parallel, participants submit a short description of the approach used to be eligible to enter Phase 2.
  • Phase 2 (Test phase): Participants receive the test data set (without the corresponding reference data) and submit their results within five days of the release of the test data set. After evaluation of the results, four winners from the two tracks are announced. They will then have one month to write their manuscripts, which will be included in the IGARSS proceedings. Manuscripts are 4-page, IEEE-style formatted papers describing the addressed problem, the proposed method, and the experimental results.


Calendar

Phase 1

  • January 13
    Contest opening: release of training and validation data
    The evaluation server begins accepting submissions for the validation set.
  • February 28
    Participants submit a short description of their approach in 1-2 pages (using the IGARSS paper template)


Phase 2

  • March 3
    Release of test data; evaluation server begins accepting test submissions
  • March 7
    Evaluation server stops accepting submissions


Winner announcement and publications

  • March 21
    Winner announcement
  • April 18
    Internal deadline for papers, DFC Committee review process
  • May 12
    Submission deadline of final papers to be published in the IGARSS 2025 proceedings
  • August
    Presentation at IADF-dedicated IGARSS 2025 Community-Contributed Sessions


The Data

Track 1

In Track 1, we provide a multimodal dataset of optical and SAR images, consisting of approximately 4,300 aerial RGB and SAR image pairs with 8-class land cover pseudo-labels. The 8 classes are bareland, rangeland, developed space, road, tree, water, agriculture land, and building. The pseudo-labels are generated from pre-trained OpenEarthMap models. The images cover 35 regions in Japan, the USA, and France, with a ground sampling distance (GSD) between 0.15 and 0.5 meters. Both the images and labels are provided in TIFF format. For training, paired optical and SAR images with pseudo-labels are available. For validation and testing, only SAR images are provided for metric evaluation.

  • Aerial RGB images: Aerial images are from the National Agriculture Imagery Program (NAIP), the French National Institute of Geographic and Forest Information (IGN), and the Geospatial Information Authority of Japan (GSI). Each aerial image is available as an 8-bit RGB TIFF tile, with a standard tile size of 1,024 × 1,024 pixels to match the SAR resolution.
  • Umbra SAR images: The SAR images are all provided by Umbra. They are provided as 8-bit single-channel TIFF tiles, with a pixel spacing ranging from 0.15 to 0.5 meters per pixel and a tile size of 1,024 × 1,024 pixels.
  • Land cover labels: Pseudo land cover labels for training are generated using pre-trained OpenEarthMap models. For testing, selected areas within urban regions are manually labeled by experts to provide high-quality evaluation data.
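
For intuition about scale, the ground footprint implied by these tile sizes and GSDs can be computed directly. The following is a small illustrative calculation (not part of the dataset tooling), using only the figures stated above:

```python
# Ground footprint of a 1,024 x 1,024 tile at the GSDs stated above
TILE = 1024  # pixels per side

for gsd in (0.15, 0.3, 0.5):           # meters per pixel
    side_m = TILE * gsd                 # tile edge length on the ground
    area_km2 = (side_m / 1000.0) ** 2   # covered ground area
    print(f"GSD {gsd} m -> {side_m:.1f} m per side, {area_km2:.4f} km^2")
```

At 0.15 m GSD a tile spans about 154 m per side; at 0.5 m GSD it spans 512 m and covers roughly 0.26 km².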

Examples of images in the dataset for Track 1. SAR image © 2024 Umbra Lab, Inc., used under CC BY 4.0 license. Optical images of the first and third columns © 2024 National Institute of Geographic and Forest Information (IGN), France, used under CC BY 2.0 license; Optical image of the second column courtesy of Geospatial Information Authority of Japan (GSI); Optical images of the fourth and fifth columns courtesy of the National Agriculture Imagery Program (NAIP), USA.


Track 2

In Track 2, we provide the following multimodal VHR dataset (called BRIGHT). This dataset encompasses eleven disaster events across the globe, covering six types of disasters: earthquakes, wildfires, volcanic eruptions, floods, storms, and explosions. Approximately 3,000 multimodal image pairs are available, each consisting of a pre-disaster optical image and a post-disaster SAR image, accompanied by building damage labels. The dataset provides labels for four classes: background, intact buildings, damaged buildings, and destroyed buildings. All images have a ground sampling distance (GSD) of 0.3 to 1 meter per pixel, and both images and labels are in TIFF format. In this Track, nine of the eleven disaster events are designated as training and validation data, while the remaining two serve as test data. This setup allows us to assess model generalization on unseen disaster events.

  • Pre-disaster optical images: These images primarily originate from the MAXAR Open Data Program, the National Agriculture Imagery Program (NAIP), NOAA Digital Coast Raster Datasets, and the National Plan for Aerial Orthophotography Spain. Each optical image is provided as an 8-bit RGB TIFF tile with a resolution range of 0.3–1 meter per pixel and a standard tile size of 1,024 × 1,024 pixels.
  • Post-disaster SAR images: The post-disaster SAR images are mainly sourced from the Capella Space Open Data Gallery, with a few images from the Umbra Space Open Data Program. These SAR images are provided as 8-bit single-channel TIFF tiles, matching the 1,024 × 1,024-pixel dimensions and resolutions of the optical images.
  • Building labels: Expert annotators manually labeled buildings, and all labels underwent independent visual inspection to ensure accuracy.
  • Building damage information: Building damage information was obtained from the Copernicus Emergency Management Service and UNOSAT’s Emergency Mapping Service. The specific label values represent the meanings: 0 for background, 1 for intact building, 2 for damaged building, and 3 for destroyed building.
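
As a small illustration of this four-class encoding (a sketch using a synthetic label array, not official contest code), per-class pixel counts can be tallied with NumPy:

```python
import numpy as np

# Track 2 damage classes as documented above
CLASSES = {0: "background", 1: "intact", 2: "damaged", 3: "destroyed"}

# Synthetic stand-in for a 1,024 x 1,024 label tile read from TIFF
labels = np.zeros((1024, 1024), dtype=np.uint8)
labels[100:200, 100:200] = 1   # an intact building footprint
labels[300:350, 300:400] = 2   # a damaged building
labels[500:520, 500:540] = 3   # a destroyed building

# Pixel count per class (minlength covers classes absent from the tile)
counts = np.bincount(labels.ravel(), minlength=len(CLASSES))
for value, name in CLASSES.items():
    print(f"{value} ({name}): {counts[value]} px")
```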

Examples of images in the dataset for Track 2. From left to right, the disaster types are explosions, earthquakes, wildfires, and floods. Optical images of Beirut, Malatya, and Derna © 2024 MAXAR, used under CC BY-NC 4.0 license; Optical image of Maui courtesy of NOAA Office for Coastal Management. SAR image © 2024 Capella Space, used under CC BY 4.0 license.


Submission and Evaluation

Participants will submit land cover maps and building damage maps to the Codalab server for Tracks 1 and 2, respectively.

Track 1: codalab.lisn.upsaclay.fr/competitions/21121
Track 2: codalab.lisn.upsaclay.fr/competitions/21122

All maps shall be submitted as PNG gridded raster products. Each map must have the same grid and resolution as the corresponding test data file.
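
As a minimal sketch of producing such a product (assuming the Pillow library; the tile name below is a placeholder, so consult the Codalab pages for the exact naming scheme):

```python
import numpy as np
from PIL import Image

# Hypothetical prediction: one class ID per pixel, on the same 1,024 x 1,024
# grid as the corresponding test tile (8 land cover classes for Track 1)
prediction = np.random.randint(0, 8, size=(1024, 1024)).astype(np.uint8)

# Write a single-channel 8-bit PNG; "tile_0001.png" is a placeholder name
Image.fromarray(prediction, mode="L").save("tile_0001.png")

# Sanity check: the saved raster keeps the grid and the class values
reloaded = np.asarray(Image.open("tile_0001.png"))
print(reloaded.shape, reloaded.dtype)
```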

Classification accuracy will be evaluated against the reference data of the test set, which will not be provided to participants. The mIoU metric will be used to rank the results. The algorithm with the highest mIoU on the Phase 2 test set will be the winner.
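
For reference, one common way to compute mIoU is sketched below; the contest's server-side implementation may differ in details such as the treatment of classes absent from a tile:

```python
import numpy as np

def mean_iou(pred: np.ndarray, ref: np.ndarray, num_classes: int) -> float:
    """Mean intersection over union, averaged over non-empty classes."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, ref == c).sum()
        union = np.logical_or(pred == c, ref == c).sum()
        if union > 0:                 # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Tiny worked example with 2 classes
ref  = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
# class 0: inter 1, union 2 -> 0.5; class 1: inter 2, union 3 -> 0.667
print(mean_iou(pred, ref, num_classes=2))  # 0.5833...
```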

Baseline

A baseline showing how to use the DFC25 data to train models, make submissions, etc., is provided below for both tracks.

Track 1: github.com/cliffbb/DFC2025-OEM-SAR-Baseline
Track 2: github.com/ChenHongruixuan/BRIGHT

Results, Awards, and Prizes

The first and second-ranked teams in both tracks will be recognized as winners. To be eligible for the prize, teams must contribute to the community by sharing their code openly, such as on GitHub. The winning teams will:

  • Present their approaches in a dedicated DFC25 session at IGARSS 2025
  • Publish their manuscripts in the Proceedings of IGARSS 2025
  • Receive IEEE Certificates of Recognition
  • Be awarded during IGARSS 2025, Brisbane, Australia in August 2025
  • Be awarded travel support of up to $4,500 USD per team to attend IGARSS 2025 (*subject to adjustment based on currency exchange rates).
  • Co-author a journal paper summarizing the DFC25 outcomes, which will be submitted with open access to IEEE JSTARS.


The costs for open-access publication will be supported by the GRSS. The winning team prize is sponsored by Mitsubishi Electric Corporation.

The Rules of the Game

  • To enter the contest, participants must read and accept the Contest Terms and Conditions.
  • For the sake of visual comparability of the results, all land cover maps shown in figures or illustrations should follow the color palette of the class tables below.

 

Land Cover Color Palette for Track 1

Building Damage Color Palette for Track 2

  • The results should be submitted to the Codalab competition websites for evaluation.
  • Ranking between the participants will be based on the metrics as described in the Submission and Evaluation Section.
  • The maximum number of trials of one team for each track is ten in the test phase.
  • The submission server of the test phase will be opened on March 2, 2025, at 23:59 UTC.
  • The deadline for result submission is March 7, 2025, 23:59 UTC (e.g., March 7, 2025, 18:59 in New York City; March 8, 2025, 00:59 in Paris; or March 8, 2025, 07:59 in Beijing).
  • Each team needs to submit a short paper of 1–2 pages by February 28, 2025, clarifying the used approach, the team members, their Codalab accounts, and one Codalab account to be used for the test phase. The paper must follow the IGARSS paper template and should be submitted via Microsoft Forms.
  • For the winning teams, the internal deadline for full paper submission is April 18, 2025, 23:59 UTC (e.g., April 18, 2025, 19:59 in New York City, April 19, 2025, 01:59 in Paris, or 07:59 in Beijing). IGARSS Full paper submission deadline is May 15, 2025.
  • Important: Only team members explicitly stated on these documents will be considered for the next steps of the DFC, i.e., being eligible to be awarded as winners and joining the author list of the respective potential publications (IGARSS25 and JSTARS articles). Furthermore, no overlap among teams is allowed, i.e., one person can only be a member of one team. Adding more team members after the end of the development phase, i.e., after submitting these documents is not possible.
  • Persons directly involved in the organization of the contest, i.e., the (co-)chairs of IADF as well as the co-organizers are not allowed to enter the contest. Please note that IADF WG leads can enter the contest. They have been excluded from relevant information concerning the content of the DFC to ensure fair competition.

 

Failure to follow any of these rules will automatically make the submission invalid, resulting in the manuscript not being evaluated and disqualification from the prize award.

Participants in the Contest are requested not to submit an extended abstract to IGARSS 2025 by the corresponding conference deadline in January 2025. Only contest winners (the participants behind the eight best-ranking submissions) will submit a 4-page paper describing their approach to the Contest by April 18, 2025. The received manuscripts will be reviewed by the Award Committee of the Contest, and reviews will be sent to the winners. Winners will submit their 4-page full papers to the Award Committee of the Contest by May 12, 2025; the Committee will then take care of the submission to the IGARSS Data Fusion Contest Community Contributed Session by May 15, 2025, for inclusion in the IGARSS Technical Program and Proceedings.

Acknowledgments

The IADF TC chairs would like to thank Capella Space, MAXAR, Umbra Space, the University of Tokyo, RIKEN, and ETH Zurich for providing the data, the IEEE GRSS for continuously supporting the annual Data Fusion Contest through funding and resources, and Mitsubishi Electric Corporation for sponsoring the winner team prize.

Organizers

Sponsor

For any information about past Data Fusion Contests, released data, and the related terms and conditions, please email iadf_chairs@grss-ieee.org.

2024 IEEE GRSS Data Fusion Contest

The goal of the IEEE GRSS 2024 Data Fusion Contest was to design and develop an algorithm combining multi-source data to classify flood surface water extent, that is, water and non-water areas. Provided data sources include optical and Synthetic Aperture Radar (SAR) remote sensing images as well as a digital terrain model, land use, and water occurrence.

2023 IEEE GRSS Data Fusion Contest

The 2023 IEEE GRSS Data Fusion Contest, organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (GRSS), the Aerospace Information Research Institute under the Chinese Academy of Sciences, the Universität der Bundeswehr München, and GEOVIS Earth Technology Co., Ltd., aims to push current research on building extraction, classification, and 3D reconstruction towards urban reconstruction with fine-grained semantic information of roof types.

2022 IEEE GRSS Data Fusion Contest

The semi-supervised learning challenge of the 2022 IEEE GRSS Data Fusion Contest aims to promote research in automatic land cover classification from only partially annotated training data consisting of VHR RGB imagery.

2021 IEEE GRSS Data Fusion Contest

The 2021 IEEE GRSS Data Fusion Contest aimed to promote research on geospatial AI for social good. The global objective was to build models for understanding the state and changes of artificial and natural environments from multimodal and multitemporal remote sensing data towards sustainable developments. The 2021 Data Fusion Contest consisted of two challenge tracks: Detection of settlements without electricity and Multitemporal semantic change detection.

2020 IEEE GRSS Data Fusion Contest

The 2020 Data Fusion Contest aimed to promote research in large-scale land cover mapping from globally available multimodal satellite data. The task was to train a machine learning model for global land cover mapping based on weakly annotated samples. The Contest consisted of two challenge tracks: Track 1: Landcover classification with low-resolution labels, and Track 2: Landcover classification with low- and high-resolution labels.

2019 IEEE GRSS Data Fusion Contest

The 2019 Data Fusion Contest aimed to promote research in semantic 3D reconstruction and stereo using machine intelligence and deep learning applied to satellite images. The global objective was to reconstruct both a 3D geometric model and a segmentation of semantic classes for an urban scene. Incidental satellite images, airborne lidar data, and semantic labels were provided to the community.

2018 IEEE GRSS Data Fusion Contest

The 2018 Data Fusion Contest aimed to promote progress on fusion and analysis methodologies for multi-source remote sensing data. It consisted of a classification benchmark, the task to be performed being urban land use and land cover classification. The following advanced multi-source optical remote sensing data are provided to the community: multispectral LiDAR point cloud data (intensity rasters and digital surface models), hyperspectral data, and very high-resolution RGB imagery.

2017 IEEE GRSS Data Fusion Contest

The 2017 IEEE GRSS Data Fusion Contest focused on global land use mapping using open data. Participants were provided with remote sensing data (Landsat and Sentinel-2) and vector layers (OpenStreetMap), as well as a 17-class ground reference at 100 × 100 m resolution over five cities worldwide (local climate zones; see Stewart and Oke, 2012): Berlin, Hong Kong, Paris, Rome, and Sao Paulo. The task was to provide land use maps over four other cities: Amsterdam, Chicago, Madrid, and Xi’an. The maps were to be uploaded to an evaluation server. Please refer to the links below to learn more about the challenge, download the data, and submit your results (even now that the contest is over).

2016 IEEE GRSS Data Fusion Contest

The 2016 IEEE GRSS Data Fusion Contest, organized by the IADF TC, was opened on January 3, 2016. The submission deadline was April 29, 2016. Participants submitted open-topic manuscripts using the VHR and video-from-space data released for the competition. 25 teams worldwide participated in the Contest. Evaluation and ranking were conducted by the Award Committee.

Paper: Mou, L.; Zhu, X.; Vakalopoulou, M.; Karantzalos, K.; Paragios, N.; Le Saux, B.; Moser, G. & Tuia, D., Multi-temporal very high resolution from space: Outcome of the 2016 IEEE GRSS Data Fusion Contest, IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., in press.

2015 IEEE GRSS Data Fusion Contest

The 2015 Contest focused on multiresolution and multisensor fusion at extremely high spatial resolution. A 5-cm resolution color RGB orthophoto and a LiDAR dataset, comprising both the raw 3D point cloud with a density of 65 pts/m² and a digital surface model with a point spacing of 10 cm, were distributed to the community. These data were collected using an airborne platform over the harbor and urban area of Zeebruges, Belgium. The Department of Communication, Information, Systems, and Sensors of the Belgian Royal Military Academy acquired and provided the dataset. Participants were asked to submit original IGARSS-style full papers using these data for the generation of either 2D or 3D thematic mapping products at extremely high spatial resolution.

Paper: M. Campos-Taberner, A. Romero-Soriano, C. Gatta, G. Camps-Valls, A. Lagrange, B. Le Saux, A. Beaupère, A. Boulch, A. Chan-Hon-Tong, S. Herbin, H. Randrianarivo, M. Ferecatu, M. Shimoni, G. Moser, and D. Tuia. Processing of extremely high-resolution LiDAR and RGB data: Outcome of the 2015 IEEE GRSS Data Fusion Contest. Part A: 2D contest. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., 9(12):5547–5559, 2016.

Paper: A.-V. Vo, L. Truong-Hong, D.F. Laefer, D. Tiede, S. d’Oleire Oltmanns, A. Baraldi, M. Shimoni, G. Moser, and D. Tuia. Processing of extremely high-resolution LiDAR and RGB data: Outcome of the 2015 IEEE GRSS Data Fusion Contest. Part B: 3D contest. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., 9(12):5560–5575, 2016.

2014 IEEE GRSS Data Fusion Contest

The 2014 Contest involved two datasets acquired at different spectral ranges and spatial resolutions: a coarser-resolution long-wave infrared (LWIR, thermal infrared) hyperspectral data set and fine-resolution data acquired in the visible (VIS) wavelength range. The former was acquired by an 84-channel imager covering wavelengths between 7.8 and 11.5 μm with approximately 1-meter spatial resolution. The latter is a series of color images acquired during separate flight-lines with approximately 20-cm spatial resolution. The two data sources cover an urban area near Thetford Mines in Québec, Canada, and were acquired and provided for the Contest by Telops Inc. (Canada). A ground truth with 7 land-cover classes was provided, and the mapping was performed at the finer of the two spatial resolutions.

Paper: W. Liao, X. Huang, F. Van Coillie, S. Gautama, A. Pizurica, W. Philips, H. Liu, T. Zhu, M. Shimoni, G. Moser, D. Tuia. Processing of Multiresolution Thermal Hyperspectral and Digital Color Data: Outcome of the 2014 IEEE GRSS Data Fusion Contest. IEEE J. Sel. Topics Appl. Earth Observ. and Remote Sensing, 8(6): 2984-2996, 2015.

2013 IEEE GRSS Data Fusion Contest

The 2013 Contest involved two datasets, a hyperspectral image and a LiDAR-derived Digital Surface Model (DSM), both at the same spatial resolution (2.5 m). The hyperspectral imagery has 144 spectral bands in the 380 nm to 1050 nm region. The dataset was acquired over the University of Houston campus and the neighboring urban area. A ground reference with 15 land-use classes was available.

Paper: Debes, C.; Merentitis, A.; Heremans, R.; Hahn, J.; Frangiadakis, N.; van Kasteren, T.; Liao, W.; Bellens, R.; Pizurica, A.; Gautama, S.; Philips, W.; Prasad, S.; Du, Q.; Pacifici, F.: Hyperspectral and LiDAR Data Fusion: Outcome of the 2013 GRSS Data Fusion Contest. IEEE J. Sel. Topics Appl. Earth Observ. and Remote Sensing, 7(6): 2405-2418.

2012 IEEE GRSS Data Fusion Contest

The 2012 Contest was designed to investigate the potential of multi-modal/multi-temporal fusion of very high spatial resolution imagery in various remote sensing applications. Three different types of data sets (optical, SAR, and LiDAR) over downtown San Francisco were made available by DigitalGlobe, Astrium Services, and the United States Geological Survey (USGS), including QuickBird, WorldView-2, TerraSAR-X, and LiDAR imagery. The image scenes covered a number of large buildings, skyscrapers, commercial and industrial structures, a mixture of community parks and private housing, and highways and bridges. Following the success of the multi-angular Data Fusion Contest in 2011, each participant was again required to submit for review a paper describing in detail the problem addressed, the method used, and the final results generated.
Paper: Berger, C.; Voltersen, M.; Eckardt, R.; Eberle, J.; Heyer, T.; Salepci, N.; Hese, S.; Schmullius, C.; Tao, J.; Auer, S.; Bamler, R.; Ewald, K.; Gartley, M.; Jacobson, J.; Buswell, A.; Du, Q.; Pacifici, F., “Multi-Modal and Multi-Temporal Data Fusion: Outcome of the 2012 GRSS Data Fusion Contest”, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol.6, no.3, pp.1324-1340, June 2013.

2011 IEEE GRSS Data Fusion Contest

A set of WorldView-2 multi-angular images was provided by DigitalGlobe for the 2011 Contest. This unique set was composed of five Ortho Ready Standard multi-angular acquisitions, including both 16-bit panchromatic and multispectral 8-band images. The data were collected over Rio de Janeiro (Brazil) in January 2010 within a three-minute time frame, with satellite elevation angles of 44.7°, 56.0°, and 81.4° in the forward direction, and 59.8° and 44.6° in the backward direction. Given the large variety of possible applications, each participant was free to choose a research topic, exploring the most creative use of optical multi-angular information. At the end of the Contest, each participant was required to submit a paper describing in detail the problem addressed, the method used, and the final results generated. The submitted papers were automatically formatted to hide the names and affiliations of the authors, ensuring the neutrality and impartiality of the reviewing process.
Paper: F. Pacifici, Q. Du, “Foreword to the Special Issue on Optical Multiangular Data Exploitation and Outcome of the 2011 GRSS Data Fusion Contest”, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 1, pp.3-7, February 2012.

2009-2010 IEEE GRSS Data Fusion Contest

In 2009-2010, the aim of the Contest was to perform change detection using multi-temporal and multi-modal data. Two pairs of data sets were available over Gloucester, UK, before and after a flood event. The data set contained SPOT and ERS images (before and after the disaster). The optical and SAR images were provided by CNES. As in previous years' Contests, the ground truth used to assess the results was not provided to the participants. Each set of results was first evaluated and ranked using the Kappa coefficient. The best five results were then used to perform decision fusion with majority voting, and re-ranking was carried out after evaluating each result's level of improvement with respect to the fusion result.
Paper: N. Longbotham, F. Pacifici, T. Glenn, A. Zare, M. Volpi, D. Tuia, E. Christophe, J. Michel, J. Inglada, J. Chanussot, Q. Du “Multi-modal Change Detection, Application to the Detection of Flooded Areas: Outcome of the 2009-2010 Data Fusion Contest”, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 1, pp. 331-342, February 2012.

2008 IEEE GRSS Data Fusion Contest

The 2008 Contest was dedicated to the classification of very high spatial resolution (1.3 m) hyperspectral imagery. The task was again to obtain a classification map as accurate as possible with respect to the unknown (to the participants) ground reference. The data set was collected by the Reflective Optics System Imaging Spectrometer (ROSIS-03) optical sensor with 115 bands covering the 0.43-0.86 μm spectral range.

Paper: G. Licciardi, F. Pacifici, D. Tuia, S. Prasad, T. West, F. Giacco, J. Inglada, E. Christophe, J. Chanussot, P. Gamba, “Decision fusion for the classification of hyperspectral data: outcome of the 2008 GRS-S data fusion contest”, IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 11, pp. 3857-3865, November 2009.

2007 IEEE GRSS Data Fusion Contest

In 2007, the Contest theme was urban mapping using synthetic aperture radar (SAR) and optical data; nine ERS amplitude data sets and two Landsat multispectral images were made available. The task was to obtain a classification map as accurate as possible with respect to the ground reference (unknown to the participants), depicting land-cover and land-use patterns for the urban area under study.

Paper: F. Pacifici, F. Del Frate, W. J. Emery, P. Gamba, J. Chanussot, “Urban mapping using coarse SAR and optical data: outcome of the 2007 GRS-S data fusion contest”, IEEE Geoscience and Remote Sensing Letters, vol. 5, no. 3, pp. 331-335, July 2008.

2006 IEEE GRSS Data Fusion Contest

The focus of the 2006 Contest was on the fusion of multispectral and panchromatic images. Six simulated Pléiades images were provided by the French National Space Agency (CNES). Each data set included a very high spatial resolution panchromatic image (0.80 m resolution) and its corresponding multispectral image (3.2 m resolution). A high spatial resolution multispectral image served as ground reference; it was used by the organizing committee for evaluation but not distributed to the participants.
Paper: L. Alparone, L. Wald, J. Chanussot, C. Thomas, P. Gamba, L. M. Bruce, “Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S data fusion contest”, IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 10, pp. 3012–3021, Oct. 2007.

Current membership (as of January 2024)

Contact

The IADF TC is open to people with a wide range of expertise and backgrounds, working in diverse application areas. We welcome you to:

  • Provide feedback, suggestions, or ideas for future activities
  • Propose input for the next newsletter
  • Propose the next Data Fusion Contest
  • Propose a new IADF Working Group
You can engage with us by contacting the Committee Chairs by email, following us on Twitter, joining the LinkedIn IEEE GRSS Data Fusion Discussion Forum, or joining the IADF TC!
