Image Analysis and Data Fusion (IADF)
The TC comprises three working groups dedicated to distinct fields within the scope of image analysis and data fusion, namely WG-MIA (Machine/Deep Learning for Image Analysis), WG-ISP (Image and Signal Processing), and WG-BEN (Benchmarking).
The TC maintains this site as a platform to share ideas and inform the community about recent advances in image analysis and data fusion, and it distributes a regular e-mail newsletter to all committee members covering recent advancements, datasets, and opportunities. The IADF TC activities include:
- Organization of a special session held annually during the IGARSS meeting, gathering cutting-edge contributions covering various issues related to analysis and fusion of multi-modal and multi-temporal earth observation data via artificial intelligence, machine/deep learning, computer vision, and image/signal processing.
- Organization of the Data Fusion Contest, a scientific challenge held annually since 2006. The Contest is open not only to IEEE members but to everyone, with the goal of promoting innovation and benchmarking in analyzing multi-source big earth observation data.
- Organization of EarthVision, a workshop on large scale computer vision for remote sensing imagery held in conjunction with one of the major computer vision conferences (e.g., CVPR). The workshop aims to foster collaboration between the computer vision and earth observation communities and to advance automated interpretation of remotely sensed data.
- Operation of the GRSS Data and Algorithm Standard Evaluation (DASE) website. The website provides data sets and algorithm evaluation standards to support research, development, and testing of algorithms for remote sensing data analysis (e.g., machine/deep learning, image/signal processing).
If you are interested in receiving the newsletter, please join the IADF TC.
The IADF Technical Committee encourages participation from all its members. The committee organization includes the Chair, two Co-Chairs, and three working groups led by working group leads.
IADF Technical Committee Chair
|Dr. Ronny Hänsch|
German Aerospace Center (DLR)
IADF Technical Committee Co-Chair
|Dr. Claudio Persello|
University of Twente
IADF Technical Committee Co-Chair
|Dr. Gemine Vivone|
National Research Council
Working Group Leads
WG on Machine/Deep Learning for Image Analysis (WG-MIA)
|Dr. Dalton Lunga|
Oak Ridge National Laboratory
|Prof. Xian Sun|
Chinese Academy of Sciences
WG on Image and Signal Processing (WG-ISP)
Istanbul Technical University (VITO)
LASTIG, IGN/ENSG, UGE
Oak Ridge National Laboratory
WG on Benchmarking (WG-BEN)
|Prof. Michael Schmitt|
Munich University of Applied Sciences
|Seyed Ali Ahmadi|
K. N. Toosi University of Technology
The First IEEE GRSS & IADF School on Computer Vision for Earth Observation will take place Oct 3-7, 2022, covering Image Fusion, Explainable AI for the Earth Sciences, Big Geo-Data, Multi-source Image Analysis, Deep Learning for Spectral Unmixing, SAR Image Analysis, and Learning with Zero/Few Labels, all taught by experts! The school will include lectures, theoretical sessions, hands-on sessions, and future livestreams. Find out more: IADF-School
The EarthVision21 workshop will take place at the next CVPR. We already have awesome keynote speakers and challenging contests! Don't miss the chance to submit your paper on Computer Vision and AI/ML for Remote Sensing and Earth Observation: grss-ieee.org/
It is not too late to join the IEEE GRSS Data Fusion Contest 2021: Geospatial AI for Social Good. Two tracks. Exciting tasks. Nice prizes! www.grss-ieee.
The IADF working group on Image and Signal Processing (ISP) opened the GRSL Special Stream “Fusion of Multimodal Remote Sensing Data for Analysis and Interpretation”. Editors: Yang Xu, Gemine Vivone, Wenzhi Liao, Ronny Hänsch. Submission: Feb. 1 - Apr. 30 www.classic.grss-ieee.org/specail-stream-on-fusion-of-multimodal-remote-sensing-...
The committee distributes an e-mail newsletter to all committee members on a monthly basis regarding recent advancements, datasets, and opportunities. If you are interested in receiving the newsletter, please join the TC. If you want to let us know about upcoming conference/workshop/journal deadlines, new datasets or challenges, or vacant positions in remote sensing and earth observation, we would highly appreciate your input.
- Special issue on “Benchmarking in Remote Sensing Data Science”
- Special Issue on “2020 Gaofen Challenge on Automated High-Resolution Earth Observation Image Interpretation”
- Special Stream on "Machine Learning in Remote Sensing towards the Sustainable Development Goals"
- Special Stream on "Explainable Machine Learning for Remote Sensing"
Special / Invited Sessions
- IGARSS 2021: The main IADF session "Data Fusion: The AI Era"
- IGARSS 2021: The DFC21 session "IEEE GRSS Data Fusion Contest"
- IGARSS 2021: "Machine Learning Datasets in Remote Sensing" (WG-BEN)
- IGARSS 2021: "Multi-resolution and Multimodal Remote Sensing Image Processing and Interpretation" (WG-ISP)
EarthVision is a workshop on large-scale computer vision for remote sensing imagery held in conjunction with CVPR, one of the major computer vision conferences. The workshop aims to foster collaboration between the computer vision and earth observation communities and to advance automated interpretation of remotely sensed data.
Data and Algorithm Standard Evaluation (DASE)
The GRSS Data and Algorithm Standard Evaluation (DASE) website provides data sets and algorithm evaluation standards to support research, development, and testing of algorithms for remote sensing data analysis (e.g., machine/deep learning, image/signal processing).
2020 Gaofen Challenge on Automated High-Resolution Earth Observation Image Interpretation
The 2020 Gaofen Challenge (en.sw.chreos.org/) is the most influential remote sensing challenge in China and has been held successfully for four years with the support of the China High-Resolution Major Scientific and Technological Projects. The challenge tracks include remote sensing image classification, object detection, semantic segmentation, and more. Thousands of remote sensing images have been published, with sensors covering optical, SAR, multispectral, and other modalities. The 2020 Gaofen Challenge is technically co-sponsored by IEEE GRSS and ISPRS. More than 700 teams from more than 20 countries (including China, Germany, France, Japan, Australia, Singapore, India, and Sweden) joined the Challenge. The final phase ended on October 11, and the workshop was held on October 30, 2020.
ML in RS Tutorial
Despite the wide application of machine learning to analyze remotely sensed data, the complexity of these methods often hinders their use to their full potential. The aim of this tutorial is threefold: First, to provide insights into the algorithmic principles behind state-of-the-art machine learning approaches. Second, to illustrate the benefits and limitations of machine learning with practical examples. Third, to inspire new ideas by discussing unusual applications from remote sensing and other domains. Coming next at IGARSS21.
- Sep 1, 2020: GRSS Image Analysis and Data Fusion TC & Sample Activity: Benchmarking ML4RS
- Dec 8, 2020: Mapping urban deprivation and socio-economic inequalities using earth observation and deep learning
- October 26th, 2022: Scaling Geospatial Artificial Intelligence for Disaster Response
IEEE GRSS IADF Photo Contest 2023
Are you working on image analysis or data fusion for Earth observation? Do you have exciting work to share with the community? Then submit your illustrations to the IEEE GRSS IADF Photo Contest 2023! We are looking for enlightening illustrations that explain your method, fancy visualizations of the input, intermediate representations, or final results, or a visual summary of a core problem that you are aiming to solve. Use this opportunity to make your work known in the community (and get some GRSS prizes on the way). Theme of the IEEE GRSS IADF Photo Contest 2023: “Eye in the Sky”
To encourage the active participation of all TC members, the IADF organization comprises, in addition to the TC Chair and Co-Chairs, three working groups (WGs). These working groups focus on Machine/Deep Learning for Image Analysis (MIA), Image and Signal Processing (ISP), and Benchmarking (BEN). Each WG addresses a specific topic, provides input and feedback to the TC chairs, and organizes topic-related events (such as workshops, contests, tutorials, and invited sessions). Please find the WGs and their thematic scopes below. If you feel that certain research or application areas are within the scope of IADF but not well represented, feel free to propose additional WGs.
WG on Machine/Deep Learning for Image Analysis (WG-MIA)
The WG-MIA fosters theoretical and practical advancements in Machine Learning and Deep Learning (ML/DL) for the analysis of geospatial and remotely sensed images. Under the umbrella of the IADF TC, WG-MIA serves as a global network that promotes the development of ML/DL techniques and their application in the context of various geospatial domains. It aims at connecting engineers, scientists, teachers, and practitioners, promoting scientific/technical advancements and geospatial applications. To promote the societal impact of ML-based solutions for the analysis of geospatial data, we seek accountability, transparency, and explainability. We encourage the development of ethical, understandable, and trustworthy techniques.
Current Activities: Organization of invited sessions at international conferences and special issues in international journals.
WG on Image and Signal Processing (WG-ISP)
The WG-ISP promotes advances in signal and image processing relying upon the use of remotely sensed data. It serves as a global, multi-disciplinary, network for both data fusion and image analysis supporting activities about several specific topics under the umbrella of the GRSS IADF TC. It aims at connecting people, supporting educational initiatives for both students and professionals, and promoting advances in signal processing for remotely sensed data.
The WG-ISP oversees different topics, such as pansharpening, super-resolution, data fusion, segmentation/clustering, denoising, despeckling, image enhancement, image restoration, and many others.
Current Activities: Organization of invited sessions at international conferences, special issues in international journals, and challenges and contests using remotely sensed data.
WG on Benchmarking (WG-BEN)
Datasets have always been important in methodical remote sensing. They have always been used as a backbone for the development and evaluation of new algorithms. In today’s era of big data and deep learning, datasets have become even more important than before: Large, well-curated, and annotated datasets are of crucial importance for the training and validation of state-of-the-art models for information extraction from increasingly versatile multi-sensor remote sensing data. In addition, due to the increasing number of new methods being proposed by scientists and engineers, the possibility to compare these methods in a fair and transparent manner has become more and more important.
The WG-BEN addresses these challenges and provides input with respect to evaluation methods, datasets, benchmarks, competitions, and tools for the creation of reference data. Furthermore, we contribute to evaluation sites and databases.
Current Activities: Organization of invited sessions at IGARSS, contribution to an online database for datasets (DASE 2.0), and showcasing of selected public datasets in the monthly IADF Newsletter.
2023 IEEE GRSS Data Fusion Contest
Large-Scale Fine-Grained Building Classification for Semantic Urban Reconstruction
The Challenge Task
Buildings are essential components of urban areas. While research on the extraction and 3D reconstruction of buildings is widely conducted, information on fine-grained roof types of buildings is usually ignored. This limits the potential of further analysis, e.g., in the context of urban planning applications. The fine-grained classification of building roof type from satellite images is a highly challenging task due to ambiguous visual features within the satellite imagery. The difficulty is further increased by the lack of corresponding fine-grained building classification datasets.
The 2023 IEEE GRSS Data Fusion Contest, organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (GRSS), the Aerospace Information Research Institute under the Chinese Academy of Sciences, the Universität der Bundeswehr München, and GEOVIS Earth Technology Co., Ltd. aims to push current research on building extraction, classification, and 3D reconstruction towards urban reconstruction with fine-grained semantic information of roof types.
To this aim, the DFC23 establishes a large-scale, fine-grained, and multi-modal benchmark for the classification of building roof types. It consists of two challenging competition tracks investigating the fusion of optical and SAR data, while focusing on roof type classification and building height estimation, respectively. The data provided by the DFC23 includes several novel properties:
Globally Distributed Large-Scale Dataset. A novel large-scale urban building classification and reconstruction dataset is provided. Buildings are distributed across seventeen cities in six continents to cover a wide range of different building styles. This allows capturing the heterogeneity of cities in different continents with various landforms.
Fine-Grained Roof Type Categories. The buildings are labeled according to a detailed (fine-grained) categorization of roof types. The DFC23 provides nearly 300k instances with twelve different types of building roofs which renders building classification significantly more challenging.
Multimodal Data. To facilitate multimodal data fusion, not only optical imagery, but also Synthetic Aperture Radar (SAR) images are provided. The information captured by these different modalities can be jointly exploited, potentially resulting in the development of more accurate building extraction and classification models.
Track 1: Building Detection and Roof Type Classification
This track focuses on the detection and classification of building roof types from high-resolution optical satellite imagery and SAR images. The SAR and optical modalities are expected to provide complementary information. The given dataset covers seventeen cities worldwide across six continents. The classification task consists of twelve fine-grained, predefined roof types. Figure 1 shows an example.
Track 2: Multi-Task Learning of Joint Building Extraction and Height Estimation
This track defines the joint task of building extraction and height estimation, both fundamental for building reconstruction. As in Track 1, the input data are multimodal optical and SAR satellite imagery. Building extraction and height estimation from single-view satellite imagery depend on semantic features extracted from the imagery. Compared with conventional separate implementations, multi-task learning offers a potentially superior solution by reusing features and forming implicit constraints between the tasks. Satellite images are provided with reference data, i.e., building annotations and normalized Digital Surface Models (nDSMs). Participants are required to reconstruct building heights and extract building footprints. Figure 2 shows an example.
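As an illustration of the multi-task idea (a synthetic sketch, not the contest baseline), a shared feature map can feed two lightweight task heads, one for the building mask and one for height regression; all shapes and weights below are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shared feature map for one image tile (C x H x W),
# as a real model's encoder might produce.
feats = rng.standard_normal((16, 32, 32))

# Two task-specific 1x1 "heads" sharing the same features:
w_mask = rng.standard_normal(16)    # -> building / background logit
w_height = rng.standard_normal(16)  # -> height in metres

mask_logit = np.tensordot(w_mask, feats, axes=(0, 0))    # (H, W)
height_map = np.tensordot(w_height, feats, axes=(0, 0))  # (H, W)

building_mask = mask_logit > 0           # binary footprint
height_map = np.maximum(height_map, 0)   # heights are non-negative

# A simple implicit constraint between the two tasks:
# background pixels should have zero height.
height_map[~building_mask] = 0.0
```

In a trained network the constraint would be learned through shared weights and a joint loss rather than imposed by masking, but the feature reuse between the two heads is the same idea.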
The contest in both tracks will consist of two phases:
Phase 1: Participants are provided with training data and additional validation images (without corresponding reference data) to train and validate their algorithms. Participants can submit results for the validation set to the Codalab competition website (Track 1, Track 2) to get feedback on the performance. The performance of the best submission from each account will be displayed on the leaderboard. In parallel, participants submit a short description of the approach used to be eligible to enter Phase 2.
Phase 2: Participants receive the test data set (without the corresponding reference data) and submit their results within seven days of its release. After evaluation of the results, four winners per track are announced. They then have one month to write their manuscripts, which will be included in the IGARSS 2023 proceedings. Manuscripts are four pages in IEEE format and describe the addressed problem, the proposed method, and the experimental results.
- January 3: Contest opening: release of training and validation data
- January 4: Evaluation server begins accepting submissions for validation data set
- February 28: Short description of the approach in 1-2 pages for each track is sent to firstname.lastname@example.org (using the IGARSS paper template)
- March 7: Release of test data; evaluation server begins accepting test submissions
- March 13: Evaluation server stops accepting submissions
- March 15: Updated and final description of the approach
- March 28: Winner announcement
- April 23: Internal deadline for papers, DFC Committee review process
- May 22: Submission deadline of final papers to be published in the IGARSS 2023 proceedings
Images of the DFC23 dataset are collected from the SuperView-1 (or “GaoJing” in Chinese), Gaofen-2, and Gaofen-3 satellites, with spatial resolutions of 0.5 m, 0.8 m, and 1 m, respectively. Normalized Digital Surface Models (nDSMs) provided for reference in Track 2 are produced from stereo images captured by Gaofen-7 and WorldView with a ground sampling distance (GSD) of roughly 2 m. Data was collected from seventeen cities on six continents to provide a large and representative data set of high diversity regarding landforms, architecture, and building types.
There are twelve fine-grained roof type classes based on the geometry of the roof. Table 1 provides the detailed definition of these roof types.
Table 1 Categories of roof type
The data is split into a training set, a validation set, and a test set. Participants can access the training set (including the optical images, the corresponding SAR images, and the reference data) and the validation set (without reference data) during the development phase. In the test phase, the test set (without reference data) will be released.
Submission and Evaluation
Track 1: Participants should submit results to the Codalab competition website to get feedback on the performance. Submissions should follow the MS COCO results format (a JSON file), including segmentation results represented by polygons (i.e., a sequence of points delineating the building contour) or RLE (run-length encoding), one fine-grained category with a confidence score per instance, and one optional bounding box.
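A minimal sketch of what one entry in such a COCO-style results file might look like; the image ID, category numbering, polygon coordinates, and file name below are illustrative only, since the official ones come with the DFC23 data:

```python
import json

# One detection per entry, in MS COCO results format.
results = [
    {
        "image_id": 1,
        "category_id": 3,          # one of the twelve roof types
        "score": 0.92,             # confidence
        # Polygon segmentation: a flat [x1, y1, x2, y2, ...] list
        # delineating the building contour (RLE is also allowed).
        "segmentation": [[10.0, 10.0, 60.0, 10.0, 60.0, 40.0, 10.0, 40.0]],
        "bbox": [10.0, 10.0, 50.0, 30.0],  # optional [x, y, w, h]
    },
]

with open("track1_submission.json", "w") as f:
    json.dump(results, f)
```

Each additional building instance is simply another dictionary appended to the same list before dumping.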
For evaluation, we adopt the standard COCO mean average precision with an IoU threshold of 0.5 (AP50), where the IoU is evaluated based on the masks converted from the ground truth and the submitted masks. The participants with the highest AP50 are declared the winners. Note that AP50 is a very strict metric. Categories are taken into consideration when computing the score.
Track 2: Participants should submit results to the Codalab competition website to get feedback on the performance. The submitted results should include two parts. The first part is a building extraction result, which is the same as the previous track with the only exception that the category is not taken into consideration in this track. The second part is a pixel-wise height estimation result, which is a map of the same resolution as the input. The second part should be a (possibly compressed) tif file with the same name as the corresponding optical image.
For evaluation, the metric for building extraction is the same as in Track 1 (categories are not taken into consideration), and the metric for height estimation is the threshold accuracy δ1 = N1/N, where N is the total number of pixels and N1 is the number of pixels that meet max(h/ĥ, ĥ/h) < 1.25, with h the reference height and ĥ the predicted height. The final metric is the average of the two, i.e., (AP50 + δ1)/2.
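A minimal sketch of the threshold-accuracy computation, assuming the usual δ1 definition with a 1.25 factor; the official scoring code may differ in details such as the handling of zero-height pixels:

```python
import numpy as np

def threshold_accuracy(h_ref, h_pred, threshold=1.25, eps=1e-6):
    """Fraction of pixels whose predicted height is within a
    multiplicative factor `threshold` of the reference height."""
    h_ref = np.asarray(h_ref, dtype=float) + eps   # eps avoids division
    h_pred = np.asarray(h_pred, dtype=float) + eps  # by zero at h = 0
    ratio = np.maximum(h_ref / h_pred, h_pred / h_ref)
    return float(np.mean(ratio < threshold))

# Toy example: three of the four pixels fall within the 1.25 factor.
ref = np.array([10.0, 20.0, 5.0, 8.0])
pred = np.array([11.0, 19.0, 2.0, 7.0])
acc = threshold_accuracy(ref, pred)  # 0.75
```

In the actual evaluation the arrays would be the full-resolution nDSM reference and the submitted height map, flattened over all pixels.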
A baseline that shows how to use the DFC23 data to train models, make submissions, etc., can be found here.
Results, Awards, and Prizes
The first- to fourth-ranked teams in each track will be declared winners and will:
- Present their approach in an invited session dedicated to the DFC23 at IGARSS 2023
- Publish their manuscripts in the proceedings of IGARSS 2023
- Be awarded IEEE Certificates of Recognition
- The first to third-ranked teams of each track will receive $5,000, $2,000, and $1,000 (USD), respectively, as a cash prize.
- The authors of the first and second-ranked teams of each track will co-author a journal paper which will summarize the outcome of the DFC23 and will be submitted with open access to IEEE JSTARS.
- Top-ranked teams will be awarded during IGARSS 2023, Pasadena, USA in July 2023. The costs for open-access publication in JSTARS will be supported by the GRSS. The winner team prize is kindly sponsored by the organizing partners.
The Rules of the Game
- The dataset can be openly downloaded at ieee-dataport.org/competitions/2023-ieee-grss-data-fusion-contest-large-scale-fine-grained-building-classification.
- Validation and test data can be requested by registering for the Contest at IEEE DataPort.
- To enter the contest, participants must read and accept the Contest Terms and Conditions.
- Participants are expected to submit results as described in the Submission and Evaluation section.
- The results will be submitted to the Codalab competition website (Track 1, Track 2) for evaluation.
- Ranking between the participants will be based on the metrics as described in the Submission and Evaluation Section.
- The maximum number of trials of one team is five per day in the test phase.
- The submission server of the test phase will be opened on March 7, 2023 at 23:59 UTC-12 hours. The deadline for result submission and final description is March 15, 2023, 23:59 UTC-12 hours (e.g., March 16, 2023, 6:59 in New York City, 12:59 in Paris, or 19:59 in Beijing).
- Each team needs to submit a short paper of 1-2 pages clarifying the approach used, the team members, their Codalab accounts, and the one Codalab account to be used for the test phase by February 28, 2023. Please send the paper to email@example.com using the IGARSS paper template. Only teams that have submitted the short description complete with all information will be admitted to the test phase.
- For the winning teams, the internal deadline for full paper submission is April 23, 2023, 23:59 UTC-12 hours (e.g., April 23, 2023, 7:59 in New York City, 13:59 in Paris, or 19:59 in Beijing).
- Important: Only team members explicitly stated on these documents will be considered for the next steps of the DFC, i.e., being eligible to be awarded as winners and joining the author list of the respective potential publications (IGARSS23 and JSTARS articles). Furthermore, no overlap among teams is allowed, i.e., one person can only be a member of one team. Adding more team members after the end of the development phase, i.e., after submitting these documents is not possible.
- Persons directly involved in the organization of the contest, i.e., the (co-)chairs of IADF as well as the co-organizers are not allowed to enter the contest. Please note that IADF WG leads can enter the contest. They have been actively excluded from all information concerning the content of the DFC to ensure a fair competition.
Failure to follow any of these rules will automatically make the submission invalid, resulting in the manuscript not being evaluated and disqualification from the prize award.
Participants in the Contest are requested not to submit an extended abstract describing their approach to the DFC23 to IGARSS 2023 by the corresponding conference deadline in January 2023. Only contest winners (the participants with the best-ranking submissions) will submit a 4-page paper describing their approach by April 23, 2023. The received manuscripts will be reviewed by the Award Committee of the Contest, and reviews will be sent to the winners. Winners will submit the 4-page full paper to the Award Committee by May 22, 2023; the Committee will then take care of the submission to the IGARSS Data Fusion Contest Community Contributed Session by May 31, 2023, for inclusion in the IGARSS Technical Program and Proceedings.
Terms and Conditions
Participants of this challenge acknowledge that they have read and agree to the following Contest Terms and Conditions:
- In any scientific publication using the data, the data shall be referenced as follows: “[REF. NO.] 2023 IEEE GRSS Data Fusion Contest. Online: grss-ieee.org/technical-committees/image-analysis-and-data-fusion/”.
- Any scientific publication using the data shall include a section “Acknowledgement”. This section shall include the following sentence: “The authors would like to thank the IEEE GRSS Image Analysis and Data Fusion Technical Committee, Aerospace Information Research Institute, Chinese Academy of Sciences, Universität der Bundeswehr München, and GEOVIS Earth Technology Co., Ltd. for organizing the Data Fusion Contest”.
- Any scientific publication using the data shall refer to the following paper:
- [Huang et al., 2022] Huang, X., Ren, L., Liu, C., Wang, Y., Yu, H., Schmitt, M., Hänsch, R., Sun, X., Huang, H., Mayer, H., 2022. Urban Building Classification (UBC) – A Dataset for Individual Building Detection and Classification from Satellite Imagery. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1413-1421.
The IADF TC chairs would like to thank the Aerospace Information Research Institute under the Chinese Academy of Sciences, the Universität der Bundeswehr München, and GEOVIS Earth Technology Co., Ltd. for providing the data and the IEEE GRSS for continuously supporting the annual Data Fusion Contest through funding and resources.
The winners of the competition will receive a total of $16k as prizes, courtesy of GEOVIS Earth Technology Co., Ltd.
For any information about past Data Fusion Contests, released data, and the related terms and conditions, please email firstname.lastname@example.org.
2022 IEEE GRSS Data Fusion Contest
The semi-supervised learning challenge of the 2022 IEEE GRSS Data Fusion Contest aims to promote research in automatic land cover classification from only partially annotated training data consisting of VHR RGB imagery.
2021 IEEE GRSS Data Fusion Contest
The 2021 IEEE GRSS Data Fusion Contest aimed to promote research on geospatial AI for social good. The global objective was to build models for understanding the state and changes of artificial and natural environments from multimodal and multitemporal remote sensing data towards sustainable developments. The 2021 Data Fusion Contest consisted of two challenge tracks: Detection of settlements without electricity and Multitemporal semantic change detection.
2020 IEEE GRSS Data Fusion Contest
The 2020 Data Fusion Contest aimed to promote research in large-scale land cover mapping from globally available multimodal satellite data. The task was to train a machine learning model for global land cover mapping based on weakly annotated samples. The Contest consisted of two challenge tracks: Track 1: Landcover classification with low-resolution labels, and Track 2: Landcover classification with low- and high-resolution labels.
2019 IEEE GRSS Data Fusion Contest
The 2019 Data Fusion Contest aimed to promote research in semantic 3D reconstruction and stereo using machine intelligence and deep learning applied to satellite images. The global objective was to reconstruct both a 3D geometric model and a segmentation of semantic classes for an urban scene. Incidental satellite images, airborne lidar data, and semantic labels were provided to the community.
2018 IEEE GRSS Data Fusion Contest
The 2018 Data Fusion Contest aimed to promote progress on fusion and analysis methodologies for multi-source remote sensing data. It consisted of a classification benchmark, the task to be performed being urban land use and land cover classification. The following advanced multi-source optical remote sensing data are provided to the community: multispectral LiDAR point cloud data (intensity rasters and digital surface models), hyperspectral data, and very high-resolution RGB imagery.
2017 IEEE GRSS Data Fusion Contest
The 2017 IEEE GRSS Data Fusion Contest focused on global land use mapping using open data. Participants were provided with remote sensing (Landsat and Sentinel-2) data and vector layers (OpenStreetMap), as well as a 17-class ground reference at 100 m x 100 m resolution over five cities worldwide (local climate zones, see Stewart and Oke, 2012): Berlin, Hong Kong, Paris, Rome, and Sao Paulo. The task was to provide land use maps over four other cities: Amsterdam, Chicago, Madrid, and Xi’an. The maps were to be uploaded to an evaluation server. Please refer to the links below to learn more about the challenge, download the data, and submit your results (even now that the contest is over).
2016 IEEE GRSS Data Fusion Contest
The 2016 IEEE GRSS Data Fusion Contest, organized by the IADF TC, was opened on January 3, 2016. The submission deadline was April 29, 2016. Participants submitted open-topic manuscripts using the VHR and video-from-space data released for the competition. 25 teams worldwide participated in the Contest. Evaluation and ranking were conducted by the Award Committee.
Paper: Mou, L.; Zhu, X.; Vakalopoulou, M.; Karantzalos, K.; Paragios, N.; Le Saux, B.; Moser, G. & Tuia, D., Multi-temporal very high resolution from space: Outcome of the 2016 IEEE GRSS Data Fusion Contest, IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., in press.
2015 IEEE GRSS Data Fusion Contest
The 2015 Contest focused on multiresolution and multisensor fusion at extremely high spatial resolution. A 5-cm resolution color RGB orthophoto and a LiDAR dataset, comprising both the raw 3D point cloud with a density of 65 pts/m² and a digital surface model with a point spacing of 10 cm, were distributed to the community. These data were collected using an airborne platform over the harbor and urban area of Zeebruges, Belgium. The dataset was acquired and provided by the Department of Communication, Information, Systems, and Sensors of the Belgian Royal Military Academy. Participants were asked to submit original IGARSS-style full papers using these data for the generation of either 2D or 3D thematic mapping products at extremely high spatial resolution.
Paper: M. Campos-Taberner, A. Romero-Soriano, C. Gatta, G. Camps-Valls, A. Lagrange, B. Le Saux, A. Beaupère, A. Boulch, A. Chan-Hon-Tong, S. Herbin, H. Randrianarivo, M. Ferecatu, M. Shimoni, G. Moser, and D. Tuia. Processing of extremely high-resolution LiDAR and RGB data: Outcome of the 2015 IEEE GRSS Data Fusion Contest. Part A: 2D contest. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., 9(12):5547–5559, 2016.
Paper: A.-V. Vo, L. Truong-Hong, D.F. Laefer, D. Tiede, S. d’Oleire Oltmanns, A. Baraldi, M. Shimoni, G. Moser, and D. Tuia. Processing of extremely high-resolution LiDAR and RGB data: Outcome of the 2015 IEEE GRSS Data Fusion Contest. Part B: 3D contest. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., 9(12):5560–5575, 2016.
2014 IEEE GRSS Data Fusion Contest
The 2014 Contest involved two datasets acquired at different spectral ranges and spatial resolutions: a coarser-resolution long-wave infrared (LWIR, thermal infrared) hyperspectral data set and fine-resolution data acquired in the visible (VIS) wavelength range. The former was acquired by an 84-channel imager covering wavelengths between 7.8 and 11.5 μm with approximately 1-m spatial resolution. The latter is a series of color images acquired during separate flight lines with approximately 20-cm spatial resolution. The two data sources cover an urban area near Thetford Mines in Québec, Canada, and were acquired and provided for the Contest by Telops Inc. (Canada). A ground truth with 7 land cover classes is provided, and the mapping is performed at the higher of the two data resolutions.
Paper: W. Liao, X. Huang, F. Van Coillie, S. Gautama, A. Pizurica, W. Philips, H. Liu, T. Zhu, M. Shimoni, G. Moser, D. Tuia. Processing of Multiresolution Thermal Hyperspectral and Digital Color Data: Outcome of the 2014 IEEE GRSS Data Fusion Contest. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., 8(6):2984-2996, 2015.
2013 IEEE GRSS Data Fusion Contest
The 2013 Contest involved two datasets, a hyperspectral image and a LiDAR-derived Digital Surface Model (DSM), both at the same spatial resolution (2.5m). The hyperspectral imagery has 144 spectral bands in the 380 nm to 1050 nm region. The dataset was acquired over the University of Houston campus and the neighboring urban area. A ground reference with 15 land use classes is available.
Paper: C. Debes, A. Merentitis, R. Heremans, J. Hahn, N. Frangiadakis, T. van Kasteren, W. Liao, R. Bellens, A. Pizurica, S. Gautama, W. Philips, S. Prasad, Q. Du, F. Pacifici, "Hyperspectral and LiDAR Data Fusion: Outcome of the 2013 GRSS Data Fusion Contest", IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 6, pp. 2405-2418, June 2014.
2012 IEEE GRSS Data Fusion Contest
The 2012 Contest was designed to investigate the potential of multi-modal/multi-temporal fusion of very high spatial resolution imagery in various remote sensing applications. Three different types of data (optical, SAR, and LiDAR) over downtown San Francisco were made available by DigitalGlobe, Astrium Services, and the United States Geological Survey (USGS), including QuickBird, WorldView-2, TerraSAR-X, and LiDAR imagery. The image scenes covered a number of large buildings, skyscrapers, commercial and industrial structures, a mixture of community parks and private housing, and highways and bridges. Following the success of the multi-angular Data Fusion Contest in 2011, each participant was again required to submit for review a paper describing in detail the problem addressed, the method used, and the final results generated.
Paper: C. Berger, M. Voltersen, R. Eckardt, J. Eberle, T. Heyer, N. Salepci, S. Hese, C. Schmullius, J. Tao, S. Auer, R. Bamler, K. Ewald, M. Gartley, J. Jacobson, A. Buswell, Q. Du, F. Pacifici, "Multi-Modal and Multi-Temporal Data Fusion: Outcome of the 2012 GRSS Data Fusion Contest", IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 3, pp. 1324-1340, June 2013.
2011 IEEE GRSS Data Fusion Contest
A set of WorldView-2 multi-angular images was provided by DigitalGlobe for the 2011 Contest. This unique set was composed of five Ortho Ready Standard multi-angular acquisitions, each including a 16-bit panchromatic image and an 8-band multispectral image. The data were collected over Rio de Janeiro (Brazil) in January 2010 within a three-minute time frame, with satellite elevation angles of 44.7°, 56.0°, and 81.4° in the forward direction, and 59.8° and 44.6° in the backward direction. Since there was a large variety of possible applications, each participant was free to choose a research topic, exploring the most creative use of optical multi-angular information. At the end of the Contest, each participant was required to submit a paper describing in detail the problem addressed, the method used, and the final results generated. The submitted papers were automatically formatted to hide the authors' names and affiliations, ensuring a neutral and impartial review process.
Paper: F. Pacifici, Q. Du, “Foreword to the Special Issue on Optical Multiangular Data Exploitation and Outcome of the 2011 GRSS Data Fusion Contest”, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 1, pp.3-7, February 2012.
2009-2010 IEEE GRSS Data Fusion Contest
In 2009-2010, the aim of the Contest was to perform change detection using multi-temporal and multi-modal data. Two pairs of datasets were available over Gloucester, UK, before and after a flood event; the set contained SPOT (optical) and ERS (SAR) images acquired before and after the disaster, provided by CNES. As in previous years' Contests, the ground truth used to assess the results was not provided to the participants. Each set of results was first ranked using the Kappa coefficient. The five best results were then combined by decision fusion with majority voting, and a re-ranking was carried out by evaluating each submission's improvement with respect to the fusion result.
Paper: N. Longbotham, F. Pacifici, T. Glenn, A. Zare, M. Volpi, D. Tuia, E. Christophe, J. Michel, J. Inglada, J. Chanussot, Q. Du “Multi-modal Change Detection, Application to the Detection of Flooded Areas: Outcome of the 2009-2010 Data Fusion Contest”, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 1, pp. 331-342, February 2012.
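The ranking-and-fusion scheme described above can be sketched as follows: `kappa` computes Cohen's Kappa between a predicted change map and the reference, and `majority_vote` fuses the top-ranked maps pixel-wise. A minimal illustration of the evaluation idea, not the organizers' actual scoring code; array shapes and function names are assumptions:

```python
import numpy as np

def kappa(pred, ref, n_classes):
    """Cohen's Kappa coefficient between a predicted map and the reference."""
    cm = np.zeros((n_classes, n_classes))
    for p, r in zip(pred.ravel(), ref.ravel()):
        cm[p, r] += 1
    n = cm.sum()
    po = np.trace(cm) / n                       # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)

def majority_vote(maps):
    """Pixel-wise majority vote over a list of classification maps."""
    stacked = np.stack(maps)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, stacked)
```

Each submission would first be ranked by `kappa` against the hidden reference; the five best maps would then be passed to `majority_vote` to produce the fusion baseline used for re-ranking.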
2008 IEEE GRSS Data Fusion Contest
The 2008 Contest was dedicated to the classification of very high spatial resolution (1.3 m) hyperspectral imagery. The task was again to obtain a classification map as accurate as possible with respect to the unknown (to the participants) ground reference. The data set was collected by the Reflective Optics System Imaging Spectrometer (ROSIS-03) optical sensor with 115 bands covering the 0.43-0.86 μm spectral range.
Paper: G. Licciardi, F. Pacifici, D. Tuia, S. Prasad, T. West, F. Giacco, J. Inglada, E. Christophe, J. Chanussot, P. Gamba, "Decision fusion for the classification of hyperspectral data: outcome of the 2008 GRS-S data fusion contest", IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 11, pp. 3857-3865, November 2009.
2007 IEEE GRSS Data Fusion Contest
In 2007, the Contest theme was urban mapping using synthetic aperture radar (SAR) and optical data, and 9 ERS amplitude data sets and 2 Landsat multi-spectral images were made available. The task was to obtain a classification map as accurate as possible with respect to the unknown (to the participants) ground reference, depicting land cover and land use patterns for the urban area under study.
Paper: F. Pacifici, F. Del Frate, W. J. Emery, P. Gamba, J. Chanussot, "Urban mapping using coarse SAR and optical data: outcome of the 2007 GRS-S data fusion contest", IEEE Geoscience and Remote Sensing Letters, vol. 5, no. 3, pp. 331-335, July 2008.
2006 IEEE GRSS Data Fusion Contest
The focus of the 2006 Contest was on the fusion of multispectral and panchromatic images. Six simulated Pléiades images were provided by the French National Space Agency (CNES). Each dataset included a very high spatial resolution panchromatic image (0.80-m resolution) and its corresponding multispectral image (3.2-m resolution). A high spatial resolution multispectral image was available as ground reference; it was used by the organizing committee for evaluation but not distributed to the participants.
Paper: L. Alparone, L. Wald, J. Chanussot, C. Thomas, P. Gamba, L. M. Bruce, “Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S data fusion contest”, IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 10, pp. 3012–3021, Oct. 2007.
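The pansharpening task behind the 2006 Contest can be illustrated with a Brovey-style transform, one classical baseline among the many algorithms the paper compares; this is a generic sketch under assumed array shapes, not any contestant's method, and the 4x resolution ratio mirrors the 0.80-m / 3.2-m data described above:

```python
import numpy as np

def brovey_pansharpen(ms_up, pan, eps=1e-6):
    """Brovey transform: rescale each upsampled multispectral band by the
    ratio of the panchromatic image to the multispectral intensity.

    ms_up: (bands, H, W) multispectral image already upsampled to the pan grid
    pan:   (H, W) panchromatic image
    """
    intensity = ms_up.mean(axis=0)   # simple intensity estimate over bands
    ratio = pan / (intensity + eps)  # per-pixel spatial detail to inject
    return ms_up * ratio             # broadcast the ratio over all bands
```

For the 4x factor, `ms_up` could be produced naively with `np.repeat(np.repeat(ms, 4, axis=1), 4, axis=2)`; real pipelines would use a proper interpolation and co-registration step instead.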
Current membership (as of February 2021)
The IADF TC is open to a wide range of people with different expertise and backgrounds, working in different application areas. We are happy if you:
- Provide feedback, suggestions, or ideas for future activities
- Propose input for the next newsletter
- Propose the next Data Fusion Contest
- Propose a new IADF Working Group
You can engage with us by contacting the Committee Chairs by email, following us on Twitter, joining the LinkedIn IEEE GRSS Data Fusion Discussion Forum, or joining the IADF TC!