Fusion of Multimodal Remote Sensing Data for Analysis and Interpretation

A GRSL special stream – 2021
Editors:
· Dr. Yang Xu, Nanjing University of Science and Technology, China (Lead Editor)
· Dr. Gemine Vivone, National Research Council and University of Salerno, Italy (Guest Editor)
· Dr. Wenzhi Liao, Flemish Institute for Technological Research (VITO) and Ghent University, Belgium (Guest Editor)
· Dr. Ronny Hänsch, German Aerospace Center (DLR), Germany (Guest Editor)


Recent advances in sensor and aircraft technologies allow us to acquire huge amounts of remote sensing data, benefiting Earth observation. Diverse information about the Earth's surface can be derived from data acquired by multiple sources: multi- and hyperspectral images can reveal material composition, panchromatic images can reach fine spatial resolutions, synthetic aperture radar (SAR) data can be used to map different properties of the terrain, while light detection and ranging (LiDAR) data provides the elevation of the observed objects.

Unfortunately, a single data source is often insufficient to reach the required spectral and spatial resolutions. Thus, data from multiple sources, acquired by sensors on board different platforms, should be combined. Multimodal data fusion has received enormous attention in a wide variety of applications because it combines the advantages of the individual modalities: their complementary properties can be exploited to improve upon each data source considered separately. Consequently, the analysis of the acquired scene (classification, target detection, geological mapping, etc.) can be improved by fusing multimodal data.

The rapid expansion in the number and availability of multimodal datasets poses new challenges for their effective and efficient processing. Although considerable research has been devoted to multimodal fusion of remotely sensed data, many technical challenges remain open. IEEE Geoscience and Remote Sensing Letters (GRSL) calls for papers for a Special Stream on “Fusion of Multimodal Remote Sensing Data for Analysis and Interpretation”. The objective of this special stream is to provide a forum for the academic and industrial communities to report recent results on multimodal fusion of remote sensing data from the perspectives of theories, algorithms, architectures, and applications.

The topics to be covered are as follows:
• Multimodal fusion for resolution enhancement
• Multimodal image processing
• Multimodal image classification
• Multimodal image change detection
• Multimodal image segmentation
• Spatial-temporal data fusion
• Multimodal image feature fusion
• Multimodal data decision fusion
• Fusion of grid optical data and unstructured point cloud data
• Applications exploiting multimodal data
• New benchmark multimodal datasets
• Machine learning using multimodal data

The Special Stream will be announced to the GRSS community on the GRSL special stream website. The guest editors are members of the working group on Image and Signal Processing (ISP) of the GRSS Image Analysis and Data Fusion Technical Committee, so the Special Stream will also be announced and advertised via the committee's communication channels (newsletter, Twitter, etc.). Within the Chinese remote sensing community, the call will be distributed through groups such as WeChat groups. In addition, the Special Stream will be advertised in the GRSS newsletter.

Schedule
Starting date for submission: 1 February 2021
Closing date for submission: 15 June 2021
Paper Submission Link: mc.manuscriptcentral.com/grsl