• Overview

    The Image Analysis and Data Fusion Technical Committee (IADF TC) of the Geoscience and Remote Sensing Society serves as a global, multidisciplinary network for geospatial image analysis (e.g., machine learning, deep learning, image and signal processing, and big data) and data fusion (e.g., multi-sensor, multi-scale, and multi-temporal data integration). It aims to connect people and resources, educate students and professionals, and promote theoretical advances and best practices in image analysis and data fusion.

    Join this Technical Committee

    The committee maintains this site as a platform to share ideas and inform the community about recent advances in image analysis and data fusion, and it distributes a regular e-mail newsletter to all committee members covering recent advances, datasets, and opportunities. The IADF TC activities include:

    • organization of a special session held annually during the IGARSS meeting, gathering cutting-edge contributions covering various issues related to the analysis and fusion of multi-modal and multi-temporal earth observation data via artificial intelligence, machine/deep learning, computer vision, and image/signal processing.
    • organization of the Data Fusion Contest, a scientific challenge held annually since 2006. The Contest is open not only to IEEE members but to everyone, with the goal of promoting innovation and benchmarking in analyzing multi-source big earth observation data.
    • organization of EarthVision, a workshop on large scale computer vision for remote sensing imagery held in conjunction with one of the major computer vision conferences (e.g., CVPR). The workshop aims to foster collaboration between the computer vision and earth observation communities and to advance automated interpretation of remotely sensed data.
    • operation of the GRSS Data and Algorithm Standard Evaluation (DASE) website. The website provides data sets and algorithm evaluation standards to support research, development, and testing of algorithms for remote sensing data analysis (e.g., machine/deep learning, image/signal processing).
  • Data Sets
    Indian Pines: This scene was gathered by the AVIRIS sensor over the Indian Pines test site in north-western Indiana and consists of 145 × 145 pixels and 224 spectral reflectance bands in the wavelength range 0.4–2.5 µm. This scene is a subset of a larger one. The Indian Pines scene contains two-thirds agriculture and one-third forest or other natural perennial vegetation. There are two major dual-lane highways, a rail line, as well as some low-density housing, other built structures, and smaller roads. Since the scene was acquired in June, some of the crops present (corn, soybeans) are in early stages of growth with less than 5% coverage. The available ground truth is designated into sixteen classes, which are not all mutually exclusive. We have also reduced the number of bands to 200 by removing bands covering the region of water absorption: [104-108], [150-163], 220. [courtesy of: http://www.ehu.es/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes] download (matlab format): image data ground truth
    Pavia Centre: This scene was acquired by the ROSIS sensor during a flight campaign over Pavia, northern Italy. The image has 102 spectral bands and a size of 1096 × 1096 pixels; some of the samples contain no information and appear as broad black strips in the image. The geometric resolution is 1.3 meters. The ground-truth data identifies 9 classes. [courtesy of: http://www.ehu.es/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes] download (matlab format): image data ground truth
    Salinas: This scene was collected by the 224-band AVIRIS sensor over Salinas Valley, California, and is characterized by high spatial resolution (3.7-meter pixels). The area covered comprises 512 lines by 217 samples. As with the Indian Pines scene, we discarded the 20 water absorption bands, in this case bands: [108-112], [154-167], 224. This image was available only as at-sensor radiance data. It includes vegetables, bare soils, and vineyard fields. The Salinas ground truth contains 16 classes. [courtesy of: http://www.ehu.es/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes] download (matlab format): image data ground truth
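The MATLAB-format scenes above can be read with SciPy. Below is a minimal sketch; the function names are my own, and the `.mat` variable keys vary between files, so inspect each download with `scipy.io.whosmat` before relying on any key name. The band list shown is the Indian Pines water-absorption list quoted in the text.

```python
# Sketch: loading one of the MATLAB-format hyperspectral scenes and
# removing water-absorption bands. File paths and variable keys are
# assumptions for illustration; check them with scipy.io.whosmat().
import numpy as np
from scipy.io import loadmat

def load_scene(img_path, gt_path, img_key, gt_key):
    """Return a (H, W, bands) data cube and a (H, W) ground-truth label map."""
    cube = loadmat(img_path)[img_key].astype(np.float64)
    labels = loadmat(gt_path)[gt_key]
    return cube, labels

def drop_bands(cube, bad_bands):
    """Remove the listed bands (1-based indices, as written in the text)."""
    keep = [b for b in range(cube.shape[2]) if b + 1 not in bad_bands]
    return cube[:, :, keep]

# Indian Pines water-absorption bands, as listed above:
# [104-108], [150-163], and 220 (1-based).
bad = set(range(104, 109)) | set(range(150, 164)) | {220}
```

The same `drop_bands` helper applies to Salinas with its own list ([108-112], [154-167], 224).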

    Viareggio 2013 trial: Hyperspectral data acquired over Viareggio, Italy, with the SIM.GA avionic hyperspectral VNIR/SWIR instrument (manufactured by Selex ES), along with panchromatic data, are released to the scientific community for the development and benchmarking of anomaly detection, object detection, and anomalous change detection algorithms. The data set is fully ground-truthed and documented and includes scenarios and experiments specifically conceived for detection algorithm comparison and benchmarking. More details at http://dx.doi.org/10.1109/JSTARS.2016.2531747 [courtesy of http://rsipg.dii.unipi.it/mod/page/view.php?id=21 ] Registration and download: http://rsipg.dii.unipi.it
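As a rough illustration of the kind of baseline the Viareggio trial is meant to benchmark, here is a minimal global RX (Reed-Xiaoli) anomaly detector sketched in Python. This is a textbook baseline under my own naming, not the trial's reference implementation, and it assumes the scene is already loaded as a (H, W, bands) array.

```python
# Minimal global RX (Reed-Xiaoli) anomaly detector: score each pixel by
# its Mahalanobis distance from the global background statistics.
# Illustrative sketch only, not the Viareggio reference code.
import numpy as np

def rx_scores(cube):
    """cube: (H, W, B) array; returns an (H, W) anomaly-score map."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(np.float64)
    mu = x.mean(axis=0)
    # Regularize the covariance slightly so it stays invertible.
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(b)
    d = x - mu
    scores = np.einsum("ij,jk,ik->i", d, np.linalg.inv(cov), d)
    return scores.reshape(h, w)

# Synthetic check: an implanted bright pixel should score highest.
rng = np.random.default_rng(0)
scene = rng.normal(size=(16, 16, 8))
scene[3, 4] += 10.0  # implant an anomaly
scores = rx_scores(scene)
```

In practice, local-window variants of RX and the data set's own evaluation protocol (see the JSTARS paper linked above) are used for the actual benchmarking.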
  • Contacts

    Dr. Naoto Yokoya
    Image Analysis and Data Fusion Technical Committee Chair
    Geoinformatics Unit
    RIKEN Center for Advanced Intelligence Project (AIP)
    Tokyo, Japan
    Email: naoto.yokoya@riken.jp

    Dr. Ronny Hänsch
    Department SAR Technology
    German Aerospace Center (DLR)
    Wessling, Germany
    Email: rww.haensch@gmail.com

    Dr. Pedram Ghamisi
    Image Analysis and Data Fusion Technical Committee Co-Chair
    Machine Learning Group
    Exploration Department
    Helmholtz Institute Freiberg for Resource Technology
    Helmholtz-Zentrum Dresden-Rossendorf (HZDR)
    Freiberg, Germany
    Email: p.ghamisi@gmail.com

    You can contact the Committee Chairs by email at: iadf_chairs@grss-ieee.org

    Join the LinkedIn IEEE GRSS Data Fusion Discussion Forum.

  • Technical Resources

    1. A list of open-source software that is relevant to work in geoscience and remote sensing.

    2. The Image Analysis and Data Fusion Technical Committee's presentation at the IGARSS 2014 meeting.

  • Members

    Current membership (as of July 2017):

    Last Name First Name Affiliation Country
    Abadpour Sepideh University of Tehran Iran
    Abbasi Bahareh University of Tehran Iran
    Abdelkebir Layachi KGC Algeria Algeria
    Abdellatif Bassam Narss – National Authority For Remote Sensing & Space Sciences Egypt
    Abdi Ghasem University of Tehran Iran
    Achim Alin Univ of Bristol UK
    Aggarwal Hemant Kumar Indraprastha Institute of Information Technology-Delhi India
    Aghaee Reza K. N. Toosi University of Technology Iran
    Ahmadi S. Ali KN Toosi University of Technology Iran
    Ahn Yushin Michigan Technological University USA
    Ahrari Amirhossein University of Tehran Iran
    Akbari Davood Univ of Tehran Iran
    Akbari Vahid Department of Physics and Technology, University of Tromsø Norway
    Aksoy Selim Bilkent University Turkey
    Alampatta Shibumon Centre for Artificial Intelligence and Robotics, Ministry of Defence India
    Alazawi Fadia W Dijlah University College Iraq
    Alboody Ahed University Paul Sabatier, Toulouse France
    Ali Iftikhar Department of Geoinformation for Environmental Planning, Technical University of Berlin Germany
    Ali Syed Saad Pakistan Space and Upper Atmosphere Research Commission, SUPARCO, The National Space Agency of Pakistan Pakistan
    Alidoost Fatemeh University of Tehran Iran
    Aljumaily Harith Charles III University of Madrid Spain
    Alsayel M. The Quality Geospatial Engineering Com Saudi Arabia
    Alsubaie Naif Univ of Calgary Canada
    Al-Wassai FirouzAbdullah Swami Ramanand Teerth Marathwada University, Nanded India
    Amel Gacem LARESI Algeria
    Amidi Ali Univ of Twente Netherlands
    Amini Omran University of Tehran Iran
    Anfinsen Stian Normann UiT The Arctic University of Norway Norway
    Arastoo Behrooz Semnan Agriculture and Natural Resources Research Center Iran
    Aref Mohammad Mohseni Islamic Azad University Iran
    Arefi Hossein University of Tehran Iran
    Ashapure Akash Texas A&M University, Corpus Christi USA
    Ashokkumar Lavanya Swansea University UK
    Ashraf Salman GNS Science New Zealand
    Asl Mohsen Ghamary Khaje Nasir Toosi University of Technology, Tehran Iran
    Audebert Nicolas Onera France
    Awad Mohamad Remote Sensing Centre, CNRS Lebanon
    Ayazi S.M. Univ of Tehran Iran
    Ayoub Francois Caltech, Pasadena, CA USA
    Bagga Pankaj TCP Research Solution and E2Matrix India
    Bagheri Hossein Technische Universität München, Signal Processing in Earth Observation Germany
    Bai Yanbing Tohoku University Japan
    Banerjee Biplab Instituto Italiano Di Tecnologia Italy
    Baronti Stefano Nello Carrara Institute of Applied Physics (IFAC), National Research Council (CNR) Italy
    Barut Onur Middle East Technical University Turkey
    Basuni Agus Corporate Indonesia
    Becek Kazimierz Wroclaw University of Technology Poland
    Bechtel Benjamin Universität Hamburg Germany
    Benediktsson Jon Atli Univ of Iceland Iceland
    Berger Christian Department for Earth Observation, Institute of Geography, Friedrich-Schiller-University Jena Germany
    Bernabe Sergio University of Madrid Spain
    Bhattacharjee Indraneel Trupti Sapphire Geoconsultants Mongolia
    Bielski Conrad JRC, Ispra Italy
    Biswas Asim McGill University Canada
    Biswas Biswajit Dept of Computer Science and Engineering, Univ of Calcutta India
    Bisyu Said Adam Hak Department of Energy and Mineral Resources of South Sulawesi Province Indonesia
    Blonda Palma Consiglio Nazionale delle Ricerche – Istituto di Studi sui Sistemi Intelligenti per l’Automazione CNR-ISSIA Italy
    Bodaghi Marzie Naghsh Click Data Processing International Company Iran
    Bodescu Florian Multidimension SRL, Bucharest Romania
    Boerner Wolfgang-Martin Dept of Electrical and Computer Engineering, Univ of Illinois at Chicago USA
    Borel Chris Riverside Research, OH USA
    Borne Frédéric CIRAD-Agricultural Research for Development, Montpellier France
    Bovolo Francesca Remote Sensing for Digital Earth – Fondazione Bruno Kessler Italy
    Bretschneider Timo R EADS Innovation Works Singapore Singapore
    Brown Myron Johns Hopkins University Applied Physics Laboratory USA
    Bruce Lori Mann Mississippi State Univ USA
    Bruckart Robert Corporate USA
    Bruzzone Lorenzo University of Trento Italy
    Campos-Taberer Manuel University of Valencia Spain
    Camps-Valls Gustau Univ of Valencia Spain
    Cao Zhimin Harbin Inst of Technology China
    Celik Turgay University of the Witwatersrand South Africa
    Chakraborty Srija Arizona State University USA
    Chamberland Martin Telops, Quebec Canada
    Chanussot Jocelyn GIPSA Lab, INP Grenoble France
    Charou Eleni Institute of Informatics & Telecommunications, National Center for Scientific Research ‘Demokritos’ Greece
    Chaudhary Sumit Kumar IIT (ISM) DHANBAD India
    Chen Chen University of Texas at Dallas USA
    Chen Frederick W. MIT Lincoln Labs USA
    Chen Haonan NOAA/Earth System Research Lab USA
    Chen Kaiqiang Institute of Electronics, Chinese Academy of Sciences China
    Chen Rubia Lotus (H&R) Inc. Taiwan ROC
    Chen Yushi Harbin Institute of Technology China
    Chetty Girija University of Canberra Australia
    Chini Marco Luxembourg Inst of Science and Technology Luxembourg
    Choi Hyunho Hanyang University South Korea
    Choi Jaewan Chungbuk National University Korea
    Choi Myungjin Korea Aerospace Research Institute Korea
    Choukade Yogesh SAI Consulting Engineers Pvt. Ltd. India
    Clausi David Univ of Waterloo Canada
    Clemente-Colón Pablo US National Ice Center USA
    Conte Roberto Univ of Salerno Italy
    Cossu Roberto European Space Agency UK
    Courty Nicolas IRISA/UBS France
    Crawford Melba Purdue University USA
    Csatho Beata M. Dept of Geology, Univ of Buffalo USA
    Dabiri Zahra University of Salzburg Austria
    Daniel Sandrine Capgemini S.A. France
    Das Kakali Department of Science and Technology, Government of West Bengal India
    Davis Curt Univ of Missouri USA
    Del Frate Fabio Univ of Rome, Tor Vergata Italy
    Dell’Acqua Fabio Dept of Electrical Computer and Biomedical Engineering, Univ of Pavia Italy
    Demirkesen Can Inst of Science and Engineering, Galatasaray Univ, Ortakoy, Istanbul Turkey
    de Morsier Frank Picterra Switzerland
    Demuzere Matthias UGent Belgium
    Dhumal Rajesh Dr. B. A. M. University, Aurangabad India
    Djiknavorian Pascal Laval University Canada
    Dlamini Wisdom M. Swaziland National Trust Commission Swaziland
    Doan Huong T. X. ESRI Vietnam Company Ltd. Vietnam
    Dobigeon Nicolas University of Toulouse, IRIT/INP-ENSEEIHT France
    Dong Yanfang Institute of Earthquake Science, China Earthquake Administration China
    dos Santos Jefersson Alex Universidade Federal de Minas Gerais Brazil
    dos Santos Valdenira Ferreira Researcher at IEPA/CPAq Brazil
    Du Bo LIESMARS, Wuhan Univ China
    Du Jenny Q. Dept of Electrical and Computer Engineering, Mississippi State Univ USA
    Duzgun Sebnem Middle East Technical University, Ankara Turkey
    Ediriwickrema Jayantha US Environmental Protection Agency USA
    El Mezouar Miloud Chikr RCAM Laboratory, College of Engineering, Djillali Liabes University, Sidi-Bel-Abbes Algeria
    Elshamli Ahmed University of Guelph Canada
    Esfahlan Amin Ghasemi Basir, Inc Iran
    Eslami Mehrdad K.N.Toosi University of Technology, Tehran Iran
    Fabrizi Roberto Deimos Imaging Spain
    Falco Nicola University of Iceland Iceland
    Fang Leyuan Hunan University China
    Faria Fabio Augusto Federal University of Sao Paulo Brazil
    Farnaz Aneela National University of Science and Technology Pakistan
    Fauvel Mathieu University of Toulouse – INP ENSAT France
    Femiani John C. Arizona State University USA
    Figueroa-Villanueva Miguel A. Univ of Puerto Rico, Mayaguez Puerto Rico
    Flusser Jan Institute of Information Theory and Automation Czech Republic
    Foody Giles Univ Nottingham UK
    Freedman Ellis Serious Science LLC USA
    Furuya Takashi Tohoku University Japan
    Gader Paul Univ of Florida USA
    Gaikwad Bhanupriya Shivaji M.G.M Jawaharlal Nehru Engineering College India
    Gamba Paolo Univ of Pavia Italy
    Gangodagamage Chandana Los Alamos National Lab, NM USA
    Gao Lang China University of Geosciences (Wuhan) China
    Gao Lianru Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences China
    García Isis Galván Instituto Politécnico Nacional Mexico
    Garzelli Andrea Univ of Siena Italy
    Gasiewski Albin Univ of Colorado, Boulder USA
    Gercek Deniz Kocaeli University Turkey
    Ghahremani Morteza Tarbiat Modares University Iran
    Ghamchili Mehdi Unknown Iran
    Ghamisi Pedram DLR Germany
    Ghasrodashti Elham Kordi Dept. of Electrical Engineering, Shiraz University Iran
    Ghassemian Hassan Tarbiat Modares University, Tehran Iran
    Ghosh Suddhasheel Indian Institute of Technology Kanpur India
    Gibson Carey Ontario Ministry of Natural Resources Canada
    Giordano Sébastien French National Institute of Geographic and Forest Information (IGN) France
    Gittins Christopher United Technologies Aerospace Systems, Boston, MA USA
    Gokaraju Balakrishna Univ of West Alabama USA
    Goodenough David Univ of Victoria Canada
    Gopalakrishnan Tharani GIS Solutions (Pvt) Ltd Sri Lanka
    GÖRMÜŞ ESRA TUNÇ Department of Geomatics Engineering, Karadeniz Technical University, Trabzon Turkey
    Guo Qing Associate Professor at Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences China
    Gupta Sharad Maulana Azad National Institute of Technology, Bhopal India
    Hadavand Ahmad University of Tehran Iran
    Hallabia Hind Centre de Recherche en Numérique de Sfax Tunisia
    Han Xiaobing Wuhan University China
    Hanaizumi Hiroshi Hosei University Japan
    Hang Chen LIESMARS Shen Zhen Research and Development Center China
    Hang Renlong Nanjing University of Information Science & Technology China
    Hänsch Ronny Technische Universität Berlin Germany
    Hardy Caroline University of Johannesburg South Africa
    Hasanlou Mahdi College of Engineering, University of Tehran Iran
    Hashmi Syed Ghulam Mohayud Din Kinnaird College For Women University, Lahore Pakistan
    Hassani Hadiseh University of Tehran Iran
    He Chu Wuhan Univ China
    He Mingyi Northwestern Polytechnical University China
    He Wei Wuhan University China
    Hellwich Olaf Technical Univ of Berlin Germany
    Heylen Rob University of Antwerp Belgium
    Hichri Haikel Salem Computer and Information Sciences, King Saud University Saudi Arabia
    Hong Danfeng German Aerospace Center (DLR) Germany
    Hou Qiang China University of Geosciences China
    Huang Bormin Space Science and Engineering Center, University of Wisconsin-Madison USA
    Huang HuaGuo Key Laboratory for Silviculture and Conservation of Ministry of Education China
    Huang Rui Shanghai University China
    Huang Xin Wuhan University China
    Huo Lian-Zhi Institute of Remote Sensing Applications, Chinese Academy of Sciences China
    Hüt Christoph Institute of Geography, University of Cologne Germany
    Iler Amber Integrity Applications Incorporated USA
    Ilias Theodorakopoulos Univ of Patras Greece
    Itoh Yuki University of Massachusetts Amherst USA
    Iwasaki Akira Space Application Lab, Univ of Tokyo Japan
    Jacobs Nathan University of Kentucky USA
    Jaganjac Indir Self Employed Bosnia and Herzegovina
    Janez Fabrice Onera France
    Javan Farzaneh Dadaras University of Tehran Iran
    Jawak Shridhar National Centre For Antarctic & Ocean Research India
    Jayapalan Vivekan PSNA College of Engineering and Technology India
    Jeune Bjorn Consulting Geologist Guyana
    Jha Vikram Chandra Deep Learning Research Private Limited India
    Ji Wenjun McGill University Canada
    Jun Liu Wuhan Univ China
    Kääb Andreas Univ of Oslo Norway
    Kalogirou Vasileios EU SatCen Spain
    Kanan Christopher Rochester Institute of Technology USA
    Kandasamy Sivasathivel Canada Center for Remote Sensing Canada
    Kang Xudong Hunan University China
    Karami Azam University of Antwerp Belgium
    Karim Shahid Government College University, Lahore Pakistan
    Karimi Saeed University of Guilan Iran
    Kasabalis Dimitris Yloriki Ltd Greece
    Kasampalas Dimitrios unknown unknown
    Kasapoglu Necip Gokhan University of Tromsø Norway
    Kaya Gulsen Taskin Istanbul Technical University Turkey
    Kellenberger Benjamin University of Zurich Switzerland
    Kerekes John Rochester Institute of Technology USA
    Keshavarz Ahmad Persian Gulf University Iran
    Khan Haris Ahmad Institute of Space Technology, Islamabad Pakistan
    Khan J Abdullah Yogi Vemana University, Kadapa India
    Khan Jollozy Abdullah Yogi Vemna University India
    Khan Muhammad Murtaza National University of Sciences and Technology, Islamabad Pakistan
    Khobragade Anand N Maharashtra Remote Application Centre (MRSAC) India
    Khodadadzadeh Mahdi University of Trento Italy
    Khopkar Parag Dept of Geography, University of Pune, Ganeshkhind India
    Kianejad Seyed Abdollah Univ of Tafresh Iran
    Kiani Kamel Kerman Graduate University of Technology Iran
    Kim Yongmin Seoul National Univ South Korea
    King Roger Mississippi State Univ USA
    Kiyasu Senya Nagasaki Univ Japan
    Kleynhans Waldo Council for Scientific and Industrial Research South Africa
    Koeniguer Elise Onera France
    Kouchi Hamze Salimi Khaje Nasir Toosi Univ of Technology, Tehran Iran
    Kovalevskaya Nelley Institute for Water and Environmental Problems Russia
    Kulkarni Rushikesh Symbiosis Institute of Technology, Pune India
    Kumar Anil Indian Institute of Remote Sensing/ISRO India
    Kumar Deepak Central University of Karnataka India
    Kumar Harish Indian Institute of Technology, Roorkee India
    Kumar Mallenahalli Naresh National Remote Sensing Centre (ISRO) India
    Kumar S.S. Noorul Islam University India
    Kumar Uttam NASA Ames Research Center USA
    Kumar Vinay Indian Institute of Remote Sensing Dehradun India
    Kurte Kuldeep R. Indian Institute of Technology, Bombay India
    Kwon Heesung US Army Research Laboratory USA
    Lakshmi Venkat Univ of South Carolina USA
    Lamghari Soufiane ENSTA Bretagne Lab-STICC UMR CNRS France
    Lampropoulos George A.U.G. Signals Ltd., Toronto Canada
    Lanaras Charis ETH Zurich Switzerland
    Landgrebe David Purdue Univ USA
    Lang Oliver Astrium GEO-Information Services Germany
    Latourette Kevin Lockheed Martin, CO USA
    Lee Lily MIT Lincoln Labs USA
    Lee Matthew A. Mississippi State Univ. USA
    Lefevre Sébastien Univ. Bretagne-Sud France
    Lemoigne Jacqueline NASA Goddard Space Flight Center USA
    Le Saux Bertrand Onera France
    Lévesque Josée Valcartier Research Center Canada
    Liao Wenzhi Ghent University Belgium
    Licciardi Giorgio Antonino GIPSA Lab, Grenoble Institute of Technology, Université Joseph Fourier and Université Stendhal France
    Li Dongyang Chinese Academy of Sciences, Beijing China
    Li Peijun Peking University China
    Li Shutao Hunan University China
    Liu Bin Shanghai Jiao Tong University China
    Liu Dehong Mitsubishi Electric Research Laboratories, Cambridge, MA USA
    Liu Jie Peking University China
    Liu Rong Wuhan University China
    Liu Sicong Tongji University China
    Liu Yansong Rochester Institute of Technology USA
    Liu Yishu South China Normal University China
    Liu Peng Institute of Remote Sensing and Digital Earth, CAS China
    Li Wenbo Chinese Academy of Sciences, Hefei China
    Li Xiaodong Chinese Academy of Sciences, Wuhan China
    Li Xinghua Wuhan University China
    Lobzenev Vyacheslav Innovative Center, Moscow Russia
    Lombardo Pierfrancesco Sapienza, University of Rome Italy
    Longbotham Nathan Digital Globe, Longmont, CO USA
    Long David Brigham Young University, UT USA
    Lopez Sebastian University of Las Palmas de Gran Canaria Spain
    Lovergine Francesco Paolo ISSIA CNR – Bari Italy
    Lugari Alessandro Institute of Informatics and Telematics of CNR, IIT-CNR Italy
    Lunga Dalton Council for Scientific and Industrial Research (CSIR) Meraka Institute South Africa
    Luo Renbo Ghent University Belgium
    Luther Charles Office of Naval Research (retired) USA
    Lwin Aung Beihang University China
    Maboudi Mehdi Islamic Azad University Iran
    Madhusoodanan Dhanusham Indian Institute of Space Science and Technology (IIST) India
    Maggiori Emmanuel INRIA France
    Mahgoub Ahmed Agrisource Data USA
    Mahmoudi Fatemeh Tabib Univ of Tehran Iran
    Makrogiannis Sokratis Delaware State University USA
    Mallet Clément The French National Geographic Institute, IGN France
    Marcello Javier University Las Palmas of Gran Canaria Spain
    Marpu Prashanth Reddy Masdar Institute of Science and Technology UAE
    Marwaha Richa Indian Institute of Remote Sensing India
    Masson Eric University of Lille Sciences and Technologies France
    Matasci Giona Integrated Remote Sensing Studio – UBC, Vancouver Canada
    Medina Reynaldo USAA USA
    Melgani Farid Univ of Trento Italy
    Meng Lingfei Academic USA
    Merentitis Andreas AGT International, Darmstadt Germany
    Michael Nkenfack Hydrogeocartech Cameroon
    Migliazzi Mauro GLOBI Hi-Tech Srl Italy
    Minaei Masoud University of Vienna Austria
    Mirnazari Javad Universiti Teknologi Malaysia Malaysia
    Moghaddam Negin Fouladi Monash University Australia
    Mohammadzadeh Ali K. N. Toosi University of Technology, Faculty of Geodesy & Geomatics Engineering Iran
    Moon Wooil Univ of Manitoba Canada
    Moraru Simion Cooperative Trade University of Moldova Romania
    Moser Gabriele Univ of Genoa Italy
    Mou Lichao German Aerospace Center (DLR) Germany
    Munishi Subira Eva University of Dar Es Salaam Tanzania
    Mura Mauro Dalla GIPSA Lab, Grenoble Institute of Technology France
    Nagne Ajay Dr. B. A. M. University India
    Neagoe Victor-Emil Polytechnic University of Bucharest Romania
    Nisa Zaib Government College University Faisalabad Pakistan
    Nishii Ryuei Kyushu University Japan
    Nokay Hasan Piksel Teknoloji Inc Turkey
    Örenbaş Halit Yildiz Technical University Turkey
    Osadciw Lisa Ann Syracuse Univ USA
    Ostrowski Juliusz PCI Geomatics, Inc Canada
    Mangalraj P. Indian Institute of Information Technology India
    Pacifici Fabio Digital Globe, Inc., CO USA
    Palsson Frosti University of Iceland Iceland
    Palubinskas Gintautas German Aerospace Center (DLR) Germany
    Pandey Prem C. University of Leicester UK
    Panditrao Satej Indian National Center for Ocean Information Services, INCOIS India
    Pan Han School of Aeronautics and Astronautics, Shanghai Jiao Tong University China
    Parente Mario Univ of Massachusetts USA
    Paris Claudia University of Trento Italy
    Park Joong Yong Optech Inc, Ontario Canada
    Park Taejin Boston University USA
    Parvez Shahid University of the Punjab Pakistan
    Pascazio Vito Parthenope University of Naples Italy
    Patil Ganesh Unknown unknown
    Patil Sudhir GIS Dept- Reliance Jio Infocom Ltd. India
    Peijun Du Nanjing Univ China
    Pejman Amirhossein University Of Tehran Iran
    Persson Henrik Geoanalysis Sweden AB Sweden
    Pervez Wasim National University of Science & Technology, Islamabad Pakistan
    Peterson Erica University of Toronto Institute for Aerospace Studies Canada
    Petrou Zisis Information Technologies Institute (CERTH-ITI) Greece
    Pichel William G. NOAA Center for Satellite Applications and Research USA
    Pinz Axel Graz University of Technology Austria
    Plaza Antonio Department of Technology of Computers and Communications, Escuela Politecnica de Caceres, University of Extremadura Spain
    Plaza Javier University of Extremadura Spain
    Polewski Przemyslaw Munich University of Applied Sciences Germany
    Polli Diego European Centre for Training and Research in Earthquake Engineering Italy
    Prasad Anup Krishna Department of Applied Geology, Indian School of Mines, Dhanbad India
    Prasad Saurabh Univ of Houston USA
    Pugh Mark L. Air Force Research Lab USA
    Qazi Waqas Ahmed Institute of Space Technology Pakistan
    Qi Hairong University of Tennessee, Knoxville USA
    Qiao Xiaojun National Engineering Research Center for Information Technology in Agriculture China
    Rafiei Akbar Tehran University Iran
    Rahmaty Alireza Academic Iran
    Rahmawati Novi Gadjah Mada University Indonesia
    Rahnemoonfar Maryam Texas A&M University-Corpus Christi USA
    Rajabhushanam C. Bharath Institute Of Higher Education and Research India
    Rajabi Roozbeh Tarbiat Modares University Iran
    Ramdani Fatwa Tohoku Univ Japan
    Ramesh Nityanand Adep National Institute of Technology Karnataka India
    Ramiya Anandakumar M. Indian Institute of Space Science and Technology India
    Ramnath Vinod Teledyne Optech USA
    Ramuhalli Pradeep Pacific Northwest National Laboratory USA
    Ranchin Thierry Mines ParisTech France
    Randrianarivo Hicham Onera France
    Rasti Behnood University of Iceland Iceland
    Reising Steven Colorado State University USA
    Richards John Australian National University Australia
    Romeo Saverio University of Perugia Italy
    Roos Daniel Rodrigues Institute of Advanced Studies (IEAV) Brazil
    Rostami Reza Islamic Azad University of Qazvin Iran
    Sadeghi Yaser University of Tehran Iran
    Salehi Abbas Khaje Nasir Toosi University of Iran Iran
    Salehi Bahram LookNorth, Inc, Newfoundland Canada
    Samanta Arindam AER, Inc USA
    Samat Alim Nanjing Univ China
    Santana Tiago Moreira Hübner Cançado Universidade Federal de Minas Gerais Brazil
    Santara Anirban IIT Kharagpur India
    Sanyang Sankung Gambian Dept of Forestry The Gambia
    Sanyogita Chandrakar Pandit Ravishankar Shukla University, Raipur India
    Sathyamoorthy Dinesh Science & Technology Research Institute for Defence (STRIDE), Ministry of Defence Malaysia
    Satir Onur Yuzuncu Yil University Turkey
    Saxena Nidhi Malaviya National Institute of Technology, Jaipur India
    Scheunders Paul University of Antwerp Belgium
    Schmitt Michael Technical University of Munich (TUM) Germany
    Searle Les True 3D, Inc, Queensland Australia
    Sedighi Amir Univ of Tabriz Iran
    Serpico Sebastiano DITEN-Univ. Of Genoa Italy
    Sghaier Khouloud University of Sousse, ISITCOM Tunisia
    Shah Vijay P. Mississippi State University USA
    Sharma Vivek Karlsruhe Institute of Technology Germany
    Shaunak De Indian Institute of Technology India
    Sherrah Jamie DSTO Edinburgh Australia
    Shi Aiye College of Computer and Information Engineering, Hohai University China
    Shkvarko Yuriy Center for Research and Advanced Studies of the National Polytechnic Institute, CINVESTAV-IPN Mexico
    Shokr Mohammed Environment Canada Canada
    Sidda Geethesh Kumar CA State Univ Fullerton USA
    Siddique Muhammad Adnan Earth Observation & Remote Sensing, ETH Zurich Switzerland
    Singh Alok Indian Space Research Organization India
    Sinica-Sinavskis Juris Institute of Electronics and Computer Science Latvia
    Sirmacek Beril Create 4D Netherlands
    Slamani M. Adel MHA Technologies, Inc, Washington, DC USA
    Small David Univ of Zurich Switzerland
    Smits Paul European Commission DG Joint Research Centre (JRC) Italy
    Solanky Vijay MANIT Bhopal India
    Solberg Anne H Schistad Dept of Informatics, Univ of Oslo Norway
    Speckert Glen SpeckTech Inc USA
    Sridharan Harini Oak Ridge National Lab USA
    Srinivas Umamahesh Apple Computer, Inc., Cupertino, CA USA
    Srinivasa Perumal Padma Saveetha Engineering College, Chennai India
    Srivastava Shivangi University of Zurich Switzerland
    Streltsov Simon LongShortWay Inc., Boston, MA USA
    Su Yuan-Fong National Science and Technology Center for Disaster Reduction (NCDR) Taiwan ROC
    Sui YinLing National University of Defense Technology China
    Sullivan William HySpecIQ USA
    Sun Xu Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences China
    Sze-To Ho Yin University of Waterloo Canada
    Tahrat Omar StereoPhot, S.a.r.l Algeria
    Talebi Nahr Siamak Department of Civil and Geomatics Engineering, Tafresh University Iran
    Talukdar Gautam Wildlife Institute of India India
    Tanaka Shojiro Shimane University Japan
    Tang Hong State Key Laboratory of Earth Surface Processes and Resource Ecology, Beijing Normal University China
    Tao Jianwei Shanghai Jiao Tong University China
    Tarabalka Yuliya INRIA Sophia Antipolis France
    Taud Hind Instituto Politécnico Nacional Mexico
    Tian Bingwei Kyoto Univ Japan
    Tilton James NASA Goddard Space Flight Center USA
    Tompkinson William Ordnance Survey UK
    Torres Tannia Mayorga Universidad Central del Ecuador Ecuador
    Trouvé Emmanuel Univ Savoie Mont Blanc France
    Truong-Hong Linh Urban Modelling Group, University College Dublin Ireland
    Tsagkatakis Grigorios Foundation for Research and Technology Hellas Greece
    Tsatsoulis Costas Univ of North Texas USA
    Tsukamoto Naoko Tohoku University Japan
    Tuia Devis Wageningen University and Research The Netherlands
    Tupin Florence TelecomParisTech France
    Uddin Ahammad Shuzan Nazim Institute of Water Modeling Bangladesh
    ud-Din Nizam The Urban Unit, The Government of the Punjab Pakistan
    ul Islam Zaheer Institute of Geographical Information Systems, National University of Sciences & Technology Pakistan
    Upadhyay Ashish Indian Institute of Public Health – Gandhinagar India
    Ustuner Mustafa Yildiz Technical University Turkey
    Vahidi Hossein Khavaran Institute of Higher Education Iran
    Vakalopoulou Maria National Technical University of Athens Greece
    Velez-Reyes Miguel University of Texas, El Paso USA
    Villalon-Turrubiates Ivan Universidad Jesuita de Guadalajara Mexico
    Vohra Rubeena Bharati Vidyapeeth College of Engineering India
    Volpi Michele University of Zurich Switzerland
    Volpi Riccardo University College Cork Ireland
    Wanczura Angela Simon Fraser University Canada
    Wang Ting The Chinese University of Hong Kong Hong Kong
    Waske Bjorn Univ of Bonn Germany
    Wei Li Beijing University of Chemical Technology China
    Wei Qi School of Electronic Information Engineering, Beihang University China
    Wu Zebin Nanjing University of Technology China
    Xia Gui-Song State Key Lab. LIESMARS, Wuhan University China
    Xia Junshi University of Tokyo Japan
    Xiao Jiangtao University of Electronic Science and Technology of China, Chengdu China
    Xu Kele Langevin Institute France
    Xu Yongyang China University of Geosciences China
    Yanfeng Gu Harbin Institute of Technology China
    Yang Chia-Hsiang Technische Universität Darmstadt Germany
    Yang Wen Wuhan University China
    Yıldırım Alper Tubitak UEKAE/Iltaren, Ankara Turkey
    Ying Yang Michael University of Twente The Netherlands
    Yokoya Naoto University of Tokyo Japan
    Yong Xu The Chinese University of Hong Kong Hong Kong
    Younan Nick Mississippi State Univ USA
    Yuan Qiang-Qiang Wuhan University China
    Yudin Ilya Digital Globe, Inc. USA
    Yue Chong Xidian University China
    Yuksel Seniha Esen Hacettepe University Turkey
    Yusuf Yuhendra Chiba University Japan
    Zakeri Fatemeh Tehran University Iran
    Zare Alina Univ of Florida USA
    Zen Jay School of Electronic Engineering, Xidian University China
    Zerubia Josiane INRIA Sophia Antipolis Méditerranée France
    Zhang Bing Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences China
    Zhang Hai Wuhan Univ China
    Zhang Hongyan State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University China
    Zhang Lefei State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University China
    Zhang Lei The Hong Kong Polytechnic University Hong Kong
    Zhang Yifan Northwestern Polytechnical University China
    Zhang Yun Univ of New Brunswick Canada
    Zhao Ji Wuhan University China
    Zhao Yongqiang Northwestern Polytechnical University China
    Zhiping Lin Nanyang Technological Univ Singapore
    Zhixiang Yang Hohai University China
    Zhong Yanfei Wuhan University China
    Zhong Zisha Chinese Academy of Sciences China
    Zhu Xiaoxiang German Aerospace Center (DLR) & Technical University of Munich (TUM) Germany
    Zingman Igor University of Konstanz Germany
    Zubair Muhammad Islamic Relief Worldwide Pakistan
    Unknown Abbas K. N. Toosi University of Technology Iran
  • Data Fusion Contest

    2019 IEEE GRSS Data Fusion Contest


    The contest ended March 22, 2019. See the results here.


    Large-Scale Semantic 3D Reconstruction


    The Contest: Goals and Organization



    The 2019 Data Fusion Contest, organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (GRSS), the Johns Hopkins University (JHU), and the Intelligence Advanced Research Projects Activity (IARPA), aims to promote research in semantic 3D reconstruction and stereo using machine intelligence and deep learning applied to satellite images.


    The global objective is to reconstruct both a 3D geometric model and a segmentation of semantic classes for an urban scene. Incidental satellite images, airborne lidar data, and semantic labels are provided to the community. The 2019 Data Fusion Contest will consist of four parallel and independent competitions, corresponding to four diverse tasks:

    • Track 1: Single-view Semantic 3D Challenge
    • Track 2: Pairwise Semantic Stereo Challenge
    • Track 3: Multi-view Semantic Stereo Challenge
    • Track 4: 3D Point Cloud Classification Challenge


    Scientific papers describing the best entries (as quantified by the scores of the confusion matrix and accuracy parameters) will be included in the Technical Program of IGARSS 2019, presented in an oral Invited Session, and published in the IGARSS 2019 Proceedings.


    Competition Phases


    The contest aims to promote innovation in semantic 3D reconstruction and stereo algorithms, as well as to provide objective and fair comparisons among methods. The ranking is based on quantitative accuracy parameters computed with respect to undisclosed test samples. Participants will be given a limited time to submit their semantic 3D maps after the competition starts. The contest will consist of two phases:

    • Phase 1: Participants are provided with training data (which includes ground truth) and validation data (without ground truth) to train and validate their algorithms. Participants can submit prediction results for the validation set to DASE to get feedback on the performances. Top 10 submissions will be displayed on the leaderboard.
    • Phase 2: Participants receive the test data set (without the corresponding ground truth) and submit their semantic 3D maps within two weeks of the release of the test data set. In parallel, they submit a short description of the approach used. After evaluation of the results, eight winners are announced. The winners then have one month to write a manuscript for inclusion in the IGARSS proceedings. Manuscripts are four pages in IEEE format and must describe the problem addressed, the proposed method, and the experimental results.



    January 9th Contest opening: release of training and validation data
    February 7th Validation server with public leaderboard is open.
    March 7th Release of test data; test server is open.
    March 22nd Submission of results deadline:
    the submission server is closed
    March 26th Short description of the approach is sent to iadf_chairs@grss-ieee.org (using IGARSS paper template)
    March 29th Winner announcement


    The Data


    In the contest, we provide Urban Semantic 3D (US3D) data, a large-scale public dataset including multi-view, multi-band satellite images and ground-truth geometric and semantic labels for two large cities [1]. The US3D dataset includes incidental satellite images, airborne lidar, and semantic labels covering approximately 100 square kilometers over Jacksonville, Florida and Omaha, Nebraska, United States. For the contest, we provide training and test datasets for each challenge track, comprising approximately twenty percent of the US3D data.


    • Incidental Satellite Images: For the contest, WorldView-3 panchromatic and 8-band visible and near infrared (VNIR) images are provided courtesy of DigitalGlobe. Source data consists of 26 images collected between 2014 and 2016 over Jacksonville, Florida, and 43 images collected between 2014 and 2015 over Omaha, Nebraska, United States. Ground sampling distance (GSD) is approximately 35 cm and 1.3 m for panchromatic and VNIR images, respectively. VNIR images are all pan-sharpened. Satellite images are provided in geographically non-overlapping tiles, where airborne lidar data and semantic labels are projected into the same plane. Unrectified images (for Tracks 1 and 3) and epipolar rectified image pairs (for Track 2) are provided as TIFF files.
    • Airborne LiDAR data are used to provide ground-truth geometry. The aggregate nominal pulse spacing (ANPS) is approximately 80 cm. Point clouds are provided in ASCII text files with format {x, y, z, intensity, return number} for Track 4. Training data derived from lidar includes ground truth above ground level (AGL) height images for Track 1, pairwise disparity images for Track 2, and digital surface models (DSM) for Track 3, all provided as TIFF files.
    • Semantic labels are provided as TIFF files for each geographic tile in Tracks 1-3 and ASCII text files in Track 4. Semantic classes in the contest include buildings, elevated roads and bridges, high vegetation, ground, water, etc.
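    To illustrate the Track 4 ASCII point-cloud format described above, a minimal Python parser might look like the sketch below. The comma delimiter, the example values, and the helper name are illustrative assumptions, not the official reader; adjust the delimiter to match the released files.

```python
# Minimal sketch of a parser for the Track 4 ASCII point-cloud format,
# {x, y, z, intensity, return number}. Delimiter and function name are
# assumptions for illustration only.
def load_point_cloud(lines, delimiter=","):
    """Parse ASCII point records into (x, y, z, intensity, return_no) tuples."""
    points = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        x, y, z, intensity, return_no = line.split(delimiter)
        points.append((float(x), float(y), float(z),
                       float(intensity), int(return_no)))
    return points

# Example records (hypothetical values)
records = ["100.5,200.25,12.0,180,1",
           "101.0,200.75,12.3,95,2"]
cloud = load_point_cloud(records)
```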


    We provide all the above datasets for the training regions only. For the validation and test regions, only satellite images are provided in Tracks 1-3 and only lidar point clouds are provided in Track 4. The ground truth for the validation and test sets remains undisclosed and will be used for evaluation of the results. The training and test sets for the contest include dozens of images for each geographic 500 m × 500 m tile: 111 tiles for the training set; 10 tiles for the validation set; 10 tiles for the test set.


    Register for the contest to get data

    RGB images and ground truth for disparities and semantics for the training region are shown in Fig. 1. Point-clouds and 3D semantic labels for the training region are shown in Fig. 2.


    Fig. 1 From left to right: Stereo correspondence with seasonal appearance differences, ground truth disparities and semantic labels

    [1] Bosch, M. ; Foster, G. ; Christie, G. ; Wang, S. ; Hager, G.D. ; Brown, M. : Semantic Stereo for Incidental Satellite Images. Proc. of Winter Conf. on Applications of Computer Vision, 2019.


    Challenge Tracks


    Track 1: Single-view semantic 3D

    For each geographic tile, an unrectified single-view image is provided. The objective is to predict semantic labels and normalized DSM (nDSM) above-ground heights. Participants in Track 1 are expected to submit 2D semantic maps and AGL maps in raster format (matching the TIFF files of the training set). Performance is assessed using the pixel-wise mean Intersection over Union (mIoU), for which a true positive must have both the correct semantic label and a height error below a threshold of 1 meter. We call this metric mIoU-3.
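    The mIoU-3 metric can be sketched in NumPy as follows. This is an illustrative implementation under stated assumptions (per-class union taken over predicted and reference pixels, classes absent from the tile skipped), not the official scoring code:

```python
import numpy as np

def miou3(pred_cls, ref_cls, pred_h, ref_h, n_classes, h_thresh=1.0):
    """mIoU where a pixel counts as a true positive only if its class
    matches AND its height error is below h_thresh (1 m for Track 1)."""
    ok = (pred_cls == ref_cls) & (np.abs(pred_h - ref_h) < h_thresh)
    ious = []
    for c in range(n_classes):
        tp = np.sum(ok & (ref_cls == c))
        union = np.sum((pred_cls == c) | (ref_cls == c))
        if union > 0:              # skip classes absent from this tile
            ious.append(tp / union)
    return float(np.mean(ious))

# Toy example: perfect labels, but height off by 2 m at one pixel
ref_cls = np.array([[0, 1], [1, 1]])
ref_h = np.zeros((2, 2))
pred_h = ref_h.copy()
pred_h[0, 0] = 2.0                 # exceeds the 1 m threshold
score = miou3(ref_cls, ref_cls, pred_h, ref_h, n_classes=2)
```

With the toy inputs, the class-0 pixel fails the height check, so class 0 scores zero IoU while class 1 is perfect.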


    Track 2: Pairwise semantic stereo

    For each geographic tile, a pair of epipolar rectified images is given. The objective is to predict semantic labels and stereo disparities. Participants in Track 2 are expected to submit 2D semantic maps and disparity maps in raster format (matching the TIFF files of the training set). Performance is assessed using mIoU-3 with a threshold of 3 pixels on disparity values.


    Track 3: Multi-view semantic stereo

    Given multi-view images for each geographic tile, the objective is to predict semantic labels and a DSM. Unrectified images are provided with RPC metadata already adjusted using the lidar, so that registration is not required in evaluation and solutions can focus on methods for image selection, correspondence, semantic labeling, and multi-view fusion. Since this track relies on RPC metadata, which may not be familiar to everyone, the baseline algorithm includes simple Python code to manipulate RPCs for epipolar rectification and triangulation. Participants in Track 3 are expected to submit 2D semantic maps and DSMs in raster format (matching the TIFF files of the training set). Performance is assessed using mIoU-3 with a threshold of 1 meter on the DSM Z values.


    Track 4: 3D point cloud classification

    For each geographic tile, lidar point cloud data is provided. The objective is to predict a semantic label for each 3D point. Participants in Track 4 are expected to submit 3D semantic predictions in ASCII text files (matching the text files of the training set). Performance is assessed using mIoU.


    Baseline methods

    Baseline solutions are provided for each challenge track to help participants get started quickly and better understand the data and its intended use. Deep learning models for image semantic segmentation (Tracks 1, 2, and 3), point cloud semantic segmentation (Track 4), single-image height prediction (Track 1), and pairwise stereo disparity estimation (Tracks 2 and 3) are provided. Each of these was implemented in Keras with TensorFlow. The models, Python code to train them, and Python code for inference are provided. A baseline semantic MVS solution (Track 3), implemented in Python, is also provided to demonstrate the use of RPC metadata for basic tasks such as epipolar rectification and triangulation.

    Register for the contest to get baselines


    Fig. 2 (Left) Point cloud data involved in the 2019 Data Fusion Contest and (right) 3D semantic labels



    Results, Awards, and Prizes:


    The following eight teams will be declared winners:

    • The first and second ranked teams in Single-view Semantic 3D Challenge
    • The first and second ranked teams in Pairwise Semantic Stereo Challenge
    • The first and second ranked teams in Multi-view Semantic Stereo Challenge
    • The first and second ranked teams in 3D Point Cloud Classification Challenge

    The authors of the eight winning submissions will:

    • Present their manuscripts in an oral Invited Session dedicated to the Contest at IGARSS 2019
    • Publish their manuscripts in the Proceedings of IGARSS 2019
    • Be awarded IEEE Certificates of Recognition. The award ceremony will take place during the Technical Committees and Chapter Chairs Dinner at IGARSS 2019, Yokohama, Japan in July 2019

    The first-ranked teams in the four tracks will each receive a special prize at IGARSS 2019.

    Five selected teams, namely the first-ranked teams of the Single-view Semantic 3D, Pairwise Semantic Stereo, and Multi-view Semantic Stereo Challenges, and the first- and second-ranked teams of the 3D Point Cloud Classification Challenge, will co-author journal papers (with a limit of 3 co-authors per submission) summarizing the outcome of the Contest, to be submitted to IEEE JSTARS. To maximize impact and promote the potential of semantic 3D reconstruction and stereo applied to satellite images, the open-access option will be used for this journal submission.

    The costs of open-access publication and of the winners’ participation in the Technical Committees and Chapter Chairs Dinner at IGARSS 2019 will be covered by the GRSS. The winning team prize is kindly sponsored by the IGARSS 2019 Team.



    The rules of the game:

    • Data can be requested by registering for the Contest. Participants must read and accept the Contest Terms and Conditions.
    • Participants in the contest are expected to submit:
      • 2D semantic maps and nDSM/disparity/DSM maps in raster format (similar to the tif file of the training set) for Tracks 1, 2, and 3
      • 3D semantic predictions in ASCII text files (similar to the text file of the training set) for Track 4

      These results will be submitted to the Codalab competition websites for evaluation:
      Track 1: https://competitions.codalab.org/competitions/21120
      Track 2: https://competitions.codalab.org/competitions/21121
      Track 3: https://competitions.codalab.org/competitions/21131
      Track 4: https://competitions.codalab.org/competitions/21132

    • Ranking among the participants will be based on:
      • mIoU-3 for Tracks 1, 2, and 3
      • mIoU for Track 4
    • Institutional or business E-mail accounts should be used for registration.
    • One E-mail account is allowed for one team.
    • Each team is allowed a maximum of ten submissions per challenge during the test phase.
    • The deadline for result submission is March 22, 2019, 23:59 UTC – 12 hours (e.g., March 22, 2019, 7:59 in New York City, 13:59 in Paris, or 19:59 in Beijing). The submission server will be open from March 7, 2019.
    • Each team must submit a short paper of 2 pages describing their approach by March 26, 2019. Please send the paper to iadf_chairs@grss-ieee.org using the IGARSS paper template. Only one submission per team is allowed in the Contest; should multiple entries from the same team be received, only the best submission will be considered.
    • For the eight winners, the internal deadline for full-paper submission is April 26, 2019, 23:59 UTC – 12 hours (e.g., April 26, 2019, 7:59 in New York City, 13:59 in Paris, or 19:59 in Beijing). The IGARSS full-paper submission deadline is May 27, 2019.

    Failure to follow any of these rules will automatically invalidate the submission: the manuscript will not be evaluated and the team will be disqualified from the prize award.


    Participants in the Contest are requested not to submit an extended abstract to IGARSS 2019 by the corresponding conference deadline in January 2019. Only contest winners (participants corresponding to the eight best-ranked submissions) will submit a 4-page paper describing their approach to the Contest by April 26, 2019. The received manuscripts will be reviewed by the Award Committee of the Contest, and reviews will be sent to the winners. The winners will then submit the final version of the 4-page full paper to the IGARSS Data Fusion Contest Invited Session by May 27, 2019, for inclusion in the IGARSS Technical Program and Proceedings.




    The IADF TC chairs would like to thank IARPA and the Johns Hopkins University Applied Physics Laboratory for providing the data and the IEEE GRSS for continuously supporting the annual Data Fusion Contest through funding and resources.



    Contest Terms and Conditions

    The data are provided for the purpose of participation in the 2019
    Data Fusion Contest. Participants acknowledge that they have read and
    agree to the following Contest Terms and Conditions:

    • The owners of the data and of the copyright on the data are DigitalGlobe, IARPA and Johns Hopkins University.
    • Any dissemination or distribution of the data packages by any registered user is strictly forbidden.
    • The data can be used in scientific publications subject to approval
      by the IEEE GRSS Image Analysis and Data Fusion Technical Committee and
      by the data owners on a case-by-case basis. To submit a scientific
      publication for approval, the publication shall be sent as an attachment
      to an e-mail addressed to iadf_chairs@grss-ieee.org.
    • In any scientific publication using the data, the data shall be
      identified as “grss_dfc_2019” and shall be referenced as follows: “[REF.
      NO.] 2019 IEEE GRSS Data Fusion Contest. Online:
    • Any scientific publication using the data shall include a section
      “Acknowledgement”. This section shall include the following sentence:
      “The authors would like to thank the Johns Hopkins University Applied Physics Laboratory and IARPA for providing the data used in this study, and the IEEE GRSS Image Analysis and Data Fusion Technical Committee for organizing the Data Fusion Contest.”
    • Any scientific publication using the data shall refer to the following papers:
      • [Bosch et al., 2019] Bosch, M. ; Foster, G. ; Christie, G. ; Wang, S. ; Hager, G.D. ; Brown, M. : Semantic Stereo for Incidental Satellite Images. Proc. of Winter Conf. on Applications of Computer Vision, 2019.
      • [Le Saux et al., 2019] Le Saux, B. ; Yokoya, N. ; Hänsch, R. ; Brown, M. ; Hager, G.D. : 2019 Data Fusion Contest [Technical Committees], IEEE Geoscience and Remote Sensing Magazine, March 2019 https://doi.org/10.1109/MGRS.2019.2893783


  • Past Contests

    The 2018 Data Fusion Contest aimed to promote progress on fusion and analysis methodologies for multi-source remote sensing data. It consisted of a classification benchmark, the task being urban land use and land cover classification. The following advanced multi-source optical remote sensing data were provided to the community: multispectral LiDAR point cloud data (intensity rasters and digital surface models), hyperspectral data, and very high-resolution RGB imagery.
    Contest Rules
    Contest Results

    The 2017 IEEE GRSS Data Fusion Contest focused on global land use mapping using open data. Participants were provided with remote sensing data (Landsat and Sentinel-2) and vector layers (OpenStreetMap), as well as a 17-class ground reference at 100 m × 100 m resolution over five cities worldwide (local climate zones; see Stewart and Oke, 2012): Berlin, Hong Kong, Paris, Rome, and Sao Paulo. The task was to provide land use maps over four other cities: Amsterdam, Chicago, Madrid, and Xi’an. The maps were to be uploaded to an evaluation server. Please refer to the links below to learn more about the challenge, download the data, and submit your results (even now that the contest is over).
    Contest Rules
    Contest Results

    The 2016 IEEE GRSS Data Fusion Contest, organized by the IADF TC, opened on January 3, 2016. The submission deadline was April 29, 2016. Participants submitted open-topic manuscripts using the VHR and video-from-space data released for the competition. 25 teams worldwide participated in the Contest. Evaluation and ranking were conducted by the Award Committee.
    Contest Rules
    Contest Results
    Paper: Mou, L.; Zhu, X.; Vakalopoulou, M.; Karantzalos, K.; Paragios, N.; Le Saux, B.; Moser, G. & Tuia, D., Multi-temporal very high resolution from space: Outcome of the 2016 IEEE GRSS Data Fusion Contest, IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., in press.

    2015: The 2015 Contest was focused on multiresolution and multisensor fusion at extremely high spatial resolution. A 5-cm resolution color RGB orthophoto and a LiDAR dataset, comprising both the raw 3D point cloud (density of 65 pts/m²) and a digital surface model (10-cm point spacing), were distributed to the community. These data were collected using an airborne platform over the harbor and urban area of Zeebruges, Belgium. The department of Communication, Information, Systems and Sensors of the Belgian Royal Military Academy acquired and provided the dataset. Participants were expected to submit original IGARSS-style full papers using these data for the generation of either 2D or 3D thematic mapping products at extremely high spatial resolution.
    Contest Rules
    Contest Results
    Paper: M. Campos-Taberner, A. Romero-Soriano, C. Gatta, G. Camps-Valls, A. Lagrange, B. Le Saux, A. Beaupère, A. Boulch, A. Chan-Hon-Tong, S. Herbin, H. Randrianarivo, M. Ferecatu, M. Shimoni, G. Moser, and D. Tuia. Processing of extremely high resolution LiDAR and RGB data: Outcome of the 2015 IEEE GRSS Data Fusion Contest. Part A: 2D contest. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., 9(12):5547–5559, 2016.

    Paper: A.-V. Vo, L. Truong-Hong, D.F. Laefer, D. Tiede, S. d’Oleire Oltmanns, A. Baraldi, M. Shimoni, G. Moser, and D. Tuia. Processing of extremely high resolution LiDAR and RGB data: Outcome of the 2015 IEEE GRSS Data Fusion Contest. Part B: 3D contest. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., 9(12):5560–5575, 2016.

    2014: The 2014 Contest involved two datasets acquired at different spectral ranges and spatial resolutions: a coarser-resolution long-wave infrared (LWIR, thermal infrared) hyperspectral dataset and fine-resolution data acquired in the visible (VIS) wavelength range. The former was acquired by an 84-channel imager covering wavelengths between 7.8 and 11.5 μm with approximately 1-meter spatial resolution. The latter is a series of color images acquired during separate flight-lines with approximately 20-cm spatial resolution. The two data sources cover an urban area near Thetford Mines in Québec, Canada, and were acquired and provided for the Contest by Telops Inc. (Canada). A ground truth with 7 land-cover classes was provided, and the mapping was performed at the higher of the two data resolutions.
    Contest Rules
    Classification Contest Results
    Paper Contest Results
    Paper: W. Liao, X. Huang, F. Van Coillie, S. Gautama, A. Pizurica, W. Philips, H. Liu, T. Zhu, M. Shimoni, G. Moser, D. Tuia. Processing of Multiresolution Thermal Hyperspectral and Digital Color Data: Outcome of the 2014 IEEE GRSS Data Fusion Contest. IEEE J. Sel. Topics Appl. Earth Observ. and Remote Sensing, 8(6): 2984-2996, 2015.

    2013: The 2013 Contest involved two datasets – a hyperspectral image and a LiDAR-derived Digital Surface Model (DSM), both at the same spatial resolution (2.5 m). The hyperspectral imagery has 144 spectral bands in the 380 nm to 1050 nm region. The dataset was acquired over the University of Houston campus and the neighboring urban area. A ground reference with 15 land use classes is available.
    Contest Rules
    Classification Contest Results
    Paper Contest Results
    Paper: Debes, C. ; Merentitis, A. ; Heremans, R. ; Hahn, J. ; Frangiadakis, N. ; van Kasteren, T. ; Liao, W. ; Bellens, R. ; Pizurica, A. ; Gautama, S. ; Philips, W. ; Prasad, S. ; Du, Q. ; Pacifici, F. : Hyperspectral and LiDAR Data Fusion: Outcome of the 2013 GRSS Data Fusion Contest. IEEE J. Sel. Topics Appl. Earth Observ. and Remote Sensing, 7 (6) pp. 2405-2418.

    2012: The 2012 Contest was designed to investigate the potential of multi-modal/multi-temporal fusion of very high spatial resolution imagery in various remote sensing applications [6]. Three different types of data sets (optical, SAR, and LiDAR) over downtown San Francisco were made available by DigitalGlobe, Astrium Services, and the United States Geological Survey (USGS), including QuickBird, WorldView-2, TerraSAR-X, and LiDAR imagery. The image scenes covered a number of large buildings, skyscrapers, commercial and industrial structures, a mixture of community parks and private housing, and highways and bridges. Following the success of the multi-angular Data Fusion Contest in 2011, each participant was again required to submit a paper describing in detail the problem addressed, method used, and final results generated for review.
    Paper: Berger, C.; Voltersen, M.; Eckardt, R.; Eberle, J.; Heyer, T.; Salepci, N.; Hese, S.; Schmullius, C.; Tao, J.; Auer, S.; Bamler, R.; Ewald, K.; Gartley, M.; Jacobson, J.; Buswell, A.; Du, Q.; Pacifici, F., “Multi-Modal and Multi-Temporal Data Fusion: Outcome of the 2012 GRSS Data Fusion Contest”, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol.6, no.3, pp.1324-1340, June 2013.

    2011: A set of WorldView-2 multi-angular images was provided by DigitalGlobe for the 2011 Contest. This unique set was composed of five Ortho Ready Standard multi-angular acquisitions, including both 16 bit panchromatic and multispectral 8-band images. The data were collected over Rio de Janeiro (Brazil) in January 2010 within a three minute time frame with satellite elevation angles of 44.7°, 56.0°, and 81.4° in the forward direction, and 59.8° and 44.6° in the backward direction. Since there were a large variety of possible applications, each participant was allowed to decide a research topic to work on, exploring the most creative use of optical multi-angular information. At the end of the Contest, each participant was required to submit a paper describing in detail the problem addressed, the method used, and the final result generated. The papers submitted were automatically formatted to hide names and affiliations of the authors to ensure neutrality and impartiality of the reviewing process.
    Paper: F. Pacifici, Q. Du, “Foreword to the Special Issue on Optical Multiangular Data Exploitation and Outcome of the 2011 GRSS Data Fusion Contest”, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 1, pp.3-7, February 2012.

    2009-2010: In 2009-2010, the aim of the Contest was to perform change detection using multi-temporal and multi-modal data. Two pairs of data sets were available over Gloucester, UK, before and after a flood event. The data set contained SPOT and ERS images (before and after the disaster). The optical and SAR images were provided by CNES. As in previous years’ Contests, the ground truth used to assess the results was not provided to the participants. Each set of results was first tested and ranked using the Kappa coefficient. The best five results were used to perform decision fusion with majority voting. Re-ranking was then carried out after evaluating the level of improvement with respect to the fusion results.
    Paper: N. Longbotham, F. Pacifici, T. Glenn, A. Zare, M. Volpi, D. Tuia, E. Christophe, J. Michel, J. Inglada, J. Chanussot, Q. Du “Multi-modal Change Detection, Application to the Detection of Flooded Areas: Outcome of the 2009-2010 Data Fusion Contest”, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 1, pp. 331-342, February 2012.
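    The 2009-2010 evaluation protocol (Kappa ranking, then pixel-wise majority voting over the top entries) can be sketched as follows. Function names are illustrative and this is not the original scoring code; the sketch assumes label maps with at least two classes:

```python
import numpy as np
from collections import Counter

def cohens_kappa(pred, ref):
    """Cohen's kappa agreement between a predicted and a reference label map."""
    pred, ref = np.asarray(pred).ravel(), np.asarray(ref).ravel()
    po = np.mean(pred == ref)                          # observed agreement
    pe = sum(np.mean(pred == c) * np.mean(ref == c)    # chance agreement
             for c in np.union1d(pred, ref))
    return 1.0 if pe == 1.0 else (po - pe) / (1.0 - pe)

def majority_vote(maps):
    """Pixel-wise majority voting over a stack of label maps."""
    stack = np.stack(maps)
    return np.apply_along_axis(
        lambda v: Counter(v).most_common(1)[0][0], 0, stack)

# Toy example: fuse three 2x2 label maps
a = np.array([[0, 1], [1, 0]])
b = np.array([[0, 1], [1, 1]])
fused = majority_vote([a, a, b])   # a wins at every pixel
```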

    2008: The 2008 Contest was dedicated to the classification of very high spatial resolution (1.3 m) hyperspectral imagery. The task was again to obtain a classification map as accurate as possible with respect to the unknown (to the participants) ground reference. The data set was collected by the Reflective Optics System Imaging Spectrometer (ROSIS-03) optical sensor with 115 bands covering the 0.43-0.86 μm spectral range.
    Paper: G. Licciardi, F. Pacifici, D. Tuia, S. Prasad, T. West, F. Giacco, J. Inglada, E. Christophe, J. Chanussot, P. Gamba, “Decision fusion for the classification of hyperspectral data: outcome of the 2008 GRS-S data fusion contest”, IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 11, pp. 3857-3865, November 2009.

    2007: In 2007, the Contest theme was urban mapping using synthetic aperture radar (SAR) and optical data, and 9 ERS amplitude data sets and 2 Landsat multi-spectral images were made available. The task was to obtain a classification map as accurate as possible with respect to the unknown (to the participants) ground reference, depicting land cover and land use patterns for the urban area under study.
    Paper: F. Pacifici, F. Del Frate, W. J. Emery, P. Gamba, J. Chanussot, “Urban mapping using coarse SAR and optical data: outcome of the 2007 GRS-S data fusion contest”, IEEE Geoscience and Remote Sensing Letters, vol. 5, no. 3, pp. 331-335, July 2008.

    2006: The focus of the 2006 Contest was on the fusion of multispectral and panchromatic images [1]. Six simulated Pleiades images were provided by the French National Space Agency (CNES). Each data set included a very high spatial resolution panchromatic image (0.80 m resolution) and its corresponding multi-spectral image (3.2 m resolution). A high spatial resolution multi-spectral image was available as ground reference, which was used by the organizing committee for evaluation but not distributed to the participants.
    Paper: L. Alparone, L. Wald, J. Chanussot, C. Thomas, P. Gamba, L. M. Bruce, “Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S data fusion contest”, IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 10, pp. 3012–3021, Oct. 2007.

    For any information about past Data Fusion Contests, released data, and the related terms and conditions, please write to iadf_chairs@grss-ieee.org.