EARTHVISION 2024

June 17th, Seattle, USA

Aims and Scope

Earth Observation (EO) and remote sensing are ever-growing fields of investigation where computer vision, machine learning, and signal/image processing meet. The general objective of the domain is to provide large-scale and consistent information about processes occurring at the surface of the Earth by exploiting data collected by airborne and spaceborne sensors. Earth Observation covers a broad range of tasks, from detection to registration, data mining, and multi-sensor, multi-resolution, multi-temporal, and multi-modality fusion and regression, to name just a few. It is motivated by numerous applications such as location-based services, online mapping services, large-scale surveillance, 3D urban modeling, navigation systems, natural hazard forecast and response, climate change monitoring, virtual habitat modeling, and food security. The sheer amount of data calls for highly automated scene interpretation workflows.

Earth Observation, and in particular the analysis of spaceborne data, directly connects to 34 out of 40 indicators (covering 29 targets and 11 goals) of the Sustainable Development Goals defined by the United Nations (https://sdgs.un.org/goals). EarthVision's aim of advancing the state of the art in machine learning-based analysis of remote sensing data is thus highly relevant. It also connects to other immediate societal challenges such as the monitoring of forest fires and other natural hazards, urban growth, deforestation, and climate change.

A non-exhaustive list of topics of interest includes the following:

  • Super-resolution in the spectral and spatial domains

  • Hyperspectral and multispectral image processing

  • Reconstruction and segmentation of optical and LiDAR 3D point clouds

  • Feature extraction and learning from spatio-temporal data 

  • Analysis of UAV/aerial and satellite images and videos

  • Deep learning tailored for large-scale Earth Observation

  • Domain adaptation, concept drift, and the detection of out-of-distribution data

  • Data-centric machine learning

  • Evaluating models using unlabeled data

  • Self-, weakly, and unsupervised approaches for learning with spatial data

  • Foundation models and representation learning in the context of EO

  • Human-in-the-loop and active learning

  • Multi-resolution, multi-temporal, multi-sensor, multi-modal processing

  • Fusion of machine learning and physical models

  • Explainable and interpretable machine learning in Earth Observation applications

  • Uncertainty quantification of machine-learning based prediction from EO data

  • Applications for climate change, sustainable development goals, and geoscience

  • Public benchmark datasets: training data standards, testing and evaluation metrics, and open-source research and development

All manuscripts will be subject to a double-blind review process. Accepted EarthVision papers will be included in the CVPR 2024 workshop proceedings (published open access on the Computer Vision Foundation website) and submitted to IEEE for publication in IEEE Xplore. Publication in IEEE Xplore will be granted only if the paper meets IEEE publication policies and procedures.

Important Dates

All deadlines are considered end of day anywhere on Earth.

March 8, 2024: Submission deadline
April 5, 2024: Notification to authors
April 12, 2024: Camera-ready deadline
June 17, 2024: Workshop

Organizers

  • Ronny Hänsch, German Aerospace Center, Germany
  • Devis Tuia, EPFL, Switzerland
  • Jan Dirk Wegner, University of Zurich & ETH Zurich, Switzerland
  • Bertrand Le Saux, ESA/ESRIN, Italy
  • Loïc Landrieu, ENPC ParisTech, France
  • Charlotte Pelletier, UBS Vannes, France
  • Hannah Kerner, Arizona State University, USA

Technical Committee

  • Akhil Meethal, ETS Montreal
  • Alexandre Boulch, valeo.ai
  • Amanda Bright, National Geospatial-Intelligence Agency
  • Ankit Jha, Indian Institute of Technology Bombay
  • Begum Demir, TU Berlin
  • Bertrand Le Saux, ESA/Phi-lab
  • Biplab Banerjee, Indian Institute of Technology
  • Caleb Robinson, Microsoft
  • Camille Couprie, Facebook
  • Camille Kurtz, Université Paris Cité
  • Christian Heipke, Leibniz Universität Hannover
  • Christopher Ratto, JHUAPL
  • Claudio Persello, University of Twente
  • Clement Mallet, IGN, France
  • Dalton Lunga, Oak Ridge National Laboratory
  • Damien Robert, IGN
  • Daniel Iordache, VITO, Belgium
  • David Rolnick, McGill University
  • Diego Marcos, Inria
  • Dimitri Gominski, University of Copenhagen
  • Elliot Vincent, École des ponts ParisTech/Inria
  • Emanuele Dalsasso, EPFL
  • Esther Rolf, Google Research
  • Ewelina Rupnik, Univ Gustave Eiffel
  • Ferda Ofli, Qatar Computing Research Institute
  • Flora Weissgerber, ONERA
  • Franz Rottensteiner, Leibniz Universität Hannover, Germany
  • Gabriel Tseng, NASA Harvest
  • Gabriele Moser, Università di Genova
  • Gedeon Muhawenayo, Arizona State University
  • Gemine Vivone, CNR-IMAA
  • Gencer Sumbul, EPFL
  • Georgios Voulgaris, University of Oxford
  • Gülşen Taşkın, İstanbul Teknik Üniversitesi
  • Gustau Camps-Valls, Universitat de València
  • Hamed Alemohammad, Clark University
  • Helmut Mayer, Bundeswehr University Munich
  • Jacob Arndt, Oak Ridge National Laboratory
  • Joëlle Hanna, University of St. Gallen
  • Jonathan Prexl, University of the Bundeswehr Munich
  • Konstantin Klemmer, Microsoft Research
  • Linus Scheibenreif, University of St. Gallen
  • Loic Landrieu, ENPC
  • M. Usman Rafique, Kitware Inc.
  • Manil Maskey, NASA MSFC
  • Marc Rußwurm, École Polytechnique Fédérale de Lausanne
  • Marco Körner, Technical University of Munich
  • Mareike Dorozynski, Institute of Photogrammetry and Geoinformation
  • Martin Weinmann, Karlsruhe Institute of Technology
  • Mathieu Aubry, École des ponts ParisTech
  • Matt Leotta, Kitware
  • Matthieu Molinier, VTT Technical Research Centre of Finland Ltd
  • Michael Schmitt, University of the Bundeswehr Munich
  • Miguel-Ángel Fernández-Torres, Universitat de València
  • Myron Brown, JHU
  • Nicolas Audebert, CNAM
  • Nicolas Gonthier, IGN
  • Nicolas Longepe, ESA
  • Nikolaos Dionelis, ESA
  • Patrick Ebel, ESA
  • Raian Maretto, University of Twente
  • Redouane Lguensat, IPSL
  • Ribana Roscher, Forschungszentrum Jülich
  • Ricardo Torres, Norwegian University of Science and Technology (NTNU)
  • Roberto Interdonato, CIRAD
  • Saurabh Prasad, University of Houston
  • Scott Workman, DZYNE Technologies
  • Seyed Majid Azimi, Ternow AI GmbH
  • Sophie Giffard-Roisin, Univ. Grenoble Alpes
  • Sudipan Saha, Indian Institute of Technology Delhi
  • Sylvain Lobry, Université Paris Cité
  • Tanya Nair, Floodbase
  • Teng Wu, Univ Gustave Eiffel
  • Thibaud Ehret, Centre Borelli
  • Valerio Marsocci, Conservatoire national des arts et métiers
  • Veda Sunkara, Cloud to Street
  • Vincent Lepetit, Université de Bordeaux
  • Wei He, Wuhan University
  • Yifang Ban, KTH Royal Institute of Technology
  • Zhijie Zhang, University of Arizona
  • Zhuangfang Yi, Regrow

Sponsors

Affiliations

Submissions

1. Prepare the anonymous, 8-page (excluding references) submission using the ev2024-template and following the paper guidelines.

2. Submit at cmt3.research.microsoft.com/EarthVision2024.

Policies

A complete paper should be submitted using the EarthVision templates provided above.

Reviewing is double blind, i.e. authors do not know the names of the reviewers and reviewers do not know the names of the authors. Please read Section 1.7 of the example paper earthvision.pdf for detailed instructions on how to preserve anonymity. Avoid providing acknowledgments or links that may identify the authors.

Papers are to be submitted using the dedicated submission platform: cmt3.research.microsoft.com/EarthVision2024. The submission deadline is strict.

By submitting a manuscript, the authors guarantee that it has not been previously published or accepted for publication in a substantially similar form. CVPR rules regarding plagiarism, double submission, etc. apply.

Program

TBD 

Keynote 1 – Marta Yebra, Australian National University

Abstract

Bio

Dr. Yebra is an Associate Professor in Environmental Engineering at the Australian National University (ANU) and the Director of the Bushfire Research Centre of Excellence, supported by ANU and Optus, which is undertaking advanced interdisciplinary research to develop an innovative system that aims to detect bushfires as soon as they start and suppress them within minutes. Her research focuses on developing applications of remote sensing for the management of fire risk and impact. Yebra led the development of the Australian Flammability Monitoring System, which provides near-real-time information on landscape flammability across Australia. She is now designing Australia’s first satellite mission to help forecast vulnerable areas where bushfires are at the highest risk of starting or burning out of control. She has served on a number of government advisory bodies, including the Australian Space Agency’s Earth Observation Technical Advisory Group (since 2019) and the Australian Capital Territory Multi Hazards Council (since 2021). Her talk will address how remote sensing technologies provide access to dynamic, real-time information that helps fire managers better understand the likelihood of a catastrophic bushfire and the whereabouts and intensity of active fires, and accurately assess wildfire scale and environmental impacts.

TBD 

Keynote 2 – Dan Morris, Google AI for Nature and Society

Abstract

Bio

Dan Morris is a researcher in the Google AI for Nature and Society program, where he works on AI tools that help conservation scientists spend less time doing boring things and more time doing conservation. This includes tools that accelerate urban forest canopy assessments and image-based wildlife surveys. Prior to joining Google, he directed the AI for Earth program at Microsoft, and prior to that he spent approximately a zillion years in the medical devices group at Microsoft Research, working on signal processing and machine learning tools for wearable devices that supported cardiovascular monitoring, fitness tracking, and gesture interaction. He received his PhD from Stanford, where he worked on haptics and physical simulation for virtual surgery.

EV24 will feature a panel discussion on “Unlocking Unlabeled Data: Approaches, Challenges, and Future Work in Remote Sensing Foundation Models”.

This panel at the EarthVision workshop at CVPR 2024 will discuss various approaches to developing self-supervised “foundation” models for remote sensing data. The discussion will highlight the diverse motivations, approaches, and considerations that shape model design, as well as the challenges that remain for future work in this area.

Moderator: Hannah Kerner (Arizona State University)
Speakers: Esther Rolf (Harvard University / CU Boulder), Ritwik Gupta (Berkeley), Begüm Demir (TU Berlin)

CVPR 2024

CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers.

Learn More: CVPR 2024