June 18th, Vancouver, Canada

Aims and Scope

Earth Observation (EO) and remote sensing are ever-growing fields of investigation where computer vision, machine learning, and signal/image processing meet. The general objective of the domain is to provide large-scale and consistent information about processes occurring at the surface of the Earth by exploiting data collected by airborne and spaceborne sensors. Earth Observation covers a broad range of tasks, from detection to registration, data mining, and multi-sensor, multi-resolution, multi-temporal, and multi-modality fusion and regression, to name just a few. It is motivated by numerous applications such as location-based services, online mapping services, large-scale surveillance, 3D urban modeling, navigation systems, natural hazard forecast and response, climate change monitoring, virtual habitat modeling, and food security. The sheer amount of data calls for highly automated scene interpretation workflows.

Earth Observation, and in particular the analysis of spaceborne data, directly connects to 34 out of 40 indicators (29 targets and 11 goals) of the Sustainable Development Goals defined by the United Nations. EarthVision's aim of advancing the state of the art in machine learning-based analysis of remote sensing data is thus of high relevance. It also connects to other immediate societal challenges such as the monitoring of forest fires and other natural hazards, urban growth, deforestation, and climate change.

A non-exhaustive list of topics of interest includes the following:

  • Super-resolution in the spectral and spatial domain

  • Hyperspectral and multispectral image processing

  • Reconstruction and segmentation of optical and LiDAR 3D point clouds

  • Feature extraction and learning from spatio-temporal data 

  • Analysis of UAV / aerial and satellite images and videos

  • Deep learning tailored for large-scale Earth Observation

  • Domain adaptation, concept drift, and the detection of out-of-distribution data

  • Evaluating models using unlabeled data

  • Self-, weakly, and unsupervised approaches for learning with spatial data

  • Human-in-the-loop and active learning

  • Multi-resolution, multi-temporal, multi-sensor, multi-modal processing

  • Fusion of machine learning and physical models

  • Explainable and interpretable machine learning in Earth Observation applications

  • Applications for climate change, sustainable development goals, and geoscience

  • Public benchmark datasets: training data standards, testing & evaluation metrics, and open-source research and development

All manuscripts will be subject to a double-blind review process. Accepted EarthVision papers will be included in the CVPR 2023 workshop proceedings (published open access on the Computer Vision Foundation website) and submitted to IEEE for publication in IEEE Xplore. Publication in IEEE Xplore will be granted only if the paper meets IEEE publication policies and procedures.

Important Dates

March 9, 2023: Submission deadline
March 30, 2023: Notification to authors
April 6, 2023: Camera-ready deadline
June 18, 2023: Workshop


Organizers

  • Ronny Hänsch, German Aerospace Center, Germany
  • Devis Tuia, EPFL, Switzerland
  • Jan Dirk Wegner, University of Zurich & ETH Zurich, Switzerland
  • Bertrand Le Saux, ESA/ESRIN, Italy
  • Nathan Jacobs, Univ. of Kentucky, USA
  • Loïc Landrieu, IGN, France
  • Charlotte Pelletier, UBS Vannes, France
  • Hannah Kerner, Arizona State University, USA
  • Beth Tellman, University of Arizona, USA

Technical Committee

  • Amanda Bright, National Geospatial-Intelligence Agency
  • Armin Hadzic, DZYNE Technologies
  • Caleb Robinson, Microsoft
  • Camille Kurtz, Université Paris Cité
  • Christian Heipke, Leibniz Universität Hannover
  • Claudio Persello, University of Twente
  • Clement Mallet, IGN, France
  • Dalton Lunga, Oak Ridge National Laboratory
  • Damien Robert, IGN
  • Daniel Iordache, VITO, Belgium
  • Dimitri Gominski, University of Copenhagen
  • Elliot Vincent, École des ponts ParisTech / Inria
  • Ewelina Rupnik, IGN France
  • Flora Weissgerber, ONERA
  • Franz Rottensteiner, Leibniz Universitat Hannover, Germany
  • Gabriele Moser, Università di Genova
  • Gaetan Bahl, NXP
  • Gellert Mattyus, Continental ADAS
  • Gordon Christie, JHU
  • Gülşen Taşkın, İstanbul Teknik Üniversitesi
  • Gustau Camps-Valls, Universitat de València
  • Hamed Alemohammad, Clark University
  • Helmut Mayer, Bundeswehr University Munich
  • Hoàng-Ân Lê, IRISA
  • Jacob Arndt, Oak Ridge National Laboratory
  • Javiera Castillo Navarro, EPFL
  • Jing Huang, Facebook
  • Jonathan Giezendanner, University of Arizona
  • Jonathan Sullivan, University of Arizona
  • Krishna Regmi, University of Oklahoma
  • Kuldeep Kurte, Oak Ridge National Laboratory
  • Luc Baudoux, National French mapping agency (IGN)
  • M. Usman Rafique, Kitware Inc.
  • Manil Maskey, NASA MSFC
  • Marc Rußwurm, École Polytechnique Fédérale de Lausanne
  • Martin Weinmann, Karlsruhe Institute of Technology
  • Martin R. Oswald, ETH Zurich
  • Mathieu Bredif, IGN
  • Matt Leotta, Kitware
  • Matthieu Molinier, VTT Technical Research Centre of Finland Ltd
  • Michael Mommert, University of St. Gallen
  • Michael Schmitt, Bundeswehr University Munich
  • Miguel-Ángel Fernández-Torres, Universitat de València
  • Minh-Tan Pham, IRISA
  • Myron Brown, JHU
  • Nicolas Audebert, CNAM
  • Philipe Ambrozio Dias, Oak Ridge National Laboratory
  • Redouane Lguensat, IPSL
  • Ribana Roscher, University of Bonn
  • Ricardo da Silva Torres, Wageningen University and Research
  • Roberto Interdonato, CIRAD
  • Rodrigo Caye Daudt, ETH Zurich
  • Rohit Mukherjee, The University of Arizona
  • Roman Loiseau, École des ponts ParisTech
  • Ryan Mukherjee, BlackSky
  • Sara Beery, Caltech
  • Saurabh Prasad, University of Houston
  • Scott Workman, DZYNE Technologies
  • Subit Chakrabarti, Floodbase
  • Sudipan Saha, Indian Institute of Technology Delhi
  • Sylvain Lobry, Université Paris Cité
  • Tanya Nair, Floodbase
  • Valérie Gouet-Brunet, LASTIG/IGN-UGE
  • Veda Sunkara, Cloud to Street
  • Vincent Lepetit, Université de Bordeaux
  • Vivien Sainte Fare Garnot, IGN
  • Yakoub Bazi, King Saud University - Riyadh KSA
  • Yifang Ban, KTH Royal Institute of Technology
  • Zhijie Zhang, University of Arizona
  • Zhuangfang Yi, Regrow


African Biomass Challenge

Reliable, large-scale biomass estimation is a major challenge for the African continent. If solved accurately and cost-efficiently, it can support development across the continent by enabling use cases such as reforestation, sustainable agriculture, and green finance. For this reason, the organizers of the African Biomass Challenge (GIZ, BNETD, data354, University of Zurich, ETH Zurich, the University of Queensland) have decided to launch one of the largest African AI and data science competitions, whose ultimate goal is to accurately estimate aboveground biomass on any part of the continent using remote sensing data. For this first edition of the challenge, they have put together a dataset consisting of ESA Sentinel-2 images, NASA GEDI data, and ground-truth biomass collected in different cocoa plantations in Côte d'Ivoire. All AI practitioners and enthusiasts are invited to take part in the competition organized on Zindi.
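To make the task concrete, here is a minimal, purely illustrative sketch of biomass regression from optical reflectances. All numbers are synthetic stand-ins: the band values, coefficients, and the NDVI-biomass relationship are assumptions for illustration, not the challenge's actual data or baseline.

```python
import numpy as np

# Hypothetical sketch: estimate aboveground biomass (AGB) from
# Sentinel-2-like band reflectances with a simple linear model.
rng = np.random.default_rng(0)

n_plots = 200
# Columns: red, NIR, SWIR reflectance (synthetic values)
bands = rng.uniform(0.02, 0.5, size=(n_plots, 3))
# NDVI, a common vegetation index: (NIR - red) / (NIR + red)
ndvi = (bands[:, 1] - bands[:, 0]) / (bands[:, 1] + bands[:, 0])

# Synthetic "ground truth" AGB (t/ha) loosely tied to NDVI plus noise,
# standing in for GEDI-calibrated field measurements.
agb = 120.0 * ndvi + 20.0 + rng.normal(0.0, 5.0, n_plots)

# Fit AGB ~ a * NDVI + b by ordinary least squares.
X = np.column_stack([ndvi, np.ones(n_plots)])
coef, *_ = np.linalg.lstsq(X, agb, rcond=None)
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - agb) ** 2)))
print(f"slope={coef[0]:.1f}, intercept={coef[1]:.1f}, RMSE={rmse:.1f} t/ha")
```

Competitive entries would of course go well beyond a single vegetation index, e.g. by learning spatial features from the full image patches.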

Submission

1. Prepare the anonymous submission of at most 8 pages (references excluded) using the ev2023-template and following the paper guidelines.

2. Submit at


A complete paper should be submitted using the EarthVision templates provided above.

Reviewing is double blind, i.e. authors do not know the names of the reviewers and reviewers do not know the names of the authors. Please read Section 1.7 of the example paper earthvision.pdf for detailed instructions on how to preserve anonymity. Avoid providing acknowledgments or links that may identify the authors.

Papers are to be submitted using the dedicated submission platform. The submission deadline is strict.

By submitting a manuscript, the authors guarantee that it has not been previously published or accepted for publication in a substantially similar form. CVPR rules regarding plagiarism, double submission, etc. apply.


In addition to presentations of the accepted papers and the featured ABC Challenge, we are excited to have the following keynote speakers at EarthVision 2023:

Marta Yebra, Australian National University

Remote sensing technologies to support wildfire management and secure our future


Dr. Yebra is an Associate Professor in Environmental Engineering at the Australian National University (ANU) and the Director of the ANU - Optus Bushfire Research Centre of Excellence, which is undertaking advanced interdisciplinary research to develop an innovative system that aims to detect bushfires as soon as they start and extinguish them within minutes. Her research focuses on developing applications of remote sensing for the management of fire risk and impact. Yebra led the development of the Australian Flammability Monitoring System, which reports landscape flammability across Australia in near real time. She is now designing Australia's first satellite mission to help forecast vulnerable areas where bushfires are at the highest risk of starting or burning out of control. She has served on a number of government advisory bodies, including the Australian Space Agency's Earth Observation Technical Advisory Group (since 2019) and the Australian Capital Territory Multi Hazards Council (since 2021). Dr. Yebra has received several awards for her contributions to bushfire management, including the Bushfire and Natural Hazards CRC's Outstanding Achievement in Research Utilization award (2019) and the inaugural Max Day Fellowship of the Australian Academy of Science (2017).
As climate change worsens fire weather and increases the forest area burned by fires worldwide, high-tech solutions are critical to changing how wildfires are battled. Remote sensing technologies provide access to dynamic, real-time information that helps fire managers better understand the likelihood of a catastrophic bushfire and the whereabouts and intensity of active fires, and accurately assess wildfire scale and environmental impacts. This remote-sensing-derived information is critical to plan, prepare for, and respond to future wildfires. In my talk, I will provide an overview of technological trends, including sensor integration, planned satellite Earth observation missions, and state-of-the-art modelling approaches.


Anima Anandkumar, Caltech, NVIDIA

Neural Operators for Weather Forecasting and Climate-Change Mitigation


Anima Anandkumar is a Bren Professor at Caltech and Director of ML Research at NVIDIA. She was previously a Principal Scientist at Amazon Web Services. She has received several honors, such as the Alfred P. Sloan Fellowship, the NSF CAREER Award, Young Investigator Awards from the DoD, and Faculty Fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum's Expert Network. She is passionate about designing principled AI algorithms and applying them in interdisciplinary applications. Her research focus is on unsupervised AI, optimization, and tensor methods.

Predicting extreme weather events in a warming world at fine scales is a grand challenge facing climate scientists. We depend on reliable predictions to plan for the disastrous impact of climate change and develop effective adaptation strategies. Deep learning (DL) offers novel methods that are potentially more accurate and orders of magnitude faster than traditional weather and climate models for predicting extreme events. The Fourier Neural Operator (FNO), a novel deep-learning method, has shown promising results for predicting complex systems, such as spatiotemporal chaos, turbulence, and weather phenomena. Our results show promise for large-scale DL potentially competing with state-of-the-art numerical weather prediction. In addition, we use FNO to model the multiphase flow systems used in Carbon Capture and Storage (CCS) systems, and we can provide highly accurate gas saturation and pressure buildup predictions under diverse reservoir conditions, geological heterogeneity, and injection schemes. The predictions are 700,000 times faster than traditional numerical simulations, with spatial resolutions exceeding most typical models run with existing simulators. The work presented here is a significant step toward building a reliable, high-fidelity, high-resolution digital twin of Earth for weather modeling and climate change mitigation.
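The core idea behind an FNO layer can be sketched in a few lines: transform the input to the frequency domain, apply a learned linear weight to a truncated set of low-frequency modes, and transform back. The sketch below is a minimal 1-D illustration with random stand-in weights, not the trained operator described in the talk; `n_modes` and the input signal are arbitrary assumptions.

```python
import numpy as np

# Minimal 1-D sketch of a Fourier Neural Operator spectral layer.
rng = np.random.default_rng(42)

n = 64          # spatial grid points
n_modes = 8     # number of retained low-frequency Fourier modes

x = np.sin(2 * np.pi * np.arange(n) / n)   # example input signal
# "Learned" complex spectral weights (random stand-ins here)
w = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)

# Forward FFT: real input of length n yields n//2 + 1 complex modes.
x_hat = np.fft.rfft(x)

# Scale the lowest n_modes modes by the weights; zero out the rest
# (spectral truncation keeps the operator resolution-independent).
y_hat = np.zeros_like(x_hat)
y_hat[:n_modes] = x_hat[:n_modes] * w

# Inverse FFT back to the spatial domain. A full FNO layer would add
# a pointwise linear path and a nonlinearity after this step.
y = np.fft.irfft(y_hat, n=n)
print(y.shape)
```

Because the learned weights act on Fourier modes rather than grid points, the same trained layer can be evaluated on finer grids, which is one reason FNOs scale to high-resolution weather fields.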

Diane Larlus, NAVER LABS Europe

Lifelong learning of visual representations


Diane Larlus is a principal research scientist at NAVER LABS Europe and leads a Chair on Lifelong representation learning within the MIAI research institute of Grenoble. Her current research focuses on self-supervised learning, continual learning and continual adaptation, and on visual search.


Computer vision has found its way into an increasingly large number of concrete applications in the robotics, remote sensing, and medical domains. One reason for this success is the development of large and powerful deep learning architectures that produce visual features generic enough to be applied directly to - or serve as the starting point of - a large variety of target computer vision tasks. Yet, training such generic architectures requires large-scale data, extensive human annotations, and heavy computational resources. In this presentation, we will discuss methods that reduce the training cost of transferable visual representations.

CVPR 2023

CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers.

Learn More: CVPR 2023