EARTHVISION 2025

June 11th, Nashville, TN, USA

Aims and Scope / Call for Papers

Earth Observation (EO) and remote sensing are ever-growing fields of investigation where computer vision, machine learning, and signal/image processing meet. The general objective of the domain is to provide large-scale and consistent information about processes occurring at the surface of the Earth by exploiting data collected by airborne and spaceborne sensors. Earth Observation covers a broad range of tasks, from detection to registration, data mining, and multi-sensor, multi-resolution, multi-temporal, and multi-modality fusion and regression, to name just a few. It is motivated by numerous applications such as location-based services, online mapping services, large-scale surveillance, 3D urban modeling, navigation systems, natural hazard forecast and response, climate change monitoring, virtual habitat modeling, and food security. The sheer amount of data calls for highly automated scene interpretation workflows.

Earth Observation, and in particular the analysis of spaceborne data, directly connects to 34 out of 40 indicators (29 targets and 11 goals) of the Sustainable Development Goals defined by the United Nations (https://sdgs.un.org/goals). EarthVision's aim of advancing the state of the art in machine learning-based analysis of remote sensing data is thus of high relevance. It also connects to other immediate societal challenges such as the monitoring of forest fires and other natural hazards, urban growth, deforestation, and climate change.

A non-exhaustive list of topics of interest includes the following:

  • Super-resolution in the spectral and spatial domain

  • Hyperspectral and multispectral image processing

  • Reconstruction and segmentation of optical and LiDAR 3D point clouds

  • Feature extraction and learning from spatio-temporal data 

  • Analysis of UAV / aerial and satellite images and videos

  • Deep learning tailored for large-scale Earth Observation

  • Domain adaptation, concept drift, and the detection of out-of-distribution data

  • Data-centric machine learning

  • Evaluating models using unlabeled data

  • Self-, weakly, and unsupervised approaches for learning with spatial data

  • Foundation models and representation learning in the context of EO

  • Human-in-the-loop and active learning

  • Multi-resolution, multi-temporal, multi-sensor, multi-modal processing

  • Fusion of machine learning and physical models

  • Explainable and interpretable machine learning in Earth Observation applications

  • Uncertainty quantification of machine learning-based predictions from EO data

  • Applications for climate change, sustainable development goals, and geoscience

  • Public benchmark datasets: training data standards, testing & evaluation metrics, as well as open-source research and development

All manuscripts will be subject to a double-blind review process. Accepted EarthVision papers will be included in the CVPR 2025 workshop proceedings (published open access on the Computer Vision Foundation website) and submitted to IEEE for publication in IEEE Xplore. Publication in IEEE Xplore will be granted only if the paper meets IEEE publication policies and procedures.

Important Dates

All deadlines are end of day, Anywhere on Earth (AoE).

March 3, 2025 – Submission deadline
March 31, 2025 – Notification to authors
April 7, 2025 – Camera-ready deadline
June 11, 2025 – Workshop

Program

9:00-9:15

Welcome and Awards Announcement

9:15-10:00

Keynote 1 – Kristof Van Tricht, VITO

“From the Research Lab to a Global Map: Balancing Innovation with Operational Reality”

Abstract

Over the past decade, the field of Earth Observation has witnessed a remarkable transformation in how large-scale applications like global crop type mapping are approached. From carefully crafted, expert-driven systems to pipelines powered by geospatial foundation models, the pace of innovation has reshaped both expectations and possibilities. WorldCereal exemplifies this shift—evolving from a classification system built around stable, interpretable feature engineering to embracing pretrained models and their self-supervised learning capabilities. Yet, this evolution brings a persistent tension: staying at the forefront of scientific and technological progress while maintaining the operational robustness required to deliver global, real-world results. This talk reflects on that journey from a practitioner’s perspective, and the considerations that guide adoption in an ever-accelerating landscape.

Bio

Kristof Van Tricht is a senior researcher at VITO Remote Sensing, where he specializes in agricultural applications. With a focus on both global and local agriculture monitoring, he leverages advanced remote sensing technologies to address food security challenges. His work spans the development and deployment of crop mapping algorithms, and the development of the CropSAR technology, which fuses Sentinel-1 and Sentinel-2 observations to provide cloud-free data streams. His research projects cover a wide spectrum, from regional farming studies to large-scale initiatives like the ESA WorldCereal project, always aiming to bridge the gap between cutting-edge machine learning technologies, the reality of large-scale operationalization, and the science of remote sensing.

10:00-10:30

Morning Coffee Break

10:30-11:15

Keynote 2 – Sherrie Wang, Massachusetts Institute of Technology

“Scaling Ground Truth to Advance Earth Observation”

Abstract

The rapid progress of machine learning, exemplified by breakthroughs like large language models, has been powered by massive, diverse datasets that capture human knowledge—such as the vast troves of text data available on the internet. We hypothesize that similarly transformative potential exists in Earth observation (EO), where deep learning models can unlock insights from complex real-world imagery, provided they have access to large, richly labeled datasets. However, creating such datasets presents unique challenges in scale and specificity. This talk explores a few approaches to scaling ground truth for EO applications, including using roadside imagery for crop type mapping, leveraging simulations for GHG plume detection, and mining OpenStreetMap annotations for object detection. Together, these examples illustrate how innovating methods for dataset creation can drive new insights and capabilities in EO.


Bio

Sherrie Wang is an Assistant Professor at MIT in the Department of Mechanical Engineering and Institute of Data, Systems, and Society. Her research uses novel data and computational algorithms to monitor our planet and enable sustainable development. Her focus is on improving agricultural management and mitigating climate change, especially in low- or middle-income regions of the world. To this end, she frequently uses satellite imagery, crowdsourced data, LiDAR, and other spatial data. Due to the scarcity of ground truth data in these regions and the noisiness of real-world data in general, her methodological work is geared toward developing machine learning methods that work well with these constraints.

11:15-11:45

Poster spotlights I

  • “SARFormer – An Acquisition Parameter Aware Vision Transformer for Synthetic Aperture Radar Data”
    Jonathan Prexl (University of the Bundeswehr Munich); Michael Recla (University of the Bundeswehr Munich)*; Michael Schmitt (University of the Bundeswehr Munich)
  • “Better Coherence, Better Height: Fusing Physical Models and Deep Learning for Forest Height Estimation from Interferometric SAR Data”
    Ragini Mahesh (German Aerospace Center – DLR)*; Ronny Hänsch (German Aerospace Center – DLR)
  • “Hybrid AI–Physical Modeling for Penetration Bias Correction in X-band InSAR DEMs: A Greenland Case Study”
    Islam Mansour (German Aerospace Center (DLR) & ETH Zurich)*; Georg Fischer (German Aerospace Center (DLR)); Ronny Hänsch (German Aerospace Center (DLR)); Irena Hajnsek (German Aerospace Center (DLR) & ETH Zurich)
  • “Explainable Physical PolSAR Autoencoders for Soil Moisture Estimation”
    Nikita Basargin (DLR)*; Alberto Alonso-González (UPC); Irena Hajnsek (DLR)
  • “TerraMesh: A Planetary Mosaic of Multimodal Earth Observation Data”
    Benedikt Blumenstiel (IBM Research Europe)*; Paolo Fraccaro (IBM Research Europe); Valerio Marsocci (European Space Agency Φ-lab); Johannes Jakubik (IBM Research Europe); Stefano Maurogiovanni (Julich Supercomputing Centre); Mikolaj Czerkawski (European Space Agency Φ-lab); Rocco Sedona (Julich Supercomputing Centre); Gabriele Cavallaro (Julich Supercomputing Centre); Thomas Brunschwiler (IBM Research Europe); Juan Bernabe Moreno (IBM Research Europe); Nicolas Longepe (European Space Agency Φ-lab)
  • “LADI v2: Multi-label Dataset and Classifiers for Low-Altitude Disaster Imagery”
    Samuel Scheele (MIT Lincoln Laboratory); Katherine Picchione (NASA); Jeffrey Liu (MIT Lincoln Laboratory)*
  • “S-EO: A Large-Scale Dataset for Geometry-Aware Shadow Detection in Remote Sensing Applications”
    Elías Masquil (Universidad de la República)*; Roger Marí (Eurecat); Thibaud Ehret (AMIAD); Enric Meinhardt-Llopis (ENS Paris-Saclay); Pablo Musé (Universidad de la República); Gabriele Facciolo (ENS Paris-Saclay)
  • “WikiRS: Learning Ecological Representation of Satellite Images from Weak Supervision with Species Observations and Wikipedia”
    Valerie Zermatten (EPFL)*; Javiera Castillo-Navarro (CNAM); Pallavi Jain (INRIA); Diego Marcos (INRIA); Devis Tuia (EPFL)
  • “SSL4Eco: A Global Seasonal Dataset for Geospatial Foundation Models in Ecology”
    Elena Plekhanova (Swiss Federal Institute for Forest, Snow and Landscape Research, WSL)*; Damien Damien (University of Zurich); Johannes Dollinger (University of Zurich); Emilia Arens (University of Zurich); Philipp Brun (Swiss Federal Institute for Forest, Snow and Landscape Research, WSL); Jan Dirk Wegner (University of Zurich); Niklaus E. Zimmermann (Swiss Federal Institute for Forest, Snow and Landscape Research, WSL)
  • “Mapping biodiversity at very-high resolution in Europe”
    Cesar Leblanc (INRIA); Lukas Picek (INRIA / University of West Bohemia)*; Rémi Palard (CIRAD); Benjamin Deneu (WSL); Maximilien Servajean (LIRMM); Pierre Bonnet (CIRAD); Alexis Joly (INRIA)

11:45-13:15

Lunch Break

13:15-13:45

Poster spotlights II

  • “Distribution Shifts at Scale: Out-of-distribution Detection in Earth Observation”
    Burak Ekim (University of the Bundeswehr Munich)*; Girmaw Abebe Tadesse (Microsoft AI for Good Research Lab); Caleb Robinson (Microsoft AI for Good Research Lab); Gilles Hacheme (Microsoft AI for Good Research Lab); Michael Schmitt (University of the Bundeswehr Munich); Rahul Dodhia (Microsoft AI for Good Research Lab); Juan M. Lavista Ferres (Microsoft AI for Good Research Lab)
  • “Bridging Classical and Modern Computer Vision: PerceptiveNet for Tree Crown Semantic Segmentation”
    Georgios Voulgaris (University of Oxford)*
  • “CoDEx: Combining Domain Expertise for Spatial Generalization in Satellite Image Analysis”
    Abhishek Kuriyal (Ecole nationale des ponts et chaussées)*; Elliot Vincent (École des ponts ParisTech / Inria / IGN); Mathieu Aubry (Ecole nationale des ponts et chaussées); Loic Landrieu (Ecole nationale des ponts et chaussées)
  • “Task-Informed Meta-Learning for Remote Sensing”
    Gabriel Tseng (NASA Harvest)*; Hannah Kerner (Arizona State University); David Rolnick (McGill University)
  • “AerOSeg: Harnessing SAM for Open-Vocabulary Segmentation in Remote Sensing Images”
    Saikat Dutta (IIT Bombay)*; Akhil Vasim (IIT Bombay); Siddhant Gole (IIT Bombay); Hamid Rezatofighi (Monash University); Biplab Banerjee (IIT Bombay)
  • “Visual Question Answering on Multiple Remote Sensing Image Modalities”
    Hichem Boussaid (Université Paris Cité (LIPADE))*; Lucrezia Tosato (Université Paris Cité (LIPADE), DTIS ONERA The french aerospace lab); Flora Weissgerber (DTIS, Onera The french aerospace lab); Camille Kurtz (Université Paris Cité (LIPADE)); Laurent Wendling (Université Paris Cité (LIPADE)); Sylvain Lobry (Université Paris Cité (LIPADE))
  • “FrogDogNet: Fourier frequency Retained visual prompt Output Guidance for Domain Generalization of CLIP in Remote Sensing”
    Hariseetharam Gunduboina (Indian Institute of Technology, Bombay)*; Muhammad Haris Khan (Mohamed bin Zayed University of Artificial Intelligence); Biplab Banerjee (Indian Institute of Technology, Bombay)
  • “REJEPA: A Novel Joint-Embedding Predictive Architecture for Efficient Remote Sensing Image Retrieval”
    Shabnam Choudhury (Indian Institute of Technology Bombay)*; Yash Salunkhe (Indian Institute of Technology Bombay); Sarthak Mehrotra (Indian Institute of Technology Bombay); Biplab Banerjee (Indian Institute of Technology Bombay)
  • “Scale-Invariant Implicit Neural Representations For Object Counting”
    Siyuan Xu (Texas A&M University)*; Yucheng Wang (Texas A&M University); Xihaier Luo (Brookhaven National Laboratory); Byung-Jun Yoon (Texas A&M University); Xiaoning Qian (Texas A&M University)
  • “s2p-hd: Gpu-Accelerated Binocular Stereo Pipeline for Large-Scale Same-Date Stereo”
    Tristan Amadei (ENS Paris-Saclay, Centre Borelli)*; Enric Meinhardt-Llopis (ENS Paris-Saclay, Centre Borelli); Carlo de Franchis (Kayrros; ENS Paris-Saclay, Centre Borelli); Jeremy Anger (ENS Paris-Saclay, Centre Borelli); Thibaud Ehret (AMIAD); Gabriele Facciolo (ENS Paris-Saclay, Centre Borelli)

13:45-14:15

Best paper presentations

  • Detecting Looted Archaeological Sites from Satellite Image Time Series
    Elliot Vincent (Ecole nationale des ponts et chaussées / Inria / IGN)*; Mehraïl Saroufim (Iconem); Jonathan Chemla (Iconem); Yves Ubelmann (Iconem); Philippe Marquis (DAFA, French archaeological delegation in Afghanistan); Mathieu Aubry (Ecole nationale des ponts et chaussées); Jean Ponce (Ecole normale supérieure-PSL / New York University)
  • Panopticon: Advancing Any-Sensor Foundation Models for Earth Observation
    Ando Shah (UC Berkeley)*
    Leonard Waldmann (Technical University of Munich); Yi Wang (Technical University of Munich); Nils Lehmann (Technical University of Munich); Adam Stewart (Technical University of Munich); Zhitong Xiong (Technical University of Munich); Stefan Bauer (Helmholtz Munich); Xiaoxiang Zhu (Technical University of Munich); John Chuang (UC Berkeley)

14:15-15:15

Embed2Scale Challenge

15:15-15:45

Afternoon coffee break

15:45-16:30

Keynote 3 – Patrick Beukema, Allen.ai

“Closing the Gap: Building Planetary AI Systems for Real-World Users”

Abstract

Earth observation (EO) data is indispensable for tackling urgent global challenges such as rising sea levels, wildfires, food security, climate change, and disaster relief. Yet building EO-based models that are both high-performing and globally applicable remains technically difficult. These models often lack robustness across diverse real-world tasks and are rarely updated or maintained. We will describe how AI2 delivers performant, resource-efficient, and reliable EO models in near real-time using a diverse set of sources—from NASA and ESA satellites to streaming GPS data. We’ll explore what it means to build research from a production mindset, and how this approach helps close the gap between the lab and real-world impact by centering users from the start. Finally, we will introduce Earth System, a new initiative at AI2 focused on large-scale, highly multimodal planetary foundation models, and the infrastructure needed to tune and deploy them at scale. Earth System will be a fully open-source platform designed to support the entire lifecycle of planetary modeling and deployment—enabling resource-efficient research, scalable development, and impactful real-world applications.

Bio

Patrick Beukema is a Principal Engineer and Scientist leading AI efforts within the Applied Science division at the Allen Institute for Artificial Intelligence (Ai2). He directs initiatives focused on large-scale, mixed-modality planetary modeling, supporting real-world applications in climate science, wildfire management, agriculture, maritime intelligence, conservation, and ecosystem health. Patrick holds a Ph.D. in neuroscience from the Center for the Neural Basis of Cognition (Carnegie Mellon and University of Pittsburgh), where he studied neural plasticity and neural decoding. Prior to Ai2, he worked across startups, tech, and academia on topics including recurrent neural networks, causal inference, and applied machine learning. Affiliation: Allen Institute for Artificial Intelligence (Ai2).

16:30-17:30

Poster Session (poster boards #419 – #443)

19:00 onwards

Workshop dinner

Challenge

This year’s data challenge, organized by the Embed2Scale consortium, revolves around lossy neural compression for geospatial analytics. Specifically, participants are asked to develop an encoder that compresses SSL4EO-S12 data cubes down to 1024 features (embeddings), which are then evaluated on a hidden set of applications (downstream tasks, in the terminology of the foundation model literature).

You will have three weeks in March 2025 to develop your encoder and test the quality of your embeddings by submitting them through the Eval.AI challenge portal, using data available on HuggingFace. During the first week of April, you may make three submissions within three days on a separate dataset, made public on HuggingFace two days beforehand. This phase determines the final leaderboard and the challenge winner, who will present their solution at the workshop in Nashville, either in person or remotely depending on availability.
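To make the submission format concrete, the following is a minimal sketch of the encoder interface described above: a function that maps a multispectral data cube to a 1024-dimensional embedding. All names and array sizes here are illustrative assumptions, not part of the official challenge API; a real submission would use a learned (e.g. self-supervised) encoder rather than the toy statistics-plus-random-projection used here.

```python
import numpy as np

# Hypothetical input: a multispectral data cube (channels x height x width),
# loosely modeled on SSL4EO-S12-style Sentinel-2 patches; sizes are illustrative.
rng = np.random.default_rng(0)
cube = rng.normal(size=(13, 264, 264)).astype(np.float32)

def encode(cube: np.ndarray, dim: int = 1024) -> np.ndarray:
    """Toy encoder: per-channel global statistics followed by a fixed
    random projection up to `dim` features (the challenge's embedding size)."""
    # Crude, compression-friendly summary: per-channel mean and std.
    feats = np.concatenate([cube.mean(axis=(1, 2)), cube.std(axis=(1, 2))])
    # Fixed random projection to the required embedding dimensionality.
    proj = np.random.default_rng(42).normal(size=(feats.size, dim))
    return feats @ proj / np.sqrt(feats.size)

embedding = encode(cube)
print(embedding.shape)  # (1024,)
```

The key constraint the sketch illustrates is that, whatever the internals of the encoder, its output per data cube must be a fixed-length vector of 1024 features; the hidden downstream tasks then operate only on these embeddings.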

Important Dates:

  • Development phase: Mar 10-31

  • Testing phase: Apr 3-5

  • Winner presentation: At the EarthVision WS

Webpage: eval.ai/web/challenges/challenge-page/2465

Challenge data on HuggingFace

Development Phase ended, Final Submission for Ranking ahead!

The development phase has closed. Now get ready to compress approx. 90 GB from huggingface.co/datasets/embed2scale/SSL4EO-S12-downstream/tree/main/data_eval (available for download Apr 1) for submission from Apr 3 through Apr 5.
Your scores will determine the final ranking and our winners (cf. below). We are excited to see your solutions; best of luck!

About CVPR EARTHVISION winners

In addition to the winner determined according to github.com/DLR-MF-DAS/embed2scale-challenge-supplement?tab=readme-ov-file#leader…, we have decided to also invite the team with the highest `q_mean` score to present their solution at the CVPR EarthVision workshop. Each of the winning teams will receive a cash prize of EUR 1,000 to support their trip to Nashville, TN, USA for the presentation. On April 7, 2025, we will contact the two winning teams via the email address they provided to Eval.AI.

And even should you not win the thing …

… we are so grateful you participated! We are also very happy to link to the code of your solution. If you would like that, please open a corresponding issue at github.com/DLR-MF-DAS/embed2scale-challenge-supplement/issues.

 

Organizers

  • Ronny Hänsch, German Aerospace Center, Germany
  • Devis Tuia, EPFL, Switzerland
  • Jan Dirk Wegner, University of Zurich & ETH Zurich, Switzerland
  • Nathan Jacobs, Washington University in St. Louis, USA
  • Loïc Landrieu, ENPC ParisTech, France
  • Charlotte Pelletier, UBS Vannes, France
  • Hannah Kerner, Arizona State University, USA

Technical Committee

  • Aayush Dhakal, Washington University in St Louis
  • Aimi Okabayashi, Université Bretagne Sud
  • Alex Levering, VU Amsterdam
  • Alexandre Xavier Falcao, IC-UNICAMP
  • Amanda Bright, National Geospatial-Intelligence Agency
  • Anastasia Schlegel, DLR
  • Ankit Jha, The LNM Institute of Information Technology, Jaipur
  • Antoine Bralet, UBS/IRISA
  • Begum Demir, TU Berlin
  • Biplab Banerjee, Indian Institute of Technology, Bombay
  • Caleb Robinson, Microsoft
  • Camille Couprie, Facebook
  • Camille Kurtz, Université Paris Cité
  • Christian Heipke, Leibniz Universität Hannover
  • Christopher R Ratto, JHUAPL
  • Claudio Persello, University of Twente
  • Clement Mallet, IGN, France
  • Conrad M Albrecht, German Aerospace Center
  • Corentin Dufourg, Univ. Bretagne Sud / IRISA
  • Dalton Lunga, Oak Ridge National Laboratory
  • Damien Robert, University of Zurich
  • Daniel Iordache, VITO, Belgium
  • Diego Marcos, Inria
  • Dimitri Gominski, University of Copenhagen
  • Elliot Vincent, Ecole nationale des ponts et chaussées / Inria / IGN
  • Emanuele Dalsasso, EPFL
  • Esther Rolf, Harvard University
  • Ewelina Rupnik, Univ Gustave Eiffel, LASTIG, ENSG-IGN
  • Ferda Ofli, Qatar Computing Research Institute, HBKU
  • Franz Rottensteiner, Leibniz Universitat Hannover, Germany
  • Gabriel Tseng, NASA Harvest
  • Gabriele Moser, Università di Genova
  • Gedeon Muhawenayo, Arizona State University
  • Gemine Vivone, CNR-IMAA
  • Gencer Sumbul, EPFL
  • Georgios Voulgaris, University of Oxford
  • Guillaume Astruc, ENPC/IGN
  • Gülşen Taşkın, İstanbul Teknik Üniversitesi
  • Gustau Camps-Valls, Universitat de València
  • Helmut Mayer, Bundeswehr University Munich
  • Islam Mansour, German Aerospace Center (DLR) & ETH Zurich
  • Jacob Arndt, Oak Ridge National Laboratory
  • Jakob Gawlikowski, German Aerospace Center (DLR)
  • Javiera Castillo Navarro, Cnam
  • Joëlle Hanna, University of St. Gallen
  • Jonathan Giezendanner, MIT
  • Linus M. Scheibenreif, University of St. Gallen
  • Lukas Drees, University of Zurich
  • M. Usman Rafique, Kitware Inc.
  • Marc Rußwurm, Wageningen University
  • Mareike Dorozynski, Institute of Photogrammetry and Geoinformation
  • Martin Weinmann, Karlsruhe Institute of Technology
  • Matt Leotta, Kitware
  • Michael Mommert, Stuttgart University of Applied Sciences
  • Michael Schmitt, University of the Bundeswehr Munich
  • Miguel-Ángel Fernández-Torres, Universidad Carlos III de Madrid
  • Minh-Tan Pham, IRISA-UBS
  • Myron Z Brown, JHU
  • Nicolas Audebert, IGN
  • Nikolaos Dionelis, ESA
  • Ragini Bal Mahesh, German Aerospace Center – DLR
  • Ramesh S. Nair, Planet Labs PBC
  • Roberto Interdonato, CIRAD
  • Scott Workman, DZYNE Technologies
  • Sophie Giffard-Roisin, Univ. Grenoble Alpes
  • Srikumar Sastry, Washington University in St Louis
  • Subit Chakrabarti, Floodbase
  • Sylvain Lobry, Université Paris Cité
  • Tanya Nair, Floodbase
  • Tatsumi Uezato, Hitachi, Ltd
  • Teng Wu, Univ Gustave Eiffel, ENSG, IGN, LASTIG, F-94160 Saint-Mandé
  • Thibaud Ehret, AMIAD, Pôle recherche
  • Utkarsh Mall, Columbia University
  • Valerio Marsocci, KU Leuven
  • Yohann PERRON, LIGM ENPC
  • Zhijie Zhang, University of Arizona
  • Zhuangfang Yi, Regrow

Challenge Sponsors

 

Affiliations


Submissions

1. Prepare an anonymous, 8-page submission (references excluded) using the ev2025-template and following the paper guidelines.

2. Submit at cmt3.research.microsoft.com/EarthVision2025. Please submit via this link only.

Policies

A complete paper should be submitted using the EarthVision templates provided above.

Reviewing is double-blind, i.e., authors do not know the names of the reviewers and reviewers do not know the names of the authors. Please read Section 1.7 of the example paper EarthVision2025.pdf for detailed instructions on how to preserve anonymity. Avoid providing acknowledgments or links that may identify the authors.

Papers are to be submitted using the dedicated submission platform: cmt3.research.microsoft.com/EarthVision2025.

The submission deadline is strict.

By submitting a manuscript, the authors guarantee that it has not been previously published or accepted for publication in a substantially similar form. CVPR rules regarding plagiarism, double submission, etc. apply.

The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.

CVPR 2025

CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers.

Learn More: CVPR 2025