Conference Agenda

Overview and details of the sessions for this conference. Please select a date and a session for detailed view (with abstracts and downloads if available).

 
 
Session Overview
Session
S.6.5: URBAN & DATA ANALYSIS
Time:
Wednesday, 26/June/2024:
09:00 - 10:30

Session Chair: Prof. Mihai Datcu
Session Chair: Prof. Weiwei Guo
Room: Sala 1


58190 - EO Spatial Temporal Analysis & DL

Round table discussion

& Summary Preparation


Presentations
09:00 - 09:45
Oral
ID: 253 / S.6.5: 1
Dragon 5 Oral Presentation
Data Analysis: 58190 - Large-Scale Spatial-Temporal Analysis For Dense Satellite Image Series With Deep Learning

Advancing Deep Learning for Satellite Imagery: Self-Supervised Techniques and Domain Adaptation for Land and Urban Monitoring

Daniela Faur1, Weiwei Guo2, Zenghui Zhang3, Mihai Datcu1

1National University of Science and Technology Politehnica of Bucharest, Romania; 2Tongji University; 3Shanghai Jiao Tong University

The goal of our project is to create advanced deep learning algorithms and tools to harness the potential of dense satellite image time series (SITS). The designed algorithms target the automatic identification of patterns, relationships, and evolutionary dynamics, thereby facilitating a deeper and more straightforward understanding of the fundamental processes characterizing specific scenes and targets.

These techniques will be implemented in distinct scenarios: land monitoring for sustainable agriculture, linear deformation rate estimation for urban areas, and urban evolution monitoring in support of intelligent and sustainable urban information services.

We have developed innovative self-supervised pre-training methods for SAR and optical remote-sensing imagery. We introduced a multi-embedding contrastive pre-training approach that is both time and data efficient, focusing on learning image representation across multiple embedding feature levels instead of relying on data augmentation. Additionally, we have developed a 3D-MAE self-supervised representation learning strategy that combines SAR and optical image data into a single 3D tensor to facilitate concurrent feature learning. Given the challenges in accurately aligning SAR and optical images spatially and temporally, we have designed a self-supervised learning framework for SAR, utilizing optical data to enhance its effectiveness.
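The 3D-MAE idea above can be illustrated with a minimal sketch: co-registered SAR and optical acquisitions are stacked into a single 3D tensor, and non-overlapping spatial patches are randomly masked in MAE style before feature learning. All shapes, band counts, and the masking ratio here are illustrative assumptions, not the project's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical co-registered inputs (channels x H x W):
# 2 SAR channels (e.g. VV/VH) and 4 optical bands, both 64 x 64 pixels.
sar = rng.random((2, 64, 64))
optical = rng.random((4, 64, 64))

# Stack both modalities along the channel axis into one 3D tensor,
# so a masked-autoencoder-style model can learn joint features.
tensor_3d = np.concatenate([sar, optical], axis=0)  # shape (6, 64, 64)

def random_patch_mask(x, patch=16, ratio=0.75, rng=rng):
    """Hide a fraction of non-overlapping spatial patches (MAE-style)."""
    c, h, w = x.shape
    ph, pw = h // patch, w // patch
    n = ph * pw
    keep = rng.choice(n, size=int(n * (1 - ratio)), replace=False)
    mask = np.zeros((ph, pw), dtype=bool)
    mask.flat[keep] = True  # True = patch stays visible
    # Expand the patch-level mask to pixel resolution.
    visible = np.kron(mask, np.ones((patch, patch), dtype=bool))
    return x * visible, visible

masked, visible = random_patch_mask(tensor_3d)
```

The reconstruction target of the autoencoder would then be the hidden patches of the joint tensor, forcing the encoder to exploit cross-modal redundancy between SAR and optical data.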

We implemented new classification methods for domain adaptation that apply across various satellite imagery. Owing to the differences between SAR and optical images, and to the variations among SAR images from different platforms, we investigated domain adaptation strategies. We then introduced a framework based on adversarial learning for domain adaptation, incorporating prototype regularization to improve the clustering of data in the target domain.
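The prototype-regularization component can be sketched as follows: class prototypes are computed as per-class means of labelled source features, and a regularization term penalizes the distance from each unlabelled target feature to its nearest prototype, encouraging compact target-domain clusters. Feature dimensions, class count, and the exact loss form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical features: labelled source domain, unlabelled target domain.
src_feat = rng.normal(size=(100, 8))          # 100 source samples, 8-dim features
src_lab = rng.integers(0, 4, size=100)        # 4 hypothetical classes
tgt_feat = rng.normal(size=(50, 8))           # 50 target samples, no labels

# Class prototypes: per-class mean of source features.
protos = np.stack([src_feat[src_lab == c].mean(axis=0) for c in range(4)])

# Prototype-regularization term: mean squared distance from each
# target feature to its nearest prototype (pulls target samples
# toward source class centers, tightening target clusters).
d = ((tgt_feat[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # (50, 4)
reg_loss = d.min(axis=1).mean()
```

In the full framework this term would be added to the adversarial domain-confusion loss and minimized jointly with the classifier.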

We tackled the task of semantic segmentation of agricultural fields, using both optical and radar modalities. Our experiments made use of the PASTIS dataset, containing over 2.4k 128 × 128 time series, each acquisition encapsulating the 10 relevant Sentinel-2 (S2) bands (of the 13 bands provided by Sentinel-2, bands B1, B9, and B10 were excluded). We also experimented with its multimodal counterpart, PASTIS-R, which adds the corresponding Sentinel-1 (S1) time series in both ascending and descending orbits. We tested multiple fusion techniques for joint S1 + S2 prediction, drawing conclusions about the best approach, and proposed a technique to retrieve the most and least influential timestamps through a cross-attention module embedded in a mid-fusion strategy.
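The timestamp-influence idea can be sketched with a single-head cross-attention step: optical (S2) timestamp features query the radar (S1) sequence, and the attention mass each S1 timestamp receives, averaged over queries, serves as its influence score. Sequence length, feature dimension, and random projection weights are illustrative assumptions, not the actual trained module.

```python
import numpy as np

rng = np.random.default_rng(2)

T, D = 12, 16  # hypothetical: 12 acquisitions, 16-dim features per timestamp

s2_seq = rng.normal(size=(T, D))   # optical time-series features (queries)
s1_seq = rng.normal(size=(T, D))   # radar time-series features (keys/values)

# Single-head cross-attention: each S2 timestamp attends over all S1 timestamps.
q = s2_seq @ rng.normal(size=(D, D))
k = s1_seq @ rng.normal(size=(D, D))
scores = q @ k.T / np.sqrt(D)

# Row-wise softmax -> (T, T) attention weights.
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)

# Influence of each S1 timestamp: average attention mass it receives.
influence = attn.mean(axis=0)
ranking = np.argsort(influence)[::-1]  # most -> least influential timestamps
```

Low-influence timestamps (e.g. cloudy or noisy acquisitions) could then be identified and down-weighted or discarded.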

Moreover, a tomographic SAR processing chain operating on image time series acquired by the Sentinel-1 satellites was constructed and exploited. A two-scale processing scheme was applied, consisting of an initial lower-resolution preprocessing of the multitemporal dataset followed by a high-resolution analysis.
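The two-scale scheme can be sketched as follows: a coarse, block-averaged pass over the multitemporal stack screens for pixels with high temporal variability, and only those regions are passed to the expensive full-resolution analysis. Stack size, the downsampling factor, and the variance-based screening criterion are illustrative assumptions, not the actual tomographic pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical multitemporal SAR stack: 20 acquisitions, 256 x 256 pixels.
stack = rng.random((20, 256, 256))

def downsample(x, f=4):
    """Low-resolution view of the stack by spatial block averaging (factor f)."""
    t, h, w = x.shape
    return x.reshape(t, h // f, f, w // f, f).mean(axis=(2, 4))

# Scale 1: preprocess at lower resolution, e.g. temporal-variability screening.
coarse = downsample(stack)                 # (20, 64, 64)
activity = coarse.var(axis=0)              # per-pixel temporal variability

# Scale 2: select only the most active coarse cells for
# full-resolution analysis (here, the top 10%).
thr = np.quantile(activity, 0.9)
ys, xs = np.where(activity > thr)
```

The coarse pass thus bounds the cost of the high-resolution stage to the regions where the time series actually shows activity.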

Additionally, we created a comprehensive OpenSARUrban benchmark dataset using Sentinel data and introduced a collaborative annotation tool for humans and machines. This tool is designed to enhance the accuracy and efficiency of interpreting remote sensing imagery.



 
Conference: 2024 Dragon Symposium