Conference Agenda
Overview and details of the sessions of this conference. Please select a date or location to show only the sessions on that day or at that location, or select a single session for a detailed view (with abstracts and downloads where available).
Please note that all times are shown in the time zone of the conference (CEST).
Session 5 - Learning from Raw SAR Data and Onboard Processing
Presentations
4:00pm - 4:15pm
ID: 141 Learning from Raw SAR Data: AI-Based Vessel and RFI Detection with the OpenSAR Insight Dataset
1University of Alcalá, Spain; 2Instituto Nacional de Técnica Aeroespacial (INTA), Madrid, Spain; 3Indra Deimos UK LTD, Oxford, United Kingdom; 4Indra Deimos SRL, Bucharest, Romania; 5ESA ϕ-lab, ESRIN, Frascati, Italy
Recent advances in Artificial Intelligence (AI) are opening new possibilities for the automated exploitation of Synthetic Aperture Radar (SAR) data in time-critical Earth Observation applications. However, the development of robust AI models for SAR analysis remains strongly constrained by the limited availability of standardized, well-curated, and labelled datasets, particularly when considering the use of raw SAR data. This work presents recent results obtained using the OpenSAR Insight dataset, an AI-ready SAR dataset specifically designed to support machine learning research on Sentinel-1 Interferometric Wide Swath (IW) acquisitions. The dataset includes paired patches derived from both focused Level-1 products (GRD and SLC) and the corresponding raw Level-0 data, enabling the systematic evaluation of AI models at different stages of the SAR processing chain. The dataset provides annotated samples for multiple representative Earth observation tasks, including maritime vessel detection, radio frequency interference (RFI) identification, and water body detection. These tasks cover relevant application domains such as maritime domain awareness and monitoring the integrity of SAR acquisitions. Deep learning methods have been evaluated on these datasets; the experiments demonstrate the capability of such models to extract relevant features from SAR data and provide initial baseline results. Beyond the specific results presented, this work highlights the importance of curated and traceable datasets linking raw and focused SAR data to enable the development, benchmarking, and reproducibility of AI methods for radar applications.
These datasets open new research directions for AI-driven SAR exploitation and represent a key step toward future onboard processing concepts, where detection algorithms may operate earlier in the SAR processing chain to reduce latency and data transmission requirements.

4:15pm - 4:30pm
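As an illustration of how paired raw/focused patches such as those in OpenSAR Insight might be consumed, the sketch below pairs Level-0 and Level-1 patches by filename stem. The directory layout (`raw/`, `grd/`), the `.npy` file format, and the pairing convention are assumptions made for this example, not the actual dataset structure.

```python
# Sketch: iterate over paired Level-0 / Level-1 patches, assuming a
# hypothetical layout where matching patches share a filename stem:
#   <root>/raw/<id>.npy   (raw Level-0 patch)
#   <root>/grd/<id>.npy   (corresponding focused GRD patch)
# This layout is illustrative, not the OpenSAR Insight specification.
from pathlib import Path
import numpy as np

def load_paired_patches(root):
    """Yield (raw_patch, focused_patch, patch_id) for every stem present
    in both the raw/ and grd/ subdirectories; unpaired patches are skipped."""
    raw_dir, grd_dir = Path(root) / "raw", Path(root) / "grd"
    for raw_file in sorted(raw_dir.glob("*.npy")):
        grd_file = grd_dir / raw_file.name
        if grd_file.exists():
            yield np.load(raw_file), np.load(grd_file), raw_file.stem
```

Keeping raw and focused patches linked by a shared identifier is what makes evaluation at different stages of the processing chain traceable.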
ID: 129 AI-Based SAR Image Formation from Sentinel-1 Level-0 Data
Starion Group Italia S.p.A., Italy
The growth of Earth Observation archives and the increasing demand for low-latency information products motivate research into alternative processing paradigms for Synthetic Aperture Radar (SAR). SAR measurements, acquired as complex radar echoes, must be processed into a focused image before they can be exploited. Conventional focusing algorithms such as the Range Doppler Algorithm (RDA) provide high-fidelity results but involve computationally demanding processing and require precise information about the satellite’s position and timing. These constraints confine data processing to ground segments and motivate investigations into reducing computational complexity to support onboard processing concepts. Building on previous work on AI-based SAR focusing of Sentinel-1 raw data, we explore learning approaches for SAR image formation using StripMap Level-0 data. We implement a simplified RDA-derived focusing chain to generate reference Single Look Complex (SLC) products and supervised training pairs. Level-0 echoes are processed through range compression and then paired with the corresponding RDA-focused SLC outputs to produce fixed-size complex training patches represented through real and imaginary channels. SAR image formation is treated as a supervised learning problem, in which neural networks are trained to map range-compressed inputs to focused SLC outputs using a fully convolutional encoder–decoder (U-Net) architecture. Training employs the Huber loss function to improve robustness to the high dynamic range of SAR signals and to sparse bright scatterers. On a pilot dataset derived from 11 StripMap acquisitions producing 495 patches, the trained model achieves an accuracy of 0.65, a Structural Similarity Index (SSIM) of 0.43, and a Peak Signal-to-Noise Ratio (PSNR) of 22.5 dB.
Given the limited dataset size, these results remain susceptible to overfitting, and broader generalisation is still under assessment. To support systematic experimentation and dataset scaling, we implement dataset generation as an automated Python pipeline driven by YAML configuration files. Sentinel-1 Level-0 products are queried and downloaded from the Copernicus Data Space Ecosystem catalogue using spatio-temporal filters. The retrieved data are range-compressed, divided into patches, and automatically selected based on SAR-specific metrics such as land–water contrast and coastline boundary strength. The pipeline enables reproducible dataset construction and is currently being used to assemble a larger dataset from about one hundred StripMap acquisitions. These results confirm the feasibility of AI-based SAR image formation from real Level-0 data and highlight the potential of learning approaches to approximate conventional focusing. Future work will evaluate generalisation on larger datasets, extend experiments to additional acquisition modes, and explore integration within advanced SAR processing pipelines.

4:30pm - 4:45pm
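Two ingredients of the training setup described above can be sketched with their standard definitions: representing a complex SLC patch as separate real and imaginary channels, and the Huber loss, which is quadratic for small residuals and linear for large ones. This is a plain NumPy illustration of the textbook formulas, not the authors' implementation.

```python
import numpy as np

def complex_to_channels(patch):
    """Stack real and imaginary parts of a complex patch as two channels,
    the representation used for the fixed-size training patches."""
    return np.stack([patch.real, patch.imag], axis=0)

def huber_loss(pred, target, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails. The linear
    tails damp the influence of sparse bright scatterers, whose residuals
    would dominate a pure squared-error loss."""
    err = np.abs(pred - target)
    quad = np.minimum(err, delta)          # quadratic part, capped at delta
    return np.mean(0.5 * quad**2 + delta * (err - quad))
```

The threshold `delta` controls where the loss switches from squared-error to absolute-error behaviour; its value here is a generic default, not a parameter reported in the abstract.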
ID: 140 TOWER-CHECK: Toward On-Board SAR Damage Screening of Power Transmission Towers with a Chip-Based Processing Chain
1Planetek Italia s.r.l., Italy; 2Geophysical Applications Processing (GAP) s.r.l., Italy; 3Politecnico di Bari, Bari, Italy; 4Università degli Studi di Bari Aldo Moro, Bari, Italy; 5Italian Space Agency (ASI), Rome, Italy
Rapid situational awareness on the condition of high-voltage power transmission towers and lines is essential to support emergency response and network restoration after extreme events, when ground or aerial inspections may be unsafe, slow, or limited in coverage. Building on the TOWER-CHECK processing chain, we present an end-to-end SAR-based workflow enabling near-real-time screening of potential structural damage over wide areas, with a clear pathway toward future on-board deployment. The workflow starts from multi-temporal COSMO-SkyMed Stripmap data (meter-class resolution), exploiting repeated acquisition geometry for accurate co-registration and robust radiometric change analysis. Pre-event radiometry is stabilized by temporally averaging multiple pre-event images, improving target-to-background contrast while preserving native resolution and avoiding the degradation typical of spatial speckle filtering. SAR focusing is performed while preserving the native radar geometry (range/azimuth), minimizing unnecessary resampling artefacts. Targets are processed through a chip-based strategy: for each tower or line span, target-adaptive bounding boxes are defined and refined from observed footprints, ensuring that pre- and post-event chips represent the same scene portion. This reduces background variability and concentrates computation on the infrastructure elements of interest. For power lines, an explicit visibility constraint is introduced by selecting spans that satisfy an azimuth-alignment condition (≈±10°), enabling consistent bright-spot detection under favourable geometries.
Each co-registered chip pair is represented through a physically interpretable six-dimensional radiometric feature vector, robust to changes in chip size and suitable for analysing scattering variations induced by tower collapse or removal. Two lightweight classifiers, a Support Vector Machine (SVM) and a Multi-Layer Perceptron (MLP), are evaluated, with the MLP achieving excellent performance (overall accuracies up to 100%), enabling high-throughput damage screening. To address the scarcity of real damaged samples, we incorporate a physics-based SAR simulation framework to generate synthetic signatures of intact and collapsed towers for training and stress-testing the classifiers. Finally, we discuss workflow readiness for low-latency and on-board processing concepts. Standard focusing algorithms (Range–Doppler and Omega-K) and exploratory blind-focusing approaches are tested. A dedicated study investigates the porting of processing blocks to an onboard computing platform using NASA’s open-source cFS (core Flight System). Execution is evaluated on simulated (QEMU RISC-V) and real hardware (PolarFire SoC FPGA Icicle, Intel Core Ultra 7 155H) to quantify resource usage and timing. The ultimate objective is to reduce latency and downlink volume while strengthening EO support to societal resilience.
Funding: This research was carried out within the TOWER-CHECK project, “TOWER-CHECK: Monitoraggio real-time di tralicci con tecniche di IA a bordo di piattaforme satellitari SAR” (TOWER-CHECK: real-time monitoring of towers using on-board AI techniques on SAR satellite platforms). The project was co-managed and funded by the Italian Space Agency (ASI) under Contract No. 2023-5-E.0 - Subcontract N. P22S2240-15-v0, CUP F93D23000200001. The work makes use of COSMO-SkyMed products © of the Italian Space Agency (ASI), provided under an ASI license to use.

4:45pm - 5:00pm
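The chip-pair feature extraction above could be sketched as follows. The abstract does not list the six radiometric features, so the statistics below (mean backscatter, peak-to-mean contrast, backscatter ratio, chip correlation) are plausible chip-size-invariant placeholders, not the actual TOWER-CHECK feature set.

```python
import numpy as np

def chip_features(pre_chip, post_chip, eps=1e-6):
    """Illustrative six-dimensional radiometric feature vector for a
    co-registered pre/post-event chip pair. These statistics are guesses
    at what a chip-size-robust feature set might contain; the real
    TOWER-CHECK features are not specified in the abstract."""
    pre, post = np.abs(pre_chip), np.abs(post_chip)
    return np.array([
        pre.mean(),                                 # mean backscatter, pre-event
        post.mean(),                                # mean backscatter, post-event
        pre.max() / (pre.mean() + eps),             # peak-to-mean contrast, pre
        post.max() / (post.mean() + eps),           # peak-to-mean contrast, post
        (post.mean() + eps) / (pre.mean() + eps),   # mean backscatter ratio
        np.corrcoef(pre.ravel(), post.ravel())[0, 1],  # chip correlation
    ])
```

A collapsed tower would be expected to lower the post-event contrast and backscatter ratio relative to an intact one, which is what a downstream SVM or MLP would learn to separate.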
ID: 142 AI-Enabled Detection of Transmission Tower Damage from Raw COSMO-SkyMed SAR Data
1University of Bari Aldo Moro, Bari, Italy; 2Politecnico di Bari, Bari, Italy; 3Geophysical Applications Processing (GAP) srl, Bari, Italy; 4Planetek Italia, Bari, Italy; 5Italian Space Agency (ASI), Rome, Italy
Extreme weather events driven by climate change pose increasing risks to critical infrastructure. Locating damaged transmission pylons, particularly in remote or hazardous areas, is costly and often unsafe. Synthetic Aperture Radar (SAR) sensors enable weather- and daylight-independent monitoring, making them well suited for post-disaster assessment. However, limitations in data downlink and ground-segment processing constrain the timeliness of SAR-based monitoring applications. On-board artificial intelligence (AI) offers a promising solution by enabling lightweight inference in orbit and transmitting high-level alerts or selected image patches for rapid verification by ground operators. Leveraging the high spatial resolution and operational maturity of COSMO-SkyMed X-band SAR data, this work investigates two AI-driven strategies for detecting damage to transmission towers directly from raw SAR observations. The first strategy explores whether AI can replace traditional SAR focusing algorithms by generating focused Single Look Complex (SLC) images from unfocused raw data. To this end, we developed and evaluated custom residual convolutional architectures featuring a residual compression encoder paired with multiple decoding schemes. The second strategy bypasses the focusing stage entirely, performing damage detection directly from raw SAR data and generating alerts suitable for downlink. Different CNN backbones (e.g. ResNet, MobileNet) are evaluated to classify structural damage in raw SAR patches. A major challenge in this domain is the scarcity of publicly available labeled SAR datasets (including both unfocused data and SLC products).
To address this limitation, we developed a dedicated simulation environment capable of generating COSMO-SkyMed-like stripmap acquisitions of pylons under multiple structural damage scenarios. The proposed methods are subsequently evaluated on real COSMO-SkyMed data acquired over a test site in Italy. The investigation of the AI-based focusing approach highlights the challenges of applying deep learning to SAR image formation. Convolutional architectures must preserve fine structural details while compressing the large volumes of raw data required to reconstruct SLC patches, a requirement that differs significantly from typical computer vision tasks. Moreover, standard loss functions and evaluation metrics commonly used in vision applications proved only partially informative for this problem. These findings indicate that AI-based SAR focusing requires methods tailored to the physical characteristics of SAR signals. At the same time, the damage detection experiments show that structural differences between collapsed and removed pylons can be captured directly from raw SAR observations. When evaluated on real COSMO-SkyMed data, the models achieve moderate classification performance (F1-scores of about 70%), suggesting the feasibility of AI-based damage detection from SAR data.

5:00pm - 5:15pm
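The quoted F1-scores of about 70% refer to the standard binary F1 metric, the harmonic mean of precision and recall, which can be computed as in this small sketch (the labels and predictions are placeholders):

```python
def f1_score(y_true, y_pred):
    """Binary F1 = harmonic mean of precision and recall, computed from
    true positives (tp), false positives (fp), and false negatives (fn)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Unlike plain accuracy, F1 stays informative when damaged pylons are rare relative to intact ones, which is the typical class balance in post-disaster screening.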
ID: 115 AI-Driven Feature Classification in Level-0 SAR Data
1Ubotica Ltd., Ireland; 2ESA-ESRIN, Italy
The escalating requirement for real-time maritime situational awareness presents a significant challenge for conventional Synthetic Aperture Radar (SAR) processing chains. Traditionally, the substantial computational overhead of image formation (focusing) occurs on the ground, introducing latencies that degrade the utility of the data for time-sensitive security applications. This paper presents the results of the RAWSARAI project, which investigates the feasibility of bypassing the image formation step by applying Deep Learning models directly to raw, unfocused SAR phase history (Level-0) data. The primary motivation is the substantial reduction of end-to-end latency and downlink bandwidth requirements when raw SAR data can be meaningfully interrogated. We describe the development of a processing pipeline using Sentinel-1 Interferometric Wide (IW) data, in which labeled Ground Range Detected (GRD) and Single Look Complex (SLC) vessel positions were mapped back to their raw-data counterparts. Our efforts focused on training ResNet architectures to detect vessel signatures within the raw signal. Results indicate that for vessels exceeding certain length thresholds, Deep Learning models can achieve operationally relevant classification accuracies when distinguishing ship-containing raw data from sea-only background. We further analyze the impact of ship size on the detectability of these raw-data signatures. The findings demonstrate that AI inference on raw data is significantly faster than traditional CFAR-based detection on focused imagery and provides a basis for the development of edge-native SAR sensors capable of providing near-instantaneous maritime alerts for situational awareness.

5:15pm - 5:30pm
ID: 104 SAR-GONAUT: Optimizing AI-BAQ for Large-Scale Mission Scenarios
1German Aerospace Center (DLR), Germany; 2Radio Frequency Payloads and Technology Division, European Space Agency (ESA)
Next-generation SAR systems will enhance their performance and capabilities through digital beamforming (DBF), wide bandwidths, and multiple acquisition channels, enabling wide-swath imaging at fine resolution beyond the limits of conventional SAR. However, these advances generate massive data volumes, imposing strict requirements on onboard memory and downlink capacity. Current and future missions such as Sentinel-1, NISAR, ROSE-L, and Sentinel-1 NG plan to acquire at finer temporal sampling (weekly), further increasing data storage and transmission requirements. The efficient quantization of SAR raw data is crucial, as it determines onboard memory usage and directly impacts SAR product quality. Both aspects must be carefully considered given the limited acquisition capacity and onboard resources of spaceborne systems, while ensuring that the final SAR image meets mission performance requirements. Currently, Block-Adaptive Quantization (BAQ) is one of the most widely used compression methods for SAR raw data. Building on BAQ principles, newer algorithms such as FDBAQ have been developed to enhance performance and optimize resources, possibly combined with non-integer data rates. However, FDBAQ optimizes the bitrate solely from raw data statistics, without accounting for the actual impact on final SAR product quality. The Performance-Optimized BAQ (PO-BAQ) is the first method to optimize resource allocation based on the final performance requirement defined for the resulting higher-level SAR/InSAR product: it requires a priori knowledge of the SAR backscatter statistics of the imaged scene in the form of a bitrate allocation map (BRM), which increases the complexity of the method.
However, this does not account for local and seasonal changes that might occur at the time of the survey. AI-BAQ is a deep learning-based approach for flexible and optimized onboard quantization that addresses these challenges by treating bitrate estimation as a supervised deep regression task, enabling the system to derive a BRM directly from the acquired SAR raw data matrix. Thanks to AI, a direct link between the raw data and focused data domains can be established, allowing AI-BAQ to target a desired performance metric in the final SAR and InSAR products without requiring prior knowledge of the imaged scene. This work reports the results of the ESA SAR-GONAUT study: building on the initial AI-BAQ concept, the objective is to extend its capabilities to a global acquisition scenario, leveraging large-scale synthetic and TanDEM-X bistatic SAR data for training and validation. To this end, we define a closed optimization loop between network hyperparameter tuning, training strategy, and performance evaluation for the joint optimization of bitrate estimation accuracy, overall data rate, and SAR product performance. The real dataset comprises about 500 experimental TanDEM-X bistatic acquisitions commanded over various land cover types (ice/snow, forest, urban, desert), while the synthetic dataset is consistently derived from an orbit simulator, a global X-band backscatter map produced from TerraSAR-X and TanDEM-X data, and SAR inverse processing.

5:30pm - 5:45pm
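For context, here is a minimal sketch of plain Block-Adaptive Quantization, the baseline that FDBAQ, PO-BAQ, and AI-BAQ build on: each block of raw samples is quantized by a uniform quantizer scaled to the block's estimated standard deviation. The block length, bit depth, and clipping range below are illustrative choices, not mission parameters, and flight implementations such as FDBAQ use optimized (e.g. Lloyd-Max) thresholds rather than this uniform grid.

```python
import numpy as np

def baq_encode(raw, block_len=128, n_bits=3):
    """Toy Block-Adaptive Quantization: per block, estimate the sample
    standard deviation, then quantize samples with a uniform grid scaled
    by it (clipping at roughly +/- 2 sigma). Returns the integer codes and
    the per-block sigmas needed by the decoder to rescale."""
    levels = 2 ** n_bits
    codes, sigmas = [], []
    for start in range(0, len(raw), block_len):
        block = raw[start:start + block_len]
        sigma = block.std() + 1e-12          # adapts the grid to local signal power
        sigmas.append(sigma)
        # map samples to integer codes in [0, levels - 1]
        q = np.clip(np.round(block / sigma * (levels / 4)) + levels // 2,
                    0, levels - 1)
        codes.append(q.astype(np.uint8))
    return codes, sigmas
```

The point AI-BAQ addresses is the step this sketch hard-codes: instead of a fixed `n_bits` per block, the bitrate is predicted per region so that the final SAR/InSAR product, not the raw samples, meets a quality target.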
ID: 103 Physics-Informed Deep Learning for SAR Image Focusing in View of Onboard SAR Applications
German Aerospace Center (DLR), Germany
Onboard SAR processing is a key enabler for low-latency, autonomous Earth Observation (EO) missions. However, conventional SAR focusing methods are computationally intensive and difficult to optimize for real-time execution on resource-constrained platforms, limiting the feasibility of high-level onboard applications such as object detection, situational awareness, and real-time monitoring. This contribution proposes a physics-informed deep learning (DL) approach for the onboard focusing of SAR amplitude images. The method approximates the output of conventional focusing algorithms using a convolutional neural network (CNN) specifically designed around the theoretical principles underlying SAR image formation, and provides focused amplitude SAR images. We propose a comprehensive approach that explicitly links the design of the DL model architecture and training strategy to the SAR system acquisition parameters. In particular, we link the network receptive field (RF) and the input patch size to the desired output resolution and design the CNN architecture accordingly. We train the model using synthetic SAR raw data generated by inverse focusing, which ensures one-to-one matching with the focused reference image and allows for sufficient flexibility in the presence of different system parameters. The results are assessed with respect to both synthetic point-like targets and real SAR images from the TanDEM-X mission, through standard regression performance evaluation metrics and specific SAR-related parameters derived, e.g., from impulse response function (IRF) analyses. Beyond image formation, the proposed framework has the potential to facilitate downstream tasks on board by providing images of sufficient quality for machine learning interpretation.
Experimental results show that the generated images retain critical semantic information and are compatible with high-level onboard applications. Moreover, the developed DL model can also be used to extract highly informative representations from the raw data, acting as a backbone for AI-based onboard high-level applications operating directly on SAR raw data. These findings highlight the potential of deep learning-based SAR processing as a foundation for next-generation intelligent SAR payloads.
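The link between the network receptive field and the desired output resolution mentioned in the abstract rests on the standard receptive-field recursion for stacked convolutions, sketched here. The layer configurations in the usage example are illustrative, not the architecture of the presented CNN.

```python
def receptive_field(layers):
    """Receptive field of a stack of convolutional layers, each given as
    (kernel_size, stride). Standard recursion: each layer widens the RF by
    (k - 1) times the accumulated jump (effective stride), and each strided
    layer multiplies the jump. Matching this RF to the extent of the SAR
    azimuth/range response is the design constraint described in the abstract."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf
```

For example, two 3x1-stride convolutions give an RF of 5 samples, while inserting a stride-2 layer doubles the growth of every subsequent layer, which is how a compact network can still cover the long synthetic aperture in azimuth.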

