Latin American GRSS and ISPRS Remote Sensing Conference
10 - 13 November 2025 • Iguazu Falls, Brazil
Conference Agenda
Overview and details of the sessions of this conference.
Session Overview
OP07: Production-Economy: Sensors
Presentations
10:30am - 10:50am
End-to-End Shutter Design and Reconstruction for Dynamic Scene Imaging
Universidad Industrial de Santander, Colombia

High-quality video acquisition is essential for emerging applications such as remote sensing, autonomous driving, and augmented reality. However, the sequential readout inherent to CMOS sensors with rolling shutters introduces severe geometric distortions, particularly during rapid scene motion. Traditional distortion-correction methods either rely on computationally intensive post-processing or require costly global-shutter hardware, resulting in significant trade-offs between accuracy, complexity, and cost. To address these limitations, we propose an end-to-end optimization framework that integrates a custom optical layer, represented by a compact, learnable M × N mask, with a neural reconstruction network. By jointly optimizing optical acquisition and digital correction, our method significantly reduces distortion artifacts, achieving up to a 4.3 dB PSNR improvement over a conventional fixed rolling-shutter acquisition sequence on the test set. Our approach provides a practical and effective solution for high-speed, distortion-free scene capture.
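
A minimal sketch of the kind of joint optics-plus-network optimization the abstract describes, assuming a PyTorch setup in which the compact M × N mask weights T sub-frames of a short clip before a small CNN recovers them; the shapes, network depth, and loss are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch only: learnable M x N shutter mask + CNN decoder, trained jointly.
import torch
import torch.nn as nn

class CodedShutter(nn.Module):
    def __init__(self, m=8, n=8, t=8):
        super().__init__()
        # Logits for an M x N x T exposure pattern; sigmoid keeps weights in [0, 1].
        self.logits = nn.Parameter(torch.randn(t, m, n))

    def forward(self, video):                      # video: (B, T, H, W)
        b, t, h, w = video.shape
        mask = torch.sigmoid(self.logits)          # (T, M, N)
        # Tile the compact mask over the full sensor resolution.
        mask = mask.repeat(1, h // mask.shape[1], w // mask.shape[2])
        return (video * mask.unsqueeze(0)).sum(dim=1, keepdim=True)  # coded snapshot

class Reconstructor(nn.Module):
    def __init__(self, t=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, t, 3, padding=1),        # recover T undistorted sub-frames
        )

    def forward(self, snapshot):
        return self.net(snapshot)

shutter, decoder = CodedShutter(), Reconstructor()
optim = torch.optim.Adam(list(shutter.parameters()) + list(decoder.parameters()), lr=1e-3)
video = torch.rand(2, 8, 64, 64)                   # toy dynamic scene
loss = nn.functional.mse_loss(decoder(shutter(video)), video)
loss.backward(); optim.step()                      # joint optics + network update
```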
10:50am - 11:10am
AgriDOE: End-to-End DOE Optimization for Single-Shot Agricultural Spatial Classification
Universidad Industrial de Santander, Colombia

Precision agriculture leverages remote sensing and AI models to monitor crop conditions, but traditional RGB and panchromatic sensors are limited by their fixed optics and reduced spectral selectivity. These limitations hinder the accurate classification of materials in complex agricultural scenes. This paper proposes a compact, passive imaging system based on a diffractive optical element (DOE), co-optimized with a spatial classification model via an end-to-end (E2E) learning strategy. The DOE is parameterized through Zernike polynomials and embedded within a differentiable light-propagation model based on angular spectrum theory. The classification network is trained on a single timestamp from the CALCROP21 dataset, ensuring realistic supervision while reducing temporal complexity. Two sensor configurations, RGB and panchromatic, are evaluated, each compared under a traditional lens-based imaging system and a DOE-based imaging system. The results show consistent gains in spatial classification accuracy for both configurations, with improvements of up to 19.7% in test accuracy when comparing the 1-band model with a trainable DOE against its non-trainable counterpart. Additional ablation studies highlight the impact of DOE initialization and demonstrate robustness to additive noise, confirming the effectiveness of task-specific DOE design for embedded agricultural vision systems.
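
A minimal sketch of the differentiable optics the abstract relies on, assuming a NumPy angular-spectrum propagator and a single Zernike defocus term standing in for the full polynomial basis; grid size, wavelength, and propagation distance are illustrative assumptions, not values from the paper:

```python
# Illustrative sketch only: Zernike-parameterized DOE phase + angular spectrum propagation.
import numpy as np

def zernike_defocus(rho, theta):
    # Z(2, 0) defocus term as one example basis function (theta unused for this term).
    return np.sqrt(3.0) * (2.0 * rho**2 - 1.0)

def angular_spectrum(field, wavelength, dx, z):
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2.0 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

n, dx, wavelength, z = 256, 4e-6, 550e-9, 5e-3
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * dx
rho = np.sqrt(x**2 + y**2) / (n // 2 * dx)
theta = np.arctan2(y, x)
aperture = (rho <= 1.0).astype(float)

coeff = 0.8                                   # a Zernike coefficient; learnable in the E2E setting
phase = coeff * zernike_defocus(rho, theta)
field_out = angular_spectrum(aperture * np.exp(1j * phase), wavelength, dx, z)
psf = np.abs(field_out) ** 2                  # PSF used to render the sensor image
psf /= psf.sum()
```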
11:10am - 11:30am
High-Accuracy Corridor Mapping Without GCPs: Assessing Precisions of DEMs Generated from UAS Photogrammetry with On-Site Pre-Calibration
1: Federal University of Paraná, Brazil; 2: The Ohio State University, USA; 3: Tecsystem Company, Brazil

High-resolution Digital Elevation Models (DEMs) are crucial for diverse geospatial analyses, and UAS photogrammetry offers a cost-effective option for DEM acquisition. However, corridor mapping, due to its linear geometry, challenges the extraction of 3D information without relying on Ground Control Points (GCPs). While onboard GNSS-RTK can improve accuracy, robust camera calibration is critical to mitigate systematic vertical errors that propagate into the derived DEMs. Existing research lacks sufficient investigation into feasible pre-calibration strategies for corridor mapping without GCPs. This study addresses this gap by evaluating the precision of DEMs obtained from five photogrammetric experiments without GCPs: one on-the-job calibration and four GNSS-assisted aerial triangulations using on-site pre-calibration with different sub-blocks of images. For the precision assessment of the DEMs, a reference experiment with 17 GCPs and all available images was also carried out. Our results show that including oblique images in on-site pre-calibration with sub-blocks significantly reduced the critical correlation between the focal length and the Z object-space coordinates (from 99% to less than 20%). This outcome directly influenced focal length estimation and allowed mitigation of the vertical bias in the generated DEMs. The results demonstrate that on-site pre-calibration notably improved the accuracy and precision of vertical spatial data acquisition. These findings highlight on-site oblique pre-calibration with a sub-block of images as a feasible and robust strategy for producing high-resolution 3D models in UAS corridor mapping, significantly reducing reliance on GCPs.
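
A minimal sketch of how a focal-length/Z correlation of the kind the abstract reports can be read off a bundle-adjustment covariance matrix; the 3 × 3 toy covariance and the parameter ordering are illustrative assumptions, not values from the study:

```python
# Illustrative sketch only: parameter correlation from an adjustment covariance matrix.
import numpy as np

def parameter_correlation(cov, i, j):
    # Pearson correlation between adjustment parameters i and j.
    return cov[i, j] / np.sqrt(cov[i, i] * cov[j, j])

# Toy 3 x 3 covariance over [focal length, X, Z] with strong f-Z coupling.
cov = np.array([
    [4.0e-6, 1.0e-7, 5.9e-5],
    [1.0e-7, 2.5e-4, 2.0e-6],
    [5.9e-5, 2.0e-6, 9.0e-4],
])
print(f"f-Z correlation: {parameter_correlation(cov, 0, 2):.2%}")
```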
11:30am - 11:50am
Accuracy of reconstruction with short baseline from a single-frame multispectral camera
1: São Paulo State University (UNESP), Presidente Prudente, São Paulo 19060-900, Brazil; 2: Embrapa Agricultural Informatics, Campinas, São Paulo, Brazil; 3: Espírito Santo Federal Institute, Vitória, Espírito Santo, Brazil

Close-range acquisition of multispectral images is becoming widespread in many applications, such as forestry and agriculture. Some issues remain unsolved, such as the co-registration of the spectral bands generated with multi-lens cameras, since depth variations cause differential parallaxes depending on the lens position. Rigorous registration of this type of image requires pixel-wise parallax compensation based on a depth model. This work presents a detailed evaluation of the accuracy of depth estimation with images acquired from a single-frame camera with a very short baseline. The methodology involves accurate camera calibration and three-dimensional reconstruction using Agisoft Metashape software. Field validation was conducted using a high-resolution laser scanner to produce reference data. Results show that even a single image frame, composed of six sub-images from different lenses, enables 3D reconstruction with reasonable accuracy, while a full image set achieves sub-centimetre accuracy. Comparison with a laser-scanner reference point cloud confirms the spatial consistency of the reconstructed surface. The results demonstrate the viability of reconstruction with short-baseline images, aiming at pixel-wise registration.
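
A minimal sketch of the pixel-wise parallax compensation the abstract motivates, assuming a constant-depth toy model and nearest-neighbour resampling along the lens baseline; focal length, baseline, and image size are illustrative assumptions:

```python
# Illustrative sketch only: shift one spectral band by the depth-dependent disparity d = f * B / Z.
import numpy as np

def compensate_parallax(band, depth, focal_px, baseline_m):
    # band, depth: (H, W) arrays; disparity is expressed in pixels.
    h, w = band.shape
    disparity = focal_px * baseline_m / depth
    cols = np.arange(w)[None, :] - disparity             # shift along the baseline axis
    cols = np.clip(np.round(cols).astype(int), 0, w - 1)
    rows = np.repeat(np.arange(h)[:, None], w, axis=1)
    return band[rows, cols]

band = np.random.rand(480, 640)                           # one spectral band
depth = np.full((480, 640), 3.0)                          # depth model in metres
registered = compensate_parallax(band, depth, focal_px=1400.0, baseline_m=0.015)
```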
11:50am - 12:10pm
A Coarse-to-Fine Approach for Tree Point Cloud Registration Based on Relaxation Labeling
1: São Paulo State University (UNESP), Brazil; 2: Department of Cartography, São Paulo State University (UNESP)

Recent advances in photogrammetry and remote sensing have highlighted the advantages of three-dimensional point cloud data for accurately reconstructing forest and agricultural environments. LiDAR (Light Detection and Ranging) systems represent the state of the art for acquiring 3D data, offering high geometric precision and adaptability to various platforms. Compared to traditional mapping methods, LiDAR enables greater spatial coverage and efficiency, particularly in large-scale applications. However, automatic registration of point clouds in complex and irregular environments, such as forests, remains a significant challenge due to occlusions, repetitive patterns, and low overlap between scans. This paper presents a coarse-to-fine registration approach designed specifically for tree-dense environments. The method begins by processing the two point clouds (Model and Scene), acquired from two stations, to generate a Canopy Height Model (CHM) for each, followed by slicing each trunk at breast height. Subsequently, a relaxation labeling algorithm matches tree centroids to estimate an initial 2D transformation between the available scans based on probabilistic similarity and spatial relationships. This initial alignment is then refined using the Iterative Closest Point (ICP) algorithm to compute the final 3D transformation. Experiments using terrestrial laser scans with low-overlap point clouds (< 30%) yielded a root mean square error (RMSE) of approximately 3 cm, without artificial targets. The camera-ready version will provide a more detailed explanation of this work.
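
A minimal sketch of the coarse 2D alignment step, assuming the trunk centroids have already been matched (the relaxation-labeling matching and the ICP refinement are omitted); the synthetic centroids and the SVD-based rigid fit are illustrative, not the authors' code:

```python
# Illustrative sketch only: 2D rigid transform from matched tree centroids (Kabsch/SVD).
import numpy as np

def rigid_2d(model_xy, scene_xy):
    # model_xy, scene_xy: (N, 2) matched centroid pairs (Model -> Scene).
    mu_m, mu_s = model_xy.mean(axis=0), scene_xy.mean(axis=0)
    H = (model_xy - mu_m).T @ (scene_xy - mu_s)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_s - R @ mu_m
    return R, t

# Toy example: scene centroids are the model centroids rotated by 30 degrees and shifted.
rng = np.random.default_rng(0)
model = rng.uniform(0, 50, size=(12, 2))
ang = np.deg2rad(30.0)
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
scene = model @ R_true.T + np.array([5.0, -2.0])
R, t = rigid_2d(model, scene)
rmse = np.sqrt(np.mean(np.sum((model @ R.T + t - scene) ** 2, axis=1)))
print(f"alignment RMSE: {rmse:.4f} m")
```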
12:10pm - 12:30pm
Dynamic Geometric Calibration on Visual Odometry Performance for Autonomous Drone Navigation
1: Instituto Nacional de Pesquisas Espaciais, Brazil; 2: Instituto Tecnológico de Aeronáutica, Brazil; 3: Instituto de Estudos Avançados, Brazil

This paper presents a novel approach based on deep learning techniques for dynamic geometric camera calibration in the context of autonomous aerial navigation. Unlike traditional offline calibration methods, the proposed technique leverages convolutional neural networks (CNNs) to adjust intrinsic parameters in real time, thereby adapting to environmental and mechanical variations that occur during flight. Experimental results demonstrate that this adaptive calibration framework significantly enhances the performance of monocular visual odometry by reducing cumulative trajectory errors in dynamic environments. The findings indicate that continuous calibration driven by deep learning models is a promising solution for improving the robustness and autonomy of vision-based navigation systems in unmanned aerial vehicles (UAVs). These results also reinforce the growing potential of deep learning to replace static, manual calibration procedures in embedded real-time applications.
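
A minimal sketch of the kind of real-time intrinsics regression the abstract describes, assuming a small PyTorch CNN that predicts normalised fx, fy, cx, cy from a single grayscale frame; the architecture, normalisation, and output mapping are illustrative assumptions, not the paper's network:

```python
# Illustrative sketch only: CNN regression of camera intrinsics for on-the-fly calibration.
import torch
import torch.nn as nn

class IntrinsicsNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 4)            # fx, fy, cx, cy (normalised)

    def forward(self, frame):                   # frame: (B, 1, H, W) in [0, 1]
        return self.head(self.features(frame).flatten(1))

def intrinsics_matrix(params, width, height):
    # Map the normalised outputs back to a pixel-unit camera matrix K.
    fx, fy, cx, cy = params.sigmoid().unbind(dim=-1)
    K = torch.zeros(params.shape[0], 3, 3)
    K[:, 0, 0], K[:, 1, 1] = fx * 2 * width, fy * 2 * height
    K[:, 0, 2], K[:, 1, 2] = cx * width, cy * height
    K[:, 2, 2] = 1.0
    return K

net = IntrinsicsNet()
frame = torch.rand(1, 1, 240, 320)              # one frame from the flight stream
K = intrinsics_matrix(net(frame), width=320, height=240)
```

In a visual-odometry pipeline, K would be re-estimated every few frames and fed to the pose-estimation front end in place of a fixed offline calibration.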

