Conference Agenda
Session 4-a: 3DGeoInfo - Point Cloud Analysis and Algorithms
Presentations
Semantic segmentation of point clouds with the 3D medial axis transform
3D Geoinformation group, Department of Urbanism, Faculty of Architecture and the Built Environment, Delft University of Technology, Julianalaan 134, 2628BL Delft, The Netherlands

Semantic segmentation of 3D point clouds is pivotal for urban modeling and autonomous systems, yet challenges such as irregular data structure and complex geometry hinder accurate segmentation. This study explores integrating the 3D Medial Axis Transform (MAT), a topological skeleton that encodes shape geometry via maximally inscribed balls, into deep learning frameworks to enhance semantic reasoning. We propose a feature fusion approach that embeds MAT-derived attributes (radii, separation angles, medial bisectors) into point-based (PointNet++) and graph-based (Superpoint Graph) networks (see the fusion sketch below), providing explicit geometric context for local points and superpoint relationships. Experiments on diverse datasets (3DOM, SynthCity, SHREC) demonstrate that MAT-enhanced features, particularly radii and separation angles, improve mean intersection over union (mIoU) by 5.8–12.4% compared to baseline RGB-only models, especially for classes such as grass and shrubs where appearance features are ambiguous. However, MAT-guided geometric partitioning requires careful regularization to avoid over-segmentation, and graph convolutions benefit most from mean MAT attributes for global structure modeling. This work establishes MAT as a valuable geometric prior for point cloud segmentation, highlighting its potential to bridge topological structure and data-driven learning.

Point Cloud for 3D Land Administration System (LAS)
Delft University of Technology (TU Delft), The Netherlands

As cities grow denser, Land Administration Systems (LAS) must evolve to represent complex, multi-level property ownership, particularly in apartment buildings. While Building Information Models (BIM) are commonly used for 3D representation, their availability remains limited for many buildings. This research explores the use of point clouds as an alternative means to represent 3D spatial units in LAS, focusing on the integration of cadastral floor plans and the Dutch national point cloud dataset (AHN). Three apartment cadastral drawings from different years in Rotterdam serve as case studies. The proposed methodology involves four main steps: (1) parsing floor plans using image processing to extract cadastral room boundary polygons; (2) generating synthetic point clouds by extruding floor plan polygons (see the extrusion sketch below) and aligning them with AHN; (3) storing these 3D spatial units in a PostgreSQL-based database following the ISO 19152:2024 Land Administration Domain Model (LADM); and (4) developing a web-based 3D LAS using Vue.js, Cesium, and FastAPI for visualization and interaction. Results show that room boundaries can be extracted and converted into 3D point clouds for integration into a cadastral database. The synthetic point clouds include room-level attributes and spatial identifiers, enabling interactive visualization and data management through a web interface. However, challenges such as misalignment caused by occlusion in the AHN data and inconsistent quality of older floor plan drawings affect the accuracy and automation of the process. This research demonstrates that point clouds can effectively serve as final 3D representations in land administration, providing a scalable solution in the absence of BIM models and minimizing the need for additional field surveys.
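
For the MAT feature-fusion presentation above, the following minimal Python/NumPy sketch shows one way MAT-derived attributes could be concatenated with coordinates and colour into the input channels of a point-based network such as PointNet++. The function name, the choice of attributes, and the resulting 8-channel layout are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fuse_mat_features(points_xyz, rgb, mat_radius, mat_sep_angle):
    """Concatenate per-point MAT attributes with geometry and colour.

    points_xyz    : (N, 3) point coordinates
    rgb           : (N, 3) colours in [0, 1]
    mat_radius    : (N,)   radius of the medial ball associated with each point
    mat_sep_angle : (N,)   separation angle of the medial ball
    Returns an (N, 8) feature matrix usable as input channels for a
    point-based network (hypothetical layout, for illustration only).
    """
    feats = np.concatenate(
        [points_xyz,
         rgb,
         mat_radius[:, None],
         mat_sep_angle[:, None]],
        axis=1)
    return feats

# Toy example: 1000 random points with synthetic MAT attributes.
pts = np.random.rand(1000, 3)
col = np.random.rand(1000, 3)
rad = np.random.rand(1000)
ang = np.random.rand(1000) * np.pi
fused = fuse_mat_features(pts, col, rad, ang)
print(fused.shape)  # (1000, 8)
```

The fused matrix would simply replace the plain xyz+RGB input of the baseline model; how the attributes are normalized and weighted is left open here.

For the 3D LAS presentation above, step (2) extrudes cadastral room polygons into synthetic point clouds. The sketch below, assuming a uniform sampling of wall faces between a floor and a ceiling height, illustrates the general idea; the function name and the spacing parameter are hypothetical, and alignment with AHN is not shown.

```python
import numpy as np

def extrude_room_polygon(polygon_xy, floor_z, ceiling_z, spacing=0.05):
    """Turn a 2D room boundary polygon into a synthetic wall point cloud.

    polygon_xy         : (M, 2) boundary vertices of the room
    floor_z, ceiling_z : floor and ceiling elevations (e.g. from AHN heights)
    spacing            : approximate point spacing in metres
    Returns an (N, 3) array of wall points.
    """
    ring = np.vstack([polygon_xy, polygon_xy[:1]])   # close the ring
    heights = np.arange(floor_z, ceiling_z, spacing)
    points = []
    for a, b in zip(ring[:-1], ring[1:]):
        length = np.linalg.norm(b - a)
        n = max(int(length / spacing), 1)
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            xy = a + t * (b - a)
            for z in heights:
                points.append([xy[0], xy[1], z])
    return np.asarray(points)

# Toy example: a 4 m x 3 m rectangular room, 2.6 m tall.
room = np.array([[0, 0], [4, 0], [4, 3], [0, 3]], dtype=float)
cloud = extrude_room_polygon(room, floor_z=0.0, ceiling_z=2.6, spacing=0.1)
print(cloud.shape)
```

Each generated point would then carry the room-level attributes and spatial identifiers needed for storage in the LADM-based database.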
Heterogeneous Point Clouds Matching using Supervoxel Signatures from a Deep Neural Network Autoencoder
National Yang Ming Chiao Tung University, Taiwan

Advancements in lidar systems have improved the performance of 3D data acquisition, but point clouds obtained by different lidar sensors differ in point density, random error, and scanning pattern. This study presents a novel approach for automatic cross-sensor matching of lidar point clouds using a deep neural network autoencoder (DNN-AE) and supervoxel signatures. A compact representation called a supervoxel signature is formed by voxelizing and reprojecting the point clouds, generating multiscale supervoxels, and encoding them with a DNN-AE (see the matching sketch below). The proposed method demonstrated high matching accuracy and tolerance to point density differences and random registration errors, showcasing its effectiveness in addressing the challenges associated with varying lidar sensor data. In the simulation results, the supervoxel signature achieved a matching correctness of 83.78% when the point density was 1/256 of the original, and the tolerance to random errors reached the submeter level. In addition, the multiscale supervoxel signature was more reliable than its single-scale counterparts. In real-case cross-sensor matching, the method reached a matching correctness of 80%.

A Taxonomy of Point Cloud Search
Technical University of Munich, Germany

Point cloud analysis is rapidly evolving, targeting new applications and use cases with novel information retrieval needs that challenge the scalability, robustness, and reusability of existing solutions for managing and processing point cloud data. Analytical approaches for gaining insights are increasingly based on machine learning and tend to turn away from data management solutions in favour of internalizing custom, workflow-specific query capabilities that satisfy their requirements. Unfortunately, these ad-hoc solutions often fail to scale well with big point cloud datasets, such as terrestrial laser scanning. To address these limitations, we propose a point cloud search taxonomy and use it to identify fundamental requirements for a scalable, robust, and reusable data management system for state-of-the-art point cloud retrieval and data analytics. Our findings form a foundational analysis that serves as a basis for the future holistic development of point cloud data management solutions to overcome current bottlenecks.
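
For the supervoxel-signature presentation above, the PyTorch sketch below illustrates the general pattern of encoding voxelized supervoxels with an autoencoder and matching the resulting signatures by nearest neighbour in latent space. The class name, layer sizes, grid resolution, and matching rule are assumptions for illustration and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class SupervoxelAutoencoder(nn.Module):
    """Encode a flattened supervoxel occupancy grid into a compact signature
    (hypothetical MLP architecture, for illustration only)."""
    def __init__(self, grid_dim=16 ** 3, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(grid_dim, 512), nn.ReLU(),
            nn.Linear(512, code_dim))
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 512), nn.ReLU(),
            nn.Linear(512, grid_dim), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)          # compact supervoxel signature
        return self.decoder(code), code

def match_signatures(sig_a, sig_b):
    """Match each signature in sig_a to its nearest neighbour in sig_b."""
    d = torch.cdist(sig_a, sig_b)       # pairwise Euclidean distances
    return torch.argmin(d, dim=1)       # index of the best match in sig_b

# Toy example: two sets of 100 supervoxel grids from different sensors.
model = SupervoxelAutoencoder()
grids_a = torch.rand(100, 16 ** 3)
grids_b = torch.rand(100, 16 ** 3)
_, sig_a = model(grids_a)
_, sig_b = model(grids_b)
print(match_signatures(sig_a, sig_b).shape)  # torch.Size([100])
```

In practice the autoencoder would be trained to reconstruct the occupancy grids, and multiscale signatures would be concatenated before matching; both steps are omitted here for brevity.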