Conference Agenda
Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).
Session Overview
Session 2-a: 3DGeoInfo - 3D Building and City Modeling
Presentations
Impact of Rain on 3D Reconstruction with Multi-View Stereo, Neural Radiance Fields and Gaussian Splatting
Karlsruhe Institute of Technology, Germany

Image-based 3D reconstruction has many applications in documenting the geometry of the environment. Nonetheless, the assumption that images are captured in clear air rarely holds in real-world settings, where adverse weather conditions are inevitable. We are particularly interested in rain as a dynamic occlusion that degrades image quality and can hinder complete and accurate 3D reconstruction of the underlying scene features. In this contribution we analyze the geometry of rain as reconstructed by traditional Multi-View Stereo (MVS) and by radiance field methods, namely Neural Radiance Fields (NeRFs), 3D Gaussian Splatting (3DGS) and 2D Gaussian Splatting (2DGS). To assess the impact of rain on the 3D reconstruction, we consider occlusion masks with different degrees of coverage. The results demonstrate that although MVS shows the lowest accuracy errors, its completeness declines with rain. NeRFs are robust, reconstructing with high completeness, while 2DGS achieves the second-best accuracy, outperforming NeRFs and 3DGS. We demonstrate that radiance field methods can compete with MVS, indicating robustness of the geometric reconstruction under rainy conditions.

Virtual 3D City Model Generation in CityGML
Center for Spatial Information Science (CSIS), The University of Tokyo, Japan

As the urban digital transformation advances, virtual 3D city models have become essential tools for urban planning, traffic management, environmental assessment, and virtual reality applications. Current research largely focuses on constructing high-fidelity city models based on the CityGML standard; however, challenges remain regarding data acquisition costs, the complexity of the generation process, and customization capabilities.
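The accuracy and completeness metrics used in the reconstruction comparison above are typically computed between a reconstructed and a ground-truth point cloud. A minimal sketch of that evaluation; the distance threshold and the brute-force nearest-neighbour search are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def accuracy_completeness(rec, gt, threshold=0.05):
    """Accuracy: mean distance from each reconstructed point to its
    nearest ground-truth point. Completeness: fraction of ground-truth
    points that lie within `threshold` of the reconstruction."""
    # pairwise distance matrix (fine for small illustrative clouds)
    d = np.linalg.norm(rec[:, None, :] - gt[None, :, :], axis=-1)
    accuracy = d.min(axis=1).mean()                 # rec -> nearest gt
    completeness = (d.min(axis=0) < threshold).mean()  # gt covered by rec
    return accuracy, completeness
```

A denser reconstruction raises completeness while outlier points (e.g. reconstructed rain streaks) raise the accuracy error, which is the trade-off the abstract reports.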
To address these issues, this study proposes an automated virtual city model generation method that integrates open data (such as OSM, DEM, and open-source LOD2 models) with the concept of a digital cousin. The method efficiently generates 3D city models at varying levels of detail, from LOD0 to LOD2, by integrating and parameterizing multi-source data, including relief, roads, city furniture, vegetation, and buildings. Moreover, it supports flexible user adjustment of key parameters such as vegetation density, road width, traffic light intervals, building heights, and roof types. Compared with traditional methods that rely on expensive survey data and labor-intensive manual operations, the proposed approach offers a low-cost, highly flexible, and scalable solution, thereby providing robust support for a wide range of urban simulation and decision-making applications. The code used in this study is as follows:

Automatic Enrichment of Semantic 3D City Models using Large Language Models
Chair of Geoinformatics, Technical University of Munich, 80333 Munich, Germany

Semantic 3D city models have become an essential component of city planning and digital twin applications. While standards like CityGML have enabled the structured representation of buildings and infrastructure, publicly available CityGML datasets often lack critical semantic attributes such as construction year, usage type, or refurbishment status, or contain outdated building functions. These gaps hinder the application of 3D models in areas like energy demand analysis and infrastructure planning. Meanwhile, much of the missing data can be found in alternative sources such as municipal records, OpenStreetMap, or other APIs. Yet integrating this heterogeneous and often unstructured information into the CityGML schema remains a complex task that requires geospatial expertise and good knowledge of the CityGML data model.
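The LOD0-to-LOD2 generation described in the city-model abstract above rests on extruding parameterized building footprints. A minimal sketch of an LOD1-style extrusion under that idea; the function name and the face dictionary are hypothetical, not the study's code:

```python
def extrude_lod1(footprint, height):
    """Extrude a 2D building footprint (counter-clockwise ring of
    (x, y) tuples, not closed) into an LOD1-style solid:
    a floor face, a roof face, and one wall face per footprint edge.
    `height` is the user-adjustable building-height parameter."""
    floor = [(x, y, 0.0) for x, y in footprint]
    # reverse the roof ring so its implied normal points away from the floor
    roof = [(x, y, height) for x, y in reversed(footprint)]
    walls = []
    n = len(footprint)
    for i in range(n):
        (x1, y1), (x2, y2) = footprint[i], footprint[(i + 1) % n]
        walls.append([(x1, y1, 0.0), (x2, y2, 0.0),
                      (x2, y2, height), (x1, y1, height)])
    return {"floor": floor, "roof": roof, "walls": walls}
```

In a full pipeline these faces would be serialized as CityGML surfaces; higher LODs would additionally parameterize roof type.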
In this paper, we explore the use of Large Language Models (LLMs) to automatically extract and map relevant information from sources such as PDFs, APIs, and volunteered geographic information (VGI) platforms such as OpenStreetMap into CityGML, using spatial databases such as 3DCityDB to store and manage the enriched semantic data for both building and street use cases. We propose a framework based on two LLM agents, one for data enrichment and one for querying, which enables non-experts to enrich and interact with 3D city models more effectively. Our approach aims to reduce reliance on domain-specific knowledge and make the use of CityGML 3.0 accessible to everyone.

Enriching LoD2 Building Models with Facade Openings Using Oblique Imagery
3D Geoinformation group, Department of Urbanism, Faculty of Architecture and the Built Environment, Delft University of Technology, Julianalaan 134, 2628BL Delft, The Netherlands

High-fidelity 3D urban applications, including emergency response simulation, microclimate analysis, and heritage conservation, demand semantically enriched 3D building representations at Level of Detail 3 (LoD3) with parametric facade components. Current urban digital twins predominantly rely on LoD2 models (as exemplified by the nationwide 3D BAG dataset in the Netherlands) that lack critical architectural features such as windows and doors, constraining their analytical value and their utility for fine-grained applications. This study introduces a novel pipeline to bridge this gap, enabling the enrichment of LoD2 models with accurate opening information using aerial oblique imagery and deep learning. The approach addresses critical challenges in 3D-2D alignment by leveraging perspective projection for comprehensive facade extraction, least-squares registration to rectify systematic offsets, and Mask R-CNN for robust opening detection.
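Stripped of the LLM, the enrichment step in the framework above amounts to filling missing CityGML attributes from external records without overwriting existing values. The sketch below illustrates that with a hard-coded OSM-tag table standing in for the LLM agent's output; all attribute names and mappings here are illustrative assumptions, not the paper's framework:

```python
# Hypothetical stand-in for the enrichment agent: maps OSM `building`
# tag values to CityGML-style building functions.
OSM_TO_CITYGML = {
    "residential": "residential building",
    "church": "religious building",
    "industrial": "industry building",
}

def enrich_building(citygml_attrs, osm_tags):
    """Fill missing CityGML attributes from OSM tags; existing
    attribute values are never overwritten."""
    enriched = dict(citygml_attrs)
    if "function" not in enriched and "building" in osm_tags:
        mapped = OSM_TO_CITYGML.get(osm_tags["building"])
        if mapped:
            enriched["function"] = mapped
    if "yearOfConstruction" not in enriched and "start_date" in osm_tags:
        enriched["yearOfConstruction"] = osm_tags["start_date"]
    return enriched
```

In the paper's setting, the LLM agent would replace the fixed table, and the enriched attributes would be written back to a 3DCityDB instance.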
Unlike conventional methods, it captures both inward- and outward-facing building faces by projecting all 3D facades onto multi-directional images, ensuring complete coverage of visible elements. Geometric scaling integrates the detected openings into the LoD2 models as watertight, semantically rich components, validated for structural consistency. By overcoming data misalignment and occlusion limitations, this methodology provides a scalable framework for large-scale LoD3 generation, enabling efficient upgrades of existing building models to support detailed spatial analysis in smart city contexts.
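The perspective projection used in the pipeline above to align 3D facade geometry with oblique images can be sketched with a standard pinhole camera model; the specific intrinsics and pose below are illustrative values, not the study's calibration:

```python
import numpy as np

def project_points(points_world, K, R, t):
    """Project 3D points (N x 3, world coordinates) into pixel
    coordinates with a pinhole camera: x = K (R X + t), followed
    by perspective division by depth."""
    cam = points_world @ R.T + t      # world -> camera frame
    img = cam @ K.T                   # apply camera intrinsics
    return img[:, :2] / img[:, 2:3]   # divide by depth
```

Projecting each LoD2 facade vertex this way yields the image region in which openings are detected; the least-squares registration mentioned in the abstract would then correct systematic offsets between the projected and observed facades.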