
3D GeoInfo & SDSC 2025
20th 3D GeoInfo Conference | 9th Smart Data and Smart Cities Conference
02 - 05 September 2025 | Kashiwa Campus, University of Tokyo, Japan
Conference Agenda
Session 11-b: SDSC - Walkability
Presentations
Can a Large Language Model Assess Urban Design Quality? Evaluating Walkability Metrics Across Expertise Levels
1Singapore-ETH Centre, Singapore; 2Takenaka Corporation, Osaka, Japan; 3Department of Architecture, National University of Singapore, Singapore

Urban street environments are vital to supporting human activity in public spaces. The emergence of big data, such as street view images (SVI) combined with multi-modal large language models (MLLM), is transforming how researchers and practitioners investigate, measure, and evaluate semantic and visual elements of urban environments.

Assessing Walkability in Sofia: A Multi-Metric Index for Pedestrian-Friendly Cities
Sofia University "St. Kliment Ohridski", GATE Institute

Despite the growing interest in urban walkability, a significant gap remains in assessing pedestrian accessibility at the neighbourhood level in Sofia, Bulgaria. This study aims to bridge this gap by developing a comprehensive walkability index tailored to Sofia's urban environment. The index is constructed using ten key metrics that reflect six core aspects of pedestrian experience: connectivity, convenience, comfort, conviviality, coexistence, and commitment. The methodology employs geospatial analysis and computational modelling implemented in Python, leveraging libraries such as GeoPandas, Shapely, and NetworkX. The study assesses street connectivity using a link-to-node ratio, public transport coverage via shortest-path analysis, and network integration through the Pedestrian Route Directness Indicator (PRDI). Land use mix is evaluated using entropy-based calculations, while residential density considers household distribution within the built environment. Essential activities, pedestrian infrastructure, and convivial points are analysed based on proximity and spatial coverage. Traffic conditions are quantified through lane density, and the pedestrian-friendly network is assessed by mapping designated pedestrian-prioritized areas.
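Two of the Sofia index metrics, the link-to-node connectivity ratio and the entropy-based land-use mix, can be sketched in plain Python. This is a minimal illustration only: the function names and the connectivity threshold remark are ours, and the study itself computes these over real street and land-use geometries with GeoPandas, Shapely, and NetworkX.

```python
import math

def link_to_node_ratio(num_links: int, num_nodes: int) -> float:
    """Street connectivity: network links (segments) per node
    (intersection or dead end). Values around 1.4 and above are
    commonly read as well-connected, grid-like networks."""
    return num_links / num_nodes

def land_use_entropy(shares: list[float]) -> float:
    """Entropy-based land-use mix, normalised to [0, 1].
    `shares` are the area proportions of each land-use class."""
    k = len(shares)
    if k < 2:
        return 0.0  # a single use has no mix by definition
    h = -sum(p * math.log(p) for p in shares if p > 0)
    return h / math.log(k)  # divide by max entropy to normalise

# A perfectly even three-way mix scores 1.0; a single use scores 0.0.
print(land_use_entropy([1/3, 1/3, 1/3]))
print(link_to_node_ratio(140, 100))
```

In the full index, each metric would be computed per neighbourhood polygon and rescaled before aggregation.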
OGC-AI: A Retrieval-Augmented Large Language Model Interface for Open Geospatial Consortium Web Services
1Stuttgart University of Applied Sciences, Germany; 2Technical University Dresden, Germany

The rapid growth of geospatial data presents challenges in terms of data management, integration, and analysis. Generated from a diverse array of sources including remote sensing platforms, in-situ sensor networks, volunteered geographic information (VGI), and Internet of Things (IoT) devices, this data holds immense potential for scientific discovery, environmental monitoring, urban planning, disaster management, and countless other domains. However, realizing this potential is critically dependent on effective data management, sharing, and utilization strategies. Interoperability, the ability of different systems and organizations to access, exchange, and cooperatively use data, remains a cornerstone challenge. The Open Geospatial Consortium (OGC) has been instrumental in addressing this challenge for decades, developing a comprehensive suite of standards that define interfaces and encodings for publishing, discovering, and accessing geospatial data and processing services over the web (Sondheim et al., 1999; Castronova et al., 2013). Adherence to OGC standards is fundamental to implementing the FAIR data principles, ensuring data is Findable, Accessible, Interoperable, and Reusable, within the geospatial domain. These standards provide the syntactic and, to some extent, semantic foundation necessary for machines and humans to interact reliably with distributed geospatial resources (Ivánová et al., 2019). While these standards successfully encode interoperability "on paper," their real-world deployment remains uneven.
Configuring, chaining, and querying heterogeneous OGC endpoints typically demands specialist knowledge of XML/JSON encodings, CRS transformations, pagination, and query parameters: tasks that are non-trivial even for experienced GIS analysts and virtually prohibitive for domain experts in environmental science, public health, or urban planning whose primary expertise lies elsewhere.

Recent breakthroughs in instruction-tuned LLMs demonstrate near-human performance at tasks involving code synthesis, semantic search, and multi-modal reasoning. Their proven ability to translate natural language questions into structured database queries suggests a promising avenue for lowering the entry barrier to standards-compliant geospatial infrastructures. However, generic, publicly hosted models still fall short when faced with domain-specific ontologies or standard-specific parameterizations (e.g., bbox, limit, crs). Moreover, the privacy constraints that sequester many government and corporate geodata holdings preclude the option of fine-tuning public models with proprietary examples. Most organizations expose data catalogues only on their intranets for security or licensing reasons; in such settings, public Large Language Models (LLMs) have no prior knowledge about the services' endpoint URLs, data schemas, or access tokens, and therefore cannot provide the "chat-with-your-data" functionality that users increasingly expect.

To bridge this gap we present OGC-AI, an AI middleware tool that couples organization-internal OGC endpoints with a private, retrieval-augmented LLM to deliver conversational access, visualization, and lightweight spatial analytics without exposing sensitive data or infrastructure details to external services. OGC-AI automatically harvests service metadata (capabilities documents, landing pages, OpenAPI descriptions, and sensor metadata) and indexes them in a vector database.
At inference time, the system performs semantic similarity search on the incoming user prompt, injects the most relevant snippets into the model's context window, and instructs the model to (1) formulate correct OGC-compliant queries, (2) execute those queries via a secure proxy, and (3) translate the machine response into human-readable answers, maps, or charts, supporting the mixed legacy-and-modern service landscape typically found in long-standing Spatial Data Infrastructures. By simplifying data query, visualization, and basic analysis, OGC-AI promises to democratize access to valuable geospatial information, thereby fostering wider data utilization and advancing the goals of the FAIR data principles within the geoinformatics community.

OGC-AI aims to significantly lower the barrier to entry for utilizing OGC-standardized geospatial data. By abstracting away the complexities of specific OGC request syntax and service protocols, it empowers a broader range of users, including domain scientists, policy analysts, students, and citizens, to directly query and visualize geospatial information. For experienced GIS professionals, it can serve as a rapid assessment tool for exploring new services or performing quick data checks. Furthermore, it directly enhances the Accessibility and Usability aspects of the FAIR principles for OGC-based data, complementing the Findability and Interoperability inherently promoted by the standards themselves. By enabling interaction with internal organizational data resources, OGC-AI provides a valuable capability currently unmet by public LLM platforms. This research contributes to the burgeoning field of Geospatial Artificial Intelligence (GeoAI) by demonstrating a practical application of LLMs to improve human-computer interaction within established geospatial data infrastructures.
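The retrieve-then-query loop described above can be sketched in minimal form. The bag-of-words similarity below stands in for a real embedding model and vector database, and the catalogue snippets and endpoint URLs are hypothetical; only `bbox` and `limit` are standard OGC API - Features query parameters as named in the abstract.

```python
import math
from urllib.parse import urlencode

def embed(text: str) -> dict[str, int]:
    """Toy bag-of-words 'embedding'; a real deployment would use a
    sentence-embedding model and a vector database instead."""
    vec: dict[str, int] = {}
    for token in text.lower().split():
        vec[token] = vec.get(token, 0) + 1
    return vec

def cosine(a: dict[str, int], b: dict[str, int]) -> float:
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Harvested service metadata snippets (hypothetical internal endpoints).
catalogue = [
    {"snippet": "collection flood_zones river flooding polygons",
     "endpoint": "https://example.org/ogcapi/collections/flood_zones/items"},
    {"snippet": "collection air_quality sensor observations no2",
     "endpoint": "https://example.org/ogcapi/collections/air_quality/items"},
]

def retrieve_and_build_query(prompt: str, bbox: str, limit: int = 10) -> str:
    """Retrieve the most relevant metadata snippet for the prompt, then
    form an OGC API - Features request with bbox/limit parameters."""
    q = embed(prompt)
    best = max(catalogue, key=lambda d: cosine(q, embed(d["snippet"])))
    return best["endpoint"] + "?" + urlencode({"bbox": bbox, "limit": limit})

print(retrieve_and_build_query("show river flooding areas", "9.0,48.6,9.3,48.9"))
```

In OGC-AI the query would be executed via the secure proxy and the response summarised by the LLM; here only the retrieval and query-formulation steps are shown.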
Future work will involve expanding the range of supported OGC standards, refining natural language understanding for more complex geospatial queries, enhancing the analytical capabilities, and exploring robust methods for handling authentication and authorization for secured services.

Pothole Detection and Dimension Estimation via Image Transformation and Scaling with Thai Road Data Integration
Department of Civil Engineering, Faculty of Engineering, Chiang Mai University, Thailand

Potholes on roads affect traffic safety and the overall quality of infrastructure. If left unrepaired, they can lead to increased maintenance costs and broader community impacts. Traditional inspection methods, such as visual surveys by human observers, still have limitations in terms of efficiency, accuracy, and safety. This study introduces a practical implementation combining deep learning and image processing techniques for pothole detection and dimension estimation, with additional road images collected from Thailand. The method employs the YOLOv8n-seg model, which performs instance segmentation to outline pothole boundaries. Training was conducted using a combination of open-source data and a small set of images captured from Thai roads to enhance contextual relevance. Inverse Perspective Mapping (IPM) was applied to convert front-view images into bird's-eye views and estimate pothole dimensions. The segmentation masks predicted by the model were then used to calculate the real size of each pothole. The results highlight the potential of integrating deep learning with image processing techniques to support road condition monitoring in terms of detection and damage assessment.
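The IPM measurement step can be sketched without a vision library: project the segmentation outline through a ground-plane homography and take its metric extent. The homography values, pixel coordinates, and function names here are illustrative; in the study's pipeline the transform would come from camera calibration (e.g. `cv2.getPerspectiveTransform`) and the outline from the YOLOv8n-seg mask.

```python
# Minimal sketch of Inverse Perspective Mapping (IPM): map image-plane
# points onto a ground-plane (bird's-eye) frame with a 3x3 homography,
# then measure the pothole from the projected mask outline.

def apply_homography(H, pt):
    """Map an image point (u, v) to ground coordinates (x, y) in metres."""
    u, v = pt
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)  # perspective divide

def pothole_extent(H, outline_px):
    """Width and length (m) of the bird's-eye bounding box of the outline."""
    ground = [apply_homography(H, p) for p in outline_px]
    xs = [p[0] for p in ground]
    ys = [p[1] for p in ground]
    return max(xs) - min(xs), max(ys) - min(ys)

# Illustrative homography: a pure scale of 100 px per metre on the ground
# plane (a real IPM matrix would also encode the camera's tilt and height).
H = [[0.01, 0.0, 0.0],
     [0.0, 0.01, 0.0],
     [0.0, 0.0, 1.0]]
outline = [(100, 200), (180, 200), (180, 260), (100, 260)]  # pixels
print(pothole_extent(H, outline))  # approximately (0.8, 0.6) metres
```

With a calibrated homography, the same projection applied to every mask vertex yields per-pothole width, length, and (via the shoelace formula) surface area.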