Vision: reconstruct a model of the world that permits online level-of-detail extraction. The core idea of this project is to interactively integrate sensed 3D data from various sources and scales into a topologically clean surface. Our varying-scale model then permits online extraction of seamless levels of detail for rendering with minimal aliasing and popping artifacts. Our proposed algorithms enable easy digitization of large indoor or urban scenes through crowd-sourced 3D scanning with commodity mobile devices. Instant integration into the collective model also gives users better control over the scan acquisition process by indicating where data is still missing. The acquired 3D scenes can then be used immediately by the general public for navigation on the same hand-held devices and augmented with existing semantic information, e.g. from building information models or urban installations. At the same time, this supports the incidental inventorying of urban or indoor furniture. The topologically clean output surface and the built-in change detection make the geometry easy to process for common use cases such as autonomous navigation, environment learning, augmented reality, and change tracking. An example application is fusing and distributing scans from multiple autonomous vehicles (ground and aerial) for incidental map updating, efficient collision detection with guarantees, and change tracking for path planning.
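To make the core idea more concrete, the following is a minimal illustrative sketch, not the project's actual algorithms or topological guarantees: it assumes a simple octree of fused signed-distance averages in which each sample is integrated at a depth matching its scale, and a level-of-detail query falls back to the finest populated ancestor so coarse and fine data blend without gaps. All names here (MultiScaleVolume, Node, integrate, query) are hypothetical.

```python
# Minimal illustrative sketch (assumed representation, not the project's method):
# an octree of fused signed-distance averages. Samples are integrated at a depth
# matching their scale; LOD queries fall back to the finest populated ancestor.
from dataclasses import dataclass, field

@dataclass
class Node:
    tsdf: float = 0.0     # running weighted average of signed distances
    weight: float = 0.0   # accumulated confidence
    children: dict = field(default_factory=dict)  # octant index -> Node

class MultiScaleVolume:
    def __init__(self, size: float, max_depth: int = 8):
        self.size = size            # edge length of the cubic domain
        self.max_depth = max_depth
        self.root = Node()

    def _depth_for_scale(self, scale: float) -> int:
        # Coarse samples (large footprint, high noise) stop at shallow depths.
        depth, cell = 0, self.size
        while depth < self.max_depth and cell > scale:
            cell *= 0.5
            depth += 1
        return depth

    def _descend(self, p, lo, cell):
        # Pick the child octant containing p and update the cell origin in place.
        half = cell * 0.5
        octant = sum(((p[i] >= lo[i] + half) << i) for i in range(3))
        for i in range(3):
            if (octant >> i) & 1:
                lo[i] += half
        return octant, half

    def integrate(self, p, sdf: float, scale: float, confidence: float = 1.0):
        """Fuse one signed-distance sample at position p with a given scale."""
        target = self._depth_for_scale(scale)
        node, lo, cell = self.root, [0.0, 0.0, 0.0], self.size
        for depth in range(target + 1):
            # Weighted running average, as in classic volumetric fusion.
            node.weight += confidence
            node.tsdf += (sdf - node.tsdf) * confidence / node.weight
            if depth == target:
                break
            octant, cell = self._descend(p, lo, cell)
            node = node.children.setdefault(octant, Node())

    def query(self, p, lod_depth: int) -> float:
        """Distance at the requested LOD, from the finest populated ancestor."""
        node, lo, cell, best = self.root, [0.0, 0.0, 0.0], self.size, 0.0
        for depth in range(lod_depth + 1):
            if node.weight > 0.0:
                best = node.tsdf
            if depth == lod_depth:
                break
            octant, cell = self._descend(p, lo, cell)
            if octant not in node.children:
                break
            node = node.children[octant]
        return best

# Example: a coarse aerial sample and a fine hand-held sample at the same spot.
vol = MultiScaleVolume(size=8.0, max_depth=6)
vol.integrate([1.2, 3.4, 0.7], sdf=0.02, scale=1.0)   # coarse, noisy scan
vol.integrate([1.2, 3.4, 0.7], sdf=0.05, scale=0.1)   # fine, accurate scan
print(vol.query([1.2, 3.4, 0.7], lod_depth=2))  # coarse LOD for distant views
print(vol.query([1.2, 3.4, 0.7], lod_depth=6))  # fine LOD for close-up views
```

A renderer could then pick lod_depth per view distance, which is the kind of mechanism that would keep aliasing and popping small when switching between detail levels; the actual project additionally targets topological guarantees on the extracted surfaces, which this sketch does not address.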
Principal Investigator:
Institution:
Project title:
Co-Principal Investigator(s): Michael Wimmer (TU Wien)
Status: Ongoing (01.05.2020 – 30.04.2026)
GrantID: 10.47379/ICT19009
Funding volume: € 578,450
Keywords: Surface Reconstruction, Level of Detail, Denoising, Multi-scale, Topological Guarantees, Geometry Processing
Scientific disciplines: Computer graphics (70%) | Geometry (30%)