Abstract:
The integration of thermal information into photogrammetrically reconstructed 3D models is increasingly relevant for infrastructure diagnostics, energy efficiency analysis, and remote inspection. This paper presents a dual-path workflow for generating thermal-enhanced digital twins from UAV-captured imagery, combining high-resolution RGB and thermal data using both classical photogrammetry and state-of-the-art neural rendering frameworks. Two fusion strategies are proposed: (1) embedding thermal values into vertex properties of the 3D mesh, and (2) projecting thermographic textures onto the reconstructed model. A real-world case study of a university service building is used to validate the methodology using imagery captured by a DJI Mavic 3T drone. The reconstruction pipeline includes COLMAP for traditional Structure-from-Motion (SfM), and Neuralangelo, Instant Neural Graphics Primitives (Instant-NGP), and 3D Gaussian Splatting for neural rendering. The comparative analysis highlights trade-offs in reconstruction fidelity, processing time, and thermal data accuracy. Results show that texture-based thermal projection yields superior visual quality and flexibility, particularly when combined with neural methods. This study provides a modular, reproducible framework for enhancing digital twins with thermal sensing, bridging the gap between photogrammetry and emerging neural reconstruction techniques.
