Surface reconstruction from point cloud scans is crucial in 3D vision and graphics. Recent approaches focus on training deep-learning (DL) models to generate representations through learned priors. These models use neural networks to map point clouds into compact representations and then decode these latent representations into signed distance functions (SDFs). Such methods rely on heavy supervision and incur high computational costs. Moreover, they lack interpretability regarding how the encoded representations influence the resulting surfaces. This work proposes a computationally efficient and mathematically transparent Green Learning (GL) solution, named the lightweight pointcloud surface reconstruction (LPSR) method. LPSR reconstructs surfaces in two steps. First, it progressively generates a sparse voxel representation using a feedforward approach. Second, it decodes the representation into unsigned distance functions (UDFs) based on anisotropic heat diffusion. Experimental results show that LPSR offers competitive performance against state-of-the-art surface reconstruction methods on the FAMOUS, ABC, and Thingi10K datasets at modest model complexity.
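As a rough illustration of the principle behind heat-diffusion-based distance decoding (not the paper's LPSR decoder, which uses anisotropic diffusion on a sparse voxel grid), the sketch below recovers an unsigned distance field on a 1D grid by diffusing heat from a "surface" point and applying Varadhan's formula, d(x) ≈ sqrt(-4t·log(u_t(x)/max u_t)). All grid sizes and step counts here are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: isotropic heat diffusion + Varadhan's formula on a 1D grid.
# The surface is a single point; LPSR instead diffuses anisotropically over
# a sparse voxel representation, so this only shows the underlying idea.

n = 201          # grid points (assumed)
h = 0.01         # grid spacing
x = np.arange(n) * h
src = n // 2     # "surface" located at the middle point

u = np.zeros(n)
u[src] = 1.0     # heat impulse on the surface

t = 0.0
dt = 0.4 * h * h  # explicit step satisfying the stability bound dt <= h^2/2
for _ in range(200):
    # explicit finite-difference heat step: du/dt = d^2u/dx^2
    u[1:-1] += dt / (h * h) * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    t += dt

# Varadhan-style distance estimate; normalizing by the peak removes the
# Gaussian prefactor so that -4t*log(u/u_max) ~ distance^2 near the source.
ratio = np.maximum(u / u.max(), 1e-300)
d = np.sqrt(np.maximum(0.0, -4.0 * t * np.log(ratio)))

# compare against the true unsigned distance |x - x_src| near the source
true_d = np.abs(x - x[src])
err = np.abs(d - true_d)[src - 30 : src + 31].max()
print(f"max UDF error near source: {err:.4f}")
```

In the continuum limit the normalized heat kernel gives the distance exactly; on the discrete grid the error stays small near the source, which is what makes diffusion a workable transparent decoder for distance fields.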
APSIPA Transactions on Signal and Information Processing Special Issue - Three-dimensional Point Cloud Data Modeling, Processing, and Analysis