Point clouds obtained from one-sided scanning suffer from structural loss in their 3D shape representations, and many learning-based methods have been proposed to restore complete point clouds from partial ones. However, most of them use only the global features of the inputs to generate outputs, which can lose details. In this paper, we propose a new method that exploits both global and local features. First, local features are extracted from the inputs and analyzed under the conditions encoded by the global features. Second, the conditional local feature vectors are deeply fused with one another via graph convolution and self-attention. Third, the deeply fused features are decoded to generate coarse point clouds. Finally, global features extracted from the inputs and the coarse outputs are combined to generate high-density fine outputs. Our network is trained and tested on eight categories of objects in ModelNet. The results show that our network overcomes instability in local feature awareness, restores complete point clouds with more details and smoother shapes, and outperforms most existing methods both qualitatively and quantitatively. Our source code will be available at: https://github.com/wuhang100/LRA-Net.
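To make the coarse-to-fine pipeline in the abstract concrete, the sketch below outlines one possible realization in PyTorch: per-point local features conditioned on a pooled global code, deep fusion via a k-NN graph convolution followed by self-attention, a coarse decoder, and a refinement stage that combines the global code with each coarse point. All module names, layer sizes, and the specific graph-convolution and attention formulations are assumptions made for illustration; they are not the paper's actual LRA-Net implementation.

```python
import torch
import torch.nn as nn


def knn_graph_feature(x, k=8):
    """EdgeConv-style edge features (x_j - x_i, x_i) over a k-NN graph.
    x: (B, N, C) per-point feature vectors."""
    dist = torch.cdist(x, x)                                   # (B, N, N)
    idx = dist.topk(k, largest=False).indices                  # (B, N, k)
    nbrs = torch.gather(
        x.unsqueeze(1).expand(-1, x.size(1), -1, -1), 2,
        idx.unsqueeze(-1).expand(-1, -1, -1, x.size(2)))       # (B, N, k, C)
    center = x.unsqueeze(2).expand_as(nbrs)
    return torch.cat([nbrs - center, center], dim=-1)          # (B, N, k, 2C)


class CoarseToFineCompletion(nn.Module):
    """Hypothetical global + conditional-local completion network."""

    def __init__(self, feat_dim=256, coarse_pts=512, up_ratio=4):
        super().__init__()
        self.coarse_pts = coarse_pts
        self.up_ratio = up_ratio
        # Per-point local encoder and a max-pooled global encoder.
        self.local_enc = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                                       nn.Linear(128, feat_dim))
        self.global_enc = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        # Local features are conditioned on the global code before fusion.
        self.condition = nn.Linear(2 * feat_dim, feat_dim)
        # Graph convolution followed by self-attention for deep fusion.
        self.graph_mlp = nn.Sequential(nn.Linear(2 * feat_dim, feat_dim), nn.ReLU())
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        # Coarse decoder and a refinement head producing dense offsets.
        self.coarse_dec = nn.Sequential(nn.Linear(feat_dim, 1024), nn.ReLU(),
                                        nn.Linear(1024, coarse_pts * 3))
        self.refine = nn.Sequential(nn.Linear(3 + feat_dim, 128), nn.ReLU(),
                                    nn.Linear(128, up_ratio * 3))

    def forward(self, partial):                                 # (B, N, 3)
        local = self.local_enc(partial)                         # (B, N, F)
        glob = self.global_enc(local.max(dim=1).values)         # (B, F)
        # Condition each local feature on the global code.
        cond = self.condition(torch.cat(
            [local, glob.unsqueeze(1).expand_as(local)], dim=-1))
        # Deep fusion: k-NN graph conv, then self-attention across points.
        fused = self.graph_mlp(knn_graph_feature(cond)).max(dim=2).values
        fused, _ = self.attn(fused, fused, fused)
        # Coarse output decoded from the pooled fused feature.
        code = fused.max(dim=1).values
        coarse = self.coarse_dec(code).view(-1, self.coarse_pts, 3)
        # Fine output: combine the global code with each coarse point.
        offsets = self.refine(torch.cat(
            [coarse, code.unsqueeze(1).expand(-1, coarse.size(1), -1)], dim=-1))
        fine = coarse.repeat_interleave(self.up_ratio, dim=1) + \
               offsets.view(coarse.size(0), -1, 3)
        return coarse, fine
```

Under these assumed settings, a batch of partial clouds of shape (B, N, 3) yields a 512-point coarse cloud and a 2048-point fine cloud; the actual point counts and losses used in the paper may differ.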