Author: Rahman, Md Rezaur
Supervisor: Prof. Gudrun Klinker
Advisor: Jadid, Adnane (@ne23kah)
Submission Date: [created]

Abstract

Semantic segmentation and reconstruction of 3D data are two fundamental problems in computer vision, with widespread applications in fields such as scene understanding, animation, and augmented reality. Point clouds are a 3D geometric data structure that carries valuable information about the local properties of objects and scenes. Segmenting point clouds is challenging because of the complex local properties inherent in a 3D scene, and reconstructing them is equally arduous due to the complex topology of objects and the randomly scattered points in a scene. Recent methods for 3D reconstruction and segmentation of point clouds leverage deep neural networks and achieve good results; however, they first render raw point clouds into a regular 3D grid-like structure before processing them through the network. This research seeks to overcome that limitation: three networks are designed to evaluate segmentation and reconstruction directly on raw point clouds. The first network employs two joint alignment networks to extract local features from point clouds and concatenates the global and local features to predict a semantic class for each point. The second network extends the first by exploiting a hierarchical feature-learning strategy that learns local properties recursively on partitions of the point set. The final network is a Generative Adversarial Network (GAN) based reconstruction model that uses a tree-structured Graph Convolutional Network as the generator, to increase the representational power of the learned features.
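The core idea of the first network, concatenating a global point-cloud descriptor onto every point's local feature before predicting a per-point class, can be illustrated with a minimal NumPy sketch. The array sizes, the random features, and the single linear "segmentation head" are illustrative assumptions, not the thesis implementation; in the actual network the local features come from the joint alignment networks and the head is a per-point MLP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: N points, F-dim local features, C semantic classes
N, F, C = 1024, 64, 13

# Stand-in for the per-point local features produced by the alignment networks
local_feats = rng.standard_normal((N, F))

# Global feature: a symmetric max-pooling over all points, which makes the
# descriptor invariant to the ordering of points in the cloud
global_feat = local_feats.max(axis=0)            # shape (F,)

# Concatenate the global descriptor onto every point's local feature
combined = np.concatenate(
    [local_feats, np.broadcast_to(global_feat, (N, F))], axis=1
)                                                # shape (N, 2F)

# Hypothetical linear segmentation head standing in for the per-point MLP
W = rng.standard_normal((2 * F, C)) * 0.01
logits = combined @ W                            # shape (N, C)
pred = logits.argmax(axis=1)                     # one semantic class per point
print(pred.shape)                                # (1024,)
```

Because the max-pooling is applied over the point axis, permuting the input points changes neither the global descriptor nor, up to the same permutation, the per-point predictions, which is the property that lets the network consume raw, unordered point clouds.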


