Author: Michael Pabst

Supervisor: Prof. Gudrun Klinker

Advisor: Linda Rudolph (@ge29tuw), Nikolas Brasch (@ga48xap)

Submission Date: [created]


Abstract

Traditional 2D screen-based conferencing systems restrict interaction to a flat representation, limiting the immersive experience. As mixed reality platforms advance, there is a growing demand for spaces in which users can interact as naturally as they would in person. In mixed reality telepresence systems, however, the disparity between the participants' different physical environments can hinder effective collaboration. This thesis explores a new method for generating a synthetic mutual scene for remote telepresence participants. The system takes each telepresence user's environment as input and generates a synthetic virtual scene that enables collaboration among all users. The synthetic scene respects the spatial constraints of each user and ensures that personal privacy is not violated. This is achieved through an optimization-based method that finds a common mutual space, combined with a deep learning approach that generates a suitable synthetic scene.
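The abstract leaves the optimization itself unspecified. As a rough illustration of what "finding a common mutual space" can involve, the sketch below intersects two users' free-space occupancy grids under a brute-force search over 90-degree rotations and integer translations, keeping the alignment that maximizes the shared walkable area. All names here (shift, mutual_free_space, the grid resolution, the search strategy) are illustrative assumptions for this sketch, not the method developed in the thesis.

```python
import numpy as np


def shift(grid, dy, dx):
    """Translate a boolean free-space grid by (dy, dx) cells.

    Cells shifted in from outside the grid count as occupied (False).
    """
    out = np.zeros_like(grid)
    h, w = grid.shape
    out[max(0, dy):min(h, h + dy), max(0, dx):min(w, w + dx)] = \
        grid[max(0, -dy):min(h, h - dy), max(0, -dx):min(w, w - dx)]
    return out


def mutual_free_space(grid_a, grid_b, max_shift=10):
    """Illustrative sketch, not the thesis' actual optimization.

    Brute-force search over 90-degree rotations and integer translations
    of room B's free-space grid, maximizing the free area shared with
    room A. Returns the best intersection mask, the transform
    (rotation, dy, dx), and the shared free area in cells.
    """
    best_area, best_mask, best_tf = -1, None, None
    for k in range(4):                      # 0/90/180/270 degree rotations
        rb = np.rot90(grid_b, k)
        if rb.shape != grid_a.shape:        # sketch assumes square grids
            continue
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                mask = grid_a & shift(rb, dy, dx)
                area = int(mask.sum())
                if area > best_area:
                    best_area, best_mask, best_tf = area, mask, (k, dy, dx)
    return best_mask, best_tf, best_area


# Toy usage: two 20x20 rooms with random obstacles (True = walkable).
rng = np.random.default_rng(0)
room_a = rng.random((20, 20)) > 0.3
room_b = rng.random((20, 20)) > 0.3
mask, transform, area = mutual_free_space(room_a, room_b)
print(f"shared free cells: {area}, transform (rot, dy, dx): {transform}")
```

In a full system the resulting mask would then condition the scene-generation stage, so that virtual geometry is only placed where every participant actually has free physical space; how that generation is done (the deep learning component) is beyond this sketch.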

Results/Implementation/Project Description

Conclusion

[ PDF (optional) ]

[ Slides Kickoff/Final (optional) ]