Author: Özgür Akyazı
Supervisor: Prof. Gudrun Klinker
Advisor: Adnane Jadid
Submission Date: 09.04.2019

Abstract

Tracking is an important task in Augmented Reality, and a variety of sensor data is used to achieve it. To extract more information about the scene, object, or situation, these data streams need to be combined and interpreted, a process known as Sensor Fusion. The field is well established, having been studied for decades, yet, to the author's knowledge, all studies except one take an analytical approach to the problem. In this study, a fully automated deep learning architecture that fuses data from multiple IMUs (acceleration and angular velocity) into position and orientation estimates is designed and evaluated.
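To make the idea concrete, the following is a minimal sketch of what such a learned multi-IMU fusion model could look like. This is an illustrative assumption rather than the architecture from the report: PyTorch, the LSTM encoder, the layer sizes, and all names (IMUFusionNet, pos_head, ori_head) are hypothetical.

import torch
import torch.nn as nn

class IMUFusionNet(nn.Module):
    """Hypothetical sketch: fuses raw multi-IMU streams into pose estimates."""
    def __init__(self, num_imus: int = 3, hidden_size: int = 128):
        super().__init__()
        # Each IMU contributes 6 channels: 3-axis acceleration + 3-axis angular velocity.
        self.encoder = nn.LSTM(num_imus * 6, hidden_size, num_layers=2, batch_first=True)
        # Separate heads for 3D position and unit-quaternion orientation.
        self.pos_head = nn.Linear(hidden_size, 3)
        self.ori_head = nn.Linear(hidden_size, 4)

    def forward(self, imu_seq: torch.Tensor):
        # imu_seq: (batch, time, num_imus * 6)
        features, _ = self.encoder(imu_seq)
        pos = self.pos_head(features)                 # (batch, time, 3)
        ori = self.ori_head(features)
        ori = ori / ori.norm(dim=-1, keepdim=True)    # normalize to unit quaternions
        return pos, ori

# Usage: a batch of 8 windows, 200 time steps each, from 3 IMUs (18 channels).
model = IMUFusionNet(num_imus=3)
pos, ori = model(torch.randn(8, 200, 18))

Regressing pose directly from windows of synchronized IMU samples is what makes such an approach fully automated: no hand-crafted filter model is required.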

Results/Implementation/Project Description

The following part shows how the model improved as successive variants were trained.

The results trace how performance evolved as the hyperparameters of the neural network were changed; the learning rate turned out to be one of the most influential. To keep the report structured, each part of the network was improved one at a time. To get the best performance out of the model, however, the hyperparameters would ideally be tuned jointly, which leads to a very large search space. A detailed explanation of these improvements can be found in the project PDF, and a sketch of the one-at-a-time tuning follows below.
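As a hedged illustration of that one-at-a-time tuning, the sketch below sweeps only the learning rate while every other hyperparameter stays fixed. It reuses the hypothetical IMUFusionNet sketch from above; the synthetic data, loss, and epoch count are placeholders, not the training setup from the report.

import torch
import torch.nn as nn

def train_and_evaluate(lr: float, epochs: int = 20) -> float:
    """Train the hypothetical model with one learning rate and report the final loss."""
    model = IMUFusionNet(num_imus=3)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    # Synthetic stand-in data; the actual project trains on recorded IMU sequences.
    x = torch.randn(64, 200, 18)
    target_pos = torch.randn(64, 200, 3)
    for _ in range(epochs):
        optimizer.zero_grad()
        pos, _ = model(x)
        loss = loss_fn(pos, target_pos)
        loss.backward()
        optimizer.step()
    return loss.item()

# One-at-a-time sweep: vary the learning rate, hold everything else fixed.
for lr in (1e-2, 1e-3, 1e-4, 1e-5):
    print(f"lr={lr:.0e}  final training loss={train_and_evaluate(lr):.4f}")

Tuning all hyperparameters jointly (for example, random search over learning rate, hidden size, and depth at once) would explore the full search space the text mentions, at a much higher compute cost.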

Conclusion





Project PDF



Kick-off Presentation


Final Presentation