Author:

Supervisor: Prof. Gudrun Klinker
Advisor:
Submission Date: 15.12.2020

Abstract

In this work, motion control of a physically simulated humanoid avatar using a neural network is explored. Two multi-segmented, human-shaped avatars are placed in a scenario in which one is driven by a designed animation and the other, controlled by a neural network, is supposed to follow it. We recorded datasets and explored through supervised learning how the neural network, acting as an adaptive controller, can learn to actuate the avatar. This required extensive hyperparameter tuning. The findings of the training sessions are documented and analysed. Two neural network architectures are proposed and evaluated.

Results/Implementation/Project Description

In this work, a function has been identified which maps the current state of a virtual avatar, consisting of multiple Rigidbodies, to actuation values for each of these Rigidbodies. Using this function, we recorded datasets and explored whether a neural network is able to solve this multidimensional regression problem. We documented multiple training sessions and evaluated them based on a performance measure. The datasets were recorded in Unity. Neural network training was implemented in a Python environment with TensorFlow. The trained model is converted to JSON and loaded onto an Ubi Interact server, which communicates with Unity to post predictions at runtime.
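The supervised setup described above can be sketched as a small Keras regression model. All dimensions and layer sizes here are assumptions for illustration only (the thesis does not state them): e.g. 13 state values per Rigidbody (position, rotation quaternion, linear and angular velocity) for 15 Rigidbodies, mapped to 3 actuation values per Rigidbody, trained with a mean-squared-error loss.

```python
# Hypothetical sketch of the multidimensional regression setup; the
# concrete avatar dimensions and network architecture are assumptions.
import numpy as np
import tensorflow as tf

N_BODIES = 15                  # assumed number of Rigidbodies
STATE_DIM = N_BODIES * 13      # input: concatenated per-Rigidbody states
ACTION_DIM = N_BODIES * 3      # output: actuation values per Rigidbody

# A small fully connected network mapping avatar state to actuation values.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(STATE_DIM,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(ACTION_DIM),   # linear output for regression
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")

# Stand-in for a dataset recorded in Unity (random data for illustration).
states = np.random.randn(1024, STATE_DIM).astype(np.float32)
actions = np.random.randn(1024, ACTION_DIM).astype(np.float32)
model.fit(states, actions, batch_size=64, epochs=2, verbose=0)

pred = model.predict(states[:1], verbose=0)
print(pred.shape)  # one actuation vector per input state
```

In such a setup, the recorded Unity dataset would replace the random arrays, and the trained weights would then be exported for serving.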

Conclusion