Author:

Norma Puspitasari
Supervisor: Prof. Gudrun Klinker
Advisor: Nassim Eghtebas (@ga53xoy)
Submission Date: [created]

Abstract

Facial extraction data plays a crucial role in affective computing. Although open-source tools are available for extracting Facial Action Unit (AU) data, conducting a user study with real-time AU analysis poses challenges: existing tools are typically designed as standalone applications or libraries for offline use. This research addresses these limitations by optimizing an existing real-time analysis filter integrated into a video-conferencing platform for research purposes, using OpenFace as a case study. Areas for improvement include low frame rate (fps), mixed frames, and data loss. The methodology implements RabbitMQ as a robust queue system to manage and process frame and AU data asynchronously. Additionally, a post-processing feature is introduced that allows users to access facial extraction data after conducting experiments on the platform. Eight experiments were conducted with varying filters, movements, and numbers of participants to gather diverse insights into the impact of the optimization. The recorded experiment videos were then extracted through post-processing for validation and comparison against the real-time AUs. The results indicate promising frame rates, but resource utilization and the real-time AUs still present challenges. Limitations and future work are discussed in the context of the existing literature on this topic.
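The decoupling that RabbitMQ provides here is the classic producer/consumer pattern: the capture loop enqueues frames at camera rate while a worker dequeues them and runs AU extraction at its own pace, so slow extraction no longer drops or mixes frames. The minimal sketch below illustrates that pattern with Python's in-process `queue.Queue` standing in for the RabbitMQ broker; the frame dictionaries, the `extract_aus` stub, and the bounded queue size are assumptions for illustration, not the thesis implementation.

```python
import queue
import threading

# Bounded buffer: back-pressure instead of unbounded memory growth
# when the consumer falls behind the capture rate.
frame_queue = queue.Queue(maxsize=64)

def extract_aus(frame):
    # Placeholder for OpenFace AU extraction; returns dummy AU values.
    return {"frame_id": frame["id"], "AU12": 0.0}

results = []

def consumer():
    # Worker loop: dequeue frames and process them asynchronously.
    while True:
        frame = frame_queue.get()
        if frame is None:  # sentinel signals end of the stream
            frame_queue.task_done()
            break
        results.append(extract_aus(frame))
        frame_queue.task_done()

worker = threading.Thread(target=consumer)
worker.start()

# Producer: a video-capture loop would enqueue frames as they arrive.
for i in range(5):
    frame_queue.put({"id": i})
frame_queue.put(None)  # shut the worker down cleanly
worker.join()

print(len(results))  # all 5 frames processed, in order
```

With RabbitMQ the same roles hold, but the queue lives in an external broker, so the capture process and the extraction process can run independently and frames survive a slow or restarting consumer.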

Results/Implementation/Project Description

Conclusion

[ PDF (optional) ] 

[ Slides Kickoff/Final (optional) ]