Facial expression analysis with AFFDEX and FACET: A validation study
Authors: Sabrina Stöckli, Michael Schulte-Mecklenbeck, Stefan Borer, Andrea C. Samson
Affiliations: 1. Institute of Marketing and Management, Department of Consumer Behavior, University of Bern, Bern, Switzerland; 2. Max Planck Institute for Human Development, Berlin, Germany; 3. Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; 4. Department of Psychiatry and Behavioral Science, Stanford University School of Medicine, Stanford, USA
Abstract: The goal of this study was to validate AFFDEX and FACET, two algorithms that classify emotions from facial expressions, in iMotions’ software suite. In Study 1, pictures of standardized emotional facial expressions from three databases, the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP), the Amsterdam Dynamic Facial Expression Set (ADFES), and the Radboud Faces Database (RaFD), were classified with both modules. Accuracy (Matching Scores) was computed to assess and compare classification quality. Results show large variance in accuracy across emotions and databases, with a performance advantage for FACET over AFFDEX. In Study 2, the facial expressions of 110 participants were measured while they were exposed to emotionally evocative pictures from the International Affective Picture System (IAPS), the Geneva Affective Picture Database (GAPED), and the Radboud Faces Database (RaFD). Accuracy again differed across emotions, and FACET again performed better. Overall, iMotions achieves acceptable accuracy for standardized pictures of prototypical facial expressions, but performs worse for more natural ones. We discuss potential sources of limited validity and suggest research directions in the broader context of emotion research.
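The abstract does not spell out how the Matching Scores were computed; a minimal sketch, assuming a matching score is simply the proportion of pictures of each target emotion that the classifier labeled with that emotion (the function name and example labels are illustrative, not from the paper):

```python
from collections import defaultdict

def matching_scores(true_labels, predicted_labels):
    """Per-emotion accuracy: for each target emotion, the fraction of
    pictures the classifier labeled with that emotion."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for true, pred in zip(true_labels, predicted_labels):
        total[true] += 1
        if pred == true:
            correct[true] += 1
    return {emotion: correct[emotion] / total[emotion] for emotion in total}

# Hypothetical example: three 'joy' pictures and two 'anger' pictures
true = ["joy", "joy", "joy", "anger", "anger"]
pred = ["joy", "joy", "anger", "anger", "fear"]
scores = matching_scores(true, pred)
# scores["joy"] -> 2/3, scores["anger"] -> 1/2
```

Computing the score per emotion (rather than one pooled accuracy) matches the study's finding that accuracy varies strongly across distinct emotions.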
This article has been indexed in SpringerLink and other databases.