Culture constrains how individuals encode and decode emotion in speech and the voice. This talk focuses on recent cross-cultural studies of vocal emotion recognition in our lab, drawing on methods from experimental psychology, including eye-tracking, gating paradigms, acoustic analyses, and machine learning. The findings reveal how members of Eastern and Western cultures differ in the time course of extracting emotional signals from the voice and in using vocal emotion to interpret cross-modal facial information in multimodal displays, and show how dialect experience contributes to training AI models to recognize emotion from voices in different dialects.