
Publication Tag: Affective Computing

An overview of all publications that have the tag you selected.

2023
11 citations
Quantifying the efficacy of an automated facial coding software using videos of parents
R. C. Burgess, I. Culpin, I. Costantini, H. Bould, I. Nabney, R. M. Pearson
Leveraging FaceReader technology, we discuss the implications of our findings in the context of future automated facial coding studies, and we emphasise the need to consider gender-specific influences in research.
2019
32 citations
Remote heart rate monitoring – Assessment of the Facereader rPPg by Noldus
S. Benedetto, C. Caldato, D. C. Greenwood, N. Bartoli, V. Pensabene
In this research, the FaceReader rPPG module by Noldus is assessed for remote, camera-based heart rate monitoring, providing objective, contactless estimates of cardiac activity.
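The core idea behind rPPG is that subtle periodic colour changes in facial skin track the pulse, so the dominant frequency of the colour signal estimates heart rate. The sketch below is a hypothetical, simplified illustration of that principle (not the Noldus implementation); the signal is synthetic, and real pipelines add face tracking and heavy filtering.

```python
# Hypothetical rPPG sketch: find the dominant frequency of a facial
# colour signal within the plausible heart-rate band. Synthetic data.

from math import cos, sin, pi, hypot

def dominant_frequency_bpm(signal, fps, lo_bpm=40, hi_bpm=180):
    """Naive DFT scan over the heart-rate band; returns the peak in bpm."""
    n = len(signal)
    mean = sum(signal) / n
    centred = [s - mean for s in signal]
    best_bpm, best_power = lo_bpm, -1.0
    for bpm in range(lo_bpm, hi_bpm + 1):
        f = bpm / 60.0  # candidate frequency in Hz
        re = sum(s * cos(2 * pi * f * i / fps) for i, s in enumerate(centred))
        im = sum(s * sin(2 * pi * f * i / fps) for i, s in enumerate(centred))
        power = hypot(re, im)
        if power > best_power:
            best_bpm, best_power = bpm, power
    return best_bpm

fps = 30  # assumed camera frame rate
# Synthetic 10-second pulse signal at 1.2 Hz, i.e. 72 beats per minute.
pulse = [cos(2 * pi * 1.2 * i / fps) for i in range(300)]
print(dominant_frequency_bpm(pulse, fps))  # prints 72
```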
2022
15 citations
Test–Retest Reliability in Automated Emotional Facial Expression Analysis: Exploring FaceReader 8.0 on Data from Typically Developing Children and Children with Autism
Z. Borsos, Z. Jakab, K. Stefanik, B. Bogdán, M. Gyori
Leveraging FaceReader 8.0, this study examines the test–retest reliability of automated emotional facial expression analysis on data from typically developing children and children with autism. It provides insights into how automated facial coding can be applied in research and practical scenarios.
2023
8 citations
The cross-race effect in automatic facial expression recognition violates measurement invariance
Y. T. Li, S. Yeh, T. R. Huang
Emotion has been a subject of intensive research in psychology and cognitive neuroscience for several decades. Recently, more studies of emotion have adopted automatic rather than manual methods for facial expression recognition to analyze images or videos of human faces. Compared to manual methods, these computer-vision-based methods can objectively and rapidly analyze a large amount of data. These methods are also validated and believed to be accurate in their judgments. However, they often rely on statistical learning models (e.g., deep neural networks), which are intrinsically inductive and thus suffer from problems of induction. Specifically, models that were trained primarily on Western faces may not generalize well to Eastern faces, jeopardizing the measurement invariance of emotions in cross-cultural studies. To demonstrate this possibility, this study carries out a cross-racial validation of two popular systems (FaceReader and DeepFace) using face datasets. Although both systems achieved high overall accuracy in categorical emotion judgments, they performed relatively poorly on Eastern faces, especially for negative emotions. While the results caution against uncritical use of these systems on non-Western faces, they suggest that measurements of happiness are invariant across races and can therefore still be utilized in positive psychology research.
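A minimal way to probe the kind of invariance violation described above is to compare per-emotion accuracy across two face datasets and look for category-specific gaps. The sketch below is a hypothetical illustration with invented labels, not the study's actual analysis.

```python
# Hypothetical sketch: per-emotion accuracy of a classifier on two
# datasets; a gap for one category signals an invariance violation.

from collections import defaultdict

def per_emotion_accuracy(true_labels, predicted_labels):
    """Return {emotion: accuracy} for each ground-truth emotion."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred in zip(true_labels, predicted_labels):
        total[truth] += 1
        if truth == pred:
            correct[truth] += 1
    return {emotion: correct[emotion] / total[emotion] for emotion in total}

# Invented toy data: happiness is recognized equally well in both
# groups, but a negative emotion drops for the second group.
western_true = ["happy", "happy", "sad", "sad"]
western_pred = ["happy", "happy", "sad", "sad"]
eastern_true = ["happy", "happy", "sad", "sad"]
eastern_pred = ["happy", "happy", "happy", "sad"]

acc_west = per_emotion_accuracy(western_true, western_pred)
acc_east = per_emotion_accuracy(eastern_true, eastern_pred)
print(acc_west["happy"], acc_east["happy"])  # comparable -> invariant
print(acc_west["sad"], acc_east["sad"])      # gap -> invariance violated
```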
2021
24 citations
The Effectiveness of Facial Expression Recognition in Detecting Emotional Responses to Sound Interventions in Older Adults With Dementia
Y. Liu, Z. Wang, G. Yu
This research uses facial expression recognition (FER) software (FaceReader) to explore the influence of different sound interventions on the emotions of older people with dementia. The field experiment was carried out in a public activity space in an adult care facility. Three intervention sound sources were used, namely music, stream sounds, and birdsong. Data collected through the Self-Assessment Manikin Scale (SAM) were compared with the FER data. FaceReader identified differences in emotional responses: participants had significantly higher valence during all three interventions than without intervention (p < 0.01), and the indices of sadness, fear, and disgust differed between interventions. For example, before the start of the birdsong intervention, the index initially increased by 0.06 from 0 s to about 20 s, followed by a linear downward trend with an average reduction of 0.03 per second. In addition, arousal was lower when interventions began before, rather than concurrently with, the start of birdsong (p < 0.01). Moreover, for the stream interventions there were significant differences across experiment days (p < 0.05 or p < 0.01), and effects of age and gender were also observed. Finally, comparison of SAM and FER results showed that for music the first 80 s of FER data help predict dominance (r = 0.600) and acoustic comfort (r = 0.545); for birdsong the first 40 s predict pleasure (r = 0.770) and acoustic comfort (r = 0.766); further strong correlations were found for music (r = 0.824) and birdsong (r = 0.891).
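The r values reported above are Pearson correlations between self-report and FER measures. As a hypothetical illustration of that statistic, the sketch below correlates invented SAM pleasure ratings with invented FaceReader-style valence scores; it is not the study's data or code.

```python
# Hypothetical sketch: Pearson correlation between self-reported SAM
# scores and FER valence scores. All data below are invented.

from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sam_pleasure = [3, 5, 7, 6, 8]           # invented 9-point SAM ratings
fer_valence = [0.1, 0.3, 0.6, 0.4, 0.7]  # invented FER valence scores
print(round(pearson_r(sam_pleasure, fer_valence), 3))
```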
2023
8 citations
The relationship between charitable giving and emotional facial expressions: Results from affective computing
A. Shepelenko, P. Shepelenko, A. Obukhova, V. Kosonogov, A. Shestakova
In this research, FaceReader software is used to explore the relationship between charitable giving and emotional facial expressions, providing objective data on emotional responses and facial muscle activity.
2021
15 citations
Viewpoint Robustness of Automated Facial Action Unit Detection Systems
S. Namba, W. Sato, S. Yoshikawa
This study investigated the viewpoint robustness of automated facial action unit detection systems using static images obtained at various angles (0°, 15°, 30°, and 45°). Three automated systems (FaceReader, OpenFace, and Py-feat) were evaluated. The overall performance was best for OpenFace, followed by FaceReader, and then Py-feat. FaceReader performance significantly decreased at 45° compared with the other angles, whereas Py-feat did not differ among the four angles, and OpenFace performance decreased as the target face turned sideways. Prediction robustness varied with facial action components and systems.
2022
15 citations
What’s in a face: Automatic facial coding of untrained study participants compared to standardized inventories
T. T. A. Höfling, G. W. Alpers, B. Büdenbender, U. Föhl, A. B. M. Gerdes
Automatic facial coding (AFC) is a novel research tool to automatically analyze emotional expressions. AFC can classify expressions with high accuracy in standardized picture inventories, but classification of untrained study participants is more error-prone. This discrepancy requires a direct comparison between these two sources. To this end, 70 participants were asked to express joy, anger, surprise, sadness, disgust, and fear in a typical laboratory setting. The recorded videos were scored with well-established software (FaceReader, Noldus Information Technology) and compared with measures from pictures of trained actors (i.e., inventories). We report the probability estimates of specific emotion categories and, in addition, Action Unit (AU) profiles for each emotion. Based on this, we used a machine learning approach to determine the relevant AUs for each emotion, separately for both datasets. First, misclassification was frequent for some emotions expressed by participants. Second, AU intensities were generally lower in participants across all emotions. Third, although AU profiles overlapped substantially across datasets, there were also substantial differences in their profiles. This provides evidence that the application of AFC is not limited to standardized expression inventories but can be used to code emotions in untrained study participants.
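One crude way to rank Action Units by relevance for an emotion is to compare mean AU intensity when that emotion is expressed versus when it is not. The study uses a machine learning approach; the difference-of-means score below is a simplified, hypothetical stand-in, and the AU intensities are invented.

```python
# Hypothetical sketch: rank AUs for an emotion by the difference between
# their mean intensity in that emotion and in all other emotions.

def au_relevance(samples, target_emotion):
    """samples: list of (emotion, {au_name: intensity}) pairs.
    Returns {au_name: mean intensity in target minus mean elsewhere}."""
    sums = {}
    for emotion, aus in samples:
        group = "target" if emotion == target_emotion else "other"
        for au, intensity in aus.items():
            total, count = sums.setdefault((au, group), (0.0, 0))
            sums[(au, group)] = (total + intensity, count + 1)
    au_names = {au for au, _ in sums}
    scores = {}
    for au in au_names:
        t_sum, t_n = sums.get((au, "target"), (0.0, 1))
        o_sum, o_n = sums.get((au, "other"), (0.0, 1))
        scores[au] = t_sum / t_n - o_sum / o_n
    return scores

# Invented toy intensities: AU6 (cheek raiser) and AU12 (lip corner
# puller) accompany joy; AU4 (brow lowerer) accompanies anger.
samples = [
    ("joy",   {"AU6": 0.9, "AU12": 0.8, "AU4": 0.1}),
    ("joy",   {"AU6": 0.7, "AU12": 0.9, "AU4": 0.0}),
    ("anger", {"AU6": 0.1, "AU12": 0.1, "AU4": 0.8}),
    ("anger", {"AU6": 0.2, "AU12": 0.0, "AU4": 0.9}),
]
scores = au_relevance(samples, "joy")
print(sorted(scores, key=scores.get, reverse=True))  # prints ['AU12', 'AU6', 'AU4']
```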
2021
2 citations
Linking teachers’ facial microexpressions with student-based evaluation of teaching effectiveness: A pilot study using FaceReader™
R. Schlag, M. Sailer
This study seeks to investigate the potential influence of facial microexpressions on student-based evaluations and to explore future possibilities for automated technologies in higher education. We applied a non-experimental correlational design to test whether the number of microexpressions recognized by FaceReader™ in videotaped university lecturers serves as a predictor of positive results in student evaluations of teaching effectiveness. We analyzed five lectures with the automatic recognition software; additionally, each video was rated by 8–16 students using a rating instrument based on Murray's (1983) factor analysis. The software detected more than 5,000 microexpressions. However, 'emotions' did not significantly predict the 'overall performance' rating (b = .05, t(37) = .35, p > .05). This demonstrates that the ratings are affected by other variables: perceived sympathy as well as the estimated age of the lecturer predicted the ratings.
2025
3 citations
An Artificial Intelligence Model for Sensing Affective Valence and Arousal from Facial Images
H. Nomiya, K. Shimokawa, S. Namba, M. Osumi, W. Sato
Artificial intelligence (AI) models can sense subjective affective states from facial images. Although recent psychological studies have indicated that dimensional aspects of valence and arousal are systematically associated with facial expressions, no AI model had been developed to estimate these from facial images based on empirical data. We developed a recurrent neural network-based model trained on our database containing participants' valence/arousal ratings of video clips. Leave-one-out cross-validation supported the validity of the model for predicting subjective states. We further validated the effectiveness of the model by analyzing a dataset of facial expressions and arousal ratings from videos. The model predicted second-by-second affective states with a performance comparable to that of FaceReader, a commercial facial expression analysis software that estimates affective states using a different approach. We also constructed a graphical user interface showing real-time video and predicted affective states; the model is the first distributable affective sensing model for facial images/videos. We anticipate it will have many practical uses, such as in mental health monitoring and marketing research.
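Recurrent models like the one described carry a hidden state across video frames, so each prediction depends on the frame sequence and not just the current frame. The sketch below is a hypothetical single-unit Elman-style recurrence with invented weights, meant only to illustrate that building block, not the authors' architecture.

```python
# Hypothetical sketch: one-unit recurrent update carrying affective
# state (e.g., a valence estimate) across video frames. Invented weights.

from math import tanh

def rnn_step(x, h_prev, w_in=0.8, w_rec=0.5, bias=0.0):
    """One Elman-style update: new hidden state from input and history."""
    return tanh(w_in * x + w_rec * h_prev + bias)

def predict_sequence(frame_features):
    """Run the recurrence over per-frame features; one output per frame."""
    h = 0.0
    states = []
    for x in frame_features:
        h = rnn_step(x, h)
        states.append(h)
    return states

# Invented per-frame features: a gradually intensifying expression.
states = predict_sequence([0.2, 0.4, 0.6, 0.6])
print([round(s, 3) for s in states])
```

Note how the last two inputs are identical but the outputs differ: the recurrent term keeps integrating history, which is what lets such a model smooth and contextualize second-by-second estimates.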
