arXiv:2409.10545

ResEmoteNet: Bridging Accuracy and Loss Reduction in Facial Emotion Recognition

Published on Sep 1, 2024

AI-generated summary

ResEmoteNet, a deep learning architecture combining Convolutional, Squeeze-Excitation, and Residual Networks, achieves high accuracy in facial emotion recognition across multiple datasets.

Abstract

The human face is a silent communicator, expressing emotions and thoughts through its facial expressions. With the advancements in computer vision in recent years, facial emotion recognition technology has made significant strides, enabling machines to decode the intricacies of facial cues. In this work, we propose ResEmoteNet, a novel deep learning architecture for facial emotion recognition that combines Convolutional, Squeeze-Excitation (SE) and Residual Networks. The SE block selectively focuses on the important features of the human face, enhancing the feature representation and suppressing less relevant ones. This helps reduce the loss and improves overall model performance. We also integrate the SE block with three residual blocks, which help in learning more complex representations of the data through deeper layers. We evaluated ResEmoteNet on four open-source databases: FER2013, RAF-DB, AffectNet-7 and ExpW, achieving accuracies of 79.79%, 94.76%, 72.39% and 75.67%, respectively. The proposed network outperforms state-of-the-art models across all four databases. The source code for ResEmoteNet is available at https://github.com/ArnabKumarRoy02/ResEmoteNet.
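
To make the architecture description concrete, here is a minimal PyTorch sketch of the two building blocks the abstract names, a Squeeze-and-Excitation (SE) block and a residual block, wired into a small classifier in the stated order (convolutional stem, SE block, three residual blocks). The class names, channel widths, strides, and input size are illustrative assumptions, not the authors' exact configuration; see the linked repository for the official implementation.

```python
# Illustrative sketch only: layer sizes and layout are assumptions,
# not the exact ResEmoteNet configuration from the paper's repo.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels using global context."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average
        self.fc = nn.Sequential(             # excitation: per-channel gate
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # emphasize informative channels, suppress the rest


class ResidualBlock(nn.Module):
    """Basic residual block: two 3x3 convs with an identity shortcut."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.shortcut = nn.Identity()
        if stride != 1 or in_ch != out_ch:  # project when shapes differ
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.body(x) + self.shortcut(x))


class EmotionNetSketch(nn.Module):
    """Conv stem -> SE block -> three residual blocks -> classifier head."""

    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 3, 1, 1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )
        self.se = SEBlock(64)
        self.res = nn.Sequential(
            ResidualBlock(64, 128, stride=2),
            ResidualBlock(128, 256, stride=2),
            ResidualBlock(256, 512, stride=2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(512, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.res(self.se(self.stem(x))))


# Sanity check with a dummy 64x64 RGB face crop (size is an assumption):
logits = EmotionNetSketch()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 7])
```

The SE block's channel gating is what the abstract describes as focusing on important facial features while suppressing less relevant ones; the residual shortcuts let the deeper layers learn more complex representations without degrading gradient flow.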
