Multi-Facial Emotion Recognition Using Fusion CNN on Static and Real-Time Inputs: A Deep Learning Approach
Abstract
Facial emotion recognition is a pivotal component of affective computing, aiming to bridge the gap between human emotional expression and machine interpretation. This study introduces a deep learning-based framework for multi-facial emotion recognition that operates on diverse input modalities, including static images, video frames, and live webcam feeds. The model was trained and evaluated on the CK+ dataset with a systematic training/validation/testing split to ensure robustness. A Fusion Convolutional Neural Network (Fusion CNN) is proposed to improve feature extraction and classification accuracy across these heterogeneous input sources. The system was implemented in Python with the OpenCV and Keras libraries, while statistical validation, including chi-square tests and regression analysis, was conducted in R to assess model consistency and accuracy. Among the models evaluated, the Fusion CNN achieved the highest accuracy at 72.16%, surpassing baseline CNN and RNN architectures. These results underscore the potential of the proposed approach for real-time emotion recognition, with scope for future integration into intelligent user interfaces and assistive applications.
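To make the fusion idea concrete, the following is a minimal sketch of a two-branch fusion CNN in Keras. The branch depths, filter counts, 48x48 grayscale input size, seven-class output, and late fusion by concatenation are illustrative assumptions; the abstract does not specify the exact architecture.

# Minimal sketch of a two-branch "fusion" CNN in Keras.
# Architecture details (branch depths, filter counts, input size) are
# assumptions for illustration, not the paper's exact design.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # CK+ is commonly labeled with 7 basic emotions (assumption)

def conv_branch(inputs, filters):
    # A small convolutional feature-extraction branch.
    x = inputs
    for f in filters:
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
    return layers.Flatten()(x)

inputs = layers.Input(shape=(48, 48, 1))
# Two parallel branches of different depth capture features at different
# scales; their outputs are fused by concatenation before classification.
shallow = conv_branch(inputs, [32, 64])
deep = conv_branch(inputs, [32, 64, 128])
fused = layers.concatenate([shallow, deep])
x = layers.Dense(256, activation="relu")(fused)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

For the real-time path, a hedged sketch of webcam inference with OpenCV follows: frames are captured, faces are detected with OpenCV's bundled Haar cascade, and each face crop is classified by the model above. The preprocessing (grayscale, 48x48 resize, [0, 1] scaling) mirrors the assumed training input and is not taken from the paper.

# Hedged sketch: webcam capture, Haar-cascade face detection, per-face
# classification with the fusion model defined above.
import cv2
import numpy as np

EMOTIONS = ["anger", "contempt", "disgust", "fear",
            "happiness", "sadness", "surprise"]  # CK+ labels (assumption)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Crop, resize, and scale the face region to match training input.
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(crop.reshape(1, 48, 48, 1), verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()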