Affective computing is a multidisciplinary field focused on developing systems that recognize and interpret human emotions to improve human-machine interactions. Facial expressions are critical for conveying emotions nonverbally, and machine learning—especially deep learning techniques like Convolutional Neural Networks (CNNs)—has significantly advanced the analysis of these expressions.
Several large, publicly available facial emotion datasets support the training and evaluation of facial emotion recognition (FER) systems, including FER2013, CK+, JAFFE, AffectNet, EmotioNet, RAF-DB, and Oulu-CASIA. These datasets vary in size, diversity, annotation quality, and complexity, with some providing detailed facial action unit annotations.
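To make the data format concrete, the following is a minimal sketch of decoding one FER2013-style record, assuming the commonly distributed CSV layout (an integer emotion label and a space-separated string of 48×48 grayscale pixel values); the `parse_fer_row` helper and the label ordering shown are illustrative, not part of any official API.

```python
import numpy as np

# FER2013's conventional 7-class label order (illustrative assumption)
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def parse_fer_row(label, pixel_str):
    """Decode one FER2013-style record into (emotion name, 48x48 float image).

    `label` is an integer class index; `pixel_str` is the space-separated
    string of 2304 grayscale values found in the dataset's CSV export.
    """
    pixels = np.array(pixel_str.split(), dtype=np.float32)
    img = pixels.reshape(48, 48) / 255.0  # normalize to [0, 1] for training
    return EMOTIONS[label], img
```

A real loader would iterate over the CSV rows and stack the resulting images into a training tensor; the per-row logic stays the same.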
Traditional facial recognition methods like PCA-based Eigenfaces and Fisherfaces faced challenges with lighting and expression variability. Deep learning models such as DeepFace and FaceNet revolutionized the field by achieving near-human accuracy and efficient face embeddings, though generalizing across diverse datasets remains a challenge.
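The PCA-based Eigenfaces idea mentioned above can be sketched in a few lines of numpy: center the training faces, take the top right-singular vectors of the data matrix as "eigenfaces," and represent each face by its projection coefficients. The synthetic random data below stands in for real face vectors; it is a toy illustration of the technique, not a faithful reimplementation of Turk and Pentland's system.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((20, 48 * 48))   # 20 synthetic "face" vectors (toy data)

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Eigenfaces = top right-singular vectors of the centered data matrix
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]                # keep the 10 leading components

# Project a face into eigenface space, then reconstruct it
coeffs = eigenfaces @ (faces[0] - mean_face)
recon = mean_face + eigenfaces.T @ coeffs
```

Lighting and expression changes move a face far from the training subspace, which is exactly why these linear projections struggled and why learned deep embeddings such as FaceNet's proved more robust.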
Modern FER implementations rely heavily on CNNs, which automatically learn hierarchical features from images, enabling more accurate and subtle emotion recognition. Training these models requires large, diverse, and well-annotated datasets to ensure robustness and minimize biases across different demographics.
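The "hierarchical features" a CNN learns start from simple local filters. As a minimal sketch (plain numpy, no deep learning framework), the block below applies one hand-written edge kernel, a ReLU, and 2×2 max-pooling to a toy 48×48 image, i.e. a single convolutional layer's worth of computation; in a trained network these kernels are learned rather than fixed, and many such layers are stacked.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max-pooling; trims edges not divisible by `size`."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 48x48 "image": dark left half, bright right half
img = np.zeros((48, 48))
img[:, 24:] = 1.0

# Kernel that responds to left-to-right dark->bright transitions
edge_kernel = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])

fmap = max_pool(relu(conv2d(img, edge_kernel)))  # one layer's feature map
```

The resulting feature map is active only along the vertical boundary, illustrating how early CNN layers localize edges that deeper layers compose into parts such as eyes, brows, and mouth corners relevant to emotion recognition.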
Conclusion
In this study, we examined the application of deep learning methods, particularly Convolutional Neural Networks (CNNs), along with the Haar Cascade algorithm for facial emotion recognition (FER). Through the integration of these technologies, we demonstrated that emotion recognition systems can achieve high accuracy and efficiency, especially when combined with the robust face detection capabilities provided by Haar Cascade. The research highlights the growing potential of deep learning in enhancing FER systems and opens up avenues for future advancements in the field.