Authors: Anurag Srivastava, Hemant Singh Chauhan, Abhishek Kumar Singh, Albasit Khan, Dr. H. R. Singh
DOI Link: https://doi.org/10.22214/ijraset.2022.43351
In this project, emotion is detected from facial expressions. These expressions can be derived from a live feed via the system's camera or from any pre-existing image available in memory. Recognizing the emotions possessed by humans is a vast area of study within the computer vision industry, upon which considerable research has already been done. The work has been implemented using Python (3.10), the Open-Source Computer Vision Library (OpenCV), and NumPy. The run-time video (testing dataset) is compared against the training dataset and the emotion is predicted. The objective of this paper is to develop a system which can analyze an image or run-time video and predict the expression of the person. The study shows that this procedure is workable and produces valid results. In this project we have improved the accuracy of the existing project by using various Python and deep-learning models.
I. INTRODUCTION TO IMAGE PROCESSING
In order to get an enhanced image and to extract useful information from it, the method of Image Processing can be used. It is a very efficient way through which an image can be converted into its digital form, after which various operations can be performed on it. This is a method similar to signal processing, in which the input is a 2D image: a collection of numbers ranging from 0 to 255, each denoting the corresponding pixel value.
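As an illustration (not taken from the paper), a grayscale image can be represented in Python as a 2D NumPy array of 8-bit values in the range 0 to 255:

```python
import numpy as np

# A tiny 3x3 "image": each entry is a pixel intensity from 0 (black) to 255 (white)
img = np.array([[0, 128, 255],
                [64, 192, 32],
                [255, 0, 100]], dtype=np.uint8)

print(img.shape)             # (3, 3) -> height x width
print(img.min(), img.max())  # values stay within 0..255
```

Real photographs loaded by OpenCV have exactly this structure, only with many more rows and columns (and three such planes for a color image).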
A. Conversion of Color Image to Gray Scale
There are two methods by which we can convert a color image to a gray scale image [8]:
1. Average Method
In this method, the mean is taken of the three colours, i.e. Red, Green & Blue, present in a color image. Thus, we get
Grayscale = (R + G + B) / 3
But what sometimes happens is that, rather than a proper grayscale image, we get a dull, blackish image. This is because in the converted image each of Red, Green & Blue contributes an equal 33%, which does not match how the eye perceives brightness.
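A minimal NumPy sketch of the Average Method (illustrative, not from the paper); note that it weights all three channels equally, which is exactly what causes the problem described above:

```python
import numpy as np

def to_gray_average(img_bgr):
    """Convert an H x W x 3 color image to grayscale by the Average Method."""
    # Mean over the channel axis: (R + G + B) / 3
    return img_bgr.astype(np.float64).mean(axis=2).astype(np.uint8)

# A 1x1 pure-red pixel (BGR channel order, as OpenCV loads images)
pixel = np.array([[[0, 0, 255]]], dtype=np.uint8)
print(to_gray_average(pixel))  # [[85]] -> 255/3; pure green or blue gives the same gray
```

Because pure red, green, and blue all map to the same value 85, the converted image loses the perceived-brightness differences between the channels.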
Therefore, to solve this problem we use the second method, called the Weighted or Luminosity Method.
2. Weighted or Luminosity Method
To solve the problem in the Average Method, we use the Luminosity Method. In this method, we decrease the contribution of the Red color, increase the contribution of the Green color, and give the Blue color the smallest weight.
Thus, by the equation [8],
Grayscale= ((0.3 * R) + (0.59 * G) + (0.11 * B)).
We use these weights because of the wavelength patterns of the colors: Blue has the shortest wavelength, while Red has the longest.
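A sketch of the Weighted/Luminosity Method using the paper's weights (the function name and the BGR channel order are our assumptions, matching how OpenCV loads images):

```python
import numpy as np

def to_gray_luminosity(img_bgr):
    """Grayscale = (0.3 * R) + (0.59 * G) + (0.11 * B), channels in BGR order."""
    b, g, r = img_bgr[..., 0], img_bgr[..., 1], img_bgr[..., 2]
    return (0.3 * r + 0.59 * g + 0.11 * b).astype(np.uint8)

# Pure red, green and blue pixels now map to different grays
red   = np.array([[[0, 0, 255]]], dtype=np.uint8)   # -> 0.3  * 255 = 76
green = np.array([[[0, 255, 0]]], dtype=np.uint8)   # -> 0.59 * 255 = 150
blue  = np.array([[[255, 0, 0]]], dtype=np.uint8)   # -> 0.11 * 255 = 28
print(to_gray_luminosity(red), to_gray_luminosity(green), to_gray_luminosity(blue))
```

Unlike the Average Method, the three primaries now yield distinct grays, so the perceived brightness of each color is preserved.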
II. REVIEW OF LITERATURE
III. INTRODUCTION TO OPENCV
OpenCV stands for Open Source Computer Vision Library. It is a free, extensive library consisting of over 2500 algorithms specifically designed to carry out Computer Vision and Machine Learning projects. These algorithms can be used for different tasks such as Face Recognition, Object Identification, Camera Movement Tracking, Scenery Recognition, etc. It has a large community, with an estimated 47,000 people who are active contributors to the library. Its usage extends to various companies, both private and public.
A new feature called GPU Acceleration was added to the pre-existing libraries. This feature works with almost every operation, even though it is not completely mature yet. The GPU module runs using CUDA and thus takes advantage of libraries such as NPP, i.e., NVIDIA Performance Primitives. A benefit of the GPU module is that anyone can use it without deep knowledge of GPU programming. In the GPU module, we cannot change the features of an image directly; rather, we have to copy the original image and then edit the copy.
IV. STEPS INVOLVED IN INSTALLING PYTHON 2.7 AND THE NECESSARY PACKAGES
Let's begin with a sample image in either .jpg or .png format and apply the method of image processing to detect the emotion of the subject in the sample image. (The word 'subject' refers to any living being from which emotions are extracted.)
An array declared in a program has dimensions, each of which is called an axis.
The number of axes present in an array is known as its rank.
For example, A = [1, 2, 3, 4, 5].
In the given array A, 5 elements are present and the rank is 1, because the array is one-dimensional.
Let’s take another example for better understanding.
B = [[1, 2, 3, 4], [5, 6, 7, 8]]
In this case the rank is 2 because it is a 2-dimensional array. The first dimension has 2 elements, and the second dimension has 4 elements. [10]
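The two examples above can be checked directly in NumPy, where the rank corresponds to the array's `ndim` attribute:

```python
import numpy as np

A = np.array([1, 2, 3, 4, 5])
B = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])

print(A.ndim, A.shape)  # 1 (5,)   -> rank 1, five elements along the single axis
print(B.ndim, B.shape)  # 2 (2, 4) -> rank 2: 2 elements on axis 0, 4 on axis 1
```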
a. Glob: Based on the rules specified by the Unix Shell, the Glob module matches a pattern and, according to it, generates a list of matching files. It generates full path names.
Wildcards
These wildcards are used to perform various operations on files or on part of a directory. Only two wildcards are commonly useful among the various functional wildcards [5]: the asterisk (*), which matches any run of characters, and the question mark (?), which matches exactly one character.
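A small, self-contained sketch of `glob` with the two wildcards (the file names here are made up for illustration):

```python
import glob
import os
import tempfile

# Create a throwaway directory with a few empty "image" files
d = tempfile.mkdtemp()
for name in ("happy1.jpg", "happy2.jpg", "sad.png"):
    open(os.path.join(d, name), "w").close()

# '*' matches any run of characters, '?' matches exactly one character
jpgs = sorted(glob.glob(os.path.join(d, "*.jpg")))
one  = sorted(glob.glob(os.path.join(d, "happy?.jpg")))
print([os.path.basename(p) for p in jpgs])  # ['happy1.jpg', 'happy2.jpg']
print(len(one))                             # 2
```

Note that `glob.glob` returns full path names, as described above, so `os.path.basename` is used here just to shorten the printed output.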
b. Random: The Random module picks or chooses a random number or element from a given list of elements. This module contains the functions that provide access to such operations.
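A brief sketch of the `random` module's list operations (the emotion labels are illustrative; the seed is fixed only so the picks are reproducible):

```python
import random

random.seed(42)  # fix the seed so the 'random' picks are reproducible

emotions = ["happy", "sad", "angry", "surprised", "neutral"]
pick = random.choice(emotions)       # one random element from the list
sample = random.sample(emotions, 2)  # two distinct random elements

print(pick in emotions)        # True
print(len(set(sample)) == 2)   # True: sample() never repeats elements
```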
V. DIFFERENT EMOTIONS THAT CAN BE DETECTED FROM A VIDEO
VI. STEPS TO USE EMOTION RECOGNITION USING OPENCV-PYTHON:
1) The expected output of this project is the accuracy with which it captures the facial expression of a person using AI tools such as Python, machine learning, CNNs, OpenCV, etc.
2) The main purpose of this project is to make a significant contribution to the field and help people recognise facial expressions, so that human feelings can be easily understood.
3) Deep learning classification has been successfully applied to several EEG tasks, including motor imagery, seizure detection, mental workload, sleep stage scoring, event-related potential, and emotion recognition tasks. The design of these deep network studies varied significantly in input formulation and network architecture.
4) Several public datasets were analysed in multiple studies, which allowed us to directly compare classification performances based on their design. Generally, CNNs, RNNs, and DBNs outperformed other styles of deep networks, such as SAEs and MLPNNs.
5) Hybrid designs incorporating convolutional layers with recurrent layers or restricted Boltzmann machines showed promise in classification accuracy and transfer learning when compared against standard designs.
6) We recommend more in-depth research into these combinations, particularly the number and arrangement of the various layers, including RBMs, recurrent layers, convolutional layers, and fully connected layers.
[1] Murugappan, M., Rizon, M., Nagarajan, R., Yaacob, S., Zunaidi, I., Hazry, D.: Lifting scheme for human emotion recognition using EEG. In: International Symposium on Information Technology, ITSim 2008, vol. 2 (2008)
[2] Plutchik, R.: Emotions and life: perspectives from psychology, biology, and evolution, 1st edn. American Psychological Association, Washington, DC (2003)
[3] Petrantonakis, P.C., Hadjileontiadis, L.J.: Emotion recognition from EEG using higher order crossings. IEEE Transactions on Information Technology in Biomedicine 14(2), 186–197 (2010)
[4] Canli, T., Desmond, J.E., Zhao, Z., Glover, G., Gabrieli, J.D.E.: Hemispheric asymmetry for emotional stimuli detected with fMRI. NeuroReport 9(14), 3233–3239 (1998)
[5] Chanel, G., Kronegg, J., Grandjean, D., Pun, T.: Emotion assessment: Arousal evaluation using EEGs and peripheral physiological signals (2006)
[6] Ekman, P.: Basic emotions. In: Dalgleish, T., Power, M. (eds.) Handbook of Cognition and Emotion. Wiley, New York (1999)
[7] Grocke, D.E., Wigram, T.: Receptive Methods in Music Therapy: Techniques and Clinical Applications for Music Therapy Clinicians, Educators and Students, 1st edn. Jessica Kingsley Publishers (2007)
Copyright © 2022 Anurag Srivastava, Hemant Singh Chauhan, Abhishek Kumar Singh, Albasit Khan, Dr. H. R. Singh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET43351
Publish Date : 2022-05-26
ISSN : 2321-9653
Publisher Name : IJRASET