Facial Emotion Based Content Recommendation

Authors: Prajwal Vaibhav Shah, Aniket Sonawane, Kaustubh Gaikwad

DOI Link: https://doi.org/10.22214/ijraset.2022.42220


Abstract

A person's current emotional state is closely related to the entertainment content he or she wants to listen to or watch. An emotion-based content recommendation system not only serves content matching the user's current state of mind but also reduces the effort of managing music playlists, and can help reduce stress by recommending appropriate content for relief. A person's emotion can be determined from his or her facial expression, which can be detected using a machine learning model; we have developed such a model using the Xception architecture. An application accesses the device camera and captures an image of the user's face, then connects to an ML Kit model hosted on the cloud (Firebase), which analyzes the image and detects the user's mood. Based on that mood, the application connects to the API of a music or movie service (e.g., Spotify, Netflix, Disney+ Hotstar) through which content is recommended. The application also asks the user about his or her taste in music and customizes the recommendations accordingly. The user is prompted at specific intervals to re-detect the emotion in case he or she would like a change of content.

Introduction

I. INTRODUCTION

A. Background/Context

Facial expressions are a primary way of conveying emotion. Computer systems based on affective interaction may play an important part in the next generation of computer vision systems. Facial emotion can be used in areas such as security, entertainment, and human-machine interfaces (HMI). A person expresses emotion largely through the lips and eyes. This work describes the development of an emotion-based content recommendation system: a computer application meant to minimize the user's effort in managing large playlists and in choosing which movies to watch, particularly when the desired content depends on the user's current mood. It can also help a person experiencing stress, tension, depression, or sadness by offering a suitable song. The proposed model extracts the user's facial expressions and features to determine the current mood. Once the emotion is detected, a playlist of songs suitable to that mood is presented to the user, aiming to give music lovers better enjoyment while listening. The model covers the following moods: happy, sad, fearful, surprised, angry, and neutral. The system involves image processing and face detection. The input to the model is a still image of the user, which is processed to determine the mood. The system captures the user's image with a webcam at the start of the application; the captured image is saved and passed to the rendering phase. The user's mood may or may not be the same after some time, so a new image is captured after every fixed interval and forwarded to the next phase.

B. Aim and Objective

The fundamental idea behind this project is to play songs according to the user's mood, bringing emotion awareness to the content the user enjoys. In current systems, users must pick songs manually, since randomly played songs may not match their mood: the user first has to classify the songs into various emotions and then manually select a particular feeling before playing them. Using an emotion-based music player avoids these problems; the music is played from the appropriate folders based on the detected feeling.

II. THEORETICAL DESCRIPTION

A. Theoretical Description

In this case, we would create an online application so that the user can log in to an account and access the features. The user will need a camera or an existing photo in order to submit it and receive recommendations based on it. For the essential image processing we will use OpenCV and a Haar cascade classifier, which help extract facial characteristics such as the eyes and lips and perform feature-point recognition. These features are given to a pre-trained Xception model, which predicts the user's mood. The application connects to the cloud, where lists of songs and other items are stored per mood, and it also accepts user feedback on the recommendations for future improvements.
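
A rough sketch of this face-extraction step, assuming OpenCV's bundled frontal-face Haar cascade (the 48x48 crop size matches the model input described later; detection parameters are illustrative):

    import cv2

    # Load OpenCV's bundled frontal-face Haar cascade classifier.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def extract_face(frame):
        """Return the largest detected face as a 48x48 grayscale crop, or None."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        # Keep the largest bounding box if several faces are detected.
        x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
        return cv2.resize(gray[y:y + h, x:x + w], (48, 48))

    # Capture a single frame from the webcam, as the paper describes.
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if ok:
        face = extract_face(frame)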

B. Resources Required

Hardware Requirements: A mobile phone, laptop, or desktop with a camera and an internet connection.

Software Requirements: For a desktop or laptop, Windows 7 or later; Python 3.7 or later; tkinter.

III. ALGORITHM

The complete project has been divided into two major parts, i.e., the doctor side and the patient side. Patient side: data is acquired via various sensors (temperature, pulse rate, and SpO2) connected to a Raspberry Pi. The data is analyzed on the Raspberry Pi and simultaneously updated on the cloud. In case of an anomaly in the vital-sign readings, an alert is sent via SMS to the doctor and hospital staff. Reports are generated on the Raspberry Pi and can be analyzed through ThingsBoard by the supervising doctor.

Doctor side: on authentication, the system first checks whether the user is an admin, a doctor, or a patient. If the user is an admin, the options shown are: show all doctors, show all patients, remove doctor permanently, and remove patient permanently. If the user is a doctor, the options shown are: see all requesting patients, see all patients of my specialty, see my patients, see patients' reports, and remove patient; the doctor can also live-monitor the patients via ThingsBoard. If the user is a patient, the options shown are: add report, see all doctors for my problem, request doctor, remove request, and see my doctor's stats. This complete process is carried out via a graphical user interface installed in the lobby of the hospital or clinic; a sketch of the role-based dispatch follows.
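
A minimal sketch of this role-based dispatch, with the option lists taken from the description above (the dictionary layout and function name are illustrative, not from the paper):

    # Hypothetical role-based menu dispatch for the GUI described above.
    MENUS = {
        "admin": ["Show all doctors", "Show all patients",
                  "Remove doctor permanently", "Remove patient permanently"],
        "doctor": ["See all requesting patients", "See patients of my specialty",
                   "See my patients", "See patients' reports", "Remove patient"],
        "patient": ["Add report", "See all doctors for my problem",
                    "Request doctor", "Remove request", "See my doctor's stats"],
    }

    def menu_for(role):
        """Return the GUI menu options for an authenticated user's role."""
        return MENUS.get(role, [])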

IV. SYSTEM DESIGN

A. Block Wise Design

Fig. 4.1 shows the flow chart of the application, which gets the user's emotion and recommends content accordingly.

  1. GUI of the System: Displays the user's live feed (from which the emotion is retrieved) and the list of recommended content, and lets the user open the respective web pages for that content.
  2. Emotion Detection Model: A Haar cascade model extracts the face from the live video feed and passes it to an ML model built on the Xception architecture, which predicts the user's emotion. The Xception architecture has three main flows: the entry flow, the middle flow, and the exit flow. The input data is a 48x48 grayscale image, which needs to be normalized (the pixel values lie between 0 and 255 and are scaled down to the range 0 to 1 by dividing every value by 255) and extended in dimension (the image array is made three-dimensional, like a 48x48x3 RGB image, using NumPy's expand_dims and repeat: expand_dims adds the third axis and repeat copies it three times); a preprocessing sketch follows this list.
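
A minimal sketch of the normalization and dimension extension just described, assuming the face crop arrives as a 48x48 uint8 NumPy array (as produced by the face-extraction code earlier):

    import numpy as np

    def preprocess(face48):
        """Normalize a 48x48 grayscale face crop and extend it to 48x48x3."""
        x = face48.astype(np.float32) / 255.0   # scale 0..255 down to 0..1
        x = np.expand_dims(x, axis=-1)          # (48, 48) -> (48, 48, 1)
        x = np.repeat(x, 3, axis=-1)            # (48, 48, 1) -> (48, 48, 3)
        return x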

Before being fed to the Xception model, the data is passed through an ImageDataGenerator, which randomly applies operations to the image: zooming, rotating by a random angle between 0 and 180 degrees, shifting horizontally or vertically, and flipping horizontally or vertically.
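
A sketch of such an augmentation setup using Keras' ImageDataGenerator (only the 0-to-180-degree rotation range is stated in the text; the other numeric ranges are illustrative assumptions):

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Random zoom, rotation, shift, and flip augmentation, as described above.
    datagen = ImageDataGenerator(
        rotation_range=180,       # random rotation between 0 and 180 degrees
        zoom_range=0.2,           # illustrative zoom factor
        width_shift_range=0.1,    # horizontal shift
        height_shift_range=0.1,   # vertical shift
        horizontal_flip=True,
        vertical_flip=True,
    )

    # train_x: (N, 48, 48, 1) normalized images; train_y: one-hot emotion labels.
    # batches = datagen.flow(train_x, train_y, batch_size=64)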

Entry flow: The input to this flow is of size (48, 48, 1), i.e., a 48x48 grayscale image. There are two parts to the entry-flow model.

The first part has three layers repeated twice (see Fig. 3.1):

(Convolution => Batch Normalization => Activation) * 2
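
A Keras sketch of this first part (the filter counts and kernel sizes are illustrative assumptions; the paper does not state them):

    from tensorflow.keras import layers

    # First part of the entry flow: (Convolution => Batch Norm => Activation) * 2.
    def entry_flow_stem(inputs):
        x = layers.Conv2D(32, 3, strides=2, padding="same")(inputs)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
        x = layers.Conv2D(64, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
        return x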

The second part has eight layers repeated thrice (see Fig. 3.2). Each block also applies a separate convolution to its input, which is added back at the end (a residual shortcut):

(Activation => Separable Convolution => Batch Normalization => Activation => Separable Convolution => Batch Normalization => Max Pooling => Add(Convolution(Input))) * 3
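
A Keras sketch of one such entry-flow block (filter counts are again assumptions):

    from tensorflow.keras import layers

    def entry_flow_block(x, filters):
        """One entry-flow residual block: two separable convolutions and a
        max-pool, with a 1x1 strided convolution on the shortcut path."""
        shortcut = layers.Conv2D(filters, 1, strides=2, padding="same")(x)
        shortcut = layers.BatchNormalization()(shortcut)

        x = layers.Activation("relu")(x)
        x = layers.SeparableConv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
        x = layers.SeparableConv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
        return layers.Add()([x, shortcut])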

Middle flow: The input to this flow is of size (3, 3, 256), coming from the entry flow. This flow has ten layers repeated eight times. Here no separate convolution is applied on the shortcut; the block input itself is added back at the end (an identity shortcut):

(Activation => Separable Convolution => Batch Normalization => Activation => Separable Convolution => Batch Normalization => Activation => Separable Convolution => Batch Normalization => Add(Input)) * 8
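
A corresponding sketch of one middle-flow block, with the identity shortcut as described (the filter count of 256 matches the stated input size):

    from tensorflow.keras import layers

    def middle_flow_block(x, filters=256):
        """One middle-flow residual block: three separable convolutions
        with an identity shortcut; repeated eight times in the middle flow."""
        shortcut = x
        for _ in range(3):
            x = layers.Activation("relu")(x)
            x = layers.SeparableConv2D(filters, 3, padding="same")(x)
            x = layers.BatchNormalization()(x)
        return layers.Add()([x, shortcut])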

Exit flow: The input to this flow is of size (3, 3, 256), coming from the middle flow. A separate convolution on the shortcut path is again added partway through:

Activation => Separable Convolution => Batch Normalization => Activation => Separable Convolution => Batch Normalization => Max Pooling => Add(Convolution(Input)) => Activation => Separable Convolution => Batch Normalization => Activation => Separable Convolution => Batch Normalization => Global Average Pooling => Dense
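
A Keras sketch of the exit flow (filter counts are assumptions; the six output classes match the moods listed in the introduction):

    from tensorflow.keras import layers

    def exit_flow(x, num_classes=6):
        """Exit flow: one residual block with max-pooling, two further
        separable convolutions, then global average pooling and a dense
        softmax classifier over the six moods."""
        shortcut = layers.Conv2D(512, 1, strides=2, padding="same")(x)
        shortcut = layers.BatchNormalization()(shortcut)

        x = layers.Activation("relu")(x)
        x = layers.SeparableConv2D(256, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
        x = layers.SeparableConv2D(512, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
        x = layers.Add()([x, shortcut])

        x = layers.Activation("relu")(x)
        x = layers.SeparableConv2D(1024, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
        x = layers.SeparableConv2D(1024, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.GlobalAveragePooling2D()(x)
        return layers.Dense(num_classes, activation="softmax")(x)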

V. IMPLEMENTATION AND TESTING

There are two sides, i.e., patient and doctor. On the patient side, sensors measure all the vital parameters such as temperature, pulse, and oxygen level. All these data are uploaded to ThingsBoard, where they can be live-monitored by the doctor. On the doctor side, a GUI is available that helps manage patient and doctor details.

VI. RESULT

The following are the end results of the content recommendation application. The video block shows the user's live video feed; the Haar cascade model detects the face in the live video, which is then given to the pre-trained model discussed above to get recommendations for the user.

VII. FUTURE SCOPE

This system provides content based on the user's mood, i.e., facial expressions. It can be improved and extended with further functionality as follows:

  1. Improve the accuracy of emotion detection to match the exact mood every time.
  2. Recommend based on the surroundings, such as the gym or outdoor activities.
  3. Integrate with wearables to predict mood from the heart pulse.
  4. Determine the mood of physically and mentally challenged people.
  5. Connect random people according to their mood.
  6. Add a parental-control mode to track a child's mood or stress level.
  7. Recommend content from social media such as YouTube, Reddit, Instagram, etc., based on mood.

Conclusion

The developed application uses the user's camera to take pictures, captures emotions using the architecture described above, and recommends music according to mood and preference. This reduces the effort users spend creating and managing playlists and keeps them up to date with new content. By providing the most appropriate songs for the user's current emotions, and songs that relieve stress and sadness, it offers better enjoyment for music listeners. It not only helps the user, but also systematically categorizes the songs.

Copyright

Copyright © 2022 Prajwal Vaibhav Shah, Aniket Sonawane, Kaustubh Gaikwad. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Paper Id: IJRASET42220

Publish Date: 2022-05-04

ISSN: 2321-9653

Publisher Name: IJRASET
