The sudden, unpredicted situation that emerged worldwide was the extensive spread of a then little-understood virus, eventually named the novel coronavirus; the disease it causes was designated COVID-19. The outbreak led to many uncertainties, and precautionary measures were enforced as legal rules. The major pandemic precautions were the mandatory wearing of masks, social distancing, and the regular use of sanitizers; in all public places, a body temperature check was performed as a basic screen for symptoms of infection. Both the pandemic and its precautions brought massive crises. A major technical crisis was the failure of face recognition systems, while reverting to earlier methods such as passwords or fingerprint scanners would require physical contact, which is itself prohibited to avoid spreading the virus.
In the past couple of years, COVID-19 has drastically affected human lifestyles. Wearing a mask has become a primary safety measure against the virus. The partial exposure of the face due to the mask makes systems based on face recognition highly ineffective. Many systems, such as community access control, face-based access control, facial attendance, and facial security checks at train stations, are failing. This situation urgently demands improvement.
In these desperate times, masks are the key measure for preventing the spread of the virus, and as a result all systems that rely on full facial recognition fail. Facial recognition considers all the facial features of a person, i.e., the shape of the face, the length of the nose, the distance between the eyes, and many more. Removing the mask, even just for a recognition period of under a minute, still carries high risk. Almost 50% of the facial features are hidden by masks during this unavoidable period of SARS-CoV-2. As full-face recognition becomes unreliable, a system that precisely recognizes only the features left visible while a mask is worn could replace the failing systems.
III. EXISTING SYSTEM
In this constantly evolving world of technology, we went from the manual maintenance of every single data entry to digital entry, then to passwords for security, further to fingerprints, and ultimately to face recognition.
Facial recognition systems have proved highly efficient. Such a system uses the facial features of a person to distinguish one person from another and verify identity. First the face is detected; in the next step, face analysis takes place, in which key features such as the distance between the eyes, the depth of the eye sockets, the length from forehead to chin, the shape of the cheekbones, and the contours of the lips, chin, and ears are studied. The dlib library is used for recognition; it places 68 landmark points on the face, which together form a unique pattern for each face. The image is then converted to data, and if the person's face is already enrolled, a match is found. In the current pandemic situation, the mask must be momentarily removed for facial recognition, which is a high risk, as the virus takes merely a few seconds to spread from one person to another. There are systems that detect masks, but for facial identification the removal of the mask is both inevitable and a great threat. The existing systems have proven ineffective in this crisis.
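In the dlib pipeline described above, each detected face is ultimately reduced to a fixed-length encoding, and identification becomes a distance comparison between encodings. A minimal sketch of that final matching step, assuming 128-dimensional encodings have already been computed and using an illustrative tolerance of 0.6:

```python
import numpy as np

def is_match(known_encoding, candidate_encoding, tolerance=0.6):
    """Return True when the Euclidean distance between two 128-d
    face encodings falls below the matching tolerance."""
    distance = np.linalg.norm(
        np.asarray(known_encoding) - np.asarray(candidate_encoding))
    return bool(distance <= tolerance)

# Toy encodings: identical vectors match, distant ones do not.
enc_a = np.zeros(128)
enc_b = np.zeros(128)
enc_c = np.ones(128)           # distance is sqrt(128), about 11.3

print(is_match(enc_a, enc_b))  # True
print(is_match(enc_a, enc_c))  # False
```

The encoding step itself requires dlib's pretrained landmark and recognition models, which are omitted here; only the comparison logic is shown.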
IV. PROPOSED SYSTEM
The proposed system is an improved version of facial recognition that solves the problem of the mask blocking the visibility of the entire face. This system identifies a person chiefly through the iris of the eyes; it also takes into consideration the length and width of the forehead, the shape of the eyebrows, the depth of the eye sockets, the distance between the eyes, etc.
As the project is aimed at COVID-19 management, it performs the following tasks step by step:
Initially, when the person stands in front of the webcam of a laptop or computer, the person's iris is analyzed. After the analysis, a match is made and the person's identity and details are retrieved. The system then checks for a mask on the person's face; if the person is wearing a mask, their body temperature is checked, and if the temperature is within the normal range, sanitizer is dispensed.
If the person is not wearing a mask, or their body temperature is higher than normal, the process stops and the person's name is not recorded in the system. Otherwise, once the conditions are met, the system marks the attendance of the recognized person and records their name and time of entry in the system.
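The decision flow above can be sketched as a small function. The 37.5 °C temperature cutoff and the return fields are illustrative assumptions, not values taken from the system itself:

```python
from datetime import datetime

def process_entry(identity, wearing_mask, body_temp_c, max_temp_c=37.5):
    """Sketch of the proposed entry flow: a recognized, masked person
    with a normal temperature gets sanitizer and an attendance record;
    anyone failing a check is rejected and nothing is recorded."""
    if identity is None:
        return {"admitted": False, "reason": "not recognized"}
    if not wearing_mask:
        return {"admitted": False, "reason": "no mask"}
    if body_temp_c > max_temp_c:
        return {"admitted": False, "reason": "high temperature"}
    # All checks passed: dispense sanitizer and mark attendance.
    return {"admitted": True,
            "attendance": (identity, datetime.now().isoformat())}

print(process_entry("Alice", True, 36.8)["admitted"])  # True
print(process_entry("Alice", False, 36.8)["reason"])   # no mask
print(process_entry(None, True, 36.8)["reason"])       # not recognized
```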
The flowchart below shows the working procedure of the proposed system:
The proposed system consists of two main phases: the training phase and the application phase.
The training phase: This is the primary phase of the system, in which the face mask detector is trained. Training begins with loading the mask dataset, after which the mask detector is trained to classify masked faces. Finally, a model is created, marking the end of the training phase.
The application phase: The model created in the first phase is loaded here. The face and the mask are detected in a video stream; the face is then extracted and checked against our model, after which the result, mask worn or not, is shown.
Our dataset holds more than 1,900 images of people with and without masks. The sources of our dataset are Kaggle and repositories on GitHub.
In the next step, a data preprocessing pass is performed: every image in the dataset is resized to 224 px × 224 px, the images are labeled into the two binary classes, with mask and without mask, and all images are converted into arrays. The class labels are then converted into categorical (one-hot) variables, which in turn are appended to a NumPy array on which our deep learning model operates.
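The preprocessing step can be sketched without the real image files by using blank stand-in arrays of the target 224 × 224 × 3 shape; the class names and one-hot construction below are illustrative:

```python
import numpy as np

# Stand-in for the loaded dataset: three blank 224x224 RGB images
# (real images would be read from disk and resized to this shape).
images = [np.zeros((224, 224, 3), dtype="float32") for _ in range(3)]
labels = ["with_mask", "with_mask", "without_mask"]

data = np.array(images)               # stack images into one array
classes = sorted(set(labels))         # the two binary classes
idx = np.array([classes.index(l) for l in labels])
one_hot = np.eye(len(classes))[idx]   # categorical (one-hot) variables

print(data.shape)      # (3, 224, 224, 3)
print(one_hot.tolist())
```

In a Keras workflow the same one-hot step is usually done with `to_categorical`; the NumPy identity-matrix trick above is equivalent for this binary case.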
Our dataset is divided into 80% training data and 20% test data. To cover any shortage of images in our dataset, we also use the ImageDataGenerator module, which can create an enormous number of images from a single image by applying operations such as flips, rotations, shifts, and more.
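A dependency-free sketch of both ideas follows: an 80/20 split by shuffled indices, and a tiny hand-rolled augmenter standing in for the kind of variants Keras's ImageDataGenerator produces on the fly:

```python
import numpy as np

rng = np.random.default_rng(0)

# 80/20 split of a 100-sample dataset by shuffling indices.
indices = rng.permutation(100)
train_idx, test_idx = indices[:80], indices[80:]

def augment(image):
    """Yield simple variants of one image (flips and a 90-degree
    rotation), mimicking what ImageDataGenerator does on the fly."""
    yield image
    yield np.fliplr(image)   # horizontal flip
    yield np.flipud(image)   # vertical flip
    yield np.rot90(image)    # rotation

img = np.arange(12).reshape(3, 4)
variants = list(augment(img))
print(len(train_idx), len(test_idx), len(variants))  # 80 20 4
```

ImageDataGenerator additionally supports shifts, shears, and zooms via its constructor arguments; this sketch only illustrates the multiplication of one image into several.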
We employ a standard CNN with a small modification: the MobileNet module is used in place of the usual convolutional feature maps. The procedure flows as follows: the input images are passed into MobileNet, max pooling is then applied, and finally the output is produced. MobileNet is used instead of the standard feature maps because high accuracy is not the primary requirement here; quick responsiveness is, and MobileNet is well suited to that.
The model has two parts: the baseModel and the headModel.
RGB-formatted images are passed as input to the baseModel, whose output is passed on to the headModel. In the headModel, the data passes through pooling and dense layers to produce an output. The output layer of our model has two units because of the binary prediction, i.e., with mask and without mask.
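The two-part model can be sketched in Keras as follows. The specific layer sizes (7 × 7 pooling, a 128-unit dense layer, 0.5 dropout) are illustrative assumptions, and `weights=None` avoids downloading pretrained ImageNet weights in this sketch:

```python
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import (AveragePooling2D, Dense, Dropout,
                                     Flatten, Input)
from tensorflow.keras.models import Model

# baseModel: MobileNet backbone taking 224x224 RGB images.
baseModel = MobileNetV2(weights=None, include_top=False,
                        input_tensor=Input(shape=(224, 224, 3)))

# headModel: pooling and dense layers ending in the two-way output.
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(7, 7))(headModel)
headModel = Flatten()(headModel)
headModel = Dense(128, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(2, activation="softmax")(headModel)  # mask / no mask

model = Model(inputs=baseModel.input, outputs=headModel)
print(model.output_shape)  # (None, 2)
```

In training, the base layers would typically be frozen so that only the head is fitted on the mask dataset.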
OpenCV is used for the camera functionality, together with the DNN module, which is part of OpenCV, for detecting faces in the video.
This system has widespread application in the pandemic period. Facial recognition is made efficient even with the mask on, which solves half the crisis by avoiding any physical contact, whether password authentication or fingerprint methods.
It also ensures a safer environment by automatically checking for masks on the face and verifying that body temperature is in the normal range. Finally, it dispenses sanitizer, adding one more level of safety against any physical contact that does occur.
In this paper, we have improved the facial recognition system, increasing its efficiency by almost 70-80% even under the heavy constraint of the mask restricting the visible parts of the face, while also ensuring other COVID-specific safety measures: face mask detection, body temperature checks, and sanitizer dispensing. This system can be a sound and reliable replacement for the classic facial recognition system, avoiding its inconveniences in these difficult times.