People who are deaf or mute use sign language to convey their message to hearing people. A cloud-based sign language detector provides an innovative, user-friendly way of interacting with a computer that is more natural to human beings. Sign language detection has a wide range of applications, including human–machine interaction, sign language translation, and immersive game technology. Keeping in mind the shape of the human hand, with four fingers and one thumb, this paper presents a real-time sign language detection system based on meaningful shape-based features such as orientation, centre of mass (centroid), the status of the fingers and thumb (raised or folded), and their respective locations in the image. The approach introduced in this paper depends entirely on the shape parameters of the hand gesture. It does not use other cues for hand gesture recognition, such as skin colour or texture, because these image-based features vary greatly under different lighting conditions and other influences. To implement this approach, we used a simple webcam operating at 20 fps with a 7-megapixel sensor.
I. INTRODUCTION

Many people around the world lack hearing or speech and so cannot easily share their thoughts with hearing people. Building a system for them has become very important. With the evolution of technology and education, it is important for every person around the world to be able to contribute to that evolution. The development of a sign language detector will help deaf and mute people convey their thoughts and opinions to hearing people, which will help bring them into the mainstream.
In this research paper we use the 26 letters of the English alphabet, with each letter represented by a specific sign. Every sign in the dataset is a pre-processed, high-contrast grayscale image that renders the background behind the hand gesture fully white and the edges of the sign black, which improves accuracy. We captured multiple pictures of every sign from different angles.
A sample of a processed sign image is shown below:
Every sign has 156 images taken from different angles; the full dataset consists of 4,056 images, all pre-processed.
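As a minimal sketch of the pre-processing described above (white background, black sign edges), a luminance threshold can produce the binary grayscale images; the threshold value and the tiny example frame are illustrative assumptions, not the exact pipeline used:

```python
# Sketch of the dataset pre-processing: convert an RGB frame to grayscale,
# then threshold so the background becomes fully white (255) and the sign's
# dark edges become black (0). The threshold of 128 is an assumption.

def to_grayscale(pixel):
    """ITU-R BT.601 luminance of an (r, g, b) pixel."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def preprocess(frame, threshold=128):
    """Map a 2D grid of RGB pixels to a binary grayscale image."""
    return [
        [255 if to_grayscale(p) >= threshold else 0 for p in row]
        for row in frame
    ]

# Tiny 1x2 example: a bright background pixel and a dark edge pixel.
frame = [[(250, 250, 250), (10, 10, 10)]]
print(preprocess(frame))  # [[255, 0]]
```

In a real pipeline the same thresholding would be applied per captured frame (e.g. with OpenCV) before the image is stored in the dataset.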
The sign language detector is implemented with the help of cloud services, and there are several options for building such a system. Cloud providers offer the following services for developing it:
1. SageMaker: Amazon SageMaker is a fully cloud-based AWS service that lets developers easily build, train, and deploy machine-learning models. For the sign language detector it is one option: we can train a model on our dataset and then deploy it.
SageMaker follows the steps below for our cloud-based sign detection.
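As a hedged sketch of that build–train–deploy flow using the SageMaker Python SDK (the training script name, estimator framework, instance types, and bucket layout below are illustrative assumptions, not the paper's exact configuration):

```python
def s3_uri(bucket, prefix):
    """Build the s3:// URI that SageMaker expects for training data."""
    return f"s3://{bucket}/{prefix}"

def train_and_deploy(role_arn, bucket):
    """Build, train, and deploy a sign classifier on SageMaker (sketch)."""
    # Lazy import so this module loads without the SageMaker SDK installed.
    from sagemaker.tensorflow import TensorFlow

    # 1. Build: define the training job from a local training script
    #    (train_signs.py is a hypothetical script name).
    estimator = TensorFlow(
        entry_point="train_signs.py",
        role=role_arn,
        instance_count=1,
        instance_type="ml.m5.xlarge",
        framework_version="2.8",
        py_version="py39",
    )

    # 2. Train: point the job at the pre-processed dataset in S3.
    estimator.fit(s3_uri(bucket, "sign-dataset/"))

    # 3. Deploy: stand up a real-time inference endpoint.
    return estimator.deploy(initial_instance_count=1,
                            instance_type="ml.m5.large")

print(s3_uri("my-sign-bucket", "sign-dataset/"))  # s3://my-sign-bucket/sign-dataset/
```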
2. Rekognition: AWS Rekognition is a cloud-based SaaS (Software as a Service) offering. It compares live images with reference images and returns an accurate comparison. In combination with other cloud services, namely Lambda and an S3 bucket, it can also be used to deploy the sign language detector.
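One concrete way to use Rekognition for sign classification is Rekognition Custom Labels, a real Rekognition feature for models trained on a custom image dataset. The sketch below assumes such a model has already been trained; the project-version ARN, bucket names, and helper functions are illustrative assumptions:

```python
def best_label(response):
    """Pick the highest-confidence label from a DetectCustomLabels response."""
    labels = response.get("CustomLabels", [])
    if not labels:
        return None
    top = max(labels, key=lambda l: l["Confidence"])
    return top["Name"]

def detect_sign(bucket, key, project_version_arn):
    """Classify one captured frame stored in S3 against the trained model."""
    import boto3  # lazy import so the module loads without AWS credentials
    rekognition = boto3.client("rekognition")
    response = rekognition.detect_custom_labels(
        ProjectVersionArn=project_version_arn,
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxResults=1,
    )
    return best_label(response)

# The response parsing can be exercised on a canned payload, no AWS call needed:
sample = {"CustomLabels": [{"Name": "A", "Confidence": 97.4},
                           {"Name": "S", "Confidence": 61.0}]}
print(best_label(sample))  # A
```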
II. PROPOSED SYSTEM
The aim of this research paper is to implement a cloud-based sign language detector with improved performance. The evolution of cloud technologies has opened many options for building systems on cloud platforms. Our proposed system stores the entire dataset in cloud storage; sign detection against the dataset is performed with AWS Rekognition, and a Lambda function carries out the required actions on the data stored in the S3 bucket. The system has near-zero downtime, since cloud service providers offer high-availability guarantees.
For better accuracy, we use the pre-processed grayscale frames with black outer edges shown in Fig. 1 and Fig. 2.
On the cloud, one event triggers the next with the help of AWS Lambda, automating the whole sign language detection process.
III. SYSTEM DESIGN
The Sign Language Detector using Cloud first captures the gesture shown in front of the camera and stores it in an S3 bucket. As soon as the frame is uploaded, it triggers a Lambda function that communicates with AWS Rekognition. Rekognition fetches the captured frame, compares it with the dataset stored in another S3 bucket, and then displays the alphabet letter corresponding to that sign.
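The S3-triggered part of this design can be sketched as a Lambda handler; the event layout below is the standard S3 notification shape, while the bucket names and the classification step are placeholders, not the paper's exact code:

```python
import urllib.parse

def parse_s3_event(event):
    """Pull the bucket name and object key out of an S3 PUT event."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])
    return bucket, key

def classify_frame(bucket, key):
    """Placeholder for the Rekognition comparison against the dataset bucket."""
    import boto3  # lazy import so the module loads outside AWS
    rekognition = boto3.client("rekognition")
    # ... call Rekognition here and map the result to an alphabet letter ...
    raise NotImplementedError

def handler(event, context):
    """Lambda entry point, triggered by the frame-upload S3 bucket."""
    bucket, key = parse_s3_event(event)
    letter = classify_frame(bucket, key)
    return {"statusCode": 200, "body": letter}

# The event-parsing step can be exercised with a canned S3 event:
event = {"Records": [{"s3": {"bucket": {"name": "captured-frames"},
                             "object": {"key": "frame%20001.jpg"}}}]}
print(parse_s3_event(event))  # ('captured-frames', 'frame 001.jpg')
```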
IV. RELATED WORK
There are different methods of performing hand gesture detection. Some sign language detection approaches require hardware such as instrumented gloves or depth-camera sensors like the Microsoft Kinect. The Kinect sensor produces depth frames, and a gesture is recognised as a sequence of these depth frames.
Kartik Shenoy, Tejas Dastane, Varun Rao, and Devendra Vyavaharkar demonstrated one-handed Indian Sign Language (ISL) gesture detection, later extended to two-handed gestures. In their work, gestures and signs were performed using ISL, captured on an Android smartphone, and forwarded to a server for image pre-processing, which included face removal, stabilisation, and skin-colour segmentation to remove background details.
Deepali Naglot and Milind Kulkarni used the Leap Motion Controller for sign language recognition. The Leap Motion Controller is a USB peripheral that lets users control their computer with hand gestures; its 3D non-contact motion sensor recognises hands, fingers, and bones. The dataset used contains 520 frames, with 20 frames per alphabet sign.
Sai Myo Htet, Bawin Aye, and Myo Min Hein developed Myanmar Sign Language classification using deep learning, based entirely on skin colour and using no sensors or gloves. They applied image enhancement for skin detection, boosting the contrast and colour of the captured picture for better recognition even in dark places. Their system uses the Viola–Jones algorithm, which detects faces with the help of Haar-feature-based cascades.
V. CONCLUSION

In this research paper we have discussed some of the possible ways to develop a sign language detector using cloud services. We implemented the system with the help of AWS Rekognition, AWS Lambda, an S3 bucket, and Python. Our system uses a large dataset that gives good accuracy, and we plan to add text prediction so that only a few signs are needed to complete a word. Text prediction will make the system more accurate and save much of the time required to form sentences when explaining something to someone.
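As a minimal sketch of the planned text-prediction feature (the vocabulary list is an illustrative assumption), prefix-based word completion could look like this:

```python
WORDS = ["HELLO", "HELP", "HAND", "SIGN", "SIGNAL"]  # illustrative vocabulary

def complete(prefix, words=WORDS):
    """Return vocabulary words that start with the letters signed so far."""
    prefix = prefix.upper()
    return [w for w in words if w.startswith(prefix)]

# After signing only "H", "E", "L" the user could pick the full word:
print(complete("HEL"))  # ['HELLO', 'HELP']
```

With a frequency-ranked dictionary in place of the fixed list, the user would need to show only a few signs before selecting the intended word.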
REFERENCES

 Bhadra, R., & Kar, S. (2021, January). Sign Language Detection from Hand Gesture Images using Deep Multi-layered Convolution Neural Network. In 2021 IEEE Second International Conference on Control, Measurement and Instrumentation (CMI) (pp. 196-200). IEEE.
 Htet, S. M., Aye, B., & Hein, M. M. (2020, November). Myanmar Sign Language Classification using Deep Learning. In 2020 International Conference on Advanced Information Technologies (ICAIT) (pp. 200-205). IEEE.
 Zamora-Mora, J., & Chacón-Rivas, M. (2019, October). Real-Time Hand Detection using Convolutional Neural Networks for Costa Rican Sign Language Recognition. In 2019 International Conference on Inclusive Technologies and Education (CONTIE) (pp. 180-1806). IEEE.
 Kadam, S., Ghodke, A., & Sadhukhan, S. (2019, April). Hand Gesture Recognition Software Based on Indian Sign Language. In 2019 1st International Conference on Innovations in Information and Communication Technology (ICIICT) (pp. 1-6). IEEE.
 Prateek, S. G., Jagadeesh, J., Siddarth, R., Smitha, Y., Hiremath, P. S., & Pendari, N. T. (2018, October). Dynamic tool for American sign language finger spelling interpreter. In 2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN) (pp. 596-600). IEEE.
 Shenoy, K., Dastane, T., Rao, V., & Vyavaharkar, D. (2018, July). Real-time Indian sign language (ISL) recognition. In 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT) (pp. 1-9). IEEE.
 Shivashankara, S., & Srinath, S. (2017, November). A review on vision based American sign language recognition, its techniques, and outcomes. In 2017 7th International Conference on Communication Systems and Network Technologies (CSNT) (pp. 293-299). IEEE.
 Kumar, N. (2017, October). Motion trajectory based human face and hands tracking for sign language recognition. In 2017 4th IEEE Uttar Pradesh Section International Conference on Electrical, Computer and Electronics (UPCON) (pp. 211-216). IEEE.
 Hassan, M., Assaleh, K., & Shanableh, T. (2016, December). User-dependent sign language recognition using motion detection. In 2016 International Conference on Computational Science and Computational Intelligence (CSCI) (pp. 852-856). IEEE.
 Naglot, D., & Kulkarni, M. (2016, August). Real time sign language recognition using the leap motion controller. In 2016 International Conference on Inventive Computation Technologies (ICICT) (Vol. 3, pp. 1-5). IEEE.