For hearing and vocally impaired individuals, communication with others is a far greater struggle. They are unable to converse easily with traditional individuals, and they face difficulties in getting jobs and living an ordinary life like others. In this paper, we introduce a smart communication system for hearing and vocally impaired individuals as well as for normal people. The overall accuracy of the system is 92.5%, with both hands involved. The main advantage of the proposed system over the former system is that in the former system the camera can detect signs only when the hands are covered by gloves, whereas in the proposed system we have tried our best to overcome that disadvantage of the former system.
I. INTRODUCTION
For hearing and vocally impaired individuals, communication with others is a far greater struggle. They find it difficult to communicate properly with traditional individuals, and they face difficulties in getting jobs and living a normal life like traditional individuals. Addressing these issues through a single system is a hard task, as these individuals find it difficult to communicate with others who do not understand sign language. Sign language is a linguistic, gesture-based form of communication that helps hearing and vocally impaired individuals communicate with traditional individuals. Our main reason for taking up this topic was the goal of creating a world with no differentiation between traditional individuals and the hearing and vocally impaired. We have tried our best to find a solution to this problem, and all of the system's activities are coordinated using a Raspberry Pi. The main objective of this effort is to explore the utility of two feature-extraction methods, namely hand contour and complex movements, for hand gesture recognition. Through this project we would like to introduce a world in which no one is made to feel left out in this competitive era. A gesture is a form of non-verbal communication in which visible bodily actions convey specific messages, either on their own or in combination with speech. Very few people who are not hearing and vocally impaired show a particular interest in learning sign language, so not many traditional individuals tend to communicate with impaired individuals; technology can be one way to remove this hindrance and benefit these people.
II. LITERATURE REVIEW
Sign language recognition is done mainly by two approaches: image-based and sensor-based. The image-based method is currently the more actively studied in research. A sign language translator can be implemented smoothly on a laptop or PC with the image-based method, owing to its flexibility, mobility and ease of use.
In this approach, gestures captured by a camera are translated into text and speech. The need for hand gloves is a major drawback of the present system.
Gesture recognition was first proposed by Myron W. Krueger as a new form of interaction between human and computer in the mid-1970s. The research has since gained increasing attention in the field of human-machine interaction. Currently, several techniques are applicable to hand gesture recognition, based either on sensing devices or on computer vision.
A typical widespread device-based example is the data glove, developed by Zimmerman in 1987. In this system, the user wears a data glove linked to the computer. The glove measures the bending of the fingers and the position and orientation of the hand in 3D space, and is thus able to capture the richness of a hand's gesture. A successful example of this approach is real-time American Sign Language recognition.
Sawant Pramada proposed an intelligent sign language recognition system using image processing.
Sudarshana Chakma, Sushith Rai S, Sushmita Pal and Uzma Sulthana K proposed a device design based on embedded systems.
III. EXISTING SYSTEM
Hearing and vocally impaired individuals use sign language to communicate with the world. Since sign language is a gesture-based communication aid between hearing and vocally impaired individuals and traditional individuals, it is vital for an individual to learn to express it properly so that there are no misunderstandings. Technically, gesture recognition methods are divided into two categories: vision-based and sensor-based. Sensor-based systems use gloves, which can capture the accurate positions of a hand gesture. However, wearing gloves continuously while the system is in use is not convenient, and the gloves make the system expensive.
IV. PROPOSED SYSTEM
The proposed system mainly consists of:
A. An image of the user's hand, captured by the webcam of a laptop or PC, from which the sign shown by the user's hand gesture is processed.
B. Voice/speech given out by the speaker after the gesture shown by the user is processed, together with the corresponding text displayed as output. The device recognizes the user's gestures and displays the related words, and it also speaks aloud whatever is typed into it.
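The capture-to-output flow of the proposed system can be sketched as follows. `classify_gesture` and `speak` are hypothetical placeholders, since the paper does not specify the classifier or the text-to-speech engine; this is only an illustrative outline, not the authors' implementation:

```python
# Illustrative sketch of the capture -> recognize -> output loop.
# classify_gesture and speak are hypothetical stand-ins, not the
# authors' actual implementation.

def classify_gesture(frame):
    # A real system would extract hand features from the webcam
    # frame here and match them against the trained database.
    return "HELLO"

def speak(text):
    # Stand-in for a text-to-speech engine (e.g. espeak on a
    # Raspberry Pi); here we simply print the text.
    print(text)

def process_frame(frame):
    label = classify_gesture(frame)  # recognize the sign
    speak(label)                     # voice output via the speaker
    return label                     # text output shown on screen
```

In a live system this loop would run on each frame grabbed from the webcam, so the user sees and hears the translation in real time.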
V. SYSTEM ARCHITECTURE
The system architecture consists of three main stages, described below:
In the training phase, the database is trained to identify the gestures given by the user and to produce the desired output.
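The paper does not specify the classifier behind the trained database; as a minimal sketch under that assumption, the stage can be modelled as a nearest-neighbour lookup over stored feature vectors:

```python
import numpy as np

# Minimal sketch of the training stage as a gesture feature database
# with nearest-neighbour matching. The actual classifier used by the
# system is not specified in the paper; this is an assumption.

class GestureDB:
    def __init__(self):
        self.features = []
        self.labels = []

    def add(self, feat, label):
        # "Training" here simply stores a labelled feature vector.
        self.features.append(np.asarray(feat, dtype=float))
        self.labels.append(label)

    def predict(self, feat):
        # Return the label of the closest stored feature vector.
        feat = np.asarray(feat, dtype=float)
        dists = [np.linalg.norm(feat - f) for f in self.features]
        return self.labels[int(np.argmin(dists))]
```

New gestures can then be supported by adding more labelled examples, without changing the matching code.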
In the pre-processing phase, after the image of the sign is captured, the background of the image is analysed and processed, and the hand showing the sign is detected together with its contour, as the system tries to identify whether the user is showing the back of the hand, the front of the hand or the palm. Some features are then extracted from the captured image.
In the matching phase, the detected features are matched against the features of the corresponding sign in the database. Once the features are matched, the associated entry is retrieved from the database and displayed as output on the screen, with speech conversion producing the voice output through the speakers along with the text of the particular sign that was shown.
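Because the system works without gloves, the hand must be separated from the background by its appearance. One common approach, sketched here with illustrative thresholds (the paper does not state its actual segmentation method), is skin-colour thresholding in the YCrCb colour space, after which the largest region of the mask is taken as the hand and its contour extracted:

```python
import numpy as np

def skin_mask(frame_ycrcb):
    """Return a boolean mask of skin-coloured pixels.

    frame_ycrcb: H x W x 3 array in YCrCb colour space.
    The Cr/Cb thresholds below are illustrative assumptions,
    not the paper's calibrated values.
    """
    cr = frame_ycrcb[..., 1]
    cb = frame_ycrcb[..., 2]
    return (cr > 135) & (cr < 180) & (cb > 85) & (cb < 135)
```

Thresholding on Cr/Cb rather than RGB makes the mask less sensitive to lighting changes, which matters for a live webcam feed.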
VI. SUB-SYSTEM ARCHITECTURE
Although sign language is one of the main communication mediums of the hearing and vocally impaired, for automatic recognition finger gestures have the advantage of using a limited number of finger signs, corresponding to the letters/sounds of the alphabet, even though the ultimate aim is a system that translates full sign language to speech. The main function performed by the sub-system is to recognize the number of fingers and compute a precise value; this in turn lets the implemented system capture the movements made by the palm and fingers, improving the accuracy of the results.
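The finger-counting step can be sketched crudely by counting upward peaks of the hand contour above the palm line. A real implementation would more likely use convex-hull and convexity-defect analysis; the `min_height` threshold here is an illustrative assumption:

```python
import numpy as np

def count_fingers(contour_pts, palm_y, min_height=40):
    """Count raised fingers as contiguous runs of contour points
    rising well above the palm line (image y grows downward).

    contour_pts: N x 2 array of (x, y) points ordered along the
    contour. min_height is an illustrative threshold, not a value
    taken from the paper.
    """
    ys = contour_pts[:, 1]
    above = ys < (palm_y - min_height)
    # Each 0 -> 1 transition starts a new fingertip run.
    transitions = np.diff(above.astype(int))
    return int(np.sum(transitions == 1) + (1 if above[0] else 0))
```

The resulting count can then index into the gesture database, which is how a limited finger alphabet keeps the matching step simple and precise.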
VII. ADVANTAGES
A. Being a communication medium between hearing and vocally impaired individuals and traditional individuals, this system will help people live a common life like others without feeling left out.
B. Traditional individuals need not focus on learning sign language in order to communicate with impaired individuals.
C. Since the proposed system can be used without any gloves, the user need not carry gloves.
D. The system is efficient, as it gives live results; the user need not record videos and upload them for conversion.
E. Hearing and vocally impaired individuals need not lower their self-esteem and can live their lives confidently without being ashamed of their impairments.
VIII. CONCLUSION
To conclude, this system essentially acts as an artificial tongue for hearing and vocally impaired individuals. The system focuses on sign language as the means of communicating with the hearing and vocally impaired; the main barrier preventing these individuals from leading a normal life like other traditional individuals is that not all traditional individuals have complete knowledge of sign language. This system will be effectively useful and will minimize that gap, helping hearing and vocally impaired individuals to get jobs, to communicate with traditional individuals and to lead a normal life in society, so that they are not looked down upon because of the impairments they possess and can live their lives confidently.
REFERENCES
[1] Pratibha Pandey and Vinay Jain, "Hand Gesture Recognition for Sign Language Recognition: A Review", International Journal of Science, Engineering and Technology Research (IJSETR), Vol. 4, Issue 3, March 2015.
[2] Neethu P S, Dr. R Suguna and Dr. Divya Sathish, "Real Time Hand Gesture Recognition System", TAGA Journal, Vol. 14, 2018.
[3] Aarthi M and Vijayalakshmi P, "Sign Language to Speech Conversion", Department of ECE, SSN College of Engineering, 2016.
[4] Kavitha Sooda, Kathakali Majumder, Aishwarya Muralidharan, Ayesha Sultana and Aliya Farheen, "Design and Development of Hand Gesture System", International Journal of Advanced Research in Computer Science, Vol. 7, 2016.
[5] Shraddha R. Ghorpade and Surendra K. Waghamare, "Full Duplex Communication System for Deaf & Dumb People", International Journal of Emerging Technology and Advanced Engineering (IJETAE), Vol. 5, Issue 5, May 2015, ISSN 2250-2459.