Digitalization has become an essential aspect of modern-day long-distance communication and is used extensively in daily life. As a result, it is crucial that this technology is accessible to everyone, including differently-abled individuals. The objective of this study is to create an eye-controlled virtual keyboard that enables differently-abled individuals to use text features effectively. To achieve this, the study utilizes Dlib, OpenCV, and Convolutional Neural Networks (CNNs) to monitor eye movement and blinks, allowing the user to select desired keys on the virtual keyboard. By accurately predicting the eye's state using neural networks, the system operates the virtual keyboard with precision. Through this study, a highly efficient method for differently-abled individuals to communicate through text has been established, paving the way for future advancements and research in this field.
Human beings have an inherent need for communication, which has been facilitated by computer-mediated communication (CMC). CMC allows people to communicate across great distances, to an unlimited number of people at low cost, and with ease in creating and sharing documents and other material. Although computers have evolved significantly in power and capability, people still rely on a keyboard and mouse to communicate and work with them. However, for individuals with severe physical disabilities such as paralysis and amputation, it is difficult to use computers for communication. According to the Christopher and Dana Reeve Foundation's study in 2013, almost 1 out of 50 people live with paralysis, and their physical activity is often limited to eye blinking. This study aims to design an application that allows people with physical disabilities to operate a virtual keyboard using their eye movement. While there have been advancements in the field, such as the Tobii Eye Tracker, these devices are not economically feasible for the general public. The application designed in this study provides a cheaper and more convenient alternative for differently-abled people, which can be operated using a simple laptop and a webcam.
II. RELATED WORKS
Eye tracking technology has come a long way in the past few decades, and it has become an important tool for researchers studying human behavior and interaction with technology. This technology involves tracking the position and movements of the eye to determine the direction of gaze. There are several different technologies used to track eye movements, including infrared-oculography, scleral search coil method, electrooculography, and video-oculography.
Currently, most eye tracking research for Human-Computer Interaction (HCI) is based on video-oculography because it is less invasive to the user. However, there are still some challenges to be addressed in this field, such as attention diverting problems and accuracy issues. Researchers have been working on developing new systems to address these problems.
One such system is the EASE (Eye Assisted Selection and Entry) system, designed by Wang et al. This system uses eye-tracking to assist with text entry, making it easier and more efficient for users. Another system developed by MacKenzie and Ashtiani uses eye-blinking to control a scanning ambiguous keyboard. Chau and Betke have also developed a system that detects eye blinks and analyzes their pattern and duration.
In addition to these systems, there have been studies exploring different eye-gazing techniques, algorithms, and models. Grauman and Magee have surveyed and described different types of eye blinks, while Królak and Strumiłło have developed a system that allows a computer to be operated with the eyes alone, without requiring movement of other muscles. Eye-blink controlled systems are able to distinguish between voluntary and involuntary blinks and interpret single voluntary blinks or their sequences.
Seki et al. have categorized vision-based eye blink detection techniques into two types: active and passive eye-blink detection. Active eye-blink detection relies on special illumination and uses the retro-reflective property of the eye to provide accurate results quickly and robustly. Passive eye blink detection techniques, on the other hand, do not require additional light sources and detect the blinking ratio from sequences of images within the visible spectrum.
Chakraborty et al. have presented a system built on similar groundwork to the present study to address issues with eye tracking. However, the current study aims to improve efficiency and accuracy by using a different methodology. As eye-tracking technology continues to evolve, it is likely that more systems and techniques will be developed to improve its accuracy and usability.
III. DATA ACQUISITION
To train the models for blink and gaze detection, a dataset was acquired through a webcam. The dataset contained approximately 5000 images captured from 10 individuals, which were then preprocessed and filtered manually. For gaze detection, the dataset was categorized into three classes based on the position of the eyeball: left gaze, right gaze, and center gaze. During data collection, the individuals were instructed to follow a dot that moved randomly to left, right, and center positions on the screen, and the eye images captured at each position were labelled with the corresponding class.
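The three-class labelling described above can be sketched as follows, assuming the collected images are grouped into one directory per gaze class. The directory layout and the helper function are illustrative assumptions, not details taken from the paper.

```python
import os

# The three gaze classes described in the text.
GAZE_CLASSES = ["left", "right", "center"]

def gaze_label(image_path):
    """Map an image path such as 'data/left/img_001.png' to its class index.

    The parent directory name is taken to be the gaze class.
    """
    cls = os.path.basename(os.path.dirname(image_path))
    return GAZE_CLASSES.index(cls)
```

With this layout, `gaze_label("data/right/img_042.png")` returns 1, which can feed directly into a CNN training pipeline as an integer class target.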
On the other hand, for blink detection, the "open percentage" of each image from the previously collected dataset was manually assigned. The "open percentage" is a measure of how open the eyes are, with 100% indicating wide open and 0% indicating completely closed. Any image with an open percentage below 10% was considered as a blink. It is important to note that the data collection process was performed using a script to ensure consistency across the dataset.
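The blink-labelling rule above reduces to a simple threshold test. The sketch below mirrors the 10% cutoff stated in the text; the constant and function names are illustrative.

```python
# Images with an "open percentage" below this value are labelled as blinks,
# per the rule described in the text (100% = wide open, 0% = fully closed).
BLINK_THRESHOLD = 10.0  # percent

def is_blink(open_percentage):
    """Return True when the manually assigned open percentage indicates a blink."""
    return open_percentage < BLINK_THRESHOLD
```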
The methods used to realize the eye-controlled virtual keyboard are shown in Fig. 1.
A. Video Capture
First, video is captured from the webcam at its native resolution, and individual frames are grabbed from the stream. Each captured frame is converted to grayscale to reduce the computational cost of further processing.
Individuals with physical disabilities often experience difficulties in carrying out physical tasks and participating in certain life activities, which can limit their ability to use information technology effectively. Therefore, it is crucial to develop adaptations that allow individuals with physical impairments to communicate fully with computers. Given that information technology is an essential aspect of modern society, particular attention should be paid to addressing communication barriers between physically handicapped individuals and computers.
To reduce disparities and address deficiencies, this study proposes the development of an "Eye-controlled virtual keyboard" for individuals with physical disabilities. This solution aims to enable physically challenged individuals to communicate with computers more effectively, allowing them to participate fully in information education and other areas of modern society.
 P. Chakraborty, D. Roy, M. Z. Rahman, and S. Rahman, “Eye Gaze Controlled Virtual Keyboard,” International Journal of Recent Technology and Engineering (IJRTE), 2019.
 M. L. Mele and S. Federici, “Gaze and eye-tracking solutions for psychological research”, Cognitive Processing, 2012
 M. Chau and M. Betke, “Real time eye tracking and blink detection with USB cameras”, 2005
 A. Sharma and P. Abrol, “Eye gaze techniques for human computer interaction: A research survey”, International Journal of Computer Application, 2013
 K. Grauman, M. Betke, J. Lombardi, J. Gips, and G. Bradski, “Communication via eye blinks and eyebrow raises: Video based human-computer interfaces”, Universal Access in the Information Society, vol. 2, no. 4, pp. 359-473, 2003
 A. Królak and P. Strumiłło, “Eye-blink detection system for human-computer interaction”, Universal Access in the Information Society, 2011
 M. Seki, M. Shimotani, and M. Nishida, “A study of blink detection using bright pupils”, 1998-2001
 Y. LeCun and Y. Bengio, “Convolutional Networks for Images, Speech, and Time-Series”, 1997.
 D. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” International Conference on Learning Representations, 2014.
 N. Dalal and B. Triggs, “Histograms of Oriented Gradients for Human Detection,” in 2005 IEEE Conference on Computer Vision and Pattern Recognition, San Diego, California, 2005, pp. 886-893. doi: 10.1109/CVPR.2005.177
 A. Rosebrock, “Facial landmarks with dlib, OpenCV, and Python,” Pyimagesearch.com, 03-Apr-2017. [Online]. Available: https://www.pyimagesearch.com/2017/04/03/facial- landmarks-dlib-opencv-python/.
 J. Wang, S. Zhai, and H. Su, “Chinese Input with Keyboard and Eye-Tracking - An Anatomical Study”, 2001.
 B. S. Armour, E. A. Courtney-Long, M. H. Fox, H. Fredine, and A. Cahill, “Prevalence and Causes of Paralysis-United States,” 2013.