Automated Digital Presentation Control using Hand Gesture Technique

- In today's digital world, a slideshow presentation is an effective and attractive way for speakers to convey information and convince the audience. Slides can be controlled with devices such as a mouse, keyboard, or laser pointer, but the drawback is that one must already know how to operate these devices. Gesture recognition has gained importance in recent years and is used to control applications such as media players, robot control, and gaming. Some hand gesture recognition systems rely on gloves, markers, and so on; however, the use of such gloves or markers increases the cost of the system. In this proposed system, an artificial-intelligence-based hand gesture detection methodology is proposed. Users can change the presentation slides in both forward and backward directions simply by making hand gestures. Using hand gestures makes interaction simple and convenient and requires no additional device. The suggested method helps speakers deliver a productive presentation with natural, improved communication with the computer. In particular, the proposed system is more effective than a laser pointer, since the hand is more visible and can therefore better hold the attention of the audience.


INTRODUCTION
In recent decades, hand gesture recognition has been considered a new technique of Human-Computer Interaction because it is automatic and natural and requires no input devices such as a keyboard or mouse. For example, the language being spoken can be detected by analysing lip movements, and gaming also makes use of hand gestures [1]. Although a number of hand gesture recognition techniques already exist, such as wearable devices (rings, armbands, gloves, Leap Motion), controller-based motion recognition such as the Wii Remote, ordinary web cameras, stereo cameras, and even radar, they still need improvement [2]. Gesture recognition has therefore gained considerable importance and is used to control various applications. Presentation software is one of the many applications that can be controlled by hand gestures.
The machine captures a gesture through the camera and recognizes it in order to perform a task. First, it removes the background from the captured image and filters out the foreground. The recognized gesture is then used to verify the sign of the gesture. The aim of this proposed work is to apply AI in a hand gesture recognition system and to use it to control digital presentations through hand gestures alone.

LITERATURE SURVEY
According to the analysis of the techniques reviewed by other researchers, the main aim is to help speakers deliver an effective presentation with natural, improved interaction with the computer.
Damiete O. Lawrence and Dr. Melanie J. Ashleigh presented "Impact of Human-Computer Interaction (HCI) on Users in Higher Educational System: Southampton University as A Case Study". In this paper, the perception and impact of Human-Computer Interaction (HCI) in the University of Southampton UK, an advanced learning environment, was measured. The impact of HCI at Southampton University has been positive, and the study showed that becoming familiar with HCI concepts improves a user's interaction and effectiveness. In conclusion, HCI has impacted the learning environment as it has impacted other corresponding environments [4]. Sebastian Raschka, Joshua Patterson, and Corey Nolet presented "Machine Learning in Python: Main Developments and Technology Trends in Data Science, Machine Learning, and Artificial Intelligence". They covered widely used libraries and concepts, collected together for holistic comparison, with the goal of educating the reader and driving the field of Python machine learning forward [9]. Xuesong Zhai, Xiaoyan Chu, Ching Sing Chai, Morris Siu Yung Jong, Andreja Istenic, Michael Spector, Jia-Bao Liu, Jing Yuan, and Yan Li presented a review of Artificial Intelligence (AI) in education from 2010. This study provided a content analysis of studies aiming to expose how artificial intelligence (AI) has been applied to the education sector and to explore the potential research trends and challenges of AI in education [10].
Jadhav & Lobo proposed an approach in which both static and dynamic gestures are used together to control a PowerPoint presentation. A segmentation methodology is used to capture and recognize images. It also introduces a motion-detection feature to change slides [1]. Zhou Ren, Junsong Yuan, Jingjing Meng, and Zhengyou Zhang presented "Robust Part-Based Hand Gesture Recognition Using Kinect Sensor". They presented a robust part-based hand gesture recognition system using the Kinect sensor. A new distance metric, Finger-Earth Mover's Distance (FEMD), is used as a dissimilarity measure; it represents the hand shape with each finger part as a cluster and penalizes empty finger-holes. More specifically, their FEMD-based hand gesture recognition system achieves 93.2% mean accuracy and runs at 0.0750 s per frame when using the thresholding decomposition finger detection method [6]. Harika et al. proposed a method using vision-based gesture recognition for computer-assisted slide presentation. Techniques such as the Kalman filter, the HSL colour model, and skin colour sampling are used. Regarding the accuracy of this proposed model, skin colour detection has an overall success rate of about 72.4%, single fingertip detection has an accuracy of 74.0%, the success rate in moving slides is 77%, and the success rate in controlling finger pointing is 80% [2]. Wahid et al. proposed a method to recognize hand gestures using machine-learning algorithms. Regarding the accuracy of this proposed model, among NB, RF, KNN, and DA, the SVM algorithm yielded the highest classification accuracies using both original EMG features (97.56%) and normalized EMG features (98.73%) [3].
Ajay Talele, Aseem Patil, and Bhushan Barse presented "Detection of Real Time Objects Using TensorFlow and OpenCV". This paper introduced a state-of-the-art computer vision-based obstacle detection technique for mobile devices and its applications. Each individual image pixel is classified as belonging to an obstacle based solely on its appearance. The paper presented a new approach for obstacle detection with a single webcam camera [7]. Ahmed Kadem Hamed AlSaedi and Abbas H. Hassin Al Asadi presented "A new hand gestures recognition system". They introduced a low-cost system to recognize hand gestures in real time. The system is divided into five steps: image acquisition, image pre-processing, detection and segmentation of the hand region, feature extraction, and counting the number of fingers for gesture recognition. The paper addressed the challenges of rotation, orientation, and scaling, and obtained the same results whether the right or the left hand was used. The system uses only a bare hand and a laptop webcam, so it is very flexible for the user. The results showed a recognition rate of 96.6%, which is considered very good compared with other research papers [8]. Dhall et al. combined hand gesture technology with a convolutional neural network to build a hand gesture recognition application in the paper "Automated Hand Gesture Recognition using a Deep Convolutional Neural Network model". The authors used a CNN with specific layers: an input layer, an output layer, and hidden layers in between. The first hidden layer is a convolutional layer, which detects and extracts features from images; a max pooling layer is then used for dimensionality reduction [5].

Image Pre-Processing
The aim of pre-processing is to improve the quality of the image so that it can be analysed more effectively. Pre-processing suppresses undesired distortions and enhances features that are essential for the application at hand; these features may vary between applications. Steps for image pre-processing:
• Select a boundary of the input image within which we will scan for the presence of a person's hand.
• Produce a mask by selecting only the pixels that match a specified colour range.
• Blur the mask image so that missing data points can be filled.
• Draw a contour of the hand and use OpenCV to identify the fingers.
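The masking step above can be sketched with plain NumPy. This is a minimal illustration of selecting pixels within an inclusive per-channel colour range; the tiny synthetic image and the range values are assumptions for demonstration (a real pipeline would typically use cv2.inRange followed by a blur such as cv2.GaussianBlur):

```python
import numpy as np

def colour_mask(frame, lower, upper):
    """Return a binary mask (0 or 255) of pixels whose channels all
    fall within the inclusive [lower, upper] colour range."""
    lower = np.asarray(lower, dtype=frame.dtype)
    upper = np.asarray(upper, dtype=frame.dtype)
    inside = ((frame >= lower) & (frame <= upper)).all(axis=2)
    return (inside * 255).astype(np.uint8)

# A tiny 2x2 synthetic "image": two skin-like pixels, two others.
img = np.array([[[120, 150, 200], [0, 0, 0]],
                [[255, 255, 255], [110, 140, 190]]], dtype=np.uint8)
mask = colour_mask(img, (100, 130, 180), (140, 170, 220))
```

The resulting mask marks only the two pixels inside the range, which is exactly the input the contour-drawing step expects.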

Anaconda Framework
Anaconda is a distribution used for scientific computing (data science, AI applications, large-scale data processing, predictive analytics, and so forth) that aims to simplify package management and deployment.

• PyCharm
PyCharm is a dedicated Python Integrated Development Environment (IDE) providing a wide range of essential tools for Python developers, tightly integrated to create a convenient environment for productive Python, web, and data science development.

1) OpenCV
OpenCV is used in computer vision, image processing, and machine learning applications. OpenCV supports a large variety of programming languages such as Python, C++, and Java. Objects, faces, and even human handwriting can be identified by processing images and videos. It is an open-source library used to accomplish tasks such as face detection, object tracking, and landmark detection. In this system it is used to capture the video and to perform the hand detection process:
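A minimal sketch of the capture step, under the assumption that the hand is scanned inside a fixed rectangular region of the frame (the coordinates here are illustrative). The cv2 calls in the guarded section (VideoCapture, flip, imshow, waitKey) are OpenCV's standard API; the cropping helper itself is plain array slicing:

```python
import numpy as np

def extract_roi(frame, x0, y0, x1, y1):
    """Crop the rectangular region of the frame scanned for the hand."""
    return frame[y0:y1, x0:x1]

# Demonstrate the crop on a blank synthetic frame (640x480, 3 channels).
frame = np.zeros((480, 640, 3), dtype=np.uint8)
roi = extract_roi(frame, 50, 50, 350, 350)  # 300x300 region

if __name__ == "__main__":
    import cv2  # deferred so the helper above works without OpenCV installed

    cap = cv2.VideoCapture(0)              # default webcam
    while cap.isOpened():
        ok, live = cap.read()
        if not ok:
            break
        live = cv2.flip(live, 1)           # mirror for a natural feel
        cv2.imshow("hand region", extract_roi(live, 50, 50, 350, 350))
        if cv2.waitKey(1) & 0xFF == ord("q"):   # quit on 'q'
            break
    cap.release()
    cv2.destroyAllWindows()
```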

2) PyAutogui
PyAutoGUI allows the mouse and keyboard to be controlled programmatically to perform different tasks. It is a cross-platform GUI automation Python module. This third-party library can be installed using the command pip install pyautogui. It is used to press hotkeys:
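A sketch of the key-press step. The gesture labels and key bindings below are illustrative assumptions, not the paper's exact choices; pyautogui.press is the library's real single-key API, and the import is deferred because PyAutoGUI needs a GUI session:

```python
# Illustrative mapping from a recognized gesture label to a key press.
GESTURE_KEYS = {
    "next_slide": "right",
    "prev_slide": "left",
    "start_show": "f5",
}

def key_for_gesture(gesture):
    """Look up the key bound to a gesture label (None if unbound)."""
    return GESTURE_KEYS.get(gesture)

def perform(gesture):
    """Press the key bound to the gesture; return whether anything was sent."""
    key = key_for_gesture(gesture)
    if key is None:
        return False
    import pyautogui  # deferred: requires a display/GUI session
    pyautogui.press(key)
    return True
```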

3) MediaPipe
MediaPipe is a framework for building machine learning pipelines. It provides a high-fidelity hand and finger tracking solution that employs machine learning (ML) to infer the 3D landmarks of a hand from a single frame. It is used in the hand detection process:
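The landmark step can be sketched as follows. The raised-finger test (a fingertip's normalized y-coordinate lying above its pip joint) is a common heuristic built on MediaPipe's 21-landmark hand model, not part of MediaPipe's own API; the Hands usage in the guarded section follows the library's documented interface:

```python
# MediaPipe hand landmark indices for finger tips and pip joints.
TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky fingertips
PIPS = [6, 10, 14, 18]   # corresponding pip joints

def fingers_up(lm_y):
    """Count raised fingers from the 21 normalized landmark y-coordinates
    (y grows downward in image space, so a raised tip has a smaller y)."""
    return sum(1 for tip, pip in zip(TIPS, PIPS) if lm_y[tip] < lm_y[pip])

if __name__ == "__main__":
    import cv2
    import mediapipe as mp  # deferred: heavyweight optional dependency

    hands = mp.solutions.hands.Hands(max_num_hands=1)
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            ys = [p.y for p in result.multi_hand_landmarks[0].landmark]
            print("fingers up:", fingers_up(ys))
    cap.release()
```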

4) Keyboard
The keyboard module records keyboard events and can send keystrokes; it can also block keys until a specified key is pressed and remap keys. It captures keyboard events globally, covering all keys, and provides support for hotkeys.
It is used to press particular keys:

5) Numpy
NumPy can be used to carry out a wide variety of mathematical operations on arrays. It provides efficient data structures for Python that guarantee fast computations with arrays and matrices, and it offers a large library of high-level mathematical functions that operate on these data structures. In this system it is used to create a 3×3 kernel, to define the range of skin colour in HSV, and to convert coordinates:
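These three uses can be sketched directly. The HSV bounds below are typical skin-range values seen in tutorials, not necessarily the paper's exact numbers, and the camera/screen resolutions are assumptions:

```python
import numpy as np

# 3x3 structuring-element kernel, e.g. for dilating/eroding the hand mask.
kernel = np.ones((3, 3), dtype=np.uint8)

# Assumed lower/upper skin-colour bounds in HSV (illustrative values).
lower_skin = np.array([0, 20, 70], dtype=np.uint8)
upper_skin = np.array([20, 255, 255], dtype=np.uint8)

# Convert a fingertip x-coordinate from camera space (0..640)
# to screen space (0..1920) by linear interpolation.
screen_x = np.interp(320, (0, 640), (0, 1920))
```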

6) Time
The Python time module helps represent time in code in the form of objects, strings, and numbers. The module can also be used to implement other functionality, such as measuring the efficiency of code. It is used to set the frame rate:
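A frame-rate cap can be sketched as a small helper around time.sleep; the 30 fps default is an assumed target, and the split into a pure sleep calculation plus a blocking tick is a design choice for clarity:

```python
import time

def sleep_needed(prev_frame_t, now, fps):
    """Seconds to sleep so consecutive frames are at least 1/fps apart."""
    return max(0.0, prev_frame_t + 1.0 / fps - now)

def tick(prev_frame_t, fps=30):
    """Block until the next frame slot and return the new frame timestamp."""
    time.sleep(sleep_needed(prev_frame_t, time.time(), fps))
    return time.time()
```

In the capture loop one would call `prev = tick(prev)` once per iteration to hold the loop at the target rate.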

7) Math
The Python math library implements common mathematical functions and constants. These can be used to perform fairly complex mathematical computations. The library requires no installation, as it is a built-in Python module.
It is used to find the lengths of the sides of a triangle and to apply the cosine rule:
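Both uses can be sketched together: side lengths between landmark points, then the cosine rule to recover the angle at a chosen vertex. The example points are illustrative, not the paper's actual landmarks:

```python
import math

def side(p, q):
    """Euclidean length of the triangle side between points p and q."""
    return math.dist(p, q)

def angle_at(b, a, c):
    """Angle in degrees at vertex a of triangle abc, via the cosine rule:
    cos(A) = (ab^2 + ac^2 - bc^2) / (2 * ab * ac)."""
    ab, ac, bc = side(a, b), side(a, c), side(b, c)
    return math.degrees(math.acos((ab**2 + ac**2 - bc**2) / (2 * ab * ac)))
```

For instance, for the right triangle with vertices (0, 0), (1, 0), (0, 1), the angle at the origin is 90 degrees.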

Hardware Requirements
• Webcam: a video camera allows real-time streaming of images or video over a network. In this proposed system, we first capture images of the hand gestures the user makes in front of the webcam.
• Processor: Intel Pentium 4 or higher

Gestures
1) Ok (Thumbs Up): the Ok gesture starts the presentation in presentation mode (Ref. Fig. 2).
2) Two Fingers (Victory): the two-finger gesture plays or pauses a video in the presentation slide (Ref. Fig. 3).
3) Good: the Good gesture shows the previous slide, letting the user change slides backward (Ref. Fig. 4).
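The three gestures can be sketched as a small dispatch table; the action names are illustrative placeholders for whatever slide-control calls the system actually makes:

```python
# Illustrative mapping of the three recognized gestures to slide actions.
GESTURE_ACTIONS = {
    "ok": "start_presentation",     # enter presentation mode
    "victory": "play_pause_video",  # play/pause video in the slide
    "good": "previous_slide",       # move one slide backward
}

def dispatch(gesture):
    """Return the action bound to a gesture, or 'ignore' if unrecognized."""
    return GESTURE_ACTIONS.get(gesture.lower(), "ignore")
```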

DISCUSSION
Hand gestures are a more natural form of interaction than other devices, as they are an important part of body language. Using hand gestures requires no extra device and makes interaction easy. In this proposed system, an AI-based hand gesture detection methodology is proposed. Hand gestures make it easier for the speaker to present. The aim of this proposed system is to develop software that helps the presenter control presentation slides with different hand gestures. With this software, there is no need for a keyboard, mouse, or even a remote to change slides.

CONCLUSION
This proposed system, "Automated Digital Presentation Control Using Hand Gesture Technique", makes presentations easy. The presenter can change slides without using any external device. This will be useful in corporations or institutions where presentations are part of the work.