Articulated Robot Arm for Garbage Disposal in Hospital Environment

The use of robotic arms is crucial in the medical industry, particularly in hospital settings. They can serve a wide range of purposes, including assisting in the operating room and removing medical waste, among many others. In this work, a robotic arm is designed to segregate medical waste as hazardous or non-hazardous. Because no readily available dataset for medical waste existed, a dataset with five classes was created. The primary challenges are programming the robotic arm's movements and training the image dataset to classify objects as hazardous or non-hazardous. The 3D-printed robotic arm model has six degrees of freedom (DOF) and is driven by MG996R and SG90 servo motors. The arm is attached to an Arduino Uno board and operated through the Blynk IoT platform. It uses the YOLOv5 (You Only Look Once) algorithm to detect objects, relying on intersection over union (IOU) for localisation. For demonstration, a static robotic arm model was placed near a pile of medical waste to identify the waste and segregate it accordingly.

Index Terms—Machine learning, Internet of Things (IoT), Robotic arm, Computer vision.


INTRODUCTION
According to a recent report by the World Health Organization (WHO), the anti-pandemic effort generated tens of thousands of tonnes of medical waste. The healthcare sector places a heavy burden on the world's waste management systems, endangering human and environmental health. Hospitals found it challenging to dispose of the hazardous medical waste produced after COVID. This makes it necessary to create a solution that facilitates the effective disposal of these tonnes of medical waste. The transport and treatment of hospital waste is a time-consuming, risky, and infectious activity, since housekeeping staff are exposed to medical and hazardous waste. This exposure can be avoided by using a robotic arm to assist in the garbage disposal procedure: the robotic arm picks up the garbage and places it in the bins.
According to a recent review, the design and development of a robotic arm, together with the materials used in its construction, were demonstrated in object lifting and transferring tasks; however, that work is limited to a 5-DOF robotic arm [1]. A low-cost and user-friendly control interface was achieved for a robotic arm with 6 DOF, in which articulation is achieved through six single-axis revolute joints: base, waist, shoulder, elbow, wrist, and gripper. However, this robotic arm is slower and costlier than comparable designs [2]. Servo motors are the preferred choice because they provide a high level of precision, instantaneous feedback for diagnostic purposes, and total control over motion patterns, whereas other actuators are comparatively more expensive [3]. In an unstructured environment, a manipulator can be manoeuvred into any practical position or orientation, but at a higher cost than other robotic arms [4]. Flash-over, copper drag, threading, and grooving are some of the most frequent issues with DC motors; keeping spare motor brushes on hand is advisable in case they must be replaced due to contamination or wear, since the build-up of dirt can short-circuit a motor brush [5]. A powerful image processing technique is used to locate and identify the target item: the objects are first detected, the retrieved image is then presented to the classifier, and the type of the object and its location are output. However, many calculations and boxes are needed for similar items [6]. To extract the essential characteristics of objects from remote sensing images, a suitable feature extractor based on the CNN model is used with a sliding-window method, although this procedure is rather challenging to execute [7].
Using the 6-DOF model, which offers a greater range of motion and dexterity, the robotic arm was designed and developed. This model is more efficient, lighter, and quicker-moving than prior ones. Torque, velocity, and position can all be adjusted through the encoders on the servo actuators, which provide complete control over movement profiles. To avoid detection delays, the control unit of the actuator performs exact computations using data packets. The lack of openly available datasets for medical waste led to the creation of an image dataset comprising five classes: masks, gloves, syringes, mayo scissors, and straight mayo scissors. Across the five classes there are more than 2000 images, with over 300 in each class. Using the LabelImg software, distinct labels are assigned to items from related classes. The YOLOv5 methodology fixes the drawbacks of the traditional sliding-window method.
The paper is organized as follows: Section 2 discusses the operational workflow and hardware implementation of the robotic arm. Section 3 presents the software used and its implementation in the robotic arm, together with the dataset generation. The corresponding results are discussed in Section 4, followed by the conclusion and future scope in Section 5, and then the references.

OPERATIONAL WORK FLOW
An Arduino Uno microcontroller controls the robotic arm and takes commands from the operator using a series of potentiometers. The arm is made up of five rotary joints, an end effector, and a servomotor that rotates the joints to provide rotational motion. The rotation of the servomotor controls how the Robotic Arm moves its arms and grippers. As shown in the Fig. 1, the Arduino is linked to the "Blynk" software using a USB cord to enable complete IoT (internet of things) access. The robotic arm is able to manoeuvre and pick up an object placed in front of it using a gripper attached to it by utilising a mobile smartphone as a controller. The medical waste is collected and then placed in front of a camera to evaluate if it is hazardous or not. The waste is placed into the appropriate trash container as soon as it is found.

Arduino UNO:
The board may be programmed using the Arduino IDE and a type-B USB cable. It features 14 digital I/O pins and 6 analogue input pins. It accepts supply voltages between 7 and 20 volts and can also be powered by an external 9-volt battery or over the USB cable.

MG996R Servo Motor:
The MG996R is a metal-gear servo motor with a maximum stall torque of 11 kg·cm. Like other RC servos, the motor rotates from 0 to 180 degrees depending on the duty cycle of the PWM wave applied to its signal pin.
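The angle-to-duty-cycle relationship can be sketched as follows, assuming the common 1–2 ms pulse range within a standard 20 ms (50 Hz) servo frame; the actual pulse endpoints vary from servo to servo:

```python
def angle_to_pulse_us(angle, min_us=1000, max_us=2000):
    """Map a servo angle (0-180 deg) linearly to a PWM pulse width in
    microseconds, assuming a typical 1-2 ms pulse range."""
    if not 0 <= angle <= 180:
        raise ValueError("angle must be between 0 and 180 degrees")
    return min_us + (max_us - min_us) * angle / 180.0

def duty_cycle(pulse_us, period_us=20000):
    """Duty cycle of the pulse within a standard 50 Hz (20 ms) frame."""
    return pulse_us / period_us

# 90 degrees sits at the midpoint of the pulse range: 1500 us, 7.5 % duty
print(angle_to_pulse_us(90))  # 1500.0
print(duty_cycle(1500))       # 0.075
```

In practice the Arduino Servo library performs this mapping internally; the sketch only makes the duty-cycle relationship explicit.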

SG90 Servo Motor:
The "SG90", often called a micro servo motor, is a compact, lightweight servo motor with a high output power. The servo rotates 180 degrees (90 in each direction) and performs operations similar to more powerful servos. Any servo code, hardware, or library can operate these servos.

Power Supply:
All of the servo motors were powered by a 5 V, 10 A switching DC power supply adapter, which served as the power supply unit. This power supply accepts AC input from 110 V to 220 V and outputs 5 V DC at up to 10 A.

Bread Board:
A breadboard is a solderless board for testing circuit designs and electronics prototypes. Most electronic components can be connected to one another by inserting their terminals into the holes and then linking them with wires.

Hardware implementation:
As shown in Fig. 2, the 3D-printed 6-DOF robotic arm is assembled and connected to the servo motors. The smaller SG90 micro servo motors were utilised for the gripper and the wrist axes (wrist roll, wrist pitch, and wrist yaw), while the larger MG996R servos drive the first three axes: the waist, the shoulder, and the elbow. All of these servo motors are connected to and controlled by the Arduino Uno microcontroller, as shown in Fig. 3 and Fig. 4. The Arduino Uno is in turn connected to the Blynk IoT app using a USB cable. For communication with the smartphone through the Blynk IoT application, the Arduino Uno board and the six digital pins connected to it only need to be programmed through the Blynk IoT server. Once everything is connected, the Arduino is programmed using the Blynk server, which handles the communication between the Arduino and the smartphone application. The server links the movement of each servo motor to sliders designed on its interface, and the application can be operated on any smartphone. Using the required libraries, the hardware is first set up; when a command is issued from the Blynk app on the smartphone, such as toggling an LED, the command is forwarded to the hardware by the Blynk server, and the hardware responds appropriately.
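The slider-to-servo control loop can be mirrored in a small host-side sketch. The joint names, angle limits, and command format below are illustrative assumptions, not the actual protocol used in this work; with pyserial, each command string could be sent to the Arduino with `ser.write(cmd.encode())`.

```python
# Illustrative joint limits in degrees; the real arm's safe ranges may differ.
SERVO_LIMITS = {
    "waist": (0, 180), "shoulder": (15, 165), "elbow": (0, 180),
    "wrist_pitch": (0, 180), "wrist_roll": (0, 180), "gripper": (10, 73),
}

def servo_command(joint, angle, limits=SERVO_LIMITS):
    """Clamp a slider value to the joint's safe range and format it in a
    hypothetical 'joint:angle' line protocol for the Arduino."""
    lo, hi = limits[joint]
    angle = max(lo, min(hi, int(angle)))
    return f"{joint}:{angle}\n"

# Out-of-range slider values are clamped before being sent
print(servo_command("gripper", 200))  # "gripper:73"
```

Clamping on the host side keeps an out-of-range slider value from driving a joint past its mechanical limit.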

Medical waste dataset:
The first step in creating solutions to enhance data collection methods is to recognise the challenges to accurate data collection. This section describes both typical data collection difficulties and those that are specific to gathering data on hazardous waste. Datasets for five categories, namely masks, gloves, syringes, mayo scissors, and straight mayo scissors, were gathered as shown in Fig. 5. Because no pre-trained or labelled datasets were available, the data on hazardous medical waste had to be gathered manually. Each class contains more than 300 images, for a total of more than 2000 images across the five classes.

Python:
Python is the language used alongside the Arduino, which makes it highly relevant to robotics, since a Python program can control a robot through an Arduino. Python is a high-level programming language that places a strong emphasis on code readability.

Labelimg:
LabelImg is a graphical image annotation tool. It is written in Python and uses Qt for its graphical user interface. Annotations are saved as XML files in the PASCAL VOC format used by ImageNet; the CreateML and YOLO formats are also supported.
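Since LabelImg's default output is PASCAL VOC XML, annotations may need converting to YOLO's normalised text format before training. A minimal sketch of that conversion, using only the standard library (field names follow the VOC schema; the class list is supplied by the caller):

```python
import xml.etree.ElementTree as ET

def voc_to_yolo(xml_string, class_names):
    """Convert one PASCAL VOC annotation (as produced by LabelImg) into
    YOLO-format lines: 'class x_center y_center width height', with all
    coordinates normalised to the image size."""
    root = ET.fromstring(xml_string)
    w = int(root.findtext("size/width"))
    h = int(root.findtext("size/height"))
    lines = []
    for obj in root.iter("object"):
        cls = class_names.index(obj.findtext("name"))
        xmin = float(obj.findtext("bndbox/xmin"))
        ymin = float(obj.findtext("bndbox/ymin"))
        xmax = float(obj.findtext("bndbox/xmax"))
        ymax = float(obj.findtext("bndbox/ymax"))
        xc = (xmin + xmax) / 2 / w   # normalised box centre
        yc = (ymin + ymax) / 2 / h
        bw = (xmax - xmin) / w       # normalised box size
        bh = (ymax - ymin) / h
        lines.append(f"{cls} {xc:.6f} {yc:.6f} {bw:.6f} {bh:.6f}")
    return lines
```

LabelImg can also export YOLO format directly, in which case this step is unnecessary.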

YOLOv5:
YOLO divides the image into an S×S grid. Each grid cell predicts whether an object is present and, if so, the type of object and its bounding box. Bounding boxes for different object classes are distinctive and coloured differently. The method is quick and precise at localising objects in images. The YOLOv5 family of compound-scaled object detection models was trained on the COCO dataset and includes built-in support for Test-Time Augmentation (TTA), model ensembling, hyperparameter evolution, and export to ONNX, CoreML, and TFLite.
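The grid idea can be illustrated with a short sketch in the classic YOLO formulation (illustrative only, not the actual YOLOv5 training code): the cell whose row and column contain a box's normalised centre is the one responsible for predicting that box.

```python
def responsible_cell(x_center, y_center, S=7):
    """Return the (row, col) of the SxS grid cell responsible for a box
    whose centre is given in normalised image coordinates [0, 1]."""
    col = min(int(x_center * S), S - 1)  # clamp the x = 1.0 edge case
    row = min(int(y_center * S), S - 1)
    return row, col

# A box centred in the middle of the image falls in the middle cell
print(responsible_cell(0.5, 0.5, S=7))  # (3, 3)
```

This one-cell-per-object assignment is what lets YOLO treat detection as a single regression pass instead of a sliding-window search.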

Software implementation:
Once the data has been gathered and labelled using LabelImg, it is delivered for training. Each image has an associated label that includes the bounding box's x, y, height, and width, as well as a 1-D vector identifying the image's class. As shown in Fig. 6, the model is trained using the YOLOv5 method. YOLO stands for You Only Look Once; it recognises and locates many objects in a picture in real time. Object detection in YOLO is handled as a regression problem that outputs the class probabilities of the detected objects. The YOLO technique uses convolutional neural networks (CNNs) to recognise objects quickly.
A bounding box is an outline that draws attention to an object in a picture. The dimensions of each bounding box in the image, including its width and height, are specified. Object detection makes use of intersection over union (IOU), which measures the overlap between boxes where they intersect. YOLO uses IOU to produce an output box that tightly encloses each item. Bounding boxes and their confidence scores must be predicted for each grid cell. If the predicted and actual bounding boxes line up exactly, the IOU equals 1. This approach discards bounding boxes whose size does not match the actual box.
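The IOU computation described above can be sketched for axis-aligned boxes as follows:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (xmin, ymin, xmax, ymax)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; clamp to zero when the boxes do not intersect
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Identical boxes score 1; disjoint boxes score 0
print(iou((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0
print(iou((0, 0, 1, 1), (2, 2, 3, 3)))  # 0.0
```

The same function underlies non-maximum suppression, where overlapping predictions for one object are pruned down to the highest-confidence box.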
For robotic arms to move properly during navigation and gripping activities, object recognition is crucial. Accuracy and precision can be managed by adjusting the training dataset and training the algorithm more frequently. "Vision-based control of the robotic system" refers to the use of visual sensors as feedback data to govern how the robot operates. The system's effectiveness and efficiency can be increased by including vision-based algorithms. Vision-based systems are being used to simulate human visual sensors.

Classification of Medical Waste:
The lack of freely accessible datasets for medical waste led to the collection of a dataset of images in five classes: masks, gloves, syringes, mayo scissors, and straight mayo scissors. Each class has more than 300 images, totalling more than 2000 across the five classes. All images were labelled manually using the LabelImg software; the label of an image is a .txt file containing a 1-D vector identifying the class of the image, as well as its x and y coordinates, height, and width. The model is trained using the YOLOv5 method, which, as seen in Fig. 7 and Fig. 8, recognises and locates several objects in an image in real time. Object detection in YOLO is handled as a regression problem that outputs the class probabilities of the detected objects; convolutional neural networks (CNNs) allow YOLO to recognise objects quickly.
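As a sketch of what one such .txt label contains, the snippet below parses a single YOLO-format label line back into a named record. The class list reflects the five classes described above, though the exact identifier spellings are illustrative.

```python
# Illustrative spellings of the paper's five classes, in label-index order.
CLASSES = ["mask", "gloves", "syringe", "mayo_scissors", "straight_mayo_scissors"]

def parse_yolo_label(line, class_names=CLASSES):
    """Parse one YOLO label line: 'class_id x_center y_center width height',
    with coordinates normalised to the image size."""
    parts = line.split()
    x, y, w, h = map(float, parts[1:])
    return {"class": class_names[int(parts[0])],
            "x_center": x, "y_center": y, "width": w, "height": h}

print(parse_yolo_label("2 0.5 0.5 0.25 0.4")["class"])  # syringe
```

Each image gets one such file with one line per annotated object, which is the format YOLOv5 reads during training.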
Mean Average Precision (mAP) is now used by the computer vision research community as a benchmark metric to evaluate the accuracy of object recognition models.
Precision assesses how accurate the predictions are, while recall gauges how many of the actual objects the model finds. The overall amount of processing resources a computer can devote to a task is its computing power. For object detection tasks, precision is calculated at a chosen IoU threshold. With the IoU threshold set at 0.8, a precision of 66.67% corresponds, for instance, to two out of three predicted boxes meeting the threshold.
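Assuming each prediction has already been matched to a ground-truth box and scored by IoU, the precision figure quoted above can be reproduced as follows (the two-out-of-three split is an illustrative assumption, since the paper does not give the underlying counts):

```python
def precision_at_iou(ious, threshold=0.8):
    """Precision (in percent) over a set of matched predictions: a
    prediction counts as a true positive when its IoU with the matched
    ground-truth box meets the threshold."""
    if not ious:
        return 0.0
    tp = sum(1 for v in ious if v >= threshold)
    return 100.0 * tp / len(ious)

# Two of three predictions clear the 0.8 threshold
print(round(precision_at_iou([0.9, 0.85, 0.4], threshold=0.8), 2))  # 66.67
```

Sweeping the threshold (or the confidence cutoff) and averaging the resulting precision values over recall levels is what yields the mAP figure discussed above.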

Hardware and Software integration:
The completed hardware was controlled by an Arduino Uno through the Blynk IoT app. Any camera module can be used to capture the image; here, the image of the medical waste picked up by the robotic arm was captured using a laptop camera. This image was then processed by the YOLOv5 algorithm and classified as hazardous or not, and the corresponding results are displayed on the screen. The object detection algorithm operates in real time. After identifying the waste, the robotic arm places it in the appropriate bin.
Hospital trash may not be evenly distributed and may take the form of a garbage heap, so the robot picks up an object before identifying it. Fig. 9 depicts the robotic arm picking up a mask from a heap of garbage placed in front of it. The object is identified as hazardous and dropped in the corresponding bin, as depicted in Fig. 9(3). Similarly, the segregation of a mayo scissor is demonstrated in Fig. 10.

Conclusion and Future Scope
The robotic arm was successfully controlled to detect objects using an image processing technique; it was able to pick up the detected object placed in front of it and place it in the right container. The detected medical waste was separated into hazardous and non-hazardous waste. A sufficient number of images was used in training to distinguish the medical waste as hazardous or not, and the model is able to detect medical waste with high accuracy.
This robotic arm can be extended into numerous research areas. With the use of inverse kinematics and a ROS application, the robotic arm could be designed to pick up an object in front of it without any user control. The arm's movement could be designed around wheels, which could choose their own course using path planning. Real-time object detection could also be developed further within the fields of image processing and object detection.