Rubble Assistant Robot using SLAM

In this paper, we present an implementation of SLAM (Simultaneous Localization and Mapping), a technique for building a map of an unknown environment with a robot placed in it. The robot is interfaced with and controlled through the Robot Operating System (ROS). Our aim is to map regions struck by disaster, including areas that humans cannot reach. In disaster-prone areas it is often impossible for rescuers to locate survivors directly. Our robot amplifies faint human sounds to detect the presence of people trapped under rubble, detects them using its sensors, and marks their location so that a rescue team can reach them quickly; it also provides a live camera feed for monitoring the area. A central part of the system is the robot's odometry, which is computed from the camera and wheel encoders. SLAM-based robots can thus help map such locations digitally.


Introduction
Mapping a territory has always been an essential topic in science and technology, especially in robotics. Many methods have been developed, and one problem still being actively solved is SLAM (Simultaneous Localization and Mapping): the challenge of placing a robot in an unknown region and having it map its surroundings with respect to its own position [1]. In this project our model is interfaced with the Robot Operating System (ROS). The robot uses a rocker-bogie mechanism for locomotion, and a Tiva C LaunchPad interfaces the sensors and motors. Building a map requires various parameters obtained from the sensors: as the robot moves around the area, ultrasonic sensors detect and avoid obstacles; DC encoder motors provide mobility; and an MPU-6050 IMU, together with the encoders, is used to compute odometry. Mapping needs a 3D vision camera to sense surrounding objects and map them in 3D; for this we use the Xbox Kinect 360, a depth sensor originally intended for gesture and motion detection in gaming.

Literature Survey
The industrial revolution led to advancements in science and technology. Robotics makes it easier to solve problems with more accuracy and efficiency than manual methods. Earlier in robotics it was mandatory for a dedicated team to monitor the robot and give it commands over a communication link. A historical review of the first 20 years of the SLAM problem is given by Durrant-Whyte and Bailey in two surveys. This classical age (1986-2004) saw the introduction of the main probabilistic formulations for SLAM, including approaches based on the extended Kalman filter (EKF), Rao-Blackwellized particle filters, and maximum-likelihood estimation; it also exposed the basic challenges of efficiency and robust data association [1]. The marketability of the SLAM problem is tied to the emergence of indoor applications of mobile robotics, where GPS cannot be used to bound the localization error; SLAM also provides an appealing alternative to user-defined maps, showing that robot operation is possible in the absence of an ad-hoc localization infrastructure [1]. The hardware of our robot is based on the rocker-bogie mechanism, the suspension system NASA developed for its rovers and implemented in the Mars Pathfinder and Sojourner missions [2]. This system keeps all wheels in contact with the ground even on rough terrain.
Robots are rarely seen in the field of disaster management. Researchers at IIT (Istituto Italiano di Tecnologia) developed and assembled a new disaster-response robot called Centauro [3], and a few robots are being developed for fire control.

Software Implementation
The software design of our robot model consists of a number of implementation steps: ROS implementation, programming the Tiva C LaunchPad using the Energia IDE, and Gazebo simulation visualized through the rviz package of ROS. This simulation lets us verify that the robot works correctly before implementing it in hardware.

Introduction to ROS
The Robot Operating System (ROS) is an open-source platform and a flexible robotics framework. An autonomous mobile robot working with multi-sensor data has various sensors functioning simultaneously.
ROS provides software tools to visualize and debug robot data. The core of the ROS framework is message-passing middleware through which processes can communicate and exchange data even when running on different machines. Using ROS, developers can build many robotic capabilities, such as mapping and navigation for mobile robots [4].
In our project, we used the "ROS Kinetic" distribution running on Ubuntu 16.04.

Creating Robot model
• Creating the URDF file of the robot: URDF (Unified Robot Description Format) is used to create a programmatic representation of a mobile robot in Gazebo. It is an XML format for describing a robot's structure. The .urdf file consists of the links and joints of the robot model; every link and joint is described explicitly, which introduces redundancy [5].
• Creating a XACRO file: XACRO is an XML macro language used to simplify the URDF file and reduce that redundancy. It uses parameterized entities such as shapes, collision parameters, and weights to define robot attributes [5].
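As a sketch, a minimal xacro file might define a wheel macro once and instantiate it for each wheel; the names and dimensions below are illustrative, not our robot's actual values:

```xml
<?xml version="1.0"?>
<robot xmlns:xacro="http://www.ros.org/wiki/xacro" name="drobo">
  <!-- Illustrative parameter; a plain URDF would repeat this value everywhere -->
  <xacro:property name="wheel_radius" value="0.05"/>
  <!-- One macro replaces several near-identical <link> blocks in raw URDF -->
  <xacro:macro name="wheel" params="prefix">
    <link name="${prefix}_wheel">
      <visual>
        <geometry><cylinder radius="${wheel_radius}" length="0.02"/></geometry>
      </visual>
    </link>
  </xacro:macro>
  <xacro:wheel prefix="left"/>
  <xacro:wheel prefix="right"/>
</robot>
```

Running the xacro tool expands the macros into a full URDF that Gazebo can load.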

Simulating Robot on Gazebo
Gazebo is software for simulating the robot model before implementing it in hardware. It creates a virtual world where we can test the workability of our robot, and it provides an easy programming interface and good-quality graphics [6].

Kinect's Depth Sensor Operation
The Microsoft Kinect sensor consists of a visual camera, a 3D depth sensor, a microphone array, and a motorized tilt. The depth sensor mainly consists of an IR projector paired with a CMOS infrared (IR) camera. Although the Kinect was mostly used for tracking human actions for gaming [7], we use it for mapping, so that the robot can create a map of its surrounding environment.
The depth sensor projects a known pattern of infrared dots and measures the distance to each point on an object by triangulating the distortion of the reflected pattern (structured light). The Kinect also contains an array of four microphones that enables ambient-noise suppression, which helps us amplify the faint voices of humans under the rubble.
The OpenNI package of ROS is used for the Kinect's integration; the freenect package is also supported.
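As an illustration of how a structured-light sensor recovers depth, the distance follows by triangulation, roughly depth = focal length × baseline / disparity. The constants below are made-up example values for the sketch, not the Kinect's actual calibration:

```python
# Illustrative structured-light depth estimate via triangulation.
# Both constants are assumed values for this example only.
FOCAL_PX = 580.0      # IR camera focal length in pixels (assumed)
BASELINE_M = 0.075    # projector-to-camera baseline in metres (assumed)

def depth_from_disparity(disparity_px: float) -> float:
    """Return depth in metres for a measured pattern shift (disparity) in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_PX * BASELINE_M / disparity_px

# A point whose projected IR pattern shifts by 29 px lies 1.5 m away
# under these assumed calibration constants.
print(depth_from_disparity(29.0))
```

Nearer objects shift the pattern more, so disparity falls off inversely with depth, which is why depth resolution degrades with range.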

Introduction to SLAM
SLAM is the simultaneous operation of localization and mapping: while mapping, landmark objects are recognized using the robot's odometry, and during localization the position of the robot is measured with respect to the objects in its surroundings [8]. Fig. 2 shows the SLAM parameters that describe the localization of the robot. While creating a map, the robot position is denoted X(k), z(k) denotes a landmark's location with respect to the robot, and p(k) denotes the landmark's location with respect to a global reference point. While updating the map, the robot is located using the landmarks and the data collected so far. The uncertainty (errors) involved in this process can be reduced using filters [9].
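The relation between X(k), z(k), and p(k) can be sketched in a few lines: given the robot pose and a range/bearing observation of a landmark, the landmark's global position follows from a simple frame transformation.

```python
import math

def landmark_global(x, y, theta, r, bearing):
    """Convert a range/bearing observation z(k) = (r, bearing), taken from
    robot pose X(k) = (x, y, theta), into the landmark's global-frame
    position p(k)."""
    return (x + r * math.cos(theta + bearing),
            y + r * math.sin(theta + bearing))

# Robot at (1, 2) facing along +x; a landmark is seen 2 m away,
# 90 degrees to the robot's left, so it lies at roughly (1, 4) globally.
print(landmark_global(1.0, 2.0, 0.0, 2.0, math.pi / 2))
```

A full SLAM filter additionally tracks the uncertainty of both X(k) and p(k); this transformation is only the geometric core of the measurement model.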
The gmapping, rtabmap, and cartographer packages are widely used for implementing SLAM; the cartographer package was developed by Google. In this project, we have used the gmapping package.

Simultaneous Localization and Mapping
• Mapping (gmapping): GMapping is a Rao-Blackwellized particle filter (RBPF) that efficiently learns grid maps from laser range data. Using this package we create a 2D grid map from the laser and position data collected by the mobile robot. The resulting map can be visualized using the Rviz (ROS Visualizer) package of ROS [10].
• Localization (amcl): Localization is achieved using Monte Carlo Localization, which uses particle filtering to localize the robot. We used the amcl (Adaptive Monte Carlo Localization) package of ROS, which transforms the Kinect sensor's data into the odometry frame and combines it with the robot's real-time position [10].
• Using the odometry data: The robot's odometry is obtained from IMU data and the motor encoders. It is essential for obtaining the robot's exact location in its surroundings, which in turn yields a precise map of the area.
• Interfacing the Tiva C LaunchPad: The Tiva C LaunchPad is an Arduino-like board that acts as a bridge between the on-board sensors and the processing computer. It acquires sensor data and controls the motors, and it is programmed using the Energia IDE, which provides a user-friendly programming environment [11]. We implemented a rocker-bogie mechanism as the hardware design; our objective is a robot that works on different landscapes such as rough terrain and plain surfaces, and that can overcome or climb over obstacles of a certain height.
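The encoder part of the odometry computation can be sketched with the standard differential-drive model. The parameter values below are illustrative, not our robot's measured ones, and in practice the MPU-6050 yaw reading would be fused with (and correct) the integrated heading:

```python
import math

# Assumed parameters for illustration; real values come from the encoder
# resolution and the measured wheel geometry of the robot.
TICKS_PER_REV = 360
WHEEL_RADIUS = 0.05   # metres
WHEEL_BASE = 0.30     # metres between left and right wheel contact points

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Dead-reckon a new (x, y, theta) from the encoder ticks accumulated
    since the last update, using the differential-drive model."""
    per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = left_ticks * per_tick
    d_right = right_ticks * per_tick
    d_center = (d_left + d_right) / 2.0          # distance moved by the body
    d_theta = (d_right - d_left) / WHEEL_BASE    # change in heading
    # Integrate along the arc, using the midpoint heading for less drift.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Equal tick counts on both wheels: the robot drives straight ahead
# by one wheel circumference (about 0.314 m here).
print(update_pose(0.0, 0.0, 0.0, 360, 360))
```

Because encoder integration drifts over time, the SLAM filter treats this pose only as a motion prior and corrects it against the Kinect's observations.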

Hardware Implementation
One of the biggest challenges was using pipes for our hardware implementation. We cut the pipes to the proper dimensions and assembled them with various hardware tools. We chose PVC pipe as the material for the robot because it is lightweight and easily available in the market.
Various hardware components such as motors, sensors, embedded boards, and batteries need to be integrated together. First we tested each component individually: the working of the motors, and the readings of sensors such as the ultrasonic and IMU sensors interfaced with the Tiva C LaunchPad. Kinect sensor calibration was carried out using the OpenNI package of ROS. Integrating all of these components together was the last step of the project.

Robot Model Simulation
The model simulation contains the Kinect sensor, motors, and actuators defined according to Gazebo syntax, so that Gazebo identifies the various sensors and they function accordingly.
A keyboard teleoperation (teleop) node runs in a terminal and sends velocity commands to the robot's wheels when the corresponding keys are pressed. On receiving a velocity command the robot moves accordingly in the simulated world, and the teleop node also allows the speed of motion and rotation to be increased or decreased [12].
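The teleop behaviour can be sketched as a key-to-velocity table. The bindings below are hypothetical stand-ins for those of a typical ROS teleop node, which publishes the chosen (linear, angular) pair as a geometry_msgs/Twist message on /cmd_vel:

```python
# Hypothetical key bindings, illustrating a teleop node's core logic.
# Each entry maps a key to (linear m/s, angular rad/s).
KEY_BINDINGS = {
    'i': (0.5, 0.0),    # forward
    ',': (-0.5, 0.0),   # backward
    'j': (0.0, 1.0),    # rotate left
    'l': (0.0, -1.0),   # rotate right
    'k': (0.0, 0.0),    # stop
}

def velocity_for_key(key, speed_scale=1.0):
    """Return the (linear, angular) command for a key press; the scale
    factor models the teleop node's speed increase/decrease keys."""
    linear, angular = KEY_BINDINGS.get(key, (0.0, 0.0))
    return linear * speed_scale, angular * speed_scale

# Doubling the scale doubles the forward speed commanded by 'i'.
print(velocity_for_key('i', speed_scale=2.0))
```

In the real system these values would be wrapped in a Twist message and published; the simulated wheels then turn at the commanded rates.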

Mapping
Rviz helps visualize the map the robot generates of an unknown world [13]. The generated map can be saved and later used for autonomous navigation. The gmapping package generates the map; the /scan topic of the laser-scan node and the /map topic of the gmapping node are used prominently.
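A toy version of the map update behind /scan and /map can be sketched as follows: each range reading marks as occupied the grid cell where its beam terminates. This is only a sketch with assumed grid parameters; gmapping additionally performs scan matching and maintains per-particle map hypotheses.

```python
import math

# Assumed grid parameters for the sketch.
GRID_SIZE = 20
RESOLUTION = 0.1  # metres per cell

def mark_hit(grid, x, y, theta, rng):
    """Mark the cell struck by a beam of length `rng` fired from the robot
    pose (x, y, theta); returns the (row, col) of that cell."""
    hx = x + rng * math.cos(theta)   # beam endpoint in world coordinates
    hy = y + rng * math.sin(theta)
    col = int(round(hx / RESOLUTION))
    row = int(round(hy / RESOLUTION))
    if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
        grid[row][col] = 1           # occupied
    return row, col

grid = [[0] * GRID_SIZE for _ in range(GRID_SIZE)]
# A 1 m beam along +x from the origin lands in cell (row 0, col 10).
print(mark_hit(grid, 0.0, 0.0, 0.0, 1.0))
```

A full occupancy-grid mapper would also lower the occupancy of the cells the beam passes through, and accumulate log-odds rather than writing 0/1 directly.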

Localization
Rviz also helps navigate the robot through a map generated earlier with the gmapping package. Navigating the robot without collisions is the main task, which is carried out through the amcl package: we set a goal for the robot, and it reaches the goal by planning its own path while avoiding the obstacles placed in between [14].

Future Scope
• An underwater implementation of a SLAM-based robot will help map areas underwater and study and collect data on marine life.
• An underground monitoring robot based on SLAM will help monitor and survey areas under the ground such as oil pipelines, tunnels, mining areas, and underground metro tracks.
• A hotel assistant robot could provide laundry or room service, or act as a waiter serving food to tables.
• Space robots could map unknown planets and monitor their surfaces.
• An unmanned aerial vehicle (UAV) can be used for surveillance around a specific area; using SLAM, it would be useful during wars for navigating enemy territory from the air.

Conclusion
We cannot fully conclude our work here, as SLAM is still an active area of research: many ROS developers continue to explore this field of robotics, and some areas have not yet been touched. In our project, we were able to assemble Drobo (short for Disaster Robot) and map its indoor surroundings. In the next stages of our research, we will extend its reach and its ability to map larger, outdoor areas, and we plan to extend path planning to uneven 3D terrain, since a disaster-struck area will rarely be a plain surface.
Simultaneous Localization and Mapping (SLAM) is a technology that builds a map of an unknown location while simultaneously keeping track of the robot's position within it. The Kinect sensor used here improves the precision of the mapped environment. This kind of robot is beneficial in many applications, and our work can serve as a reference for further study of SLAM. Despite its complex structure, this technology will be a leading technology in the near future.