On April 15, 1912, more than 1,500 of the 2,240 people on board lost their lives in the Titanic disaster. Had the accident happened today, many if not all of them might have been saved, thanks to recent advances in robotics technology.
One of the most interesting of these advances is the ability of a robotic system equipped with several sensors to build a map of an unknown environment while simultaneously locating itself within that map, a capability known as simultaneous localization and mapping (SLAM). The map is then used to plan the robot's motion and to avoid obstacles along its path. Had the Titanic been equipped with such technology, the iceberg that caused the disaster could have been detected and avoided well before the collision.
SLAM has found many applications in navigation, augmented reality, and autonomous vehicles such as self-driving cars and drones, as well as in indoor and outdoor delivery robots and intelligent warehousing. In this thesis, we propose to study, design, and implement a SLAM algorithm using our state-of-the-art Intel RealSense visual and light detection and ranging (LiDAR) sensors, with a mobile robot as a test bed. The goal is to develop an algorithm that enables a robotic system to enter an area hazardous to humans, for example a mining site, and perform tasks of interest such as acquiring relevant data about the environment for post-processing. The robot should be capable of interacting with the environment effectively and should act as a remote pair of mobile eyes and ears, providing the operator with its location and pose and a 2D/3D map of the environment.
Some of the most common challenges in SLAM are the accumulation of localization errors over time, caused by inaccurate pose estimation as the robot moves from its start location to its goal, and the high computational cost of image and point-cloud processing and optimization. Accumulated errors can produce a significant deviation from the true trajectory, and if image and point-cloud processing is not performed at a sufficiently high frequency, localization becomes inaccurate. This in turn limits how often the map can be updated, degrading the overall efficiency of the SLAM algorithm.
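The error-accumulation problem above can be illustrated with a minimal dead-reckoning sketch (all parameter values are hypothetical, chosen only for illustration): because each noisy odometry increment is integrated on top of the previous pose estimate, small heading errors compound into a large position error over time.

```python
import math
import random

def dead_reckoning_error(steps, step_len=0.1, heading_noise=0.02, seed=0):
    """Drive straight along x while integrating noisy odometry; return the
    Euclidean error between the estimated and true final positions."""
    rng = random.Random(seed)
    x, y, theta = 0.0, 0.0, 0.0   # estimated pose (x, y, heading)
    true_x = 0.0                  # ground truth: the robot moves along x only
    for _ in range(steps):
        theta += rng.gauss(0.0, heading_noise)  # per-step heading drift
        x += step_len * math.cos(theta)
        y += step_len * math.sin(theta)
        true_x += step_len
    return math.hypot(x - true_x, y)

# The error after many steps is typically far larger than after a few:
# drift accumulates rather than averaging out.
short_run = dead_reckoning_error(10)
long_run = dead_reckoning_error(1000)
```

SLAM counters exactly this drift by re-observing landmarks and correcting the accumulated pose error, rather than trusting odometry alone.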
Tentative Work Plan
In the course of this thesis, the following concrete tasks will be focused on:
- study the concept of visual- and LiDAR-based SLAM as well as its application to surveying an unknown environment.
- 2D/3D mapping in both static and dynamic environments.
- development of a sensor fusion algorithm for localization and multi-object tracking in the environment.
- use of the SLAM algorithm for motion planning and control of the robot using a probabilistic approach.
- real-time experimentation, simulation (MATLAB, ROS with Gazebo and RViz, C/C++, Python, etc.), and validation of the algorithm.
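As a toy illustration of the 2D mapping task in the plan above, one common representation is a log-odds occupancy grid (the increment values below are hypothetical): each cell stores the log-odds of being occupied, a range reading raises the cell where the beam hits, and lowers the cells the beam passes through.

```python
import math

L_OCC, L_FREE = 0.85, -0.4  # hypothetical log-odds increments: hit / pass-through

def update_grid(grid, beam_cells, hit_cell):
    """Apply one range measurement to a sparse log-odds occupancy grid."""
    for cell in beam_cells:                      # cells traversed by the beam
        grid[cell] = grid.get(cell, 0.0) + L_FREE
    grid[hit_cell] = grid.get(hit_cell, 0.0) + L_OCC
    return grid

def occupancy_prob(log_odds):
    """Convert log-odds back to an occupancy probability in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

grid = {}
# Simulated beam along one grid row: passes through (0,0)..(3,0), hits (4,0).
update_grid(grid, [(x, 0) for x in range(4)], (4, 0))
```

After this single update, the hit cell's occupancy probability is above 0.5 and the traversed cells' probabilities fall below 0.5; repeated measurements sharpen the map, which is why the update frequency discussed earlier matters.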
About the Laboratory
- additive manufacturing unit (3D and laser printing technologies).
- metallic production workshop.
- robotics unit (mobile robots, a robotic manipulator, a robotic hand, unmanned aerial vehicles (UAVs)).
- sensors unit (Intel RealSense LiDAR, depth, and tracking cameras, an Inertial Measurement Unit (IMU), OptiTrack cameras, etc.).
- electronics and embedded systems unit (Arduino, Raspberry Pi, etc.).