B.Sc. or M.Sc. Project/Thesis: Mobile robot teleoperation based on human finger direction and vision

Supervisors: Univ.-Prof. Dr. Elmar Rückert and Linus Nwankwo, M.Sc.

Theoretical difficulty: mid
Practical difficulty: mid

Naturally, humans have the ability to give directions (front, back, right, left, etc.) by merely pointing a finger in the direction in question. This can be done effortlessly, without saying a word. However, training a mobile robot to understand such gestures is still an open problem today.
In the context of this thesis, we propose finger-pose-based mobile robot navigation to maximize natural human-robot interaction. This can be achieved by observing the Cartesian pose of the human's fingers with an RGB-D camera and translating it into linear and angular velocity commands for the robot. For this, we will leverage computer vision algorithms and the ROS framework to achieve the objectives.
The prerequisites for this project are a basic understanding of Python or C++ programming, OpenCV, and ROS.
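As a rough illustration of the intended pose-to-velocity translation, a minimal Python sketch might look as follows (the camera-frame convention, gain values, and function name are illustrative assumptions, not part of the project specification):

```python
import numpy as np

def finger_to_velocity(direction, v_gain=0.5, w_gain=1.0):
    """Map a 3D finger-direction vector (camera frame) to (linear, angular)
    velocity commands for a differential-drive robot.

    Assumes a common RGB-D camera convention: x points right, z points
    forward. The gains are illustrative tuning parameters.
    """
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)              # normalize the pointing direction
    forward, lateral = d[2], d[0]
    linear = v_gain * forward              # point forward -> drive forward
    angular = -w_gain * np.arctan2(lateral, forward)  # point right -> turn right
    return float(linear), float(angular)

# Pointing straight ahead yields pure forward motion:
v, w = finger_to_velocity([0.0, 0.0, 1.0])
```

In a ROS implementation, the returned pair would be published as the `linear.x` and `angular.z` fields of a `geometry_msgs/Twist` message.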

Tentative work plan

In the course of this thesis, the following concrete tasks will be focused on:

  • study the concept of visual navigation of mobile robots
  • develop a hand detection and tracking algorithm in Python or C++
  • apply the developed algorithm to navigate a simulated mobile robot
  • real-time experimentation
  • thesis writing


  1. Shuang Li, Jiaxi Jiang, Philipp Ruppel, Hongzhuo Liang, Xiaojian Ma, Norman Hendrich, Fuchun Sun, Jianwei Zhang, “A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU”, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 25–29, 2020, Las Vegas, NV, USA.

Innovative Research Discussion

Meeting notes on the 23rd of March 2022

Location: Chair of CPS

Date & Time: 23rd March 2022, 12:00 pm to 1:00 pm

Participants: Univ.-Prof. Dr. Elmar Rueckert, Linus Nwankwo, M.Sc.



  1. Discussion of the research progress
  2. Discussion of related literature for 2D Lidar-based SLAM and path planning for improved autonomous navigation

Topic 1: SLAM + Path Planning Algorithms

  1. Compare LiDAR-based SLAM with RGB-D-based SLAM
  2. Select the appropriate algorithm to build the SLAM system
  3. Evaluate the performances of the chosen SLAM + path planning algorithm in terms of:
    • computational efficiency
    • collision avoidance in a dynamic environment.

Topic 2: Real-time Implementation

  1. Compare our two mobile bases
  2. Implement the chosen SLAM algorithm to solve the problem of loop closure that we are currently facing.
  3.  Send the robot to a specific goal location within the laboratory

Future Steps

Toward improved autonomous navigation in real-time with any of our mobile bases.

Next Meeting

Yet to be defined

B.Sc. Thesis: Fritze Clemens on Mobile Robot Teleoperation in ROS for Basic SLAM Application

Supervisors: Linus Nwankwo, M.Sc.;
Univ.-Prof. Dr. Elmar Rückert
Start date: 10th January 2022


Theoretical difficulty: mid
Practical difficulty: mid


Nowadays, robots used to survey indoor and outdoor environments are operated in one of three modes: fully autonomous mode, where the robot makes decisions by itself and has complete control of its actions; semi-autonomous mode, where the robot's decisions and actions are controlled both manually (by a human) and autonomously (by the robot); and full manual mode, where the robot's actions and decisions are controlled entirely by a human. In full manual mode, the robot can be operated using a teach pendant, computer keyboard, joystick, mobile device, etc.

Although the Robot Operating System (ROS) provides roboticists with easy and efficient tools to teleoperate or command robots that are compatible with the ROS framework in both hardware and software, there is still a need for an alternative approach that encourages non-expert users to interact with robots. The hand-gesture approach not only enables users to teleoperate the robot by demonstration but also makes the interaction between humans and robots more user-friendly.

In the context of this thesis, we propose to use human hand gestures to teleoperate our mobile robot using embedded computers and inertial measurement sensors. First, we will teleoperate the robot on the ROS platform and then via hand gestures, leveraging the frameworks developed in [1] and [2].

Tentative Work Plan

To achieve our aim, the following concrete tasks will be focused on:

  • Design and generate the Unified Robot Description Format (URDF) for the mobile robot
  • Simulate the mobile robot within the ROS framework (Gazebo, Rviz)
  • Set up the interfaces and serial connection between ROS and the robot control devices
  • Develop an algorithm to teleoperate the robot in the ROS using hand gesture
  • Use the algorithm to perform Simultaneous Localization and Mapping (SLAM) for indoor applications (simulation only)
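The SLAM step in the plan above could be prototyped outside ROS first. The following minimal sketch updates a log-odds occupancy grid from a single simulated range reading (the cell size, sensor-model values, and function name are illustrative assumptions):

```python
import math

def update_grid(grid, pose, bearing, rng, cell=0.1, l_occ=0.85, l_free=-0.4):
    """Update a log-odds occupancy grid with a single range reading.

    grid:    dict mapping (ix, iy) cell indices to log-odds values
    pose:    (x, y) sensor position in metres
    bearing: beam direction in radians; rng: measured range in metres
    Cells along the beam are marked free, the end cell occupied.
    """
    x, y = pose
    steps = round(rng / cell)
    for i in range(steps + 1):
        px = x + i * cell * math.cos(bearing)
        py = y + i * cell * math.sin(bearing)
        idx = (round(px / cell), round(py / cell))
        delta = l_occ if i == steps else l_free   # hit cell vs. free space
        grid[idx] = grid.get(idx, 0.0) + delta
    return grid

# One beam of 1 m along the x-axis from the origin:
grid = update_grid({}, (0.0, 0.0), 0.0, 1.0)
```

A real implementation would trace beams with Bresenham's algorithm over a fixed-size array, but the log-odds update rule is the same.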


[1] Nils Rottmann et al., 2020.

[2] Wei Zhang, Hongtai Cheng, Liang Zhao, Lina Hao, Manli Tao and Chaoqun Xiang, “A Gesture-based Teleoperation System for Compliant Robot Motion”, Appl. Sci. 2019, 9(24), 5290.

Thesis Document

Innovative Research Discussion

Meeting on the 9th of December 2021

Location: Chair of CPS

Date & Time: 9th Dec. 2021, 1:00 pm to 1:30 pm

Participants: Univ.-Prof. Dr. Elmar Rueckert, Linus Nwankwo, M.Sc.



  1. Discussion of the research progress
  2. Discussion of related literature on robot intention communication and human-robot interaction

Topic 1: Robot Intention Communication and Human-Robot Interaction

  1. Study the given literature to identify the areas of possible improvement
  2. Investigate the integration of an LED projector for communicating the robot's intention to humans
  3. Implement a visual signal algorithm to communicate the robot’s intention and direction of motion to humans

Topic 2: Probabilistic SLAM

  1. Review the adaptive Monte Carlo localization (AMCL) approach for tracking the pose of our robot against the map of the environment while mapping.
  2. Design a SLAM algorithm based on a probabilistic framework and evaluate its performance in terms of loop closure detection, accurate pose estimation, and computational cost.
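The particle-filter core behind AMCL can be sketched in a few lines. The example below performs one importance-weighting and resampling step on a simplified 1D pose (the Gaussian measurement model and its noise value are illustrative assumptions):

```python
import math
import random

def pf_update(particles, measurement, sigma=0.2):
    """One particle-filter step: weight each particle by the Gaussian
    likelihood of the measurement, then resample in proportion to weight."""
    weights = [math.exp(-0.5 * ((p - measurement) / sigma) ** 2)
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
# 500 pose hypotheses spread uniformly over a 10 m corridor:
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
# After one measurement at 3.0 m, the cloud collapses around that pose:
particles = pf_update(particles, measurement=3.0)
```

AMCL additionally adapts the number of particles and works on full (x, y, yaw) poses against an occupancy grid, but the weight-and-resample cycle is the same.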

Future Steps

Integrate a deep learning approach with a Faster R-CNN model to:

  • detect, for example, whether the person interacting with the robot is wearing a face mask
  • classify whether the face mask worn is of a recommended type or not

Next Meeting

Yet to be scheduled

Innovative Research Discussion

Meeting on the 13th of October 2021

Location: Chair of CPS

Date & Time: 13th October 2021, 1:00 pm to 2:00 pm

Participants: Univ.-Prof. Dr. Elmar Rueckert, Linus Nwankwo, M.Sc.



  1. Review of the previous meeting action points
  2. Discussion of the research progress and possible improvement

Topic 1: Review of the SLAM state-of-the-art

  1. Review the current literature to identify the areas of possible improvement
  2. Investigate the possible integration of RGB-D sensors with thermal sensors for indoor or outdoor SLAM
  3. Evaluate the impact of semantic segmentation and pose estimation on the quality of the map in dynamic environments

Topic 2: Existing Algorithms Performance Evaluation

  1. Implement a loop closure trigger for possible use with any of the SLAM algorithms
  2. Implement the following SLAM algorithms and evaluate their performance in terms of optimality, accurate pose estimation, computational cost, etc.:
  • RTABMap
  • Hector SLAM
  • GMapping
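A common quantitative basis for such a comparison is the absolute trajectory error (ATE) between each algorithm's estimated path and the ground truth. A minimal RMSE computation might look like this (the trajectories are made-up numbers for illustration):

```python
import math

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error over matched 2D poses."""
    assert len(estimated) == len(ground_truth)
    sq = [(ex - gx) ** 2 + (ey - gy) ** 2
          for (ex, ey), (gx, gy) in zip(estimated, ground_truth)]
    return math.sqrt(sum(sq) / len(sq))

# Illustrative trajectories: the estimate is offset by 0.1 m in x throughout
gt  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
est = [(0.1, 0.0), (1.1, 0.0), (2.1, 0.0)]
err = ate_rmse(est, gt)
```

In a full evaluation the estimated trajectory would first be time-aligned and rigidly registered to the ground truth before computing the error.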

Next Steps

Implement our own SLAM algorithm robust against:

  • Odometry loss
  • Variation in weather and illumination
  • Weak dynamic-object detection

Next Meeting

Yet to be scheduled

B.Sc. or M.Sc. Thesis/Project: Simultaneous Localization and Mapping (SLAM) from RGB-D sensors on an RMP 220 Lite robot.

Supervisors: Linus Nwankwo, M.Sc.;
Univ.-Prof. Dr. Elmar Rückert
Start date: ASAP, e.g., 1st of October 2021

Theoretical difficulty: low
Practical difficulty: high


On April 15, 1912, more than 1,500 of the 2,240 people on board lost their lives in the Titanic disaster [4]. If the disaster happened today, many, if not all, of them could have been saved thanks to recent advances in robotics technology.

One of the most interesting aspects of these advances is the ability of a robotic system equipped with several sensors to build a map of an unknown environment and locate itself on that map at the same time. This is called simultaneous localization and mapping (SLAM). The map information is used to plan the robot's motion and avoid obstacles on its path. If the Titanic had been equipped with these technologies, the iceberg that caused the disaster would have been detected and avoided well before the collision.


SLAM has found many applications not only in navigation, augmented reality, and autonomous vehicles (e.g., self-driving cars and drones), but also in indoor & outdoor delivery robots, intelligent warehousing, etc. In the context of this thesis, we propose to study, design, and implement a SLAM algorithm using our state-of-the-art Intel Realsense visual and light detection & ranging (LiDAR) sensors, with a mobile robot as a test-bed. The idea is to develop an algorithm that enables a robotic system to enter an area hazardous to humans, for example a mining site, and perform tasks of interest such as acquiring relevant data of the environment for post-processing. The robot should be capable of interacting with the environment effectively and also act as a remote pair of mobile eyes and ears, providing the operator with remote information about its location, position, and a 2D/3D map of the environment.

Some of the most common challenges in SLAM are the accumulation of errors over time due to inaccurate pose estimation (localization errors) as the robot moves from the start location to the goal location, and the high computational cost of image and point-cloud processing and optimization [2]. These challenges can cause significant deviations from the true values and lead to inaccurate localization if the images and point clouds are not processed at a very high frequency [3]. This would also reduce the frequency with which the map is updated and hence degrade the overall efficiency of the SLAM algorithm.
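The error-accumulation problem can be made concrete with a short dead-reckoning simulation: a small constant bias in the measured heading grows into a large position error over time (the bias and step values below are illustrative assumptions):

```python
import math

def dead_reckon(steps, step_len=0.1, heading_bias=0.01):
    """Integrate odometry that carries a constant heading bias and return
    the position error relative to the true straight-line path."""
    x = y = theta = 0.0
    errors = []
    for k in range(1, steps + 1):
        theta += heading_bias          # the bias accumulates every step
        x += step_len * math.cos(theta)
        y += step_len * math.sin(theta)
        true_x = k * step_len          # ground truth: straight line along x
        errors.append(math.hypot(x - true_x, y))
    return errors

# Over 100 steps the error grows steadily instead of averaging out:
errors = dead_reckon(100)
```

Loop closure detection is what allows a SLAM back-end to correct exactly this kind of accumulated drift.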

Tentative Work Plan

In the course of this thesis, the following concrete tasks will be focused on:

  • study the concept of visual or LiDAR-based SLAM as well as its application in the survey of an unknown environment
  • 2D/3D mapping in both static and dynamic environments
  • development of a sensor fusion algorithm for localization and multi-object tracking in the environment
  • use of the SLAM algorithm for motion planning and control of the robot through a probabilistic approach
  • real-time experimentation, simulation (MATLAB, ROS & Gazebo, Rviz, C/C++, Python, etc.) and validation of the algorithm

About the Laboratory

The Robotics & AI-Lab of the Chair of Cyber Physical Systems is an innovative research lab focusing on robotics, artificial intelligence, machine and deep learning, embedded smart sensing systems, and computational models [1]. To support its research and training activities, the laboratory currently has:
  • additive manufacturing unit (3D and laser printing technologies).
  • metallic production workshop.
  • robotics unit (mobile robots, robotic manipulator, robotic hand, unmanned aerial vehicles (UAV))
  • sensors unit (Intel Realsense (LiDAR, depth and tracking cameras), Inertial Measurement Unit (IMU), OptiTrack cameras etc.)
  • electronics and embedded systems unit (Arduino, Raspberry Pi, etc.)

Expression of Interest

Students interested in carrying out their Master of Science (M.Sc.) or Bachelor of Science (B.Sc.) thesis on the above topic should immediately contact or visit the Chair of Cyber Physical Systems.

Phone: +43 3842 402 – 1901 

E-mail: click here

Map: click here



[2] V. Barrile, G. Candela, A. Fotia, ‘Point cloud segmentation using image processing techniques for structural analysis’, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-2/W11, 2019.

[3] Łukasz Sobczak, Katarzyna Filus, Adam Domanski and Joanna Domanska, ‘LiDAR Point Cloud Generation for SLAM Algorithm Evaluation’, Sensors 2021, 21(10), 3313.


Innovative Research Discussion

Meeting on the 18th of August 2021

Location: Chair of CPS

Date & Time: 18th Aug. 2021, 1:00 pm to 2:00 pm

Participants: Univ.-Prof. Dr. Elmar Rueckert, Linus Nwankwo, M.Sc.



  1. Review of the previous meeting action points
  2. Discussion of the first research topic
  3. Robot for Control, SLAM, Navigation & Motion Planning Research

Topic 1: Design and Implementation of an Environment Survey Robot


Safe human-robot interaction: The robot must project its direction of motion in the working environment (enabling humans to note the direction in which the robot is moving).

Navigation and Motion Planning: Move around the environment with smooth turning (no jerking) and gradual stopping motions. At the same time, the robot must detect and avoid obstacles in the workspace.

Localization and Mapping: Update the map of the unknown environment while keeping track of the robot's current location with respect to that map.

Topic 2: Robots into consideration for the tasks

  1. Segway RMP Mobile Platform – RMP Lite 220/400 
  2. Segway RMP Mobile Platform – RMP Smart 260 – request for technical details sent (awaiting reply)

Next Steps

  1. Investigate interconnecting the Realsense cameras through a Kalman filter
  2. Experimentation with the Lego EV3

Next Meeting

The next meeting was scheduled for Thursday, 26th August 2021, 11:00 am – 12:00 pm.

Meeting Notes

Meeting on the 13th of August 2021

Location: Chair of CPS

Date & Time: 13th Aug. 2021, 1:00 pm to 2:00 pm

Participants: Univ.-Prof. Dr. Elmar Rueckert, Linus Nwankwo, M.Sc.



  1. Organisational progress
  2. Feedback to the research talk presentation
  3. Topics of promising future research direction.
  4. Next steps.
  5. Date of the next meeting.

Topic 1: Organisational Update

  1. Resident permit application – still in progress (awaiting appointment)

Topic 2: Feedback on the Research Talk Presentation

  1. Use of the University of Leoben slide template is required for official presentations.
  2. Social media platforms (Facebook, LinkedIn, Instagram, etc.) should not appear as footers or headers in any presentation.
  3. Microsoft Office tools, LaTeX, etc. can be used for document preparation.

Topic 3: Topics of Promising Future Research Direction

  1. Linear Quadratic Regulator (LQR) optimization using reinforcement learning: In this topic, we propose to study how reinforcement learning techniques could be used to optimize LQR controllers.
  2. Design and implementation of an Environmental Survey Robot (to be called MoLBot): In this topic, we propose to develop a telepresence robot to be used as a demonstrative system for testing control designs, navigation and motion planning algorithms, with the capability of interacting with the environment (providing one with a remote pair of mobile eyes and ears for remote presence at any location). Interactive and sensing features should include a touch screen, a LiDAR sensor, an RGB-D camera, or a laser scanner, etc.
  3. Development of an Innovative Stair-Climbing Robot: This project proposes to develop a robot capable of climbing floor steps. The type of wheels to adapt the robot for step climbing has not yet been decided.
  4. Elevator Caller Robot: This project aims to develop a robot that can autonomously operate the university elevator. The features should at least include ROS and Bluetooth with secure encryption.
  5. EV3 Lego Project: In this project, we propose to assemble the existing EV3 Lego robot parts and train students on environmental perception, navigation and motion planning.
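For topic 1, the classical baseline that a reinforcement-learning approach would be compared against is the discrete-time LQR gain obtained from the Riccati recursion. A minimal sketch for a double-integrator system might look as follows (the system matrices, cost weights, and function name are illustrative assumptions):

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Compute the discrete-time LQR feedback gain K (for u = -K x) by
    iterating the Riccati recursion to a fixed point."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Illustrative double-integrator system with time step dt:
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))
```

In practice one would verify the fixed-point solution against, e.g., SciPy's `solve_discrete_are` before using it as a baseline for the learned controller.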

Next Steps

  1. Search for adaptable robot wheels for the implementation of topic 3 above
  2. Search for a robot base for the implementation of topic 2 above

Next Meeting

The next meeting was scheduled for Wednesday, 18th August 2021, 11:00 am – 12:00 pm.