
M.Sc. Thesis: Christopher Steer on Performance Evaluation of Map-based and Mapless Mobile Navigation (M^3N) in Crowded Dynamic Environment

Supervisor: Linus Nwankwo, M.Sc.;
Univ.-Prof. Dr Elmar Rückert
Start date: 5th September 2022

 

Theoretical difficulty: mid
Practical difficulty: mid

Abstract

For over two decades now, simultaneous localisation and mapping (SLAM) has been widely used to achieve autonomous navigation objectives. The robot is required to build a map of its work environment, given an estimate of its state, its sensor observations and a sequence of controls, and to simultaneously localise itself relative to that map. In recent years, however, mapless approaches based on deep reinforcement learning have been proposed. Here, the agent (robot) learns a navigation policy given only sensor data and a sequence of controls, without a prior map of the task environment. In the context of this thesis, we evaluate the performance of both approaches in a crowded dynamic environment using our open-source open-shuttle mobile robot.
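
To make the mapless setting more concrete, the sketch below shows what such a learned navigation policy could look like: a small network (a hypothetical architecture in PyTorch, not the policy that will actually be trained in this thesis) that maps a laser scan and the relative goal directly to velocity commands.

```python
import torch
import torch.nn as nn

class MaplessNavPolicy(nn.Module):
    """Toy mapless policy: (laser scan, relative goal) -> (v, omega)."""
    def __init__(self, n_beams=360, goal_dim=2, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_beams + goal_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),  # bounded, normalised commands
        )

    def forward(self, scan, goal):
        return self.net(torch.cat([scan, goal], dim=-1))

# One forward pass with random data (batch of a single observation).
policy = MaplessNavPolicy()
scan = torch.rand(1, 360)               # normalised range readings
goal = torch.tensor([[1.5, 0.3]])       # distance and bearing to the goal
v_cmd = policy(scan, goal)              # linear and angular velocity in [-1, 1]
```

In a deep-reinforcement-learning setup, the weights of such a network would be optimised from reward signals (reaching the goal, avoiding collisions) rather than from a prior map.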

Tentative Work Plan

To achieve our objective, the following concrete tasks will be focused on:

  • Literature research and a general understanding of the field
    • Mobile robotics and industrial use cases
    • Overview of map-based autonomous navigation (SLAM & path planning)
    • Overview of mapless autonomous navigation with deep reinforcement learning
  • Set up and familiarise yourself with the simulation environment
    • Build the robot model (URDF) for the simulation (optional if you wish to use the existing one)
    • Set up the ROS framework for the simulation (Gazebo, RViz)
    • Recommended programming tools: C++, Python, MATLAB
  • Intermediate presentation:
    • Presenting the results of the literature study
    • Possibility to ask questions about the theoretical background
    • Detailed planning of the next steps
  • Define key performance/quality metrics for the evaluation (see the sketch after this work plan):
    • Time to reach the desired goal
    • Average/mean speed
    • Path smoothness
    • Obstacle avoidance/distance to obstacles
    • Computational requirements
    • Success rate, etc.
  • Assessment and execution:
    • Compare the results of the map-based and mapless approaches on the evaluation metrics defined above.
  • Validation:
    • Validate both approaches in a real-world scenario using our open-source open-shuttle mobile robot.
  • Furthermore, the following optional goals are planned:
    • Develop a hybrid approach combining both the map-based and the mapless methods.
  • M.Sc. thesis writing
  • Research paper writing (optional)
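
The following minimal Python sketch (the one referenced in the evaluation-metrics item above) illustrates how some of these metrics could be computed from a recorded 2D trajectory and the corresponding laser scans. The helper names and the synthetic data are assumptions for illustration only, not the final evaluation code.

```python
import numpy as np

def path_length(xy):
    """Total travelled distance of an (N, 2) trajectory in metres."""
    return float(np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1)))

def path_smoothness(xy):
    """Mean absolute heading change between consecutive segments (rad)."""
    seg = np.diff(xy, axis=0)
    headings = np.unwrap(np.arctan2(seg[:, 1], seg[:, 0]))
    return float(np.mean(np.abs(np.diff(headings))))

def min_obstacle_distance(scans):
    """Closest range reading observed over the whole run (m)."""
    return float(np.min(scans))

def success_rate(goals_reached, n_trials):
    """Fraction of trials in which the robot reached the goal."""
    return goals_reached / n_trials

# Example with synthetic data: a slightly noisy, roughly straight path.
rng = np.random.default_rng(0)
xy = np.cumsum(0.1 + 0.01 * rng.standard_normal((50, 2)), axis=0)
scans = rng.uniform(0.4, 5.0, size=(50, 360))   # fake laser ranges per step
print(path_length(xy), path_smoothness(xy), min_obstacle_distance(scans))
print(success_rate(goals_reached=18, n_trials=20))
```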

 

Related Work

[1] Han Hu, Kaicheng Zhang, Aaron Hao Tan, Michael Ruan, Christopher Agia and Goldie Nejat, “Sim-to-Real Pipeline for Deep Reinforcement Learning for Autonomous Robot Navigation in Cluttered Rough Terrain,” IEEE Robotics and Automation Letters, vol. 6, no. 4, October 2021.

[2] Md. A. K. Niloy, Anika Shama, Ripon K. Chakrabortty, Michael J. Ryan, Faisal R. Badal, Z. Tasneem, Md H. Ahamed and S. I. Mo, “Critical Design and Control Issues of Indoor Autonomous Mobile Robots: A Review,” IEEE Access, vol. 9, February 2021.

[3] Ning Wang, Yabiao Wang, Yuming Zhao, Yong Wang and Zhigang Li, “Sim-to-Real: Mapless Navigation for USVs Using Deep Reinforcement Learning,” Journal of Marine Science and Engineering, vol. 10, 895, 2022. https://doi.org/10.3390/jmse10070895

M.Sc. Thesis: Daniel Wagermaier on Improving fundamental metallurgical modelling using data-driven approaches

Supervisor: Univ.-Prof. Dr Elmar Rückert, Qoncept GmbH
Start date: 1st of August 2022

Theoretical difficulty: mid
Practical difficulty: low

Introduction

As direct observations and permanent measurements are not possible during steelmaking processes, modelling has become a powerful tool. Fundamental-based metallurgical modelling is well established and has demonstrated its capabilities in a wide range of applications in modern steelmaking. Following the general trend, data-driven approaches are increasingly used in various areas of metallurgical modelling, in addition to these classical fundamental approaches. Depending on the field of application, fundamental-based and data-driven models each have their own advantages and disadvantages.

The overall goal of the present thesis is to combine both models in order to leverage the strengths of the two different methods. The first step is to apply several data-driven models and compare their performance against the metallurgical model. In the second phase, various ways of combining data-driven models with the metallurgical model should be investigated. For example, this could be done via a data-driven optimization of the metallurgical model's tuning parameters, by replacing parts of it with data-driven models, or by adding a data-driven residual term to the metallurgical model. Based on these findings, the third part of the thesis should focus on online learning and on methods to prevent the model from drifting over time. The fourth and last part of the thesis should investigate ways of detecting errors in new data. While points one and two are the main focus of the thesis, points three and four are considered optional.
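
As a concrete illustration of the residual idea mentioned above, the sketch below augments a stand-in “fundamental” model with a data-driven residual term learned with scikit-learn, so that the hybrid prediction is the physics-based prediction plus a learned correction. The toy model and the synthetic data are assumptions for illustration only; they do not represent the actual metallurgical model or process data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fundamental_model(x):
    # Stand-in for the physics-based metallurgical model.
    return 2.0 * x[:, 0] + 0.5 * np.sqrt(np.abs(x[:, 1]))

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 2))                       # toy process features
y_true = fundamental_model(X) + 0.03 * X[:, 0] * X[:, 1]    # unmodelled effect
y_meas = y_true + rng.normal(0, 0.1, size=500)              # noisy measurements

# Learn the residual between measurements and the fundamental model.
residual = y_meas - fundamental_model(X)
residual_model = GradientBoostingRegressor().fit(X, residual)

# Hybrid prediction: fundamental model plus learned residual correction.
y_hybrid = fundamental_model(X) + residual_model.predict(X)
rmse = lambda y_hat: np.sqrt(np.mean((y_hat - y_meas) ** 2))
print("RMSE fundamental:", rmse(fundamental_model(X)))
print("RMSE hybrid:     ", rmse(y_hybrid))
```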

Tentative Work Plan

The following concrete tasks will be focused on:

  • Literature research.
  • Training of different data-driven models in Python.
  • Performance comparison between data-driven models and the metallurgical model (see the comparison sketch after this list).
  • Combination of selected data-driven models and the metallurgical model in Python.
  • (Optional) Investigate different ways for online learning and live performance evaluation.
  • (Optional) Anomaly detection in new data.
  • Thesis writing.
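
The sketch below (referenced from the performance-comparison item above) shows one way the data-driven models could be benchmarked with cross-validation. The regressors, features and targets are placeholders; in the thesis, the same procedure would be run on real process data and compared against the metallurgical model's predictions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(500, 4))                         # toy process features
y = X @ np.array([1.0, -0.5, 0.2, 0.0]) + rng.normal(0, 0.2, 500)

models = {
    "ridge": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    print(f"{name:18s} RMSE = {-scores.mean():.3f} +/- {scores.std():.3f}")
```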

M.Sc. Thesis: Benjamin Schödinger on A Framework for Learning Vision and Tactile Correlation

Supervisor: Vedant Dave, M.Sc.; Univ.-Prof. Dr Elmar Rückert
Start date: 1st May 2022

Theoretical difficulty: Mid
Practical difficulty: Mid

Abstract

Tactile perception is one of the basic human senses and is used at almost every instant. Through vision alone, we can predict how an object will feel even before touching it, and we make such predictions even for novel objects we have never encountered. The goal of this project is to predict the tactile response that would be experienced if a given grasp were performed on an object. This is achieved by extracting features from the visual data and the tactile information and then learning the mapping between those features.

We use an Intel RealSense D435i depth camera to capture images of the objects and a Seed RH8D hand with tactile sensors to capture the tactile data in real time (15-dimensional data). The main objective is to generalise to novel objects that share feature representations with previously seen objects.
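
To make the vision-to-tactile mapping concrete, the sketch below shows one possible (assumed) architecture: a small convolutional encoder over an RGB-D crop of the object followed by a regression head that predicts the 15-dimensional tactile response. It is a minimal illustration, not the framework to be developed in this thesis.

```python
import torch
import torch.nn as nn

class VisionToTactile(nn.Module):
    """Toy model: RGB-D image crop -> predicted 15-D tactile response."""
    def __init__(self, tactile_dim=15):
        super().__init__()
        # Small convolutional encoder over a 4-channel (RGB + depth) input.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Regression head mapping visual features to the tactile reading.
        self.head = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                                  nn.Linear(64, tactile_dim))

    def forward(self, rgbd):
        return self.head(self.encoder(rgbd))

model = VisionToTactile()
rgbd = torch.rand(1, 4, 128, 128)   # RGB-D crop (e.g. from the D435i)
pred = model(rgbd)                  # predicted 15-D tactile response
loss = nn.functional.mse_loss(pred, torch.rand(1, 15))  # target would be real tactile data
```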

Plan

Related Work

[1] B. S. Zapata-Impata, P. Gil, Y. Mezouar and F. Torres, “Generation of Tactile Data From 3D Vision and Target Robotic Grasps,” in IEEE Transactions on Haptics, vol. 14, no. 1, pp. 57-67, 1 Jan.-March 2021, doi: 10.1109/TOH.2020.3011899.

[2] Z. Abderrahmane, G. Ganesh, A. Crosnier and A. Cherubini, “A Deep Learning Framework for Tactile Recognition of Known as Well as Novel Objects,” in IEEE Transactions on Industrial Informatics, vol. 16, no. 1, pp. 423-432, Jan. 2020, doi: 10.1109/TII.2019.2898264.