
Integrated CPS Project or B.Sc. Thesis: Mobile Navigation via micro-ROS

Supervisors:

Start date: October 2022

 

Qualifications

  • Interest in controlling and simulating mobile robotics
  • Interest in programming in Python and ROS or ROS 2
 
Keywords: Mobile robot control, robot operating system (ROS), ESP32

Description

The goal of this project or thesis is to develop a control and sensing interface for our mobile robot “RMP220”. The RMP220 has two powerful brushless motors equipped with two magnetic encoders.

In this project, you will learn how to read the sensor values and how to control the motors via micro-ROS on an ESP32 controller.

Links:

 

Note: This project is also offered as an internship position.

https://www.youtube.com/watch?v=-MfNrxHXwow

Single Person Project or Team Work

You may work on the project alone or in teams of up to 4 persons.

For team work, the goals will be extended to controlling the robot via ROS 2 and simulating it in Gazebo or RViz.
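To give a flavor of the control task, the step from a commanded body velocity to the two wheel speeds of a differential-drive robot like the RMP220 can be sketched as follows (a minimal sketch; the wheel radius and track width are illustrative placeholders, not RMP220 specifications):

```python
def body_to_wheel_speeds(v, w, wheel_radius=0.1, track_width=0.5):
    """Convert linear velocity v [m/s] and angular velocity w [rad/s]
    into left/right wheel angular speeds [rad/s]."""
    v_left = v - w * track_width / 2.0   # left wheel linear speed [m/s]
    v_right = v + w * track_width / 2.0  # right wheel linear speed [m/s]
    return v_left / wheel_radius, v_right / wheel_radius

# Driving straight at 0.2 m/s: both wheels turn equally fast.
print(body_to_wheel_speeds(0.2, 0.0))  # → (2.0, 2.0)
```

In the project itself, these wheel speeds would be sent to the motor drivers from a micro-ROS node on the ESP32, with the encoder readings closing the loop.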

Interested?

If this project sounds like fun to you, please contact Linus Nwankwo or Elmar Rueckert, or simply visit us at our chair in the Metallurgie building, 1st floor.

Integrated CPS Project or B.Sc./M.Sc. Thesis: Mixed Reality Robot Teleoperation with Hololens 2

Supervisors:

Start date: October 2022

 

Qualifications

  • Basic skills in Python or C++
  • ROS or Unity3D/C#
 
Keywords: Augmented Reality, Robotic Interfaces, Engineering, Graphical Design

Description

A Mixed Reality (MR) interface based on Unity 3D for intuitive programming of robotic manipulators (UR3). The interface will be implemented on the ROS 2 robotic framework.

Note: This project is also offered as an internship position.

https://www.youtube.com/watch?v=-MfNrxHXwow

Abstract

Robots will become a necessity for every business in the near future. Especially companies that rely heavily on the constant manipulation of objects will need to repurpose their robots continually to meet ever-changing demands. Furthermore, with the rise of machine learning, human collaborators or “robot teachers” will need a more intuitive interface to communicate with robots, whether interacting with them or teaching them.

In this project we will develop a novel Mixed (Augmented) Reality Interface for teleoperating the UR3 robotic manipulator. For this purpose we will use AR glasses to augment the user’s reality with information about the robot and enable intuitive programming of the robot. The interface will be implemented on a ROS 2 framework for enhanced scalability and better integration potential to other devices.
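A central building block of such a teleoperation interface is mapping the tracked hand motion into a bounded robot command. The sketch below scales a hand displacement into an end-effector displacement with a per-step safety clamp (a minimal Python sketch; the scale factor and limit are illustrative assumptions, not UR3 parameters):

```python
def hand_to_robot_delta(hand_delta_m, scale=0.5, limit_m=0.05):
    """Scale a tracked hand displacement (x, y, z in metres) into a
    robot end-effector displacement, clamped to a safe per-step limit."""
    def clamp(x):
        return max(-limit_m, min(limit_m, x))
    return tuple(clamp(scale * d) for d in hand_delta_m)

# A 20 cm hand motion is scaled to 10 cm, then clamped to the 5 cm limit.
print(hand_to_robot_delta((0.2, 0.0, -0.02)))
```

In the full system, this delta would be computed on the Unity/Hololens side and published as a ROS 2 message to the manipulator.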

Outcomes

This thesis will result in an innovative graphical interface that enables non-experts to program a robotic manipulator.

The student will gain valuable experience with the Robot Operating System (ROS) framework and with developing graphical interfaces in Unity. The student will also get a good understanding of robotic manipulators (like the UR3) and complete a full engineering project.

Interested?

If this project sounds like fun to you, please contact Fotios Lygerakis by email at fotios.lygerakis@unileoben.ac.at or simply visit us at our chair in the Metallurgie building, 1st floor.

B.Sc. or M.Sc. Thesis/Project: Simultaneously predicting multiple driving strategies using probabilistic inference

Supervisors: Univ.-Prof. Dr Elmar Rückert, LUPA Electronics GmbH
Start date: ASAP from June 2022

 

Theoretical difficulty: high
Practical difficulty: low

Abstract

We humans are able, even under adverse conditions, e.g., with limited visibility or in the presence of disturbances, to perceive complex processes, to predict them, and to make coherent decisions within a few milliseconds. With the increasing degree of automation, the demands on artificial systems rise as well. Ever more complex and larger amounts of data must be processed to make autonomous decisions. With common AI approaches, converging miniaturization pushes us toward limits that, e.g., in the field of autonomous driving, are not sufficient for developing a safe autonomous system.

The goal of this research is to implement probabilistic prediction models in massively parallelizable neural networks and to use them to make complex decisions based on learned internal prediction models. The neural models process high-dimensional data from modern and innovative tactile and visual sensors. We test the neural prediction and decision models in humanoid robot applications in dynamic environments.

Our approach is based on the theory of probabilistic information processing in neural networks and thus differs fundamentally from common deep neural network methods. The underlying theory provides far-reaching model insights and, in addition to predicting mean values, also allows predicting uncertainties and correlations. These additional predictions are crucial for reliable, explainable, and robust artificial systems and are among the biggest open problems in artificial intelligence research.

Tentative Work Plan

To achieve our aim, the following concrete tasks will be focused on:

  • Literature research on graphical model inference of motion plans.
  • Toy task implementation in Python.
  • Implementation of GMMs, PTSMs, and combinations in Python.
  • Visualization and analysis of the prediction performance. Definition of suitable evaluation criteria.
  • (Optional) Implementation in a realistic driving simulator.
  • Analysis and evaluation of the generated data.
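The idea of simultaneously predicting multiple strategies can be illustrated by conditioning a two-component Gaussian mixture over a situation variable x and an action variable y on an observed x: each component contributes its own predicted action together with a responsibility weight. A toy sketch with hand-picked parameters (all numbers are illustrative, not learned from data):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def conditional_mixture(x, components):
    """Condition a GMM over (x, y) on an observed x.
    Each component: (weight, mu_x, sigma_x, mu_y, sigma_y, rho).
    Returns a list of (responsibility, conditional mean of y)."""
    evid = [w * gaussian_pdf(x, mx, sx) for (w, mx, sx, my, sy, r) in components]
    total = sum(evid)
    out = []
    for e, (w, mx, sx, my, sy, r) in zip(evid, components):
        mean_y = my + r * sy / sx * (x - mx)  # standard Gaussian conditioning
        out.append((e / total, mean_y))
    return out

# Two "strategies": brake (y < 0) vs. accelerate (y > 0).
components = [(0.5, 0.0, 1.0, -1.0, 0.5, 0.0),
              (0.5, 2.0, 1.0, +1.0, 0.5, 0.0)]
for resp, mean in conditional_mixture(1.0, components):
    print(f"p={resp:.2f}, predicted action={mean:+.2f}")
```

At x = 1.0, halfway between the two components, both strategies remain equally plausible; a unimodal regressor would instead average them away.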

B.Sc. or M.Sc. Thesis/Project: Machine Learning for Predicting Yield Strengths with the Stahl- und Walzwerk Marienhütte GmbH, Graz

Abstract

In this thesis, the student has the unique opportunity to investigate supervised machine learning methods for predicting yield strengths using probabilistic regression models and deep learning approaches. The thesis is implemented with support of the MSC Software GmbH and the Stahl- und Walzwerk Marienhütte GmbH in Graz.

In the image above and below you see the production line at the Stahl- und Walzwerk Marienhütte GmbH in Graz.

To ensure the high quality standards, frequent yield strength measurements are performed. These measurements have resulted in a large dataset which can now be analyzed and used to learn a prediction model. First tests were promising, and the thesis will very likely be a big success.

The goal of this thesis is to analyze the data and to learn prediction models taking uncertainty estimates into account.

The models will be implemented and tested in Python.

Tentative Work Plan

To achieve our aim, the following concrete tasks will be focused on:

  • Literature research on the underlying physical & chemical processes.
  • Data analysis, filtering, preprocessing, and visualization of the existing data.
  • Implementation of deep neural networks (variational autoencoders), neural processes, and GPs in Python. Baseline implementations already exist.
  • Visualization and analysis of the prediction performance. An outlier detection and warning system should be implemented.
  • (Optional) Implementation of neural time-series models like LSTMs.
  • Analysis and evaluation of the provided data.
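As a minimal example of the kind of probabilistic regression meant here, a one-dimensional Bayesian linear model (no intercept) returns both a predictive mean and an uncertainty for every query point. All priors and data below are illustrative toy values, not plant data:

```python
import math

def bayes_linreg_predict(xs, ys, x_query, alpha=1.0, beta=25.0):
    """Bayesian linear regression y = w*x with prior w ~ N(0, 1/alpha)
    and observation noise precision beta.
    Returns the predictive mean and standard deviation at x_query."""
    precision = alpha + beta * sum(x * x for x in xs)        # posterior precision of w
    mean_w = beta * sum(x * y for x, y in zip(xs, ys)) / precision
    var = 1.0 / beta + x_query ** 2 / precision              # predictive variance
    return mean_w * x_query, math.sqrt(var)

# Noisy measurements roughly on y = 2x.
xs, ys = [1.0, 2.0, 3.0], [2.1, 3.9, 6.2]
mean, std = bayes_linreg_predict(xs, ys, 4.0)
print(f"prediction: {mean:.2f} +/- {std:.2f}")
```

The same uncertainty-aware structure carries over to GPs and neural processes: far from the training data, the predictive standard deviation grows, which is exactly what an outlier warning system needs.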

B.Sc. or M.Sc. Project/Thesis: Mobile robot teleoperation based on human finger direction and vision

Theoretical difficulty: mid
Practical difficulty: mid

Naturally, humans have the ability to give directions (go forward, backward, right, left, etc.) by merely pointing a finger in the direction in question. This can be done effortlessly without saying a word. However, training a mobile robot to understand such gestures is still an open problem today.
In the context of this thesis, we propose finger-pose-based mobile robot navigation to maximize natural human-robot interaction. This could be achieved by observing the Cartesian pose of the human fingers with an RGB-D camera and translating it into the robot's linear and angular velocity commands. For this, we will leverage computer vision algorithms and the ROS framework to achieve the objectives.
The prerequisites for this project are a basic understanding of Python or C++ programming, OpenCV, and ROS.

Tentative work plan

In the course of this thesis, the following concrete tasks will be focused on:

  • study the concept of visual navigation of mobile robots
  • develop a hand detection and tracking algorithm in Python or C++
  • apply the developed algorithm to navigate a simulated mobile robot
  • real-time experimentation
  • thesis writing
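The translation step in the plan above, from a detected fingertip position in the image to velocity commands, can be sketched as a proportional mapping with a dead zone (a toy Python sketch; the gains, image size, and dead-band value are illustrative assumptions, and a real node would publish the result as a geometry_msgs/Twist via ROS):

```python
def finger_to_cmd(px, py, width=640, height=480, k_lin=0.5, k_ang=1.0, deadband=0.1):
    """Map a fingertip pixel (px, py) to a (linear, angular) velocity command.
    Pointing above the image centre drives forward; sideways offsets turn the robot."""
    nx = (px - width / 2) / (width / 2)    # normalised horizontal offset in [-1, 1]
    ny = (height / 2 - py) / (height / 2)  # normalised vertical offset in [-1, 1]
    lin = k_lin * ny if abs(ny) > deadband else 0.0   # dead zone suppresses jitter
    ang = -k_ang * nx if abs(nx) > deadband else 0.0
    return lin, ang

# Finger at the image centre: the robot stays still.
print(finger_to_cmd(320, 240))  # → (0.0, 0.0)
```

The dead band matters in practice: hand-tracking output jitters by a few pixels, and without it the robot would constantly twitch.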

References

  1. Shuang Li, Jiaxi Jiang, Philipp Ruppel, Hongzhuo Liang, Xiaojian Ma, Norman Hendrich, Fuchun Sun, Jianwei Zhang, “A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU”, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 25-29, 2020, Las Vegas, NV, USA.

B.Sc. or M.Sc. Thesis/Project: Deep Learning for Predicting Meniscus Level Fluctuations in the Mold at voestalpine Stahl GmbH, Linz

Abstract

In this thesis, the student has the unique opportunity to investigate meniscus level fluctuations in the mold using deep learning approaches at the voestalpine Stahl GmbH in Linz. 

The mold, illustrated in the image above, is equipped with electromagnetic mold level sensors and with thermal imaging cameras that measure the surface temperature of the casting powder.

The goal of this thesis is to understand and model the underlying dynamic processes of the meniscus level fluctuations in the mold.

In the thesis, black-box models as well as gray-box models that combine analytic dynamic models with learned models will be investigated.

Tentative Work Plan

To achieve our aim, the following concrete tasks will be focused on:

  • Literature research on meniscus level fluctuations in the mold.
  • Data analysis, filtering, preprocessing, and visualization of meniscus level fluctuation data.
  • Implementation of deep convolutional neural networks (CNNs) as low-dimensional feature extractors. Visualization and analysis of the dynamic processes.
  • (Optional) Implementation of neural time-series models like LSTMs trained with the computed CNN features.
  • Analysis and evaluation of the provided data.
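The CNN feature-extraction step can be illustrated in miniature with a single one-dimensional convolution plus max pooling over a mold-level signal (a pure-Python toy, not the proposed deep network; the edge-detector kernel and the signal are illustrative):

```python
def conv1d(signal, kernel):
    """Valid 1D convolution (cross-correlation) of a signal with a kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def max_pool(xs, size=2):
    """Non-overlapping max pooling, as used between CNN layers."""
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

# A step change in the "mold level" signal shows up as a peak in the feature map.
level = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
features = max_pool(conv1d(level, [-1.0, 0.0, 1.0]))
print(features)  # → [1.0, 1.0]
```

A deep CNN stacks many such learned filters; the resulting low-dimensional features are what an LSTM would later consume in the optional time-series step.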

B.Sc. or M.Sc. Thesis/Project: Dimensionality Reduction using Variational Autoencoder on Synchrotron XRD data

Theoretical difficulty: mid
Practical difficulty: low

Abstract

In the context of this thesis, we propose to apply modern machine learning approaches such as variational autoencoder to visualize and reduce the complexity of X-ray diffraction (XRD) data collected on advanced γ-TiAl based alloys. By classifying XRD data collected during in situ experiments into known phases, we aim at disclosing phase transformation temperatures and selected properties of the individual phases, which are of interest with regard to the current alloy development. Furthermore, the capabilities of the applied machine learning approaches going beyond basic XRD data analysis will be explored.

Sketch of a synchrotron from synchrotron.org.au, illustrating the process of accelerating electrons to almost the speed of light.

Illustration of a collected data sample: a 2D X-ray diffraction pattern of a nominal Ti-44Al-7Mo (in at.%) alloy.

Topic and Motivation

Intermetallic γ-TiAl based alloys are a promising class of structural materials for lightweight high-temperature applications. Following intensive research activities, they have recently entered service in the automotive and aircraft engine industries, e.g. as low-pressure turbine blades in environmentally-friendly combustion engine options [1].

During the past decades, the development of these complex multi-phase alloys has been strongly driven by the application of diffraction and scattering techniques [2]. These characterization techniques offer access to the atomic structure of materials and provide insights into a variety of microstructural parameters. High-energy X-rays, such as those available at modern synchrotron radiation sources (i.e., large-scale research facilities for X-ray experiments), and recent advances in hardware technology nowadays make it possible to conduct so-called in situ experiments that reveal, at high temporal resolution, the relationship between selected external conditions (e.g., thermal or mechanical load) and structural changes in the material. Various specimen environments can be adjusted to emulate technologically relevant or real-life conditions, addressing a multitude of research topics ranging from fundamental questions in primary alloy design to process- and application-related issues. Modern setups at synchrotron sources even allow the investigation of elaborate manufacturing, joining, and repair processes in an in situ manner, producing insights that have so far been inaccessible by conventional methods.

Advanced materials characterization techniques such as those described above are often characterized by ever-growing data acquisition speeds and storage capabilities. While enabling novel insights, they thus also pose a serious challenge to modern materials science. In situ synchrotron X-ray diffraction (XRD) experiments usually produce large sets of two-dimensional diffraction data such as those shown in Figure 1. New procedures are needed to quickly assess and analyze this type of dataset.

Tentative Work Plan

To achieve our aim, the following concrete tasks will be focused on:

  • Literature research on state-of-the-art materials characterization methods.
  • Implementation of deep convolutional neural networks (CNN) and variational autoencoders in Python/Tensorflow.
  • Application and evaluation of variational autoencoder on the CNN features.
  • Analysis and Evaluation of the provided synchrotron data.
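Before training a variational autoencoder, it is often useful to have a linear baseline. The sketch below finds the dominant principal component of a tiny 2-D dataset by power iteration on the covariance matrix; it is a minimal linear stand-in for the proposed nonlinear dimensionality reduction, and the data points are illustrative:

```python
def principal_component(points, iters=100):
    """Dominant principal direction of 2-D points via power iteration."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centred = [(x - mx, y - my) for x, y in points]
    # Entries of the 2x2 sample covariance matrix.
    cxx = sum(x * x for x, _ in centred) / n
    cxy = sum(x * y for x, y in centred) / n
    cyy = sum(y * y for _, y in centred) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        # Multiply by the covariance matrix, then renormalise.
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

# Points spread along the diagonal: the component is close to (0.71, 0.71).
print(principal_component([(0, 0), (1, 1), (2, 2), (3, 3.1)]))
```

A VAE generalizes this idea: instead of a linear projection, it learns a nonlinear encoder/decoder pair, which is what makes it suitable for the highly structured 2D diffraction images.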

References

[1] H. Clemens, S. Mayer, Design, processing, microstructure, properties, and applications of advanced intermetallic TiAl alloys, Advanced Engineering Materials 15 (2013) 191-215, doi: 10.1002/adem.201200231.

[2] P. Spoerk-Erdely, P. Staron, J. Liu, N. Kashaev, A. Stark, K. Hauschildt, E. Maawad, S. Mayer, H. Clemens, Exploring structural changes, manufacturing, joining, and repair of intermetallic γ-TiAl-based alloys: Recent progress enabled by in situ synchrotron X-ray techniques, Advanced Engineering Materials (2020) 2000947, doi: 10.1002/adem.202000947.

B.Sc. or M.Sc. Thesis/Project: Running ROS-Mobile on an EV3

Supervisor: Linus Nwankwo, M.Sc.;
Univ.-Prof. Dr Elmar Rückert
Start date: ASAP from October 2021

 

Theoretical difficulty: low
Practical difficulty: mid

Abstract

Nowadays, robots used for surveying indoor and outdoor environments are operated either in fully autonomous mode, where the robot makes decisions by itself and has complete control of its actions; in semi-autonomous mode, where the robot's decisions and actions are controlled both manually (by a human) and autonomously (by the robot); or in full manual mode, where the robot's actions and decisions are manually controlled by a human. In full manual mode, the robot can be operated using a teach pendant, computer keyboard, joystick, mobile device, etc.

Recently, the Robot Operating System (ROS) has provided roboticists with easy and efficient tools to visualize and debug robot data and to teleoperate or control robots, with both hardware and software compatibility within the ROS framework. Unfortunately, the Lego Mindstorms EV3 is not yet strongly supported on the ROS platform, since the ROS master is too heavy for the EV3's RAM [2]. This limits our chances of exploring the full possibilities of the bricks.

However, in the context of this project, we aim to get ROS to run on the EV3 Mindstorms so that we can teleoperate or control it on the ROS platform using a mobile device, leveraging the framework developed by [1].

Tentative Work Plan

To achieve our aim, the following concrete tasks will be focused on:

  • Configure and run ROS on the Lego EV3 Mindstorms
  • Set up a network connection between the ROS-Mobile device and the EV3 robot
  • Teleoperate the EV3 robot on the ROS-Mobile platform
  • Perform Simultaneous Localization and Mapping (SLAM) for indoor applications with the EV3 robot
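The teleoperation step can be sketched as the usual key-to-velocity mapping a teleop node applies before publishing Twist messages, followed by differential-drive conversion to motor duty cycles (a minimal sketch; the key bindings, speeds, and track width are illustrative assumptions, not EV3 specifications):

```python
# Key bindings: key -> (linear velocity m/s, angular velocity rad/s).
BINDINGS = {"w": (0.2, 0.0), "s": (-0.2, 0.0), "a": (0.0, 0.5), "d": (0.0, -0.5)}

def key_to_twist(key):
    """Map a pressed key to a (linear, angular) command; unknown keys stop the robot."""
    return BINDINGS.get(key, (0.0, 0.0))

def twist_to_motor_power(lin, ang, max_lin=0.2, track=0.12):
    """Convert a twist into left/right motor duty cycles in percent,
    using differential-drive kinematics and clamping to +/-100%."""
    left = (lin - ang * track / 2) / max_lin * 100
    right = (lin + ang * track / 2) / max_lin * 100
    clamp = lambda p: max(-100, min(100, p))
    return clamp(left), clamp(right)

print(twist_to_motor_power(*key_to_twist("w")))  # both motors full forward
```

In the actual project, the twist would arrive over the network from the ROS-Mobile app rather than from a keyboard, but the conversion on the EV3 side is the same.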

References

B.Sc. or M.Sc. Thesis/Project: Simultaneous Localization and Mapping (SLAM) from RGB-D sensors on an RMP 220 Lite robot.

Supervisor: Linus Nwankwo, M.Sc.;
Univ.-Prof. Dr Elmar Rückert
Start date: ASAP, e.g., 1st of October 2021

Theoretical difficulty: low
Practical difficulty: high

Introduction

On April 15, 1912, more than 1,500 of the 2,240 people on board lost their lives in the Titanic disaster [4]. If it happened today, many people, if not all, could have been saved thanks to recent advances in robotics technology.

One of the most interesting aspects of these advances is the ability of a robotic system equipped with several sensors to build a map of an unknown environment and to locate itself on that map at the same time. This is called simultaneous localization and mapping (SLAM for short). The map information is used to plan the robot's motion and avoid obstacles on its path. Had the Titanic been equipped with these technologies, the iceberg that caused the disaster would have been detected and avoided well before the collision.

 

SLAM has found many applications, not only in navigation, augmented reality, and autonomous vehicles (e.g., self-driving cars and drones), but also in indoor and outdoor delivery robots, intelligent warehousing, etc. In the context of this thesis, we propose to study, design, and implement a SLAM algorithm using our state-of-the-art Intel RealSense visual and light detection and ranging (LiDAR) sensors, with a mobile robot as a test bed. The idea is to develop an algorithm that enables a robotic system to enter an area hazardous to humans, for example a mining site, and perform tasks of interest such as acquiring relevant data about the environment for post-processing. The robot should be capable of interacting with the environment effectively and should also act as a remote pair of mobile eyes and ears, providing the operator with remote information about its location, position, and a 2D/3D map of the environment.

Some of the most common challenges with SLAM are the accumulation of errors over time due to inaccurate pose estimation (localization errors) while the robot moves from the start location to the goal location, and the high computational cost of image and point cloud processing and optimization [2]. These challenges can cause significant deviations from the actual values and at the same time lead to inaccurate localization if the images and point clouds are not processed at a very high frequency [3]. This would also impair the frequency with which the map is updated and hence affect the overall efficiency of the SLAM algorithm.

Tentative Work Plan

In the course of this thesis, the following concrete tasks will be focused on:

  • study the concept of visual or LiDAR based SLAM as well as its application in the survey of an unknown environment.
  • 2D/3D mapping in both static and dynamic environments.
  • development of a sensor fusion algorithm for localization and multi-object tracking in the environment
  • use of the SLAM algorithm for motion planning and control of the robot through the probabilistic approach.
  • real-time experimentation, simulation (MATLAB, ROS & Gazebo, Rviz, C/C++, Python etc.) and validation of the algorithm.
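The 2D mapping step above rests on a simple primitive: given a range measurement along a ray, the cells before the hit are marked free and the cell at the hit is marked occupied. A toy occupancy-grid update in pure Python (grid size, resolution, and pose are illustrative; a real system would use log-odds updates and proper ray casting):

```python
import math

def update_grid(grid, x0, y0, angle, rng, resolution=0.1):
    """Mark cells along a ray from (x0, y0) [m] at heading `angle` [rad]
    as free (0) and the endpoint of range measurement `rng` [m] as
    occupied (1). grid[row][col] starts as -1 (unknown)."""
    steps = round(rng / resolution)  # round, not int, to dodge float truncation
    for s in range(steps + 1):
        x = x0 + s * resolution * math.cos(angle)
        y = y0 + s * resolution * math.sin(angle)
        col, row = int(x / resolution), int(y / resolution)
        if 0 <= row < len(grid) and 0 <= col < len(grid[0]):
            grid[row][col] = 1 if s == steps else 0
    return grid

grid = [[-1] * 10 for _ in range(10)]
update_grid(grid, 0.05, 0.05, 0.0, 0.5)  # obstacle 0.5 m ahead along +x
print(grid[0][:7])
```

Fusing many such scans from estimated poses is exactly where the localization errors discussed above enter: a wrong pose smears the obstacle into the wrong cells.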

About the Laboratory

The Robotics & AI-Lab of the Chair of Cyber Physical Systems is an innovative research lab focusing on robotics, artificial intelligence, machine and deep learning, embedded smart sensing systems, and computational models [1]. To support its research and training activities, the laboratory currently has:
  • an additive manufacturing unit (3D and laser printing technologies)
  • a metallic production workshop
  • a robotics unit (mobile robots, a robotic manipulator, a robotic hand, unmanned aerial vehicles (UAVs))
  • a sensors unit (Intel RealSense (LiDAR, depth, and tracking cameras), inertial measurement units (IMUs), OptiTrack cameras, etc.)
  • an electronics and embedded systems unit (Arduino, Raspberry Pi, etc.)

Expression of Interest

Students interested in carrying out their Master of Science (M.Sc.) or Bachelor of Science (B.Sc.) thesis on the above topic should immediately contact or visit the Chair of Cyber Physical Systems.

Phone: +43 3842 402-1901

E-mail: click here

Map: click here

References

[1]  https://cps.unileoben.ac.at/

[2]  V.Barrile, G. Candela, A. Fotia, ‘Point cloud segmentation using image processing techniques for structural analysis’, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-2/W11, 2019 

[3] Łukasz Sobczak, Katarzyna Filus, Adam Domanski, and Joanna Domanska, ‘LiDAR Point Cloud Generation for SLAM Algorithm Evaluation’, Sensors 2021, 21, 3313. https://doi.org/10.3390/s21103313

[4]  https://en.wikipedia.org/wiki/Sinking_of_the_Titanic

Integrated CPS Project or B.Sc./M.Sc. Thesis: Learning to Walk through Reinforcement Learning

Supervisor: 


Start date: ASAP, e.g., 1st of October 2022

Qualifications

  • Interest in controlling and simulating legged robots
  • Interest in programming in Python and ROS or ROS 2
 
Keywords: locomotion, robot control, robot operating system (ROS), ESP32

Introduction

For humans, walking and running are effortless, provided good health conditions are satisfied. However, training bipedal or quadrupedal robots to do the same is still a challenging problem for roboticists and researchers today. Quadrupedal robots are known to exhibit complex nonlinear dynamics, which makes it nearly impossible for control engineers to design an effective controller for their locomotion or task-specific actions.

In recent years, reinforcement learning has produced some of the most exciting, state-of-the-art artificial intelligence approaches to the above-mentioned problem. However, other challenges remain, such as learning effective locomotion skills from scratch, traversing rough terrain, and walking on a narrow balance beam [3]. Several researchers have demonstrated the possibility of training quadrupedal robots to walk or run, fast or slow, through reinforcement learning. Nevertheless, how efficiently and effectively these walking and running skills can be achieved on real-time systems, in comparison to humans or quadrupedal animals, is still an open task.

In the context of this thesis, we propose to study the concept of reinforcement learning and subsequently apply it to train our 3D-printed quadrupedal robot, shown in the figure above, to walk and run. For this, we will leverage the work of [1, 2] to explore the robot's capabilities in generating very dynamic motions or task-specific locomotive actions through reinforcement learning.

Tentative Work Plan

The following concrete tasks will be focused on:

  • study the concept of reinforcement learning as well as its application in quadruped robots for testing control and learning algorithms.
  • apply reinforcement learning algorithms to train the robot to perform skill-specific tasks such as walking, running, etc.
  • real-time experimentation, simulation (MATLAB, ROS & Gazebo, Rviz, C/C++, Python, etc) and validation.
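The reinforcement-learning loop itself can be shown on a toy task: a tabular Q-learning agent in a five-cell corridor that learns to walk right toward a goal. This is a minimal stand-in for the deep-RL methods of [1, 2], and all hyperparameters are illustrative:

```python
import random

random.seed(0)
N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]  # actions: 0 = left, 1 = right

def step(state, action):
    """Deterministic corridor dynamics: reward 1 only on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(500):  # training episodes, epsilon-greedy exploration
    s = 0
    while True:
        a = random.randrange(2) if random.random() < 0.2 else q[s].index(max(q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        q[s][a] += 0.5 * (r + 0.9 * max(q[s2]) - q[s][a])
        s = s2
        if done:
            break

policy = ["left" if qa[0] > qa[1] else "right" for qa in q[:GOAL]]
print(policy)
```

The same loop structure, with the table replaced by a neural network and the corridor by a physics simulation of the quadruped, underlies the deep-RL locomotion results cited above.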

References

[1] Felix Grimminger, Avadesh Meduri, Majid Khadiv, Julian Viereck, Manuel Wuthrich, Maximilien Naveau, Vincent Berenz, Steve Heim, Felix Widmaier, Thomas Flayols, Jonathan Fiene, Alexander Badri-Sprowitz, and Ludovic Righetti, “An Open Torque-Controlled Modular Robot Architecture for Legged Locomotion Research”, arXiv:1910.00093v2 [cs.RO], 23 Feb 2020.

[2] Tuomas Haarnoja, Sehoon Ha, Aurick Zhou, Jie Tan, George Tucker, and Sergey Levine, “Learning to Walk via Deep Reinforcement Learning”, arXiv:1812.11103v3 [cs.LG], 19 Jun 2019.

[3] Haojie Shi, Bo Zhou, Hongsheng Zeng, Fan Wang, Yueqiang Dong, Jiangyong Li, Kang Wang, Hao Tian, and Max Q.-H. Meng, “Reinforcement Learning with Evolutionary Trajectory Generator: A General Approach for Quadrupedal Locomotion”, arXiv:2109.06409v1 [cs.RO], 14 Sep 2021.

Link: to the slides