
Integrated CPS Project or B.Sc. Thesis: Mobile Navigation via micro-ROS

Supervisors:

Start date: October 2022

 

Qualifications

  • Interest in controlling and simulating mobile robotics
  • Interest in programming in Python and ROS or ROS 2
 
Keywords: Mobile robot control, robot operating system (ROS), ESP32

Description

The goal of this project or thesis is to develop a control and sensing interface for our mobile robot "RMP220". The RMP220 has two powerful brushless motors equipped with two magnetic encoders.

In this project, you will learn how to read the sensor values and how to control the motors via micro-ROS on an ESP32 controller.
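For a first feel of the host-side interface, below is a minimal sketch of a ROS 2 Python node that sends velocity commands to the robot and reads back encoder feedback. It assumes that the ESP32 micro-ROS firmware (typically written in C with the rclc client library) subscribes to geometry_msgs/Twist commands on cmd_vel and publishes encoder readings as sensor_msgs/JointState on joint_states; these topic names and message types are assumptions, not the final interface.

```python
# Minimal host-side sketch; the micro-ROS firmware on the ESP32 is assumed to
# subscribe to 'cmd_vel' (geometry_msgs/Twist) and to publish encoder feedback
# on 'joint_states' (sensor_msgs/JointState). Adjust to your firmware.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
from sensor_msgs.msg import JointState


class RMP220Teleop(Node):
    def __init__(self):
        super().__init__('rmp220_teleop')
        # Command publisher: forward speed and turn rate for the base.
        self.cmd_pub = self.create_publisher(Twist, 'cmd_vel', 10)
        # Feedback subscriber: wheel positions from the magnetic encoders.
        self.create_subscription(JointState, 'joint_states', self.on_encoders, 10)
        self.create_timer(0.1, self.send_command)  # 10 Hz command loop

    def send_command(self):
        msg = Twist()
        msg.linear.x = 0.2   # forward velocity in m/s (example value)
        msg.angular.z = 0.0  # turn rate in rad/s
        self.cmd_pub.publish(msg)

    def on_encoders(self, msg: JointState):
        self.get_logger().info(f'wheel positions: {list(msg.position)}')


def main():
    rclpy.init()
    rclpy.spin(RMP220Teleop())


if __name__ == '__main__':
    main()
```

On the ESP32 itself, micro-ROS provides the matching publisher and subscriber primitives through the rclc client library, connected to the host via a micro-ROS agent.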

Links:

 

Note: This project is also offered as an internship position.

https://www.youtube.com/watch?v=-MfNrxHXwow

Single Person Project or Team Work

You may work on the project alone or in a team of up to four people.

For team work, the goals will be extended to controlling the robot via ROS 2 and simulating it in Gazebo or RViz.

Interested?

If this project sounds like fun to you, please contact Linus Nwankwo or Elmar Rueckert or simply visit us at our chair in the Metallurgie building, 1st floor.

Integrated CPS Project or B.Sc./M.Sc. Thesis: Mixed Reality Robot Teleoperation with HoloLens 2

Supervisors:

Start date: October 2022

 

Qualifications

  • Basic skills in Python or C++
  • Experience with ROS or Unity3D/C#
 
Keywords: Augmented Reality, Robotic Interfaces, Engineering, Graphical Design

Description

A Mixed Reality (AR) interface based on Unity 3D for intuitive programming of robotic manipulators (UR3). The interface will be implemented on top of the ROS 2 robotics framework.

Note: This project is also offered as an internship position.

https://www.youtube.com/watch?v=-MfNrxHXwow

Abstract

Robots will become a necessity for every business in the near future. Companies that rely heavily on the constant manipulation of objects will especially need to repurpose their robots continually to meet ever-changing demands. Furthermore, with the rise of machine learning, human collaborators or "robot teachers" will need a more intuitive interface for communicating with robots, both when interacting with them and when teaching them.

In this project, we will develop a novel Mixed (Augmented) Reality interface for teleoperating the UR3 robotic manipulator. For this purpose, we will use AR glasses to augment the user's reality with information about the robot and to enable intuitive programming of the robot. The interface will be implemented on a ROS 2 framework for enhanced scalability and better integration with other devices.
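To give an idea of how the ROS 2 side of such an interface could look, here is a minimal sketch of a bridge node that relays end-effector poses from the AR headset to the manipulator. It assumes the Unity/HoloLens application publishes the desired pose as geometry_msgs/PoseStamped on /hololens/target_pose (for example via Unity's ROS-TCP-Connector) and that a UR3 driver consumes goal poses on /ur3/target_pose; both topic names are hypothetical placeholders, not an existing interface.

```python
# Minimal ROS 2 bridge sketch; /hololens/target_pose and /ur3/target_pose are
# hypothetical topic names used only for illustration.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped


class MRTeleopBridge(Node):
    def __init__(self):
        super().__init__('mr_teleop_bridge')
        # Goal poses forwarded to the (assumed) UR3 driver.
        self.target_pub = self.create_publisher(PoseStamped, '/ur3/target_pose', 10)
        # Poses streamed from the (assumed) Unity/HoloLens application.
        self.create_subscription(PoseStamped, '/hololens/target_pose', self.relay, 10)

    def relay(self, msg: PoseStamped):
        # Workspace limits, smoothing, or safety checks could be added here
        # before forwarding the operator's pose to the manipulator.
        self.target_pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(MRTeleopBridge())


if __name__ == '__main__':
    main()
```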

Outcomes

This thesis will result in an innovative graphical interface that enables non-experts to program a robotic manipulator.

The student will gain valuable experience with the Robot Operating System (ROS) framework and with developing graphical interfaces in Unity. The student will also gain a good understanding of robotic manipulators (such as the UR3) and complete a full engineering project.

Interested?

If this project sounds like fun to you, please contact Fotios Lygerakis by email at fotios.lygerakis@unileoben.ac.at or simply visit us at our chair in the Metallurgie building, 1st floor.

Integrated CPS Project or B.Sc./M.Sc. Thesis: Learning to Walk through Reinforcement Learning

Supervisor: 


Start date: ASAP, e.g., 1st of October 2022

Qualifications

  • Interest in controlling and simulating legged robots
  • Interest in programming in Python and ROS or ROS 2
 
Keywords: locomotion, robot control, robot operating system (ROS), ESP32

Introduction

For humans, walking and running are effortless, provided they are in good health. However, training bipedal or quadrupedal robots to do the same is still a challenging problem for roboticists and researchers. Quadrupedal robots exhibit complex nonlinear dynamics, which makes it nearly impossible for control engineers to hand-design an effective controller for their locomotion or for task-specific actions.

In recent years, reinforcement learning has produced some of the most exciting, state-of-the-art approaches to this problem. Nevertheless, challenges remain, such as learning effective locomotion skills from scratch, traversing rough terrain, and walking on a narrow balance beam [3]. Several researchers have demonstrated that quadrupedal robots can be trained to walk or run, slowly or fast, through reinforcement learning. However, how efficiently and effectively these walking and running skills can be achieved on real-time systems, in comparison to humans or quadrupedal animals, remains an open problem.

In the context of this thesis, we propose to study the concept of reinforcement learning and subsequently apply it to train our 3D-printed quadrupedal robot to walk and run. For this, we will build on the work of [1, 2] to explore the robot's capabilities in generating highly dynamic motions and task-specific locomotive actions through reinforcement learning.
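As a rough starting point, the sketch below shows how a soft actor-critic agent (the algorithm used in [2]) could be trained with off-the-shelf tools. It assumes stable-baselines3 and Gymnasium with MuJoCo are installed, and it uses the generic Ant-v4 environment only as a stand-in quadruped; in the actual thesis, a simulation of our own robot (e.g., a Gazebo-based environment) would take its place.

```python
# Minimal training sketch with soft actor-critic (SAC), as used in [2].
# 'Ant-v4' is only a placeholder quadruped environment, not our robot.
import gymnasium as gym
from stable_baselines3 import SAC

env = gym.make("Ant-v4")  # stand-in quadruped; replace with a robot-specific env

# The learned policy maps joint observations to actuator commands.
model = SAC("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=200_000)
model.save("quadruped_sac")

# Roll out the learned gait for a few steps.
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```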

Tentative Work Plan

The work will focus on the following concrete tasks:

  • Study the concept of reinforcement learning and its application to quadruped robots for testing control and learning algorithms.
  • Apply reinforcement learning algorithms to train the robot to perform skill-specific tasks such as walking and running.
  • Carry out real-time experimentation, simulation (MATLAB, ROS & Gazebo, RViz, C/C++, Python, etc.), and validation.

References

[1] Felix Grimminger, Avadesh Meduri, Majid Khadiv, Julian Viereck, Manuel Wuthrich, Maximilien Naveau, Vincent Berenz, Steve Heim, Felix Widmaier, Thomas Flayols, Jonathan Fiene, Alexander Badri-Sprowitz, and Ludovic Righetti, "An Open Torque-Controlled Modular Robot Architecture for Legged Locomotion Research", arXiv:1910.00093v2 [cs.RO], 23 Feb 2020.

[2] Tuomas Haarnoja, Sehoon Ha, Aurick Zhou, Jie Tan, George Tucker, and Sergey Levine, "Learning to Walk via Deep Reinforcement Learning", arXiv:1812.11103v3 [cs.LG], 19 Jun 2019.

[3] Haojie Shi, Bo Zhou, Hongsheng Zeng, Fan Wang, Yueqiang Dong, Jiangyong Li, Kang Wang, Hao Tian, and Max Q.-H. Meng, "Reinforcement Learning with Evolutionary Trajectory Generator: A General Approach for Quadrupedal Locomotion", arXiv:2109.06409v1 [cs.RO], 14 Sep 2021.

Link: to the slides