CPS and Univ.-Prof. Dr. Elmar Rückert in the latest edition of the Triple M Magazine 2021
Print Media Article in Triple M Montanuniversität 2021
Short bio: Mr. Bartsch joined the CPS team in Nov. 2021. Before that, he worked as an educator in mechanical engineering, metal machining, and CAD at the education center Leoben (BFI Leoben).
On the 1st of July 2022, Mr. Bartsch completed his education at the technical high school in the fields of electronic data processing, network technology, and telecommunications.
At the chair of CPS, Mr. Bartsch develops robotic systems, electronics, mechanical designs, and complex embedded systems. He is also responsible for our technical infrastructure, including our computing clusters and cloud server architectures.
Mr. Bartsch is the educator of our apprentice Mr. Obermayer.
Mr. Konrad Bartsch
Technician at the Chair of Cyber-Physical Systems
Montanuniversität Leoben
Franz-Josef-Straße 18,
8700 Leoben, Austria
Phone: +43 3842 402 – 1904
Email: konrad.bartsch@unileoben.ac.at
Web: https://cps.unileoben.ac.at
Supervisors: Sven Böttger, Elmar Rückert
Finished: 21 September 2021
The applicability of robotic automation has transcended the industrial domain through the emergence of collaborative robotics and is increasingly entering the realm of applications with high levels of physical human-robot interaction. This is concomitant with a paradigm shift towards higher force-control sensitivity to meet functional and safety requirements concerning the regulation of contact forces between robots and humans. A fundamental challenge in this regard is the observability and estimation of interaction forces. Utilizing the joint position and torque sensors available in recent collaborative robot models, which yield a larger perceptive field for interaction forces than local force sensors, this thesis takes a proprioceptive approach to developing inverse dynamics models that estimate dynamic disturbances and determine external interaction forces during fine-scale motion. A series of state-of-the-art techniques are implemented and evaluated on the KUKA LBR iiwa 14, including dynamic parameter identification, neural-network-based single-step and time-series models, and a novel hybrid architecture combining a rigid-body dynamics model with downstream neural networks and joint rotational displacement encodings. The results indicate that the proposed method yields significant improvements in torque and force estimation accuracy compared with conventional rigid-body dynamics models or neural networks alone.
Supervisor: Linus Nwankwo, M.Sc.;
Univ.-Prof. Dr. Elmar Rückert
Start date: ASAP from October 2021
Theoretical difficulty: low
Practical difficulty: mid
Nowadays, robots used to survey indoor and outdoor environments operate either in fully autonomous mode, where the robot makes decisions by itself and has complete control of its actions; in semi-autonomous mode, where the robot’s decisions and actions are controlled both manually (by a human) and autonomously (by the robot); or in full manual mode, where the robot’s actions and decisions are controlled entirely by humans. In full manual mode, the robot can be operated using a teach pendant, computer keyboard, joystick, mobile device, etc.
Recently, the Robot Operating System (ROS) has provided roboticists with easy and efficient tools to visualize and debug robot data and to teleoperate or control robots, with both hardware and software compatibility within the ROS framework. Unfortunately, the Lego Mindstorms EV3 is not yet well supported on the ROS platform, since the ROS master is too heavy for the EV3’s RAM [2]. This limits our chances of exploring the full capabilities of the bricks.
However, in the context of this project, we aim to get ROS running on the EV3 Mindstorms so that we can teleoperate or control it on the ROS platform using a mobile device, leveraging the framework developed by [1].
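To give an idea of what such teleoperation involves: ROS-Mobile [1] publishes velocity commands as geometry_msgs/Twist messages, which a node on a differential-drive robot like the EV3 would convert into left and right wheel speeds. The following is a minimal sketch of that conversion only; the function name and the wheel-base value are illustrative assumptions, not part of the project.

```python
# Sketch of the velocity mapping a teleoperation node would apply on a
# differential-drive robot. A Twist message carries a forward velocity
# (linear.x, in m/s) and a turn rate (angular.z, in rad/s).

def twist_to_wheel_speeds(linear_x, angular_z, wheel_base=0.12):
    """Convert a Twist command to (left, right) wheel speeds in m/s."""
    left = linear_x - angular_z * wheel_base / 2.0
    right = linear_x + angular_z * wheel_base / 2.0
    return left, right

# Example: pure rotation in place -> the wheels spin in opposite directions.
print(twist_to_wheel_speeds(0.0, 1.0))  # -> (-0.06, 0.06)
```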
To achieve our aim, we will focus on the following concrete tasks:
[1] Nils Rottmann et al, https://github.com/ROS-Mobile/ROS-Mobile-Android, 2020
[2] ROS.org, http://wiki.ros.org/Robots/EV3
This tutorial provides instructions for installing CUDA and enabling CUDA acceleration with PyTorch on Windows 10. Installation on Linux or macOS is also possible. An additional .py file verifies whether the current computer configuration uses CUDA or not. The following instructions assume that you have already installed a Python IDE, e.g., Anaconda, PyCharm, Visual Studio…
Step 1: Check which CUDA version is supported by your current GPUs on this website. From the left figure, we can see that the A100 supports CUDA 11.0. It is also reported in other blogs and forums that the A100 can support CUDA 11.1. In this post, we install CUDA 11.1.
Step 2: Download the NVIDIA CUDA Toolkit 11.1 (the same version as in Step 1) from the website. On Windows 10, for instance, follow the choices shown on the right. The exe (local) installer is around 3.1 GB. After downloading, run the .exe and perform the installation. It may take a few minutes to complete.
Step 3: On the PyTorch homepage, choose the appropriate options as shown in the left figure. IMPORTANT: the CUDA version must be the same as in Step 1. The Stable build is also recommended. After finishing the selection, copy the generated command into the Anaconda PowerShell Prompt (or whichever command prompt you use to install Python packages). Wait for the installation; it may require more than 1 GB of disk space and take several minutes. You can also find historical versions of PyTorch on that homepage.
You can download a cuda-test.py file and run it. If the result shows ‘cuda’, you can enjoy CUDA acceleration for training neural networks!
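A minimal sketch of what such a cuda-test.py could look like (assuming PyTorch is installed as in Step 3; the function name is illustrative):

```python
# Check whether PyTorch can use a CUDA-capable GPU.
import torch

def describe_device() -> str:
    """Return 'cuda' if a CUDA-capable GPU is usable by PyTorch, else 'cpu'."""
    return "cuda" if torch.cuda.is_available() else "cpu"

if __name__ == "__main__":
    device = describe_device()
    print(device)
    if device == "cuda":
        print(torch.cuda.get_device_name(0))   # e.g. the A100 from Step 1
        print(torch.version.cuda)              # CUDA version PyTorch was built with
```

If the script prints ‘cuda’, tensors and models moved with `.to("cuda")` will run on the GPU.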
Running PyTorch with multiple GPUs can further increase efficiency. We have 8 GPU cards, which can be used in parallel for training. Please refer to (1) (2) (3) for details.
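As a rough sketch of the simplest multi-GPU option, `torch.nn.DataParallel` splits each batch across all visible GPUs; on a machine with one or no GPU the model is used unchanged. The toy model below is a placeholder, not one of our training networks.

```python
# Wrap a model so each forward pass scatters the batch across available GPUs.
import torch
import torch.nn as nn

model = nn.Linear(8, 2)                      # toy placeholder model
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)           # replicates the model per GPU
    model = model.cuda()

out = model(torch.randn(4, 8))               # batch of 4 is scattered/gathered
print(out.shape)                             # -> torch.Size([4, 2])
```

For larger setups, `torch.nn.parallel.DistributedDataParallel` is the commonly recommended alternative.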
The LEGO Mindstorms Education EV3 sets can be used in different scenarios. They offer a quick and easy introduction into robot control, motion planning and visual navigation from depth images with Python. One can assemble the robots in various ways with different sensors and motors depending on the desired task.
For more information go to Robot LEGO Robotics EV3 Dev and to https://pypi.org/project/python-ev3dev2/ .
First of all, a development set is necessary. At the Chair of Cyber-Physical Systems we have five sets available for students. The Python code can be implemented and deployed to the EV3 with Visual Studio Code and the LEGO MINDSTORMS EV3 MicroPython extension. The EV3 bricks are equipped with a micro-SD card on which the MicroPython image is installed. A more detailed installation guide is provided on GitHub.
The following is an example of Python code to control a motor with the EV3. At the beginning, the motor has to be initialized with the corresponding port. There are two different ways to control a motor: one can set a desired speed and target position and let the motor run there, or one can set a desired speed and let the motor run until it is stopped by a command.
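A minimal sketch of these two variants in EV3 MicroPython (Pybricks), assuming a motor attached to port A; the speeds, angles, and wait time are illustrative values, not taken from the original listing:

```python
# EV3 MicroPython (Pybricks) motor control sketch; runs on the EV3 brick only.
from pybricks.ev3devices import Motor
from pybricks.parameters import Port
from pybricks.tools import wait

motor = Motor(Port.A)            # initialize the motor with its port

# Variant 1: run to a target angle (500 deg/s, 360 degrees) and hold there.
motor.run_target(500, 360)

# Variant 2: run at a constant speed until explicitly stopped.
motor.run(500)                   # start turning at 500 deg/s
wait(2000)                       # keep running for two seconds
motor.stop()                     # let the motor coast to a stop
```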
If you want to get the Python code, or if you are interested in other example codes, go to our GitHub repository or to this repository: https://github.com/bittner/lego-mindstorms-ev3-comparison#inspiration-for-lego-ev3-robots.
You may also build your LEGO robot model in a simulation tool and test your Python algorithms. Here is a list of projects:
Here is a list of 3D Modelling Tools for LEGO systems:
A CPS combines the predictions or commands of computer simulations (see the section on Simulation Tools) with a real-time visualization of the real system and its environment.
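The real-time link between the physical robot and a simulation or visualization tool can be sketched as a plain UDP stream of sensor readings. This is only an illustrative sketch: the JSON message layout, sensor name, and loopback addresses are assumptions, not the chair's actual protocol.

```python
# Stream sensor readings as small JSON datagrams over the network.
import json
import socket

def encode_reading(sensor: str, value: float, t: float) -> bytes:
    """Pack one sensor measurement as a JSON datagram (robot side)."""
    return json.dumps({"sensor": sensor, "value": value, "t": t}).encode("utf-8")

def decode_reading(payload: bytes) -> dict:
    """Unpack a datagram (visualization side)."""
    return json.loads(payload.decode("utf-8"))

if __name__ == "__main__":
    # Loopback demo; on the robot, replace "127.0.0.1" with the tool's address.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))
    addr = receiver.getsockname()

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(encode_reading("ultrasonic_cm", 42.5, 0.0), addr)

    msg = decode_reading(receiver.recv(1024))
    print(msg["sensor"], msg["value"])  # -> ultrasonic_cm 42.5
```

UDP keeps the latency low for real-time display; a TCP or MQTT connection would be the natural choice if no reading may be lost.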
Such a CPS can also be developed with our LEGO EV3 robots. Current sensor measurements can be communicated in real time to a simulation and visualization tool via Bluetooth or Wi-Fi connections. Here is a collection of relevant resources:
Deep Reinforcement Learning (DRL) has demonstrated great success in learning single or multiple tasks from scratch. Various DRL algorithms have been proposed and were applied in a broad class of tasks including chess and video games, robot navigation or robot manipulation.
In this work, we investigate the potential of DRL in a mapless navigation task within a warehouse. The challenges of the task are the partial observability of the space and the need for effective exploration strategies to learn navigation strategies quickly.
We trained a mobile robot (the agent) from scratch and compared how different sensor observations influence navigation performance. The evaluated sensor configurations are a 360-degree Lidar, a depth image only, and an RGB image only. For the Lidar and RGB inputs, we evaluated partial and full observability of the state space. We successfully trained the agent to navigate to a goal with a reward setting that is also applicable to the real world.
Currently, we are extending the work to multi-modal sensor inputs combining Lidar and RGB (RGB image from the frontal view only) and have incorporated self-curriculum learning on a more challenging warehouse navigation task, with promising initial results.
https://cps.unileoben.ac.at/wp/RGB_only_front_snail_140k.mp4
https://cps.unileoben.ac.at/wp/RGB_only_front_snail_85k.mp4
The video shows learned navigation strategies using a single RGB-D camera mounted at the front of the robot. The results were obtained after 85,000 interactions (single action executions, e.g., wheel velocity commands).
This video shows the learned strategy after 140,000 interactions with the environment.
You may use this video for research and teaching purposes. Please cite the Chair of Cyber-Physical-Systems or the corresponding research paper.
2019
Intrinsic Motivation and Mental Replay enable Efficient Online Adaptation in Stochastic Recurrent Networks. Journal Article. In: Neural Networks - Elsevier, vol. 109, pp. 67-80, 2019, ISSN: 0893-6080 (Impact Factor 7.197 (2017)).
2017
Efficient Online Adaptation with Stochastic Recurrent Neural Networks. Proceedings Article. In: Proceedings of the International Conference on Humanoid Robots (HUMANOIDS), 2017.
Online Learning with Stochastic Recurrent Neural Networks using Intrinsic Motivation Signals. Proceedings Article. In: Proceedings of the Conference on Robot Learning (CoRL), 2017.
Are you interested in working with modern robots, or do you want to understand how such machines ‘learn’?
If so, this bachelor thesis will enable you to dig into the fascinating world of robot learning. You will implement and apply modern machine learning algorithms in Python, MATLAB, or C++/ROS.
Your learning or control algorithm will be evaluated on cyber-physical systems. Find out which theses are currently supervised and offered.