
3 PhD Positions – June 30th 2021, RefID: 2103WPW

3 positions for fully employed University Assistants at the Chair of Cyber-Physical-Systems at the Department of Product Engineering, starting at the earliest possible date or on June 15th, with a 4-year term of employment. Salary group B1 according to Uni-KV, monthly minimum salary excl. SZ: € 2.971,50 for 40 hours per week (14 times a year); the actual classification depends on accountable activity-specific previous experience.

The following doctoral theses are available:

Fundamentals of learning methods for autonomous systems.

The goal is to make autonomous learning systems such as industrial robot arms, humanoid or mobile robots suitable for everyday use. To achieve this, large amounts of data must be processed in a few milliseconds (Big Data for Control) and efficient learning methods must be developed. In addition, safe human-machine interaction must be ensured when dealing with the autonomous systems. For this purpose, novel stochastic motion learning methods and model representations for compliant humanoid robots will be developed.

Fundamentals of stochastic neural networks.

Modern deep neural networks can process large amounts of data and calculate complex predictions. These methods are also increasingly used in autonomous systems. A major challenge here is to integrate measurement and model uncertainties into the calculations and predictions. For this purpose, novel neural networks will be developed that are based on stochastic computations and enrich predictions with uncertainty estimates. The neural networks will be used in learning tasks with robotic arms.

Robot learning with embedded systems.

Modern robot systems are equipped with complex sensors and actuators. However, they lack the necessary control and learning methods to solve versatile tasks. The goal of this thesis is to develop novel AI-based sensor systems and to integrate them into autonomous systems. The developed algorithms will be applied in mobile computers and tested on realistic industrial applications with robotic arms.

What we offer

The opportunity to work on exciting modern research topics in artificial intelligence and robotics, to develop your own ideas, to be part of a young and newly formed team, to go on international research trips, and to receive targeted career guidance for a successful scientific career.

Job requirements

Completed master’s degree in computer science, physics, telematics, statistics, mathematics, electrical engineering, mechanics, robotics or an equivalent education in the sense of the desired qualification. Willingness and ability to carry out scientific research, including publications, with the possibility to write a dissertation.

Desired additional qualifications

Programming experience in one of the languages C, C++, C#, Java, MATLAB, Python or similar. Experience with Linux or ROS is advantageous. Good English skills and willingness to travel for research and to give technical presentations.

Application

Application deadline: June 30th, 2021

Online Application via: Montanuniversität Leoben Webpage

The Montanuniversitaet Leoben intends to increase the number of women on its faculty and therefore specifically invites applications by women. Among equally qualified applicants women will receive preferential consideration.




How to build a professional low-cost lightboard for teaching

Making Virtual Lectures Interactive

Giving virtual lectures can be exciting. Inspired by numerous blog posts by colleagues all over the world (e.g., [1], [2]), I decided to turn an ordinary glass desk into a lightboard. The total cost was less than 100 EUR.

Below you can see some snapshots of the individual steps.

Details on the Lightboard Construction

The light board construction is based on

  • A glass pane, 8 mm thick. Hint: do not use acrylic glass or glass panes thinner than 8 mm. I got a used glass/metal desk for 20 EUR.
  • LED strips from YUNBO, 4 mm wide, e.g., from [4] for 13 EUR. Hint: larger LED strips, which you can typically get at DIY markets, have a width of 10 mm. These strips do not fit into the transparent U profile.
  • Glass clamps for 8 mm glass, e.g., from onpira-sales [5] for 12 EUR.
  • Transparent U profiles from a DIY store, e.g., no. 4005011040225 from HORNBACH [6] for 14 EUR.
  • 4 castor wheels with brakes, e.g., from HORNBACH no. 4002350510587 for 21 EUR.
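
As a quick sanity check on the budget claim above, the listed part prices can be summed up (a tiny sketch; prices as given in the list, shipping not included):

```python
# Parts of the lightboard frame with the prices listed above (in EUR).
parts = {
    "used glass/metal desk (8 mm pane)": 20,
    "YUNBO LED strips (4 mm)": 13,
    "glass clamps for 8 mm glass": 12,
    "transparent U profiles": 14,
    "4 castor wheels with brakes": 21,
}

total = sum(parts.values())
print(f"Total: {total} EUR")  # → Total: 80 EUR
```

That leaves some headroom under 100 EUR for markers and small accessories.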

Details on the Markers, the Background and the Lighting

Some remarks are given below on the background, the lighting and the markers.

  • I got well-suited fluorescent markers, e.g., from [6] for 12 EUR. Hint: compared to liquid chalk, these markers do not produce any noise while writing and are far more visible.
  • The background blind is of major importance. I used an old white roller blind from [7] and turned it into a black one using 0.5 l of black paint. Hint: in the future, I will use a larger blind with a width of 3 m. A larger background blind is required to build larger lightboards (mine is 140 × 70 cm). Additionally, the distance between the glass pane and the blind could be increased (in my current setup the distance is 55 cm).
  • Lighting is important to illuminate the presenter. I currently use two small LED spots. However, in the future I will use professional LED studio panels with blinds, e.g., [8]. Hint: the blinds are important to prevent illuminating the black background.
  • The LED strips run at 12 V. However, my old glass pane has many scratches, which become fully visible at maximum power. To avoid these distracting effects, I found that 8 V worked best for my old glass pane.
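
I reduce the brightness by lowering the supply voltage. If you use a PWM dimmer instead, the duty cycle that roughly corresponds to an 8 V average on a 12 V supply can be estimated as below (my own simplification; it ignores the non-linear brightness response of LEDs):

```python
def duty_cycle(target_voltage: float, supply_voltage: float = 12.0) -> float:
    """Approximate PWM duty cycle that yields the desired average voltage."""
    if not 0.0 <= target_voltage <= supply_voltage:
        raise ValueError("target voltage must lie between 0 and the supply voltage")
    return target_voltage / supply_voltage

print(f"{duty_cycle(8.0):.0%}")  # → 67%
```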

Details on the Software and the Microphone

At the university, we use Cisco’s Webex tool for our virtual lectures. The tool is suboptimal for interactive lightboard lectures; however, with some additional tools, I converged to a working solution.

  • Camera streaming app, e.g., EpocCam for iPhones or Iriun for Android phones. Hint: the smartphone is mounted on a tripod using a smartphone mount.
  • On the client side, driver software is required. Details can be found when running the smartphone app.
  • On my Mac, I run the app Quick Camera to get a real-time view of the recording. The viewer is shown on a screen mounted to the ceiling. Hint: the screen has to be placed such that no reflections appear in the recordings.
  • In the Webex application, I select the Iriun (virtual) webcam as the source and share the screen with the Quick Camera viewer app.
  • To ensure an undamped audio signal, I use a lavalier microphone like this one [9].
  • For offline recordings, Apple’s QuickTime does a decent job; video and audio sources can be selected correctly. Hint: I also tested VLC; however, the lag of 2–3 seconds was perceived as suboptimal by the students (a workaround with proper command line arguments was not tested).
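
A common lightboard pitfall is that the writing appears mirrored to the audience. Some streaming apps can flip the image themselves; if yours cannot, the horizontal flip is a one-line array operation. The sketch below is my own illustration on a dummy frame (NumPy only, no camera required):

```python
import numpy as np

def mirror_frame(frame: np.ndarray) -> np.ndarray:
    """Flip a (height, width, channels) video frame horizontally."""
    return frame[:, ::-1]

# Dummy 2x3 RGB frame: left column red, right column blue.
frame = np.zeros((2, 3, 3), dtype=np.uint8)
frame[:, 0] = (255, 0, 0)
frame[:, 2] = (0, 0, 255)

flipped = mirror_frame(frame)
assert (flipped[:, 0] == (0, 0, 255)).all()  # blue ends up on the left
assert (flipped[:, 2] == (255, 0, 0)).all()  # red ends up on the right
```

The same slicing applies per-frame to any webcam stream read as a NumPy array.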

An Example Lecture

And that’s how it looks …




IROS 2020 – New Horizons for Robot Learning

Abstract

Robot learning combines the challenges of understanding, modeling and applying dynamical systems with task learning from rewards, through human-robot interaction, or from intrinsic motivation signals. While outstanding results using machine and deep learning have been generated in robot learning in recent years, current challenges in industrial applications are underrepresented. The goal of this workshop is to go beyond discussing potential industrial applications, as in related past workshops.

These topics were discussed with Pieter Abbeel, Dileep George, Sergey Levine, Jan Peters, Freek Stulp, Marc Toussaint, Patrick van der Smagt, and Georg von Wichert.

Links

Details to the workshop, the speakers and links to slides can be found on the workshop webpage.

 




Elmar Rückert, „KI-Denker der Zukunft 2019“

Print Media Article in the BILANZ magazine 2019


Links

Link to the PDF.




NeurIPS 2019 – Robot open-Ended Autonomous Learning

NeurIPS 2019 Competition Track

Open-ended learning aims to build learning machines and robots that are able to acquire skills and knowledge in an incremental fashion in a certain environment. This competition addresses autonomous open-ended learning with a focus on simulated robot systems that: (a) acquire a sensorimotor competence that allows them to interact with objects and physical environments; (b) learn in a fully autonomous way, i.e., with no human intervention (e.g., no tasks or reward functions), on the basis of mechanisms such as curiosity, intrinsic motivations, task-free reinforcement learning, self-generated goals, and any other mechanism that might support autonomous learning. The competition challenge will feature two phases: during an initial “intrinsic phase” the system will have a certain time to freely explore and learn in an environment containing multiple objects, and then during an “extrinsic phase” the quality of the autonomously acquired knowledge will be measured with tasks unknown at design time and during the intrinsic phase.

Links

Details on the competition can be found on the project webpage.

 

Publications

2020

Cartoni, E.; Mannella, F.; Santucci, V. G.; Triesch, J.; Rueckert, E.; Baldassarre, G.

REAL-2019: Robot open-Ended Autonomous Learning competition Journal Article

In: Proceedings of Machine Learning Research, vol. 123, pp. 142-152, 2020, (NeurIPS 2019 Competition and Demonstration Track).

Links | BibTeX

REAL-2019: Robot open-Ended Autonomous Learning competition




H2020 Goal-Robots 11/2016-10/2020

This project aims to develop a new paradigm to build open-ended learning robots called “Goal-based Open-ended Autonomous Learning” (GOAL). GOAL rests upon two key insights. First, to exhibit an autonomous open-ended learning process, robots should be able to self-generate goals, and hence tasks to practice. Second, new learning algorithms can leverage self-generated goals to dramatically accelerate skill learning. The new paradigm will allow robots to acquire a large repertoire of flexible skills in conditions unforeseeable at design time with little human intervention, and then to exploit these skills to efficiently solve new user-defined tasks with no/little additional learning.

Link: http://www.goal-robots.eu




Loomo Innovation am Campus Lübeck

Print Media Article im FMSH Magazin 2019


Links

Link to the PDF.




How to build and use a low-cost sensor glove

This post discusses how to develop a low cost sensor glove with tactile feedback using flex sensors and small vibration motors. MATLAB and JAVA code is linked.

Note that this project is no longer maintained. Use the GitHub project instead.

Publications

2016

Weber, Paul; Rueckert, Elmar; Calandra, Roberto; Peters, Jan; Beckerle, Philipp

A Low-cost Sensor Glove with Vibrotactile Feedback and Multiple Finger Joint and Hand Motion Sensing for Human-Robot Interaction Proceedings Article

In: Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2016.

Links | BibTeX

A Low-cost Sensor Glove with Vibrotactile Feedback and Multiple Finger Joint and Hand Motion Sensing for Human-Robot Interaction

2015

Rueckert, Elmar; Lioutikov, Rudolf; Calandra, Roberto; Schmidt, Marius; Beckerle, Philipp; Peters, Jan

Low-cost Sensor Glove with Force Feedback for Learning from Demonstrations using Probabilistic Trajectory Representations Proceedings Article

In: ICRA 2015 Workshop on Tactile and force sensing for autonomous compliant intelligent robots, 2015.

Links | BibTeX

Low-cost Sensor Glove with Force Feedback for Learning from Demonstrations using Probabilistic Trajectory Representations

Details on the Hardware

  • Arduino Mega 2560 Board
  • Check which USB device is used (e.g., by running dmesg). On most of our machines it is /dev/ttyACM0
  • Enable read/write permissions if necessary, e.g., run sudo chmod o+rw /dev/ttyACM0
  • Serial-protocol-based communication: flex sensor readings are streamed, and vibration motor PWM values can be set between 0 and 255
  • Firmware can be found here (follow the instructions in the README.txt to compile and upload the firmware)
  • Features frame rates of up to 350Hz
  • Five flex sensors provide continuous readings within the range [0, 1024]
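
The host side of this serial protocol can be sketched in a few lines of Python. Note that the exact line format (here assumed to be five comma-separated integers per streamed line) and the helper names are my own illustration; the actual format is defined by the linked firmware, and talking to /dev/ttyACM0 additionally requires a serial library such as pyserial:

```python
def parse_flex_line(line: str) -> list[int]:
    """Parse one streamed line of five flex sensor readings in [0, 1024].

    Assumed format: comma-separated integers, one line per sample.
    """
    values = [int(v) for v in line.strip().split(",")]
    if len(values) != 5 or not all(0 <= v <= 1024 for v in values):
        raise ValueError(f"malformed sensor line: {line!r}")
    return values

def clamp_pwm(value: int) -> int:
    """Clamp a vibration motor command to the valid PWM range [0, 255]."""
    return max(0, min(255, value))

print(parse_flex_line("512,300,700,100,900\n"))  # → [512, 300, 700, 100, 900]
print(clamp_pwm(300))  # → 255
```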

Simple Matlab Serial Interface (max 100Hz)

  • Download the Matlab demo code from here
  • Tell Matlab which serial ports to use: copy the java.opts file to your Matlab bin folder, e.g., to /usr/local/MATLAB/R2012a/bin/glnxa64/
  • Run FastComTest.m

Fast Mex-file based Matlab Interface – max 350Hz

  • Install libserial-dev
  • Compile the mex function with: mex SensorGloveInterface.cpp -lserial
  • Run EventBasedSensorGloveDemo.m

 




AI and Learning in Robotics

Robotics AI requires autonomous learning capabilities

The challenges in understanding human motor control, in brain-machine interfaces and anthropomorphic robotics are currently converging. Modern anthropomorphic robots with their compliant actuators and various types of sensors (e.g., depth and vision cameras, tactile fingertips, full-body skin, proprioception) have reached the perceptuomotor complexity faced in human motor control and learning. While outstanding robotic and prosthetic devices exist, current brain machine interfaces (BMIs) and robot learning methods have not yet reached the required autonomy and performance needed to enter daily life.

The group’s vision is that four major challenges have to be addressed to develop truly autonomous learning systems: (1) the decomposability of complex motor skills into basic primitives organized in complex architectures, (2) the ability to learn from partially observable, noisy observations of inhomogeneous high-dimensional sensor data, (3) the learning of abstract features, generalizable models and transferable policies from human demonstrations, sparse rewards and through active learning, and (4) accurate predictions of self-motions, object dynamics and human movements for assisting and cooperating autonomous systems.




Neural and Probabilistic Robotics


Neural models have incredible learning and modeling capabilities, as demonstrated in complex robot learning tasks (e.g., Martin Riedmiller’s or Sergey Levine’s work). While these results are promising, we lack a theoretical understanding of the learning capabilities of such networks, and it is unclear how learned features and models can be reused or exploited in other tasks.

The ai-lab investigates deep neural network implementations that are theoretically grounded in the framework of probabilistic inference and develops deep transfer learning strategies for stochastic neural networks. We evaluate our models in challenging robotics applications where the networks have to scale to high-dimensional control signals and need to generate reactive feedback commands in real time.

Our developments will enable complex online adaptation and skill learning behavior in autonomous systems and will help to gain a better understanding of the meaning and function of the learned features in large neural networks with millions of parameters.