Author: Elmar Rueckert
AI and Learning in Robotics
The challenges in understanding human motor control, in brain-machine interfaces and in anthropomorphic robotics are currently converging. Modern anthropomorphic robots, with their compliant actuators and various types of sensors (e.g., depth and vision cameras, tactile fingertips, full-body skin, proprioception), have reached the perceptuomotor complexity faced in human motor control and learning. While outstanding robotic and prosthetic devices exist, current brain-machine interfaces (BMIs) and robot learning methods have not yet reached the autonomy and performance needed to enter daily life.
The group's vision is that four major challenges have to be addressed to develop truly autonomous learning systems: (1) the decomposability of complex motor skills into basic primitives organized in complex architectures, (2) the ability to learn from partially observable, noisy observations of inhomogeneous, high-dimensional sensor data, (3) the learning of abstract features, generalizable models and transferable policies from human demonstrations, sparse rewards and through active learning, and (4) accurate predictions of self-motion, object dynamics and human movements for assistive and cooperative autonomous systems.
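To make the first challenge concrete, the sketch below shows one common way to represent a motor skill as a reusable primitive: a trajectory encoded as a weighted combination of basis functions, with a distribution over the weights estimated from a few demonstrations. This is a generic, minimal illustration in plain NumPy on synthetic data, not the group's implementation; all function names and parameter values are assumptions chosen for the example.

```python
# Illustrative sketch (not the group's implementation): a movement primitive
# represented as a weighted combination of Gaussian basis functions, with the
# weight distribution estimated from a few noisy demonstrations.
import numpy as np

def basis_matrix(t, n_basis=10, width=0.02):
    """Gaussian basis functions evaluated at phase values t in [0, 1]."""
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)  # normalize per time step

# Synthetic demonstrations: noisy variations of a reaching-like trajectory.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
demos = [np.sin(np.pi * t) + 0.05 * rng.standard_normal(t.size) for _ in range(5)]

# Fit basis-function weights to each demonstration by ridge regression.
Phi = basis_matrix(t)
W = np.array([np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(Phi.shape[1]),
                              Phi.T @ d) for d in demos])

# A simple probabilistic primitive: mean and covariance over the weights.
w_mean, w_cov = W.mean(axis=0), np.cov(W.T)

# Reproduce the mean trajectory and sample a variation of the skill.
mean_traj = Phi @ w_mean
sampled_traj = Phi @ rng.multivariate_normal(w_mean, w_cov)
print(mean_traj.shape, sampled_traj.shape)
```

Because the primitive is a distribution over weights rather than a single trajectory, new variations of the skill can be sampled, and several such primitives can in principle be combined or sequenced within larger architectures.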
Neural and Probabilistic Robotics
Neural models have incredible learning and modeling capabilities, as has been demonstrated in complex robot learning tasks (e.g., Martin Riedmiller's or Sergey Levine's work). While these results are promising, we lack a theoretical understanding of the learning capabilities of such networks, and it is unclear how learned features and models can be reused or exploited in other tasks.
The ai-lab investigates deep neural network implementations that are theoretically grounded in the framework of probabilistic inference and develops deep transfer learning strategies for stochastic neural networks. We evaluate our models in challenging robotics applications where the networks have to scale to high-dimensional control signals and need to generate reactive feedback commands in real time.
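As a rough illustration of what a stochastic neural network grounded in probabilistic inference can look like, the minimal sketch below trains a small Bayesian network by variational inference: the weights are Gaussian random variables sampled with the reparameterization trick, and the loss combines a data term with a KL penalty toward a standard-normal prior. This is a generic PyTorch example on toy data; the architecture, hyperparameters and names are assumptions and do not represent the lab's actual models.

```python
# Minimal sketch, assuming a PyTorch setup: a stochastic (Bayesian) layer whose
# weights are Gaussian random variables trained by variational inference.
# Generic illustration only, not the lab's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.w_mu = nn.Parameter(0.1 * torch.randn(n_out, n_in))
        self.w_logvar = nn.Parameter(-5.0 * torch.ones(n_out, n_in))
        self.b = nn.Parameter(torch.zeros(n_out))

    def forward(self, x):
        # Reparameterization trick: sample weights, keep gradients w.r.t. mu/logvar.
        std = torch.exp(0.5 * self.w_logvar)
        w = self.w_mu + std * torch.randn_like(std)
        return F.linear(x, w, self.b)

    def kl(self):
        # KL divergence between q(w) = N(mu, sigma^2) and the prior N(0, 1).
        return 0.5 * torch.sum(self.w_mu ** 2 + self.w_logvar.exp()
                               - 1.0 - self.w_logvar)

# Toy regression: noisy sine data standing in for a sensorimotor mapping.
x = torch.linspace(-3, 3, 200).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

layers = nn.ModuleList([BayesianLinear(1, 32), BayesianLinear(32, 1)])
opt = torch.optim.Adam(layers.parameters(), lr=1e-2)

for step in range(2000):
    h = torch.tanh(layers[0](x))
    pred = layers[1](h)
    nll = F.mse_loss(pred, y, reduction="sum")   # data term
    kl = sum(l.kl() for l in layers)             # complexity term
    loss = nll + 1e-2 * kl                       # weighted negative ELBO
    opt.zero_grad()
    loss.backward()
    opt.step()

# Predictive uncertainty from multiple stochastic forward passes.
with torch.no_grad():
    samples = torch.stack([layers[1](torch.tanh(layers[0](x))) for _ in range(20)])
    print(samples.mean(0).shape, samples.std(0).mean().item())
```

Repeated stochastic forward passes yield predictive uncertainty estimates, which is one reason the probabilistic-inference view is attractive for reactive, real-time control with noisy sensor data.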
Our developments will enable complex online adaptation and skill learning behavior in autonomous systems and will help us gain a better understanding of the meaning and function of the learned features in large neural networks with millions of parameters.