
Invited Talk at the ICDL Conference, Lisbon, Portugal


Title: Experience Replay and Intrinsic Motivation in Neural Motor Skill Learning Models




3 HUMANOIDS Papers Accepted

Rueckert, E.; Nakatenus, M.; Tosatto, S.; Peters, J. (2017). Learning Inverse Dynamics Models in O(n) time with LSTM networks.

Tanneberg, D.; Peters, J.; Rueckert, E. (2017). Efficient Online Adaptation with Stochastic Recurrent Neural Networks.

Stark, S.; Peters, J.; Rueckert, E. (2017). A Comparison of Distance Measures for Learning Nonparametric Motor Skill Libraries.




CoRL Paper accepted

Tanneberg, D.; Peters, J.; Rueckert, E. (2017). Online Learning with Stochastic Recurrent Neural Networks using Intrinsic Motivation Signals. Proceedings of the Conference on Robot Learning (CoRL).




W1 Junior Professorship with Tenure Track at the University of Lübeck

Starting February 1st, 2018, I will work as a professor for robotics at the University of Lübeck.




Invited Talk at the University of Lübeck

Title: Neural models for robot motor skill learning.

Abstract: 

The challenges in understanding human motor control, in brain-machine interfaces,
and in anthropomorphic robotics are currently converging. Modern anthropomorphic
robots, with their compliant actuators and various types of sensors (e.g., depth
and vision cameras, tactile fingertips, full-body skin, proprioception), have
reached the perceptuomotor complexity faced in human motor control and learning.
While outstanding robotic and prosthetic devices exist, current brain-machine
interfaces (BMIs) and robot learning methods have not yet reached the autonomy
and performance needed to enter daily life.
For truly autonomous robotic and prosthetic devices, four major challenges have
to be addressed. These challenges can be grouped under the major area of
Neurorobotics: (1) the decomposition of complex motor skills into basic
primitives organized in complex architectures, (2) the ability to learn from
partially observable, noisy observations of inhomogeneous, high-dimensional
sensor data, (3) the learning of abstract features, generalizable models, and
transferable policies from human demonstrations, sparse rewards, and active
learning, and (4) accurate predictions of self-motions, object dynamics, and
human movements for assistive and cooperative autonomous systems.
My contributions are probabilistic computational models that can be trained on
high-dimensional input streams of neural and artificial data (e.g., action
potentials, movement kinematics, joint forces, EMG signals, tactile readings).
The learned models are evaluated in human motor adaptation experiments and in
robot reaching and balancing tasks. These probabilistic models can be
co-activated and sequenced in time as movement primitives, and can be modulated
by a small set of control parameters to generalize to new tasks. In neural
network implementations, forward and inverse kinematic models are learned
simultaneously and used to generate movement plans for a compliant humanoid
robot. The neural models capture the correlations of the input and can forecast
self-motions or co-workers' intentions, as demonstrated in a recent human
adaptation experiment which showed that postural control precedes and predicts
volitional motor control.



Invited Talk at the Frankfurt Institute for Advanced Studies (FIAS), Germany

Learning to Plan through Reinforcement Learning in Spiking Neural Networks

Abstract: Movement planning is a fundamental skill that is involved in many human motor control tasks. While the hippocampus plays a central role, the functional principles underlying planning are largely unexplored. In this talk, I present a computational model for planning that is derived from theoretical principles of the probabilistic inference framework. Optimal learning rules are inferred, and links to the widely used machine learning techniques of expectation maximization and policy search are established. As a computational model of hippocampal sweeps, we show that the network dynamics are qualitatively similar to transient firing patterns during planning and foraging in the hippocampus of awake behaving rats. In robotic tasks, non-Gaussian hard constraints are modeled, dozens of movement plans are simulated in parallel, and forward and inverse kinematic models are learned simultaneously through interactions with the environment.
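The core idea of planning as probabilistic inference, with many movement plans simulated in parallel, can be illustrated with a minimal sketch. This is a hypothetical toy example (not the spiking network model from the talk): noisy candidate trajectories are sampled in parallel and weighted by the likelihood of their endpoints reaching the goal, yielding an expected plan under that posterior. The function name and all parameters are illustrative assumptions.

```python
import numpy as np

def plan_by_inference(start, goal, n_plans=50, horizon=20, noise=0.3, seed=0):
    """Toy planning-as-inference sketch: sample many noisy movement plans in
    parallel and weight them by a Gaussian likelihood of reaching the goal."""
    rng = np.random.default_rng(seed)
    dim = len(start)
    plans = np.zeros((n_plans, horizon + 1, dim))
    plans[:, 0] = start
    for t in range(horizon):
        # Each plan drifts toward the goal, with additive exploration noise.
        drift = (goal - plans[:, t]) / (horizon - t)
        plans[:, t + 1] = plans[:, t] + drift \
            + noise * rng.standard_normal((n_plans, dim))
    # Posterior weight per plan: Gaussian likelihood of its final state.
    sq_err = np.sum((plans[:, -1] - goal) ** 2, axis=1)
    w = np.exp(-sq_err / (2.0 * noise ** 2))
    w /= w.sum()
    # Expected trajectory under the posterior over sampled plans.
    return np.einsum('k,ktd->td', w, plans)

plan = plan_by_inference(np.zeros(2), np.array([1.0, 1.0]))
```

In this toy version the "inference" is simple importance weighting over sampled trajectories; the model in the talk instead performs the sampling with recurrent spiking network dynamics and learns the underlying models from interaction.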




Invited Talk at the Institute of Neuroinformatics (INI), Zurich, Switzerland

Probabilistic computational models of human motor control for robot learning.




Invited Talk at the Albert-Ludwigs-Universität Freiburg, Germany

Neural models for brain-machine interfaces and anthropomorphic robotics




Journal Paper Accepted at Nature Publishing Group: Scientific Reports.

Rueckert, Elmar; Camernik, Jernej; Peters, Jan; Babic, Jan

Probabilistic Movement Models Show that Postural Control Precedes and Predicts Volitional Motor Control 

Nature Publishing Group: Scientific Reports, 6(28455), 2016.




Journal Paper Accepted at Nature Publishing Group: Scientific Reports.

Rueckert, Elmar; Kappel, David; Tanneberg, Daniel; Pecevski, Dejan; Peters, Jan

Recurrent Spiking Networks Solve Planning Tasks

Nature Publishing Group: Scientific Reports, 6(21142), 2016.