
IROS 2020 – New Horizons for Robot Learning

Abstract

Robot learning combines the challenges of understanding, modeling, and applying dynamical systems with task learning from rewards, through human-robot interaction, or from intrinsic motivation signals. While machine and deep learning have produced outstanding results in robot learning in recent years, current challenges in industrial applications remain underrepresented. The goal of this workshop is to go beyond the discussion of potential industrial applications found in related past workshops.

These topics were discussed with Pieter Abbeel, Dileep George, Sergey Levine, Jan Peters, Freek Stulp, Marc Toussaint, Patrick van der Smagt, and Georg von Wichert.

Links

Details about the workshop, the speakers, and links to slides can be found on the workshop webpage.

 




NeurIPS 2019 – Robot open-Ended Autonomous Learning

NeurIPS 2019 Competition Track

Open-ended learning aims to build learning machines and robots that are able to acquire skills and knowledge incrementally within a given environment. This competition addresses autonomous open-ended learning with a focus on simulated robot systems that: (a) acquire a sensorimotor competence that allows them to interact with objects and physical environments; (b) learn in a fully autonomous way, i.e., with no human intervention (e.g., no tasks or reward functions), on the basis of mechanisms such as curiosity, intrinsic motivations, task-free reinforcement learning, self-generated goals, and any other mechanism that might support autonomous learning. The competition challenge will feature two phases: during an initial "intrinsic phase" the system will have a fixed amount of time to freely explore and learn in an environment containing multiple objects; then, during an "extrinsic phase", the quality of the autonomously acquired knowledge will be measured on tasks unknown at design time and during the intrinsic phase.
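The two-phase structure can be made concrete with a short sketch. The code below only illustrates the protocol and is not the official competition API: the ToyEnv, the RandomAgent, the step budget, and the distance-based score are assumptions made purely for this example.

```python
# Illustrative sketch of the intrinsic/extrinsic two-phase protocol described above.
# ToyEnv, RandomAgent, and the scoring rule are placeholders, not the competition API.
import random


class ToyEnv:
    """Minimal 1-D stand-in for the simulated robot environment (illustrative only)."""

    def __init__(self):
        self.state = 0.0

    def reset(self):
        self.state = 0.0
        return self.state

    def step(self, action):
        self.state += action
        return self.state

    def distance_to(self, goal):
        return abs(self.state - goal)


class RandomAgent:
    """Placeholder learner; a real entry would plug in curiosity, self-generated
    goals, or another intrinsic-motivation mechanism here."""

    def act(self, observation):
        return random.uniform(-1.0, 1.0)       # exploratory motor command

    def learn(self, observation, action, next_observation):
        pass                                    # update internal models / skills

    def solve(self, observation, goal):
        return random.uniform(-1.0, 1.0)       # act towards an externally given goal


def intrinsic_phase(env, agent, budget_steps):
    """Free exploration: no tasks and no external rewards are provided."""
    obs = env.reset()
    for _ in range(budget_steps):
        action = agent.act(obs)
        next_obs = env.step(action)             # any reward channel is deliberately ignored
        agent.learn(obs, action, next_obs)
        obs = next_obs


def extrinsic_phase(env, agent, goals, episode_length=100):
    """Score the autonomously acquired knowledge on goals unknown beforehand."""
    scores = []
    for goal in goals:
        obs = env.reset()
        for _ in range(episode_length):
            obs = env.step(agent.solve(obs, goal))
        scores.append(env.distance_to(goal))    # lower is better in this toy setup
    return sum(scores) / len(scores)


if __name__ == "__main__":
    env, agent = ToyEnv(), RandomAgent()
    intrinsic_phase(env, agent, budget_steps=1000)
    print("mean final distance:", extrinsic_phase(env, agent, goals=[1.0, -2.0, 0.5]))
```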

Links

Details on the competition can be found on the project webpage.

 

Publications

2020

Cartoni, E.; Mannella, F.; Santucci, V. G.; Triesch, J.; Rueckert, E.; Baldassarre, G.

REAL-2019: Robot open-Ended Autonomous Learning competition

In: Proceedings of Machine Learning Research, vol. 123, pp. 142-152, 2020, (NeurIPS 2019 Competition and Demonstration Track).






NIPS 2016 – Neurorobotics: A Chance for New Ideas, Algorithms and Approaches

Abstract

Modern robots are complex machines with many compliant actuators and various types of sensors, including depth and vision cameras, tactile electrodes, and dozens of proprioceptive sensors. The obvious challenges are to process these high-dimensional input patterns, to memorize low-dimensional representations of them, and to generate the desired motor commands for interacting with dynamically changing environments. Similar challenges exist in brain-machine interfaces (BMIs), where complex prostheses with perceptual feedback are controlled, or in motor neuroscience, where cognitive features additionally need to be considered. Despite this broad research overlap, developments have happened mainly in parallel and have rarely been ported to or exploited in the related domains. The main bottleneck for collaborative studies has been a lack of interaction between the core robotics, machine learning, and neuroscience communities.
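As a deliberately simplified illustration of this pipeline, the sketch below compresses high-dimensional sensor readings into a low-dimensional representation and maps that representation to motor commands. The dimensions, the PCA-style encoder, and the random linear policy are illustrative assumptions, not methods presented at the workshop.

```python
# Toy sensor-to-motor pipeline: encode high-dimensional input into a low-dimensional
# latent representation, then map the latent state to motor commands.
import numpy as np

rng = np.random.default_rng(0)

# Fake proprioceptive/tactile/vision features: 200 time steps x 1000 dimensions.
sensor_data = rng.normal(size=(200, 1000))

# 1) Learn a low-dimensional representation (PCA via SVD, 10 latent dimensions).
mean = sensor_data.mean(axis=0)
_, _, vt = np.linalg.svd(sensor_data - mean, full_matrices=False)
encoder = vt[:10].T                        # (1000, 10) projection matrix


def encode(x):
    """Map a raw sensor vector to the 10-dimensional latent representation."""
    return (x - mean) @ encoder


# 2) Map the latent state to motor commands (7-DoF arm, linear policy as a placeholder).
policy = rng.normal(scale=0.1, size=(10, 7))


def motor_command(x):
    return np.tanh(encode(x) @ policy)     # bounded torques/velocities

print(motor_command(sensor_data[0]).shape)  # -> (7,)
```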

Link

Link to the workshop description at the NIPS webpage.

Agenda and Speakers

Day One, Neurorobotics WS, Fri Dec 9th 2016
  14.20-14.30 Introduction by Elmar Rueckert and Martin Riedmiller  
Session One: Reinforcement Learning, Imitation, and Active Learning
1 14.30-15.00 Juergen Schmidhuber (Scientific Director of the Swiss AI Lab IDSIA)  
  15.00-15.30 Posters and Coffee  
2 15.30-16.00 Sergey Levine (University of California, Berkeley)  
3 16.00-16.30 Pieter Abbeel (University of California, Berkeley)  
4 16.30-17.00 Johanni Brea (École polytechnique fédérale de Lausanne, EPFL)  
  17.00-17.20 Posters and Coffee  
5 17.20-17.45 Paul Schrater (University of Minnesota)  
6 17.45-18.10 Frank Hutter (University of Freiburg)
7 18.10-18.35 Raia Hadsell (Google DeepMind)  
  18.35-19.00 Panel Discussion, Session One
Day Two, Neurorobotics WS, Sat Dec 10th 2016
Session One: Reinforcement Learning, Imitation, and Active Learning
  08.30-08.35 Introduction by Elmar Rueckert and Martin Riedmiller  
8 08.35-09.05 Robert Legenstein (Graz University of Technology)  
9 09.05-09.35 Sylvain Calinon (Idiap Research Institute, EPFL Lausanne)
10 09.35-10.05 Chelsea Finn (University of California, Berkeley)  
11 10.05-10.35 Peter Stone (University of Texas at Austin)  
  10.35-11.00 Posters and Coffee  
12 11.00-11.30 Paul Verschure (Catalan Institute of Advanced Research)  
Session Two: Model Representations and Features
13 11.30-12.00 Tobi Delbrück (University of Zurich and ETH Zurich)  
14 12.00-12.30 Moritz Grosse-Wentrup (Max Planck Institute Tübingen)
15 12.30-13.00 Kristian Kersting (Technische Universität Dortmund)  
  13.00-14.00 Lunch break  
Session Three: Feedback and Control
16 14.00-14.30 Emo Todorov (University of Washington)  
17 14.30-15.00 Richard Sutton (University of Alberta)  
  15.00-15.30 Posters and Coffee  
18 15.30-16.00 Bert Kappen (Radboud University)  
19 16.00-16.30 Jean-Pascal Pfister (University of Zurich and ETH Zurich)  
  16.30-17.00 Posters and Coffee  
20 17.00-17.30 Jan Babič (Jožef Stefan Institute, Ljubljana)
21 17.30-18.00 Martin Giese (University Clinic Tübingen)  
  18.00-18.30 Panel Discussion, Session Two and Session Three

Accepted Workshop Papers

  • Kyuhwa Lee, Ruslan Aydarkhanov, Luca Randazzo and José Millán. Neural Decoding of Continuous Gait Imagery from Brain Signals. (ID 2)
  • Aviv Tamar, Garrett Thomas, Tianhao Zhang, Sergey Levine and Pieter Abbeel. Episodic MPC Improvement with the Hindsight Plan. (ID 11)
  • Jim Mainprice, Arunkumar Byravan, Daniel Kappler, Dieter Fox, Stefan Schaal and Nathan Ratliff. Functional manifold projections in Deep-LEARCH. (ID 12)
  • Nutan Chen, Maximilian Karl and Patrick van der Smagt. Dynamic Movement Primitives in Latent Space of Time-Dependent Variational Autoencoders. (ID 1)
  • Alexander Gabriel, Riad Akrour and Gerhard Neumann. Empowered Skills. (ID 7)
  • Dieter Buechler, Roberto Calandra and Jan Peters. Modeling Variability of Musculoskeletal Systems with Heteroscedastic Gaussian Processes. (ID 10)
  • David Sharma, Daniel Tanneberg, Moritz Grosse-Wentrup, Jan Peters and Elmar Rueckert. Adapting Brain Signals with Reinforcement Learning Strategies for Brain Computer Interfaces. (ID 16)
  • Dmytro Velychko, Benjamin Knopp and Dominik Endres. The Variational Coupled Gaussian Process Dynamical Model. (ID 5)
  • Felix End, Riad Akrour and Gerhard Neumann. Layered Direct Policy Search for Learning Hierarchical Skills. (ID 6)
  • Erwan Renaudo, Benoît Girard, Raja Chatila and Mehdi Khamassi. Bio-inspired habit learning in a robotic architecture. (ID 9)

Organizers


Elmar Rueckert is a postdoctoral scholar at the Intelligent Autonomous Systems (IAS) lab headed by Jan Peters. He has strong expertise in learning spiking neural networks, probabilistic planning, and robot control. Before joining IAS in 2014, he was with the Institute for Theoretical Computer Science at Graz University of Technology, where he received his Ph.D. under the supervision of Wolfgang Maass. His thesis, "On Biologically inspired motor skill learning in robotics through probabilistic inference", concentrated on probabilistic inference for motor skill learning and on learning biologically inspired movement representations.

Martin Riedmiller joined Google DeepMind in 2015 as a research scientist. He received a Diploma in Computer Science in 1992 and a PhD on Self-learning Neural Controllers in 1996 from the University of Karlsruhe. He has been a professor at TU Dortmund (2002), the University of Osnabrück (2003-2009), and the University of Freiburg (2009-2015), where he headed the Machine Learning Lab. His general research interest is applying machine learning techniques to interesting real-world problems. His RoboCup team Brainstormers won five international competitions in the 2D Simulation and Middle Size leagues.