NIPS 2016 – Neurorobotics: A Chance for New Ideas, Algorithms and Approaches

Abstract

Modern robots are complex machines with many compliant actuators and various types of sensors, including depth and vision cameras, tactile electrodes, and dozens of proprioceptive sensors. The obvious challenges are to process these high-dimensional input patterns, to memorize low-dimensional representations of them, and to generate the desired motor commands for interacting with dynamically changing environments. Similar challenges exist in brain-machine interfaces (BMIs), where complex prostheses with perceptual feedback are controlled, and in motor neuroscience, where cognitive features need to be considered in addition. Despite this broad research overlap, developments in these fields have happened largely in parallel and have rarely been ported to or exploited in the related domains. The main bottleneck for collaborative studies has been the lack of interaction between the core robotics, machine learning, and neuroscience communities.

Agenda and Speakers

Day One, Neurorobotics WS, Fri Dec 9th 2016
  14.20-14.30 Introduction by Elmar Rueckert and Martin Riedmiller  
Session One: Reinforcement Learning, Imitation, and Active Learning
1 14.30-15.00 Juergen Schmidhuber (Scientific Director of the Swiss AI Lab IDSIA)  
  15.00-15.30 Posters and Coffee  
2 15.30-16.00 Sergey Levine (University of California, Berkeley)  
3 16.00-16.30 Pieter Abbeel (University of California, Berkeley)  
4 16.30-17.00 Johanni Brea (École polytechnique fédérale de Lausanne, EPFL)  
  17.00-17.20 Posters and Coffee  
5 17.20-17.45 Paul Schrater (University of Minnesota)  
6 17.45-18.10 Frank Hutter (University of Freiburg)  
7 18.10-18.35 Raia Hadsell (Google DeepMind)  
  18.35-19.00 Panel Discussion, Session One
Day Two, Neurorobotics WS, Sat Dec 10th 2016
Session One: Reinforcement Learning, Imitation, and Active Learning
  08.30-08.35 Introduction by Elmar Rueckert and Martin Riedmiller  
8 08.35-09.05 Robert Legenstein (Graz University of Technology)  
9 09.05-09.35 Sylvain Calinon (Idiap Research Institute, EPFL Lausanne)  
10 09.35-10.05 Chelsea Finn (University of California, Berkeley)  
11 10.05-10.35 Peter Stone (University of Texas at Austin)  
  10.35-11.00 Posters and Coffee  
12 11.00-11.30 Paul Verschure (Catalan Institution for Research and Advanced Studies, ICREA)  
Session Two: Model Representations and Features
13 11.30-12.00 Tobi Delbrück (University of Zurich and ETH Zurich)  
14 12.00-12.30 Moritz Grosse-Wentrup (Max Planck Institute Tuebingen)  
15 12.30-13.00 Kristian Kersting (Technische Universität Dortmund)  
  13.00-14.00 Lunch break  
Session Three: Feedback and Control
16 14.00-14.30 Emo Todorov (University of Washington)  
17 14.30-15.00 Richard Sutton (University of Alberta)  
  15.00-15.30 Posters and Coffee  
18 15.30-16.00 Bert Kappen (Radboud University)  
19 16.00-16.30 Jean-Pascal Pfister (University of Zurich and ETH Zurich)  
  16.30-17.00 Posters and Coffee  
20 17.00-17.30 Jan Babic (Jožef Stefan Institute, Ljubljana)  
21 17.30-18.00 Martin Giese (University Clinic Tübingen)  
  18.00-18.30 Panel Discussion, Session Two and Session Three
 

Accepted Workshop Papers

  • Kyuhwa Lee, Ruslan Aydarkhanov, Luca Randazzo and José Millán. Neural Decoding of Continuous Gait Imagery from Brain Signals. (ID 2)
  • Aviv Tamar, Garrett Thomas, Tianhao Zhang, Sergey Levine and Pieter Abbeel. Episodic MPC Improvement with the Hindsight Plan. (ID 11)
  • Jim Mainprice, Arunkumar Byravan, Daniel Kappler, Dieter Fox, Stefan Schaal and Nathan Ratliff. Functional manifold projections in Deep-LEARCH. (ID 12)
  • Nutan Chen, Maximilian Karl and Patrick van der Smagt. Dynamic Movement Primitives in Latent Space of Time-Dependent Variational Autoencoders. (ID 1)
  • Alexander Gabriel, Riad Akrour and Gerhard Neumann. Empowered Skills. (ID 7)
  • Dieter Buechler, Roberto Calandra and Jan Peters. Modeling Variability of Musculoskeletal Systems with Heteroscedastic Gaussian Processes. (ID 10)
  • David Sharma, Daniel Tanneberg, Moritz Grosse-Wentrup, Jan Peters and Elmar Rueckert. Adapting Brain Signals with Reinforcement Learning Strategies for Brain Computer Interfaces. (ID 16)
  • Dmytro Velychko, Benjamin Knopp and Dominik Endres. The Variational Coupled Gaussian Process Dynamical Model. (ID 5)
  • Felix End, Riad Akrour and Gerhard Neumann. Layered Direct Policy Search for Learning Hierarchical Skills. (ID 6)
  • Erwan Renaudo, Benoît Girard, Raja Chatila and Mehdi Khamassi. Bio-inspired habit learning in a robotic architecture. (ID 9)

Organizer

Elmar Rueckert is a postdoctoral scholar at the Intelligent Autonomous Systems (IAS) lab headed by Jan Peters. He has strong expertise in learning spiking neural networks, probabilistic planning, and robot control. Before joining IAS in 2014, he was with the Institute for Theoretical Computer Science at Graz University of Technology, where he received his Ph.D. under the supervision of Wolfgang Maass. His thesis, “On Biologically Inspired Motor Skill Learning in Robotics through Probabilistic Inference”, concentrated on probabilistic inference for motor skill learning and on learning biologically inspired movement representations.

 

Martin Riedmiller joined Google DeepMind as a research scientist in 2015. He received a Diploma in Computer Science in 1992 and a PhD on self-learning neural controllers in 1996, both from the University of Karlsruhe. He has been a professor at TU Dortmund (2002), the University of Osnabrück (2003-2009), and the University of Freiburg (2009-2015), where he headed the Machine Learning Lab. His general research interest is applying machine learning techniques to interesting real-world problems. His RoboCup team Brainstormers won five international competitions in the 2D Simulation and Middle Size leagues.

 
