Author: Elmar Rueckert
Learning Multimodal Solutions with Movement Primitives
Video
You may use this video for research and teaching purposes. Please cite the Chair of Cyber-Physical-Systems or the corresponding research paper.
Publications
2015
Rueckert, Elmar; Mundo, Jan; Paraschos, Alexandros; Peters, Jan; Neumann, Gerhard: Extracting Low-Dimensional Control Variables for Movement Primitives. In: Proceedings of the International Conference on Robotics and Automation (ICRA), 2015.
Dynamic Control of a CableBot
Building a CableBot and Learning the Dynamics Model and the Controller
Controlling cable-driven master-slave robots is a challenging task. Fast and precise motion planning requires stabilizing struts, which are disruptive elements in robot-assisted surgeries. In this work, we study parallel kinematics with an active deceleration mechanism that does not require any hindering struts for stabilization.
Reinforcement learning is used to learn control gains and model parameters that allow for fast and precise robot motions without overshooting. The developed mechanical design, as well as the controller optimization framework based on learning, can improve the motion and tracking performance of many widely used cable-driven master-slave robots in surgical robotics.
Project Consortium
- Montanuniversität Leoben
Related Work
Yuan, H.; Courteille, E.; Deblaise, D. (2015). Static and dynamic stiffness analyses of cable-driven parallel robots with non-negligible cable mass and elasticity. Mechanism and Machine Theory, Elsevier.
Khosravi, M. A.; Taghirad, H. D. (2011). Dynamic analysis and control of cable driven robots with elastic cables. Transactions of the Canadian Society for Mechanical Engineering, 35(4): 543-557.
Publications
2019
Rueckert, Elmar; Jauer, Philipp; Derksen, Alexander; Schweikard, Achim: Dynamic Control Strategies for Cable-Driven Master Slave Robots. In: Keck, Tobias (Ed.): Proceedings on Minimally Invasive Surgery, Luebeck, Germany, January 24-25, 2019.
Active transfer learning with neural networks through human-robot interactions (TRAIN)
DFG Project 07/2020-01/2025
In our vision, autonomous robots interact with humans at industrial sites, in health care, or in our homes while managing the household. From a technical perspective, all of these application domains require robots to process large amounts of noisy sensor observations during the execution of thousands of different motor and manipulation skills. From the perspective of many users, programming these skills manually, or using recent learning approaches that are mostly operable only by experts, is not feasible for bringing intelligent autonomous systems into tasks of everyday life.
In this project, we aim at improving robot skill learning with deep networks by considering human feedback and guidance. The human teacher rates different transfer learning strategies in the artificial neural network to improve the learning of novel skills by optimally exploiting existing encoded knowledge. Neural networks are ideally suited for this task, as we can gradually increase the number of transferred parameters and can even transition from transferring task-specific knowledge to transferring abstract features encoded in deeper layers. To study this systematically, we evaluate subjective feedback and physiological data from user experiments and elaborate assessment criteria that enable the development of human-oriented transfer learning methods.

In two main experiments, we first investigate how users experience transfer learning and then examine the influence of shared autonomy between humans and robots. This will result in a methodical robot skill learning framework that adapts to the users' needs, e.g., by adjusting the degree of autonomy of the robot to laymen's requirements. Even though we evaluate the learning framework on pick-and-place tasks with anthropomorphic robot arms, our results will be transferable to a broad range of human-robot interaction scenarios, including collaborative manipulation tasks in production and assembly, as well as advanced controls for rehabilitation and household robots.
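The gradual parameter-transfer scheme described above can be sketched as layer-wise weight copying with frozen indices. The snippet below is a hypothetical NumPy illustration; the networks, sizes, and the `transfer` helper are invented for this sketch and are not the project's implementation:

```python
import numpy as np

def make_mlp(layer_sizes, rng):
    """Random weight matrices for a simple MLP (biases omitted for brevity)."""
    return [rng.standard_normal((m, n)) * 0.1
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def transfer(source, target, n_layers):
    """Copy the first n_layers weight matrices from a trained source network
    into a fresh target network and return the indices to keep frozen.
    Varying n_layers moves between transferring only a few parameters and
    reusing most of the encoded knowledge, including deeper-layer features."""
    frozen = list(range(n_layers))
    new_target = [source[i].copy() if i in frozen else target[i].copy()
                  for i in range(len(target))]
    return new_target, frozen

rng = np.random.default_rng(0)
src = make_mlp([4, 16, 16, 2], rng)   # pretrained skill network (hypothetical)
tgt = make_mlp([4, 16, 16, 2], rng)   # network for the novel skill
tgt, frozen = transfer(src, tgt, n_layers=2)
assert all(np.array_equal(tgt[i], src[i]) for i in frozen)
```

In an interactive setting, a human rating would then select among candidate values of `n_layers` before fine-tuning the remaining layers.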
Project Consortium
- Friedrich-Alexander-Universität Erlangen-Nürnberg
- Montanuniversität Leoben
Links
Details on the research project can be found on the project webpage.
Publications
2021
Tanneberg, Daniel; Ploeger, Kai; Rueckert, Elmar; Peters, Jan: SKID RAW: Skill Discovery from Raw Trajectories. In: IEEE Robotics and Automation Letters (RA-L), pp. 1-8, 2021, ISSN: 2377-3766.
Jamsek, Marko; Kunavar, Tjasa; Bobek, Urban; Rueckert, Elmar; Babic, Jan: Predictive exoskeleton control for arm-motion augmentation based on probabilistic movement primitives combined with a flow controller. In: IEEE Robotics and Automation Letters (RA-L), pp. 1-8, 2021, ISSN: 2377-3766.
Cansev, Mehmet Ege; Xue, Honghu; Rottmann, Nils; Bliek, Adna; Miller, Luke E.; Rueckert, Elmar; Beckerle, Philipp: Interactive Human-Robot Skill Transfer: A Review of Learning Methods and User Experience. In: Advanced Intelligent Systems, pp. 1-28, 2021.
2020
Rottmann, N.; Kunavar, T.; Babič, J.; Peters, J.; Rueckert, E.: Learning Hierarchical Acquisition Functions for Bayesian Optimization. In: International Conference on Intelligent Robots and Systems (IROS 2020), 2020.
Xue, H.; Boettger, S.; Rottmann, N.; Pandya, H.; Bruder, R.; Neumann, G.; Schweikard, A.; Rueckert, E.: Sample-Efficient Covariance Matrix Adaptation Evolutional Strategy via Simulated Rollouts in Neural Networks. In: International Conference on Advances in Signal Processing and Artificial Intelligence (ASPAI 2020), 2020.
Vedant Dave, M.Sc.
Ph.D. Student at the Montanuniversität Leoben
Short bio: Mr. Vedant Dave started at CPS on 23rd September 2021.
He received his Master's degree in Automation and Robotics from Technische Universität Dortmund in 2021 with a study focus on Robotics and Artificial Intelligence. His thesis, entitled "Model-agnostic Reinforcement Learning Solution for Autonomous Programming of Robotic Motion", was carried out at Mercedes-Benz AG. In the thesis, he applied reinforcement learning to the motion planning of manipulators in complex environments. Before that, he completed a research internship at the Bosch Center for Artificial Intelligence, where he worked on Probabilistic Movement Primitives on Riemannian Manifolds.
Research Interests
- Information Theoretic Reinforcement Learning
- Robust Multimodal Representation Learning
- Unsupervised Skill Discovery
- Movement Primitives
Research Videos
Contact & Quick Links
M.Sc. Vedant Dave
Doctoral Student supervised by Univ.-Prof. Dr. Elmar Rueckert.
Montanuniversität Leoben
Franz-Josef-Straße 18,
8700 Leoben, Austria
Phone: +43 3842 402 – 1903
Email: vedant.dave@unileoben.ac.at
Web Work: CPS-Page
Chat: WEBEX
Publications
2025
Vanjani, Pankhuri; Mattes, Paul; Li, Maximilian Xiling; Dave, Vedant; Lioutikov, Rudolf: DisDP: Robust Imitation Learning via Disentangled Diffusion Policies. In: Reinforcement Learning Conference (RLC), Reinforcement Learning Journal, Forthcoming.
Dave, Vedant; Rueckert, Elmar: Skill Disentanglement in Reproducing Kernel Hilbert Space. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pp. 16153-16162, 2025. Unsupervised Skill Discovery aims at learning diverse skills without any extrinsic rewards and leveraging them as priors for learning a variety of downstream tasks. Existing approaches to unsupervised reinforcement learning typically discover skills through empowerment-driven techniques or by maximizing entropy to encourage exploration. However, this mutual information objective often results either in static skills that discourage exploration or in maximal coverage at the expense of non-discriminable skills. Instead of focusing only on maximizing bounds on f-divergence, we combine it with Integral Probability Metrics to maximize the distance between distributions, promoting behavioural diversity and enforcing disentanglement. Our method, Hilbert Unsupervised Skill Discovery (HUSD), provides an additional objective that seeks exploration and separability of state-skill pairs by maximizing the Maximum Mean Discrepancy between the joint distribution of skills and states and the product of their marginals in a Reproducing Kernel Hilbert Space. Our results on the Unsupervised RL Benchmark show that HUSD outperforms previous exploration algorithms on state-based tasks.
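The core quantity in the HUSD objective above, the Maximum Mean Discrepancy, has a simple empirical estimator. The snippet below is a generic NumPy sketch of a biased MMD² estimate with an RBF kernel; the function names and the kernel bandwidth are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian RBF kernel matrix between the rows of x and the rows of y."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased empirical estimate of squared Maximum Mean Discrepancy between
    the distributions that generated samples x and y; equals the squared RKHS
    distance between the two empirical kernel mean embeddings."""
    return (rbf_kernel(x, x, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean()
            + rbf_kernel(y, y, sigma).mean())

rng = np.random.default_rng(1)
same = mmd2(rng.standard_normal((200, 2)), rng.standard_normal((200, 2)))
shifted = mmd2(rng.standard_normal((200, 2)), rng.standard_normal((200, 2)) + 3.0)
assert shifted > same  # distinguishable distributions give a larger MMD
```

In the paper's setting, `x` and `y` would be samples from the joint state-skill distribution and from the product of its marginals, respectively.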
Nwankwo, Linus; Ellensohn, Bjoern; Dave, Vedant; Hofer, Peter; Forstner, Jan; Villneuve, Marlene; Galler, Robert; Rueckert, Elmar: EnvoDat: A Large-Scale Multisensory Dataset for Robotic Spatial Awareness and Semantic Reasoning in Heterogeneous Environments. In: IEEE International Conference on Robotics and Automation (ICRA 2025), 2025.
2024
Dave, Vedant; Rueckert, Elmar: Denoised Predictive Imagination: An Information-theoretic approach for learning World Models. In: European Workshop on Reinforcement Learning (EWRL), 2024. Humans excel at isolating relevant information from noisy data to predict the behavior of dynamic systems, effectively disregarding non-informative, temporally correlated noise. In contrast, existing reinforcement learning algorithms face challenges in generating noise-free predictions within high-dimensional, noise-saturated environments, especially when trained on world models featuring realistic background noise extracted from natural video streams. We propose a novel information-theoretic approach that learns world models by minimizing past information and retaining maximal information about the future, aiming to simultaneously learn control policies and produce denoised predictions. Utilizing Soft Actor-Critic agents augmented with an information-theoretic auxiliary loss, we validate our method's effectiveness on complex variants of the standard DeepMind Control Suite tasks, where natural videos filled with intricate and task-irrelevant information serve as a background. Experimental results demonstrate that our model outperforms nine state-of-the-art approaches in various settings where natural videos serve as dynamic background noise. Our analysis also reveals that all these methods encounter challenges in more complex environments.
Lygerakis, Fotios; Dave, Vedant; Rueckert, Elmar M2CURL: Sample-Efficient Multimodal Reinforcement Learning via Self-Supervised Representation Learning for Robotic Manipulation Proceedings Article In: IEEE International Conference on Ubiquitous Robots (UR 2024), IEEE 2024. @inproceedings{Lygerakis2024, | ![]() |
Lygerakis, Fotios; Dave, Vedant; Rueckert, Elmar: M2CURL: Sample-Efficient Multimodal Reinforcement Learning via Self-Supervised Representation Learning for Robotic Manipulation. In: IEEE International Conference on Ubiquitous Robots (UR 2024), IEEE, 2024.
Dave*, Vedant; Lygerakis*, Fotios; Rueckert, Elmar: Multimodal Visual-Tactile Representation Learning through Self-Supervised Contrastive Pre-Training. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 8013-8020, IEEE, 2024, ISBN: 979-8-3503-8457-4 (* equal contribution). The rapidly evolving field of robotics necessitates methods that can facilitate the fusion of multiple modalities. Specifically, when it comes to interacting with tangible objects, effectively combining visual and tactile sensory data is key to understanding and navigating the complex dynamics of the physical world, enabling a more nuanced and adaptable response to changing environments. Nevertheless, much of the earlier work in merging these two sensory modalities has relied on supervised methods utilizing datasets labeled by humans. This paper introduces MViTac, a novel methodology that leverages contrastive learning to integrate vision and touch sensations in a self-supervised fashion. By availing both sensory inputs, MViTac leverages intra- and inter-modality losses for learning representations, resulting in enhanced material property classification and more adept grasping prediction. Through a series of experiments, we showcase the effectiveness of our method and its superiority over existing state-of-the-art self-supervised and supervised techniques. In evaluating our methodology, we focus on two distinct tasks: material classification and grasping success prediction. Our results indicate that MViTac facilitates the development of improved modality encoders, yielding more robust representations as evidenced by linear probing assessments. Project page: https://sites.google.com/view/mvitac/home
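The self-supervised inter-modality objective described above can be sketched as an InfoNCE-style contrastive loss over paired visual and tactile embeddings. This is a generic illustration with invented names and shapes, not the MViTac code:

```python
import numpy as np

def info_nce(vision_emb, tactile_emb, temperature=0.1):
    """Generic InfoNCE loss: matched visual/tactile embeddings (same row index)
    are pulled together, all other pairings in the batch are pushed apart."""
    v = vision_emb / np.linalg.norm(vision_emb, axis=1, keepdims=True)
    t = tactile_emb / np.linalg.norm(tactile_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature                  # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))             # matched pairs on the diagonal

rng = np.random.default_rng(2)
emb = rng.standard_normal((8, 16))
aligned = info_nce(emb, emb)                        # perfectly aligned modalities
mismatched = info_nce(emb, rng.standard_normal((8, 16)))
assert aligned < mismatched
```

An intra-modality loss would apply the same construction to two augmented views of the same modality.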
2022
Dave, Vedant; Rueckert, Elmar: Can we infer the full-arm manipulation skills from tactile targets? In: Advances in Close Proximity Human-Robot Collaboration Workshop, International Conference on Humanoid Robots (Humanoids), 2022. Tactile sensing provides significant information about the state of the environment for performing manipulation tasks. Manipulation skills depend on the desired initial contact points between the object and the end-effector. Based on the physical properties of the object, this contact results in distinct tactile responses. We propose Tactile Probabilistic Movement Primitives (TacProMPs) to learn a highly non-linear relationship between desired tactile responses and the full-arm movement, where we condition solely on the tactile responses to infer complex manipulation skills. We use a Gaussian mixture model of primitives to address the multimodality in demonstrations. We demonstrate the performance of our method in challenging real-world scenarios.
Dave, Vedant; Rueckert, Elmar: Predicting full-arm grasping motions from anticipated tactile responses. In: International Conference on Humanoid Robots (Humanoids), pp. 464-471, IEEE, 2022, ISBN: 979-8-3503-0979-9. Tactile sensing provides significant information about the state of the environment for performing manipulation tasks. Depending on the physical properties of the object, manipulation tasks can exhibit large variation in their movements. For a grasping task, the movement of the arm and of the end effector varies depending on the point of contact on the object, especially if the object is non-homogeneous in hardness and/or has an uneven geometry. In this paper, we propose Tactile Probabilistic Movement Primitives (TacProMPs) to learn a highly non-linear relationship between the desired tactile responses and the full-arm movement, conditioning solely on the tactile responses to infer the complex manipulation skills. We formulate a joint trajectory of full-arm joints with tactile data, leverage the model to condition on the desired tactile response from the non-homogeneous object, and infer the full-arm (7-dof Panda arm and 19-dof gripper hand) motion. We use a Gaussian mixture model of primitives to address the multimodality in demonstrations. We also show that measurement noise adjustment must be taken into account because multiple systems work in collaboration. We validate and show the robustness of the approach through two experiments. First, we consider an object with non-uniform hardness, where grasping from different locations requires different motions and results in different tactile responses. Second, we grasp an object with homogeneous hardness using widely varying grasping configurations. Our results show that TacProMPs can successfully model complex multimodal skills and generalise to new situations.
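Conditioning a probabilistic movement primitive on a desired tactile response reduces to Gaussian conditioning on a joint distribution over tactile and trajectory dimensions. The sketch below shows that operation in plain NumPy; the dimensions, names, and noise value are illustrative assumptions rather than the TacProMP implementation:

```python
import numpy as np

def condition_gaussian(mu, sigma, idx, target, obs_noise=1e-4):
    """Condition a joint Gaussian N(mu, sigma) on observing the entries `idx`
    (e.g. tactile dimensions) at `target`, returning the posterior over the
    remaining entries (e.g. full-arm trajectory weights)."""
    idx = np.asarray(idx)
    rest = np.setdiff1d(np.arange(len(mu)), idx)
    s_oo = sigma[np.ix_(idx, idx)] + obs_noise * np.eye(len(idx))
    s_ro = sigma[np.ix_(rest, idx)]
    gain = s_ro @ np.linalg.inv(s_oo)               # Kalman-style gain
    mu_post = mu[rest] + gain @ (target - mu[idx])
    sigma_post = sigma[np.ix_(rest, rest)] - gain @ s_ro.T
    return mu_post, sigma_post

# Toy joint distribution over [tactile (2 dims), arm weights (3 dims)]
rng = np.random.default_rng(3)
a = rng.standard_normal((5, 5))
sigma = a @ a.T + 0.5 * np.eye(5)                  # positive definite covariance
mu = np.zeros(5)
mu_post, sigma_post = condition_gaussian(mu, sigma, idx=[0, 1],
                                         target=np.array([0.3, -0.2]))
assert mu_post.shape == (3,)
assert np.all(np.linalg.eigvalsh(sigma_post) > -1e-9)  # still a valid covariance
```

With a mixture of primitives, the same conditioning would be applied per component, with mixture weights re-scored by the likelihood of the tactile target.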
Rozo, Leonel*; Dave, Vedant*: Orientation Probabilistic Movement Primitives on Riemannian Manifolds. In: Conference on Robot Learning (CoRL), pp. 11, 2022 (* equal contribution). Learning complex robot motions necessarily demands models that can encode and retrieve full-pose trajectories when tasks are defined in operational spaces. Probabilistic movement primitives (ProMPs) stand out as a principled approach that models trajectory distributions learned from demonstrations. ProMPs allow for trajectory modulation and blending to achieve better generalization to novel situations. However, when ProMPs are employed in operational space, their original formulation does not directly apply to full-pose movements including rotational trajectories described by quaternions. This paper proposes a Riemannian formulation of ProMPs that enables encoding and retrieving of quaternion trajectories. Our method builds on Riemannian manifold theory and exploits multilinear geodesic regression for estimating the ProMP parameters. This novel approach makes ProMPs a suitable model for learning complex full-pose robot motion patterns. Riemannian ProMPs are tested on toy examples to illustrate their workflow, and on real learning-from-demonstration experiments.
Linus Nwankwo, M.Sc.
Short Bio
Mr. Linus Nwankwo started his PhD studies at CPS in 2021. Prior to joining CPS, he interned at the Department of Electrical and Computer Engineering, Technische Universität Kaiserslautern, Germany. In 2020, he earned his M.Sc. degree in Automation and Robotics, with a speciality in control for Green Mechatronics (GreeM), at the University of Bourgogne-Franche-Comté (UBFC), France.
His current research focuses on SLAM and the application of supervised learning models for environment-resilient robot autonomy and spatial awareness. He also works on grounding foundation models (LLMs & multi-modal VLMs) to enable autonomous agents to interact with their environments and perform long-horizon tasks in a manner akin to human cognition.
Research Interests
- Robotic Spatial Awareness
- Robust and egocentric SLAM methods.
- Path planning and autonomous navigation methods.
- Environment-aware perception and robot autonomy in heterogeneous indoor-outdoor and subterranean environments.
- Machine Learning and Human-Robot Interaction (HRI)
- Grounding free-form natural language instructions into robotic affordances.
- LLMs and VLMs for effective natural language-conditioned HRI in the real world.
- Intention- and social-aware planning for social service robot navigation.
Research Videos
Contacts
M.Sc. Linus Nwankwo
Doctoral Student supervised by Univ.-Prof. Dr. Elmar Rueckert since August 2021.
Montanuniversität Leoben
Franz-Josef-Straße 18,
8700 Leoben, Austria
Phone: +43 3842 402 – 1901 (Sekretariat CPS)
Email: linus.nwankwo@unileoben.ac.at
Web Work: CPS-Page
Web Private: https://linusnep.github.io/AboutMe/
Chat: WEBEX
Publications
2025
Nwankwo, Linus; Ellensohn, Bjoern; Dave, Vedant; Hofer, Peter; Forstner, Jan; Villneuve, Marlene; Galler, Robert; Rueckert, Elmar: EnvoDat: A Large-Scale Multisensory Dataset for Robotic Spatial Awareness and Semantic Reasoning in Heterogeneous Environments. In: IEEE International Conference on Robotics and Automation (ICRA 2025), 2025.
2024
Nwankwo, Linus; Rueckert, Elmar: In: Workshop of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI '24 Workshop), March 11-14, 2024, Boulder, CO, USA, ACM, New York, NY, USA, 2024. In this paper, we extended the method proposed in [17] to enable humans to interact naturally with autonomous agents through vocal and textual conversations. Our extended method exploits the inherent capabilities of pre-trained large language models (LLMs), multimodal vision-language models (VLMs), and speech recognition (SR) models to decode high-level natural language conversations and the semantic understanding of the robot's task environment, and to abstract them into the robot's actionable commands or queries. We performed a quantitative evaluation of our framework's natural vocal conversation understanding with participants from different racial backgrounds and English language accents. The participants interacted with the robot using both vocal and textual instructional commands. Based on the logged interaction data, our framework achieved 87.55% vocal command decoding accuracy, 86.27% command execution success, and an average latency of 0.89 seconds from receiving the participants' vocal chat commands to initiating the robot's actual physical action. The video demonstrations of this paper can be found at https://linusnep.github.io/MTCC-IRoNL/
Nwankwo, Linus; Rueckert, Elmar: The Conversation is the Command: Interacting with Real-World Autonomous Robots Through Natural Language. In: HRI '24: Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, pp. 808-812, Association for Computing Machinery, New York, NY, USA, 2024, ISBN: 9798400703232 (published as late-breaking results; supplementary video: https://cloud.cps.unileoben.ac.at/index.php/s/fRE9XMosWDtJ339). In recent years, autonomous agents have surged in real-world environments such as our homes, offices, and public spaces. However, natural human-robot interaction remains a key challenge. In this paper, we introduce an approach that synergistically exploits the capabilities of large language models (LLMs) and multimodal vision-language models (VLMs) to enable humans to interact naturally with autonomous robots through conversational dialogue. We leveraged the LLMs to decode the high-level natural language instructions from humans and abstract them into precise robot actionable commands or queries. Further, we utilised the VLMs to provide a visual and semantic understanding of the robot's task environment. Our results with 99.13% command recognition accuracy and 97.96% command execution success show that our approach can enhance human-robot interaction in real-world applications. The video demonstrations of this paper can be found at https://osf.io/wzyf6 and the code is available at our GitHub repository.
2023
Nwankwo, Linus; Rueckert, Elmar: Understanding why SLAM algorithms fail in modern indoor environments. In: International Conference on Robotics in Alpe-Adria-Danube Region (RAAD), pp. 186-194, Cham: Springer Nature Switzerland, 2023. Simultaneous localization and mapping (SLAM) algorithms are essential for the autonomous navigation of mobile robots. With the increasing demand for autonomous systems, it is crucial to evaluate and compare the performance of these algorithms in real-world environments. In this paper, we provide an evaluation strategy and real-world datasets to test and evaluate SLAM algorithms in complex and challenging indoor environments. Further, we analysed state-of-the-art (SOTA) SLAM algorithms based on various metrics such as absolute trajectory error, scale drift, and map accuracy and consistency. Our results demonstrate that SOTA SLAM algorithms often fail in challenging environments with dynamic objects and transparent or reflective surfaces. We also found that successful loop closures have a significant impact on the algorithms' performance. These findings highlight the need for further research to improve the robustness of the algorithms in real-world scenarios.
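Absolute trajectory error, one of the metrics used in the evaluation above, compares estimated poses against ground truth after alignment. The helper below is a simplified, translation-only sketch (centroid alignment instead of a full rigid-body alignment) with invented example trajectories:

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Translation-only absolute trajectory error (RMSE) after aligning the
    two trajectories by their centroids. Full SLAM benchmarks additionally
    align rotation and scale (e.g. via the Umeyama method), omitted here."""
    est = estimated - estimated.mean(axis=0)
    gt = ground_truth - ground_truth.mean(axis=0)
    return float(np.sqrt(np.mean(np.sum((est - gt) ** 2, axis=1))))

t = np.linspace(0, 2 * np.pi, 100)
gt = np.stack([np.cos(t), np.sin(t)], axis=1)                   # ground-truth loop
drifted = gt + np.stack([0.01 * t, np.zeros_like(t)], axis=1)   # linear drift
assert ate_rmse(gt + 0.5, gt) < 1e-9   # a constant offset is aligned away
assert ate_rmse(drifted, gt) > 0.01    # accumulated drift survives alignment
```

A failed loop closure typically shows up exactly as such accumulated drift, which is why ATE grows sharply in those runs.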
Nwankwo, Linus; Fritze, Clemens; Bartsch, Konrad; Rueckert, Elmar: ROMR: A ROS-based Open-source Mobile Robot. In: HardwareX, vol. 15, pp. 1-29, 2023. Currently, commercially available intelligent transport robots that are capable of carrying up to 90 kg of load can cost $5,000 or even more. This makes real-world experimentation prohibitively expensive and limits the applicability of such systems to everyday home or industrial tasks. Aside from their high cost, the majority of commercially available platforms are either closed-source, platform-specific, or use difficult-to-customize hardware and firmware. In this work, we present a low-cost, open-source and modular alternative, referred to herein as the ROS-based open-source mobile robot (ROMR). ROMR utilizes off-the-shelf (OTS) components, additive manufacturing technologies, aluminium profiles, and a consumer hoverboard with high-torque brushless direct current (BLDC) motors. ROMR is fully compatible with the robot operating system (ROS), has a maximum payload of 90 kg, and costs less than $1500. Furthermore, ROMR offers a simple yet robust framework for contextualizing simultaneous localization and mapping (SLAM) algorithms, an essential prerequisite for autonomous robot navigation. The robustness and performance of ROMR were validated through real-world and simulation experiments. All the design, construction and software files are freely available online under the GNU GPL v3 license at https://doi.org/10.17605/OSF.IO/K83X7. A descriptive video of ROMR can be found at https://osf.io/ku8ag.
Nikolaus Feith, M.Sc.
Ph.D. Student at the Montanuniversität Leoben
Hello, my name is Nikolaus Feith and I started working at the Chair for CPS in June 2021. After finishing my Master’s degree in Mining Mechanical Engineering at the University of Leoben in June 2022, I started my PhD at the CPS Chair in July 2022.
In my PhD thesis, I am investigating the application of human expertise through Interactive Machine Learning in robotic systems.
Research Interests
- Machine Learning
- Interactive Machine Learning
- Model Free Reinforcement Learning
- Robot Learning
- Optimization
- Bayesian Optimization
- CMA-ES
- Human-Robot Interfaces
- Augmented Reality
- Robot Web Tools
- Embedded Systems in Robotics
- Cyber Physical Systems
Teaching & Thesis Supervision
Current & Past Theses
- B.Sc. Thesis – Christoph Andres: Development of a ROS2 Interface for the FANUC CRX-10iA robot arm
- M.Sc. Thesis – Christopher Martin Shimmin: Bayesian Optimization for learning optimal parameters of Electronic Control Units (ECUs) for Motorcycles
- B.Sc. Thesis – Marco Schwarz: Development of a generic ROS2 Device Interface based on Micro-ROS on a ESP32
Teaching
Contact
M.Sc. Nikolaus Feith
Doctoral Student supervised by Univ.-Prof. Dr. Elmar Rueckert since July 2022.
Montanuniversität Leoben
Franz-Josef-Straße 18,
8700 Leoben, Austria
Phone: +43 3842 402 – 1901 (Sekretariat CPS)
Email: nikolaus.feith@unileoben.ac.at
Web Work: CPS-Page
Chat: WEBEX
Publications
2024
Feith, Nikolaus; Rueckert, Elmar: Integrating Human Expertise in Continuous Spaces: A Novel Interactive Bayesian Optimization Framework with Preference Expected Improvement. In: IEEE International Conference on Ubiquitous Robots (UR 2024), IEEE, 2024.
Feith, Nikolaus; Rueckert, Elmar: Advancing Interactive Robot Learning: A User Interface Leveraging Mixed Reality and Dual Quaternions. In: IEEE International Conference on Ubiquitous Robots (UR 2024), IEEE, 2024.
Dr. Daniel Tanneberg
Ph.D. Student at the University of Luebeck
Short bio: Dr. Daniel Tanneberg passed his PhD defense on 3 December 2020. He is now working as a senior researcher at the Honda Research Institute in Offenbach, Germany.
He was co-supervised by Prof. Jan Peters from the Technische Universitaet Darmstadt and Univ.-Prof. Dr. Elmar Rueckert, the head of this lab.
Daniel joined the Intelligent Autonomous Systems (IAS) Group at the Technische Universitaet Darmstadt in October 2015 as a Ph.D. student. His research focused on (biologically-inspired) machine learning for robotics and neuroscience. During his Ph.D., Daniel investigated the applicability and properties of spiking and memory-augmented deep neural networks. His neural networks were applied to robotic as well as to algorithmic tasks.
With his master's thesis, entitled "Neural Networks Solve Robot Planning Problems", he won the prestigious Hanns-Voith-Stiftungspreis 2017 'Digital Solutions'.
Research Interests
- (Biologically-inspired) Machine Learning, (Memory-augmented) Neural Networks, Deep Learning, (Stochastic) Neural Networks, Lifelong-Learning.
Contact & Quick Links
Dr. Daniel Tanneberg
Former Doctoral Student supervised by Prof. Dr. Jan Peters and Univ.-Prof. Dr. Elmar Rueckert from 10/2015 to 12/2020.
Hochschulstr. 10,
64289 Darmstadt,
Deutschland
Email: daniel@robot-learning.de
Web: https://www.rob.uni-luebeck.de/index.php?id=460
Publications
2021
Tanneberg, Daniel; Ploeger, Kai; Rueckert, Elmar; Peters, Jan: SKID RAW: Skill Discovery from Raw Trajectories. In: IEEE Robotics and Automation Letters (RA-L), pp. 1-8, 2021, ISSN: 2377-3766.
2020
Tanneberg, Daniel; Rueckert, Elmar; Peters, Jan: Evolutionary training and abstraction yields algorithmic generalization of neural computers. In: Nature Machine Intelligence, pp. 1-11, 2020.
2019
Tanneberg, Daniel; Peters, Jan; Rueckert, Elmar: Intrinsic Motivation and Mental Replay enable Efficient Online Adaptation in Stochastic Recurrent Networks. In: Neural Networks (Elsevier), vol. 109, pp. 67-80, 2019, ISSN: 0893-6080 (Impact Factor of 7.197 (2017)).
2017
Tanneberg, Daniel; Peters, Jan; Rueckert, Elmar: Efficient Online Adaptation with Stochastic Recurrent Neural Networks. In: Proceedings of the International Conference on Humanoid Robots (HUMANOIDS), 2017.
Thiem, Simon; Stark, Svenja; Tanneberg, Daniel; Peters, Jan; Rueckert, Elmar: Simulation of the underactuated Sake Robotics Gripper in V-REP. In: Workshop at the International Conference on Humanoid Robots (HUMANOIDS), 2017.
Tanneberg, Daniel; Peters, Jan; Rueckert, Elmar: Online Learning with Stochastic Recurrent Neural Networks using Intrinsic Motivation Signals. In: Proceedings of the Conference on Robot Learning (CoRL), 2017.
2016
Tanneberg, Daniel; Paraschos, Alexandros; Peters, Jan; Rueckert, Elmar: Deep Spiking Networks for Model-based Planning in Humanoids. In: Proceedings of the International Conference on Humanoid Robots (HUMANOIDS), 2016.
Rueckert, Elmar; Kappel, David; Tanneberg, Daniel; Pecevski, Dejan; Peters, Jan: Recurrent Spiking Networks Solve Planning Tasks. In: Scientific Reports (Nature Publishing Group), vol. 6, no. 21142, 2016.
Sharma, David; Tanneberg, Daniel; Grosse-Wentrup, Moritz; Peters, Jan; Rueckert, Elmar: Adaptive Training Strategies for BCIs. In: Cybathlon Symposium, 2016.
Svenja Stark, M.Sc.
Ph.D. Student at the Technical University of Darmstadt
Short bio: Svenja Stark left the TU Darmstadt team in 2020 and is now a successful high school teacher in Hessen. She joined the Intelligent Autonomous Systems Group as a PhD student in December 2016, where she was supervised by Prof. Dr. Jan Peters and Univ.-Prof. Dr. Elmar Rueckert.
She worked on the GOAL-Robots project, which aimed at developing goal-based, open-ended autonomous learning robots, i.e., building lifelong learning robots.
Before joining the Autonomous Systems Labs, Svenja Stark received a Bachelor's and a Master of Science degree in Computer Science from the TU Darmstadt. During her studies, she completed parts of her graduate coursework at the University of Massachusetts Amherst. Her thesis, entitled "Learning Probabilistic Feedforward and Feedback Policies for Generating Stable Walking Behaviors", was written under the supervision of Elmar Rueckert and Jan Peters.
Research Interests
- Multi-task learning, meta-learning, goal-based learning, intrinsic motivation, lifelong learning, Reinforcement Learning, motor skill learning.
Contact & Quick Links
M.Sc. Svenja Stark
Doctoral Student supervised by Prof. Dr. Jan Peters and Univ.-Prof. Dr. Elmar Rueckert.
Hochschulstr. 10,
64289 Darmstadt,
Deutschland
Email: svenja@robot-learning.de
Web: https://www.rob.uni-luebeck.de/index.php?id=460
Publications
2019
Stark, Svenja; Peters, Jan; Rueckert, Elmar: Experience Reuse with Probabilistic Movement Primitives. In: Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS), 2019.
2017
Stark, Svenja; Peters, Jan; Rueckert, Elmar: A Comparison of Distance Measures for Learning Nonparametric Motor Skill Libraries. In: Proceedings of the International Conference on Humanoid Robots (HUMANOIDS), 2017.
Thiem, Simon; Stark, Svenja; Tanneberg, Daniel; Peters, Jan; Rueckert, Elmar: Simulation of the underactuated Sake Robotics Gripper in V-REP. In: Workshop at the International Conference on Humanoid Robots (HUMANOIDS), 2017.