
Learning Multimodal Solutions with Movement Primitives

Video

Link to the file

You may use this video for research and teaching purposes. Please cite the Chair of Cyber-Physical-Systems or the corresponding research paper. 

Publications

2015

Rueckert, Elmar; Mundo, Jan; Paraschos, Alexandros; Peters, Jan; Neumann, Gerhard

Extracting Low-Dimensional Control Variables for Movement Primitives Proceedings Article

In: Proceedings of the International Conference on Robotics and Automation (ICRA), 2015.

Links | BibTeX


Dynamic Control of a CableBot

Building a CableBot and Learning the Dynamics Model and the Controller

Controlling cable-driven master-slave robots is a challenging task. Fast and precise motion planning requires stabilizing struts, which are disruptive elements in robot-assisted surgeries. In this work, we study parallel kinematics with an active deceleration mechanism that does not require any hindering struts for stabilization.

Reinforcement learning is used to learn control gains and model parameters that allow for fast and precise robot motions without overshooting. The developed mechanical design, together with the learning-based controller optimization framework, can improve the motion and tracking performance of many widely used cable-driven master-slave robots in surgical robotics.
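
To illustrate the idea, the following is a minimal, hypothetical sketch of such an episodic gain-learning loop. The toy one-axis dynamics, the cost terms, and the simple random-search update are assumptions made for this example only; the actual CableBot model and learning algorithm are described in the publication below.

```python
# Minimal sketch (not the project's actual controller): tuning PD control gains
# with a simple episodic random-search procedure. The dynamics below are a
# hypothetical linear second-order stand-in for a single cable-driven axis.
import numpy as np

def rollout(kp, kd, target=1.0, dt=0.002, steps=2000):
    """Simulate a toy 1-DoF axis under PD control and return a cost that
    penalizes slow convergence and overshooting the target."""
    x, v = 0.0, 0.0
    overshoot, error_sum = 0.0, 0.0
    for _ in range(steps):
        u = kp * (target - x) - kd * v          # PD control law
        a = u - 0.5 * v                         # assumed unit mass, viscous friction
        v += a * dt
        x += v * dt
        overshoot = max(overshoot, x - target)  # track worst overshoot
        error_sum += abs(target - x) * dt
    return error_sum + 50.0 * overshoot         # heavily penalize overshoot

def learn_gains(iterations=200, seed=0):
    rng = np.random.default_rng(seed)
    gains = np.array([10.0, 1.0])               # initial [kp, kd]
    best = rollout(*gains)
    for _ in range(iterations):
        candidate = gains + rng.normal(0.0, [2.0, 0.5])  # perturb gains
        cost = rollout(*candidate)
        if cost < best:                         # keep improvements only
            gains, best = candidate, cost
    return gains, best

if __name__ == "__main__":
    gains, cost = learn_gains()
    print(f"learned kp={gains[0]:.1f}, kd={gains[1]:.1f}, cost={cost:.4f}")
```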

Project Consortium

  • Montanuniversität Leoben

Related Work

H Yuan, E Courteille, D Deblaise (2015). Static and dynamic stiffness analyses of cable-driven parallel robots with non-negligible cable mass and elasticity, Mechanism and Machine Theory, 2015 – Elsevier, link.

MA Khosravi, HD Taghirad (2011). Dynamic analysis and control of cable driven robots with elastic cables, Transactions of the Canadian Society for Mechanical Engineering 35.4 (2011): 543-557, link.

Publications

2019

Rueckert, Elmar; Jauer, Philipp; Derksen, Alexander; Schweikard, Achim

Dynamic Control Strategies for Cable-Driven Master Slave Robots Proceedings Article

In: Keck, Tobias (Ed.): Proceedings on Minimally Invasive Surgery, Luebeck, Germany, 2019, (January 24-25, 2019).

Links | BibTeX


Active transfer learning with neural networks through human-robot interactions (TRAIN)

DFG Project 07/2020-01/2025

In our vision, autonomous robots interact with humans at industrial sites, in health care, or at home managing the household. From a technical perspective, all of these application domains require robots to process large amounts of noisy sensor observations during the execution of thousands of different motor and manipulation skills. From the perspective of many users, however, programming these skills manually, or with recent learning approaches that are mostly operable only by experts, is not a feasible way to bring intelligent autonomous systems into tasks of everyday life.

In this project, we aim at improving robot skill learning with deep networks by considering human feedback and guidance. The human teacher rates different transfer learning strategies in the artificial neural network to improve the learning of novel skills by optimally exploiting existing encoded knowledge. Neural networks are ideally suited for this task, as we can gradually increase the number of transferred parameters and can even transition from the transfer of task-specific knowledge to abstract features encoded in deeper layers. To study this systematically, we evaluate subjective feedback and physiological data from user experiments and elaborate assessment criteria that enable the development of human-oriented transfer learning methods. In two main experiments, we first investigate how users experience transfer learning and then examine the influence of shared autonomy between humans and robots. This will result in a methodical robot skill learning framework that adapts to the users’ needs, e.g., by adjusting the degree of autonomy of the robot to the requirements of laypersons. Even though we evaluate the learning framework on pick-and-place tasks with anthropomorphic robot arms, our results will be transferable to a broad range of human-robot interaction scenarios, including collaborative manipulation tasks in production and assembly as well as advanced controls for rehabilitation and household robots.
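
As a purely illustrative sketch of the gradual transfer idea described above, the snippet below copies (and optionally freezes) the first k layers of a source network into a target network, so that increasing k transfers more of the previously learned knowledge. The choice of PyTorch, the network architecture, and the helper names are assumptions for this example, not the project's implementation.

```python
# Hedged sketch (assumptions, not the project's implementation): transfer the
# first k layers of a source network trained on a previous skill into a target
# network for a new skill, and freeze them so that only the remaining layers adapt.
import torch
import torch.nn as nn

def make_policy():
    # Small example policy network; the real architecture is not specified here.
    return nn.Sequential(
        nn.Linear(16, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 7),                 # e.g. 7 joint commands (assumed)
    )

def transfer_layers(source: nn.Sequential, target: nn.Sequential, k: int,
                    freeze: bool = True) -> nn.Sequential:
    """Copy the parameters of the first k modules from source to target and
    optionally freeze them, so training only adapts the remaining layers."""
    for i in range(k):
        target[i].load_state_dict(source[i].state_dict())
        if freeze:
            for p in target[i].parameters():
                p.requires_grad = False
    return target

source = make_policy()                    # stands in for a pretrained skill network
target = transfer_layers(source, make_policy(), k=2)

# Only the non-frozen parameters are handed to the optimizer for the new skill.
optimizer = torch.optim.Adam(
    [p for p in target.parameters() if p.requires_grad], lr=1e-3)
```

A human rating of the learning outcome for different numbers of transferred layers could then serve as the feedback signal discussed above.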

Project Consortium

  • Friedrich-Alexander-Universität Erlangen-Nürnberg

  • Montanuniversität Leoben

Links

Details on the research project can be found on the project webpage.

 

Publications

2021

Tanneberg, Daniel; Ploeger, Kai; Rueckert, Elmar; Peters, Jan

SKID RAW: Skill Discovery from Raw Trajectories Journal Article

In: IEEE Robotics and Automation Letters (RA-L), pp. 1–8, 2021, ISSN: 2377-3766, (© 20XX IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.).

Links | BibTeX


Jamsek, Marko; Kunavar, Tjasa; Bobek, Urban; Rueckert, Elmar; Babic, Jan

Predictive exoskeleton control for arm-motion augmentation based on probabilistic movement primitives combined with a flow controller Journal Article

In: IEEE Robotics and Automation Letters (RA-L), pp. 1–8, 2021, ISSN: 2377-3766, (© 20XX IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.).

Links | BibTeX


Cansev, Mehmet Ege; Xue, Honghu; Rottmann, Nils; Bliek, Adna; Miller, Luke E.; Rueckert, Elmar; Beckerle, Philipp

Interactive Human-Robot Skill Transfer: A Review of Learning Methods and User Experience Journal Article

In: Advanced Intelligent Systems, pp. 1–28, 2021.

Links | BibTeX


2020

Rottmann, N.; Kunavar, T.; Babič, J.; Peters, J.; Rueckert, E.

Learning Hierarchical Acquisition Functions for Bayesian Optimization Proceedings Article

In: International Conference on Intelligent Robots and Systems (IROS’ 2020), 2020.

Links | BibTeX


Xue, H.; Boettger, S.; Rottmann, N.; Pandya, H.; Bruder, R.; Neumann, G.; Schweikard, A.; Rueckert, E.

Sample-Efficient Covariance Matrix Adaptation Evolutional Strategy via Simulated Rollouts in Neural Networks Proceedings Article

In: International Conference on Advances in Signal Processing and Artificial Intelligence (ASPAI’ 2020), 2020.

Links | BibTeX


Vedant Dave, M.Sc.

Ph.D. Student at the Montanuniversität Leoben

Short bio: Mr. Vedant Dave started at CPS on 23rd September 2021. 

He received his Master's degree in Automation and Robotics from Technische Universität Dortmund in 2021 with a study focus on Robotics and Artificial Intelligence. His thesis, entitled “Model-agnostic Reinforcement Learning Solution for Autonomous Programming of Robotic Motion”, was carried out at Mercedes-Benz AG. In the thesis, he applied reinforcement learning to the motion planning of manipulators in complex environments. Before that, he completed a research internship at the Bosch Center for Artificial Intelligence, where he worked on Probabilistic Movement Primitives on Riemannian Manifolds.

Research Interests

  • Information Theoretic Reinforcement Learning
  • Curiosity and Empowerment
  • Multimodal learning for Robotics
  • Movement Primitives

Research Videos

Contact & Quick Links

M.Sc. Vedant Dave
Doctoral Student supervised by Univ.-Prof. Dr. Elmar Rueckert.
Montanuniversität Leoben
Franz-Josef-Straße 18, 
8700 Leoben, Austria 

Phone:  +43 3842 402 – 1903
Email:   vedant.dave@unileoben.ac.at 
Web Work: CPS-Page
Chat: WEBEX

Publications

2024

Lygerakis, Fotios; Dave, Vedant; Rueckert, Elmar

M2CURL: Sample-Efficient Multimodal Reinforcement Learning via Self-Supervised Representation Learning for Robotic Manipulation Proceedings Article

In: IEEE International Conference on Ubiquitous Robots (UR 2024), IEEE 2024.

Links | BibTeX


Dave*, Vedant; Lygerakis*, Fotios; Rueckert, Elmar

Multimodal Visual-Tactile Representation Learning through Self-Supervised Contrastive Pre-Training Proceedings Article

In: IEEE International Conference on Robotics and Automation (ICRA 2024)., 2024, (* equal contribution).

Links | BibTeX


2022

Dave, Vedant; Rueckert, Elmar

Can we infer the full-arm manipulation skills from tactile targets? Workshop

International Conference on Humanoid Robots (Humanoids 2022), 2022.

Abstract | Links | BibTeX


Dave, Vedant; Rueckert, Elmar

Predicting full-arm grasping motions from anticipated tactile responses Proceedings Article

In: International Conference on Humanoid Robots (Humanoids 2022), 2022.

Abstract | Links | BibTeX


Rozo*, Leonel; Dave*, Vedant

Orientation Probabilistic Movement Primitives on Riemannian Manifolds Proceedings Article

In: Conference on Robot Learning (CoRL), pp. 11, 2022, (* equal contribution).

Abstract | Links | BibTeX


Linus Nwankwo, M.Sc.

Short Bio

Mr. Linus Nwankwo started as a PhD student at the Chair of Cyber-Physical-Systems (CPS) in August 2021. Prior to joining CPS, he worked as a research intern at the Department of Electrical and Computer Engineering, Technische Universität Kaiserslautern, Germany.

In 2020, he obtained his M.Sc. degree in Automation and Robotics, with a speciality in control for Green Mechatronics (GreeM), at the University of Bourgogne Franche-Comté (UBFC), France. In his M.Sc. thesis, he implemented a stabilisation control for a mobile inverted pendulum robot and investigated the possibility of controlling and stabilising the robot via a CANopen communication network.

Research Interests

  • Robotics
    • Simultaneous localization & mapping (SLAM)
    • Path planning & autonomous navigation
  • Machine Learning
    • Large language models (LLMs) and vision language models (VLMs) 
    • Supervised, unsupervised, and reinforcement learning
    • Probabilistic learning for robotics 
  • Human-Robot Interaction (HRI)
    • Intention-aware planning for social service robots
    • Social-aware and norm learning navigation
    • LLMs and VLMs for HRI

Research Videos

Contact & Quick Links

M.Sc. Linus Nwankwo
Doctoral Student supervised by Univ.-Prof. Dr. Elmar Rueckert since August 2021.
Montanuniversität Leoben
Franz-Josef-Straße 18, 
8700 Leoben, Austria 

Phone:  +43 3842 402 – 1901 (Sekretariat CPS)
Email:   linus.nwankwo@unileoben.ac.at 
Web Work: CPS-Page
Web Private: https://sites.google.com/view/linus-nwankwo
Chat: WEBEX

Publications

2024

Nwankwo, Linus; Rueckert, Elmar

Multimodal Human-Autonomous Agents Interaction Using Pre-Trained Language and Visual Foundation Models Workshop

2024, (In Workshop of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’24 Workshop), March 11–14, 2024, Boulder, CO, USA. ACM, New York, NY, USA).

Abstract | Links | BibTeX


Nwankwo, Linus; Rueckert, Elmar

The Conversation is the Command: Interacting with Real-World Autonomous Robots Through Natural Language Proceedings Article

In: HRI '24: Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction., pp. 808–812, ACM/IEEE Association for Computing Machinery, New York, NY, USA, 2024, ISBN: 9798400703232, (Published as late breaking results. Supplementary video: https://cloud.cps.unileoben.ac.at/index.php/s/fRE9XMosWDtJ339 ).

Abstract | Links | BibTeX


2023

Nwankwo, Linus; Rueckert, Elmar

Understanding why SLAM algorithms fail in modern indoor environments Proceedings Article

In: International Conference on Robotics in Alpe-Adria-Danube Region (RAAD), pp. 186–194, Cham: Springer Nature Switzerland, 2023.

Abstract | Links | BibTeX


Nwankwo, Linus; Fritze, Clemens; Bartsch, Konrad; Rueckert, Elmar

ROMR: A ROS-based Open-source Mobile Robot Journal Article

In: HardwareX, vol. 15, pp. 1–29, 2023.

Abstract | Links | BibTeX


Nikolaus Feith, M.Sc.

Ph.D. Student at the Montanuniversität Leoben

Hello, my name is Nikolaus Feith and I started working at the Chair for CPS in June 2021. After finishing my Master’s degree in Mining Mechanical Engineering at the University of Leoben in June 2022, I started my PhD at the CPS Chair in July 2022.

In my PhD thesis, I am investigating the application of human expertise through Interactive Machine Learning in robotic systems.

Research Interests

  • Machine Learning
    • Interactive Machine Learning
    • Reinforcement Learning with Black-Boxes
    • Robot Learning
  • Optimization
    • Bayesian Optimization
    • CMA-ES
  • Human-Robot Interfaces
    • Augmented Reality
    • Robot Web Tools
  • Embedded Systems in Robotics
  • Cyber Physical Systems

Teaching & Thesis Supervision

Current & Past Theses

Teaching

Contact

M.Sc. Nikolaus Feith
Doctoral Student supervised by Univ.-Prof. Dr. Elmar Rueckert since July 2022.
Montanuniversität Leoben
Franz-Josef-Straße 18, 
8700 Leoben, Austria 

Phone:  +43 3842 402 – 1901 (Sekretariat CPS)
Email:   nikolaus.feith@unileoben.ac.at 
Web Work: CPS-Page
Chat: WEBEX

Publications

2024

Feith, Nikolaus; Rueckert, Elmar

Integrating Human Expertise in Continuous Spaces: A Novel Interactive Bayesian Optimization Framework with Preference Expected Improvement Proceedings Article

In: IEEE International Conference on Ubiquitous Robots (UR 2024), IEEE 2024.

Links | BibTeX


Feith, Nikolaus; Rueckert, Elmar

Advancing Interactive Robot Learning: A User Interface Leveraging Mixed Reality and Dual Quaternions Proceedings Article

In: IEEE International Conference on Ubiquitous Robots (UR 2024), IEEE 2024.

Links | BibTeX


Dr. Daniel Tanneberg

Ph.D. Student at the Technical University of Darmstadt

Portrait of Daniel Tanneberg, Jan. 2018

Short bio: Dr. Daniel Tanneberg passed his PhD defense on the 3rd of December 2020. He is now working as a senior researcher at the Honda Research Institute in Offenbach, Germany.

He was co-supervised by Prof. Jan Peters from the Technische Universitaet Darmstadt and Univ.-Prof. Dr. Elmar Rueckert, the head of this lab.

Daniel joined the Intelligent Autonomous Systems (IAS) Group at the Technische Universitaet Darmstadt in October 2015 as a Ph.D. student. His research focused on (biologically-inspired) machine learning for robotics and neuroscience. During his Ph.D., Daniel investigated the applicability and properties of spiking and memory-augmented deep neural networks. His neural networks were applied to robotic as well as algorithmic tasks.

With his master's thesis, entitled Neural Networks Solve Robot Planning Problems, he won the prestigious Hanns-Voith-Stiftungspreis 2017 ’Digital Solutions’.

Research Interests

  • (Biologically-inspired) Machine Learning, (Memory-augmented) Neural Networks, Deep Learning, (Stochastic) Neural Networks, Lifelong-Learning.

Contact & Quick Links

Dr. Daniel Tanneberg
Former Doctoral Student supervised by Prof. Dr. Jan Peters and Univ.-Prof. Dr. Elmar Rueckert from 10/2015 to 12/2020.
Hochschulstr. 10,
64289 Darmstadt,
Deutschland

Email:
   daniel@robot-learning.de
Web: https://www.rob.uni-luebeck.de/index.php?id=460

Publications

2021

Tanneberg, Daniel; Ploeger, Kai; Rueckert, Elmar; Peters, Jan

SKID RAW: Skill Discovery from Raw Trajectories Journal Article

In: IEEE Robotics and Automation Letters (RA-L), pp. 1–8, 2021, ISSN: 2377-3766, (© 20XX IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.).

Links | BibTeX


2020

Tanneberg, Daniel; Rueckert, Elmar; Peters, Jan

Evolutionary training and abstraction yields algorithmic generalization of neural computers Journal Article

In: Nature Machine Intelligence, pp. 1–11, 2020.

Links | BibTeX


2019

Tanneberg, Daniel; Peters, Jan; Rueckert, Elmar

Intrinsic Motivation and Mental Replay enable Efficient Online Adaptation in Stochastic Recurrent Networks Journal Article

In: Neural Networks – Elsevier, vol. 109, pp. 67-80, 2019, ISSN: 0893-6080, (Impact Factor of 7.197 (2017)).

Links | BibTeX


2017

Tanneberg, Daniel; Peters, Jan; Rueckert, Elmar

Efficient Online Adaptation with Stochastic Recurrent Neural Networks Proceedings Article

In: Proceedings of the International Conference on Humanoid Robots (HUMANOIDS), 2017.

Links | BibTeX


Thiem, Simon; Stark, Svenja; Tanneberg, Daniel; Peters, Jan; Rueckert, Elmar

Simulation of the underactuated Sake Robotics Gripper in V-REP Proceedings Article

In: Workshop at the International Conference on Humanoid Robots (HUMANOIDS), 2017.

Links | BibTeX


Tanneberg, Daniel; Peters, Jan; Rueckert, Elmar

Online Learning with Stochastic Recurrent Neural Networks using Intrinsic Motivation Signals Proceedings Article

In: Proceedings of the Conference on Robot Learning (CoRL), 2017.

Links | BibTeX


2016

Tanneberg, Daniel; Paraschos, Alexandros; Peters, Jan; Rueckert, Elmar

Deep Spiking Networks for Model-based Planning in Humanoids Proceedings Article

In: Proceedings of the International Conference on Humanoid Robots (HUMANOIDS), 2016.

Links | BibTeX


Rueckert, Elmar; Kappel, David; Tanneberg, Daniel; Pecevski, Dejan; Peters, Jan

Recurrent Spiking Networks Solve Planning Tasks Journal Article

In: Nature Publishing Group: Scientific Reports, vol. 6, no. 21142, 2016.

Links | BibTeX


Sharma, David; Tanneberg, Daniel; Grosse-Wentrup, Moritz; Peters, Jan; Rueckert, Elmar

Adaptive Training Strategies for BCIs Proceedings Article

In: Cybathlon Symposium, 2016.

Links | BibTeX


Svenja Stark, M.Sc.

Ph.D. Student at the Technical University of Darmstadt

Portrait of Svenja Stark, Jan. 2018

Short bio: Svenja Stark left the TU Darmstadt team in 2020 and is now a successful high school teacher in Hessen. She joined the Intelligent Autonomous Systems Group as a PhD student in December 2016, where she was supervised by Prof. Dr. Jan Peters and Univ.-Prof. Dr. Elmar Rueckert. 

She worked on the GOAL-Robots project, which aimed at developing goal-based, open-ended autonomous learning robots, i.e., lifelong learning robots.

Before joining the Autonomous Systems Labs, Svenja Stark received a Bachelor's and a Master of Science degree in Computer Science from the TU Darmstadt. During her studies, she completed parts of her graduate coursework at the University of Massachusetts Amherst. Her thesis, entitled “Learning Probabilistic Feedforward and Feedback Policies for Generating Stable Walking Behaviors”, was written under the supervision of Elmar Rueckert and Jan Peters.

Research Interests

  • Multi-task learning, meta-learning, goal-based learning, intrinsic motivation, lifelong learning, Reinforcement Learning, motor skill learning.

Contact & Quick Links

M.Sc. Svenja Stark
Doctoral Student supervised by Prof. Dr. Jan Peters and Univ.-Prof. Dr. Elmar Rueckert. 
Hochschulstr. 10,
64289 Darmstadt,
Deutschland

Email:   svenja@robot-learning.de
Web: https://www.rob.uni-luebeck.de/index.php?id=460

Publications

2019

Stark, Svenja; Peters, Jan; Rueckert, Elmar

Experience Reuse with Probabilistic Movement Primitives Proceedings Article

In: Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS), 2019., 2019.

Links | BibTeX


2017

Stark, Svenja; Peters, Jan; Rueckert, Elmar

A Comparison of Distance Measures for Learning Nonparametric Motor Skill Libraries Proceedings Article

In: Proceedings of the International Conference on Humanoid Robots (HUMANOIDS), 2017.

Links | BibTeX


Thiem, Simon; Stark, Svenja; Tanneberg, Daniel; Peters, Jan; Rueckert, Elmar

Simulation of the underactuated Sake Robotics Gripper in V-REP Proceedings Article

In: Workshop at the International Conference on Humanoid Robots (HUMANOIDS), 2017.

Links | BibTeX


Honghu Xue, M.Sc.

Ph.D. Student at the University of Luebeck

Portrait of Honghu Xue

Short bio: In his doctoral thesis, Mr. Honghu Xue investigates deep reinforcement learning approaches for planning and control. His methods are applied to mobile robots and the FRANKA EMIKA robot arm. He started his thesis in March 2019.

Honghu Xue received his M.Sc. in Embedded Systems Engineering from the Albert-Ludwigs-University of Freiburg with a study focus on Reinforcement Learning, Machine Learning, and AI.

Research Interests

  • Deep Reinforcement Learning: Model-Based RL, Sample-Efficient RL, Long-Time-Horizon RL, Efficient Exploration Strategies in MDPs, Distributional RL, Policy Search.
  • Deep & Machine Learning: Learning Transition Models in MDPs (featuring visual input and modelling the stochasticity of the environment), Image Super-Resolution using DL, Time-Sequential Models for Partial Observability.

Research Videos

Contact & Quick Links

M.Sc. Honghu Xue
Doctoral Student supervised by Univ.-Prof. Dr. Elmar Rueckert since March 2019.
Ratzeburger Allee 160,
23562 Lübeck,
Deutschland

Phone:  +49 451 3101 – 5213
Email:   xue@rob.uni-luebeck.de
Web: https://www.rob.uni-luebeck.de/index.php?id=460

CV of M.Sc. Honghu Xue
DBLP
Frontiers Network
Github
Google Citations
LinkedIn
ORCID
ResearchGate

Meeting Notes

Publications

2023

Yadav, Harsh; Xue, Honghu; Rudall, Yan; Bakr, Mohamed; Hein, Benedikt; Rueckert, Elmar; Nguyen, Thinh

Deep Reinforcement Learning for Autonomous Navigation in Intralogistics Workshop

2023, (European Control Conference (ECC) Workshop, Extended Abstract.).

Abstract | Links | BibTeX


2022

Xue, Honghu; Song, Rui; Petzold, Julian; Hein, Benedikt; Hamann, Heiko; Rueckert, Elmar

End-To-End Deep Reinforcement Learning for First-Person Pedestrian Visual Navigation in Urban Environments Proceedings Article

In: International Conference on Humanoid Robots (Humanoids 2022), 2022.

Abstract | Links | BibTeX


Herzog, Rebecca; Berger, Till M; Pauly, Martje Gesine; Xue, Honghu; Rueckert, Elmar; Münchau, Alexander; Bäumer, Tobias; Weissbach, Anne

Cerebellar transcranial current stimulation-an intraindividual comparison of different techniques Journal Article

In: Frontiers in Neuroscience, 2022.

Links | BibTeX


Xue, Honghu; Hein, Benedikt; Bakr, Mohamed; Schildbach, Georg; Abel, Bengt; Rueckert, Elmar

Using Deep Reinforcement Learning with Automatic Curriculum Learning for Mapless Navigation in Intralogistics Journal Article

In: Applied Sciences (MDPI), Special Issue on Intelligent Robotics, 2022, (Supplement: https://cloud.cps.unileoben.ac.at/index.php/s/Sj68rQewnkf4ppZ).

Abstract | Links | BibTeX


2021

Xue, Honghu; Herzog, Rebecca; Berger, Till M.; Bäumer, Tobias; Weissbach, Anne; Rueckert, Elmar

Using Probabilistic Movement Primitives in analyzing human motion differences under Transcranial Current Stimulation Journal Article

In: Frontiers in Robotics and AI , vol. 8, 2021, ISSN: 2296-9144.

Abstract | Links | BibTeX


Cansev, Mehmet Ege; Xue, Honghu; Rottmann, Nils; Bliek, Adna; Miller, Luke E.; Rueckert, Elmar; Beckerle, Philipp

Interactive Human-Robot Skill Transfer: A Review of Learning Methods and User Experience Journal Article

In: Advanced Intelligent Systems, pp. 1–28, 2021.

Links | BibTeX


2020

Akbulut, M Tuluhan; Oztop, Erhan; Seker, M Yunus; Xue, Honghu; Tekden, Ahmet E; Ugur, Emre

ACNMP: Skill Transfer and Task Extrapolation through Learning from Demonstration and Reinforcement Learning via Representation Sharing Proceedings Article

In: 2020.

Abstract | Links | BibTeX


Rottmann, N.; Bruder, R.; Xue, H.; Schweikard, A.; Rueckert, E.

Parameter Optimization for Loop Closure Detection in Closed Environments Proceedings Article

In: Workshop Paper at the International Conference on Intelligent Robots and Systems (IROS), pp. 1–8, 2020.

Links | BibTeX


Xue, H.; Boettger, S.; Rottmann, N.; Pandya, H.; Bruder, R.; Neumann, G.; Schweikard, A.; Rueckert, E.

Sample-Efficient Covariance Matrix Adaptation Evolutional Strategy via Simulated Rollouts in Neural Networks Proceedings Article

In: International Conference on Advances in Signal Processing and Artificial Intelligence (ASPAI’ 2020), 2020.

Links | BibTeX
