Publications
2025
Dave, Vedant; Rueckert, Elmar: Skill Disentanglement in Reproducing Kernel Hilbert Space. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2025.
Oezdenizci, Ozan; Rueckert, Elmar; Legenstein, Robert: Privacy-Aware Lifelong Learning. In: International Conference on Learning Representations (ICLR), 2025.
2024
Neubauer, Melanie; Rueckert, Elmar: Semi-Autonomous Fast Object Segmentation and Tracking Tool for Industrial Applications. In: IEEE International Conference on Ubiquitous Robots (UR 2024), IEEE, 2024.
2023
Yadav, Harsh; Xue, Honghu; Rudall, Yan; Bakr, Mohamed; Hein, Benedikt; Rueckert, Elmar; Nguyen, Ngoc Thinh: Deep Reinforcement Learning for Mapless Navigation of Autonomous Mobile Robot. In: International Conference on System Theory, Control and Computing (ICSTCC), 2023. (October 11-13, 2023, Timisoara, Romania.)
Yadav, Harsh; Xue, Honghu; Rudall, Yan; Bakr, Mohamed; Hein, Benedikt; Rueckert, Elmar; Nguyen, Ngoc Thinh: Deep Reinforcement Learning for Autonomous Navigation in Intralogistics. Workshop, 2023. (European Control Conference (ECC) Workshop, Extended Abstract.)
Abstract: Even with several advances in autonomous mobile robots, navigation in highly dynamic environments remains a challenge. Classical navigation systems, such as Simultaneous Localization and Mapping (SLAM), build a map of the environment, but constructing maps of highly dynamic environments is impractical. Deep Reinforcement Learning (DRL) approaches can learn policies without knowledge of the maps or the transition models of the environment. The aim of our work is to investigate the potential of using DRL to control an autonomous mobile robot to dock with a load carrier. This paper presents an initial successful training result of the Soft Actor-Critic (SAC) algorithm, which can navigate a robot toward an open door based only on 360° LiDAR observations. Ongoing work uses visual sensors for load carrier docking.
2022
Xue, Honghu; Song, Rui; Petzold, Julian; Hein, Benedikt; Hamann, Heiko; Rueckert, Elmar: End-To-End Deep Reinforcement Learning for First-Person Pedestrian Visual Navigation in Urban Environments. In: International Conference on Humanoid Robots (Humanoids 2022), 2022.
Abstract: We solve a visual navigation problem in an urban setting via deep reinforcement learning in an end-to-end manner. A major challenge of first-person visual navigation lies in severe partial observability and sparse positive experiences of reaching the goal. To address partial observability, we propose a novel 3D-temporal convolutional network to encode sequential historical visual observations; its effectiveness is verified by comparison to a commonly used frame-stacking approach. For sparse positive samples, we propose an improved automatic curriculum learning algorithm, NavACL+, which generates meaningful curricula starting from easy tasks and gradually generalizing to challenging ones. NavACL+ is shown to facilitate the learning process, improve the task success rate on difficult tasks by at least 40%, and offer enhanced generalization to different initial poses compared to training from a fixed initial pose and to the original NavACL algorithm.
Xue, Honghu; Hein, Benedikt; Bakr, Mohamed; Schildbach, Georg; Abel, Bengt; Rueckert, Elmar: Using Deep Reinforcement Learning with Automatic Curriculum Learning for Mapless Navigation in Intralogistics. In: Applied Sciences (MDPI), Special Issue on Intelligent Robotics, 2022. (Supplement: https://cloud.cps.unileoben.ac.at/index.php/s/Sj68rQewnkf4ppZ)
Abstract: We propose a deep reinforcement learning approach for solving a mapless navigation problem in warehouse scenarios. The automated guided vehicle is equipped with LiDAR and frontal RGB sensors and learns to reach underneath the target dolly. The challenges reside in the sparseness of positive samples for learning, multi-modal sensor perception with partial observability, the demand for accurate steering maneuvers, and long training cycles. To address these points, we propose NavACL-Q, an automatic curriculum learning method, together with a distributed soft actor-critic. The performance of the learning algorithm is evaluated exhaustively in a different warehouse environment to check both the robustness and generalizability of the learned policy. Results in NVIDIA Isaac Sim demonstrate that our trained agent significantly outperforms the map-based navigation pipeline provided by NVIDIA Isaac Sim with regard to higher agent-goal distances and relative orientations. The ablation studies also confirm that NavACL-Q greatly facilitates the whole learning process and that a pre-trained feature extractor manifestly boosts the training speed.
2020
Akbulut, M. Tuluhan; Oztop, Erhan; Seker, M. Yunus; Xue, Honghu; Tekden, Ahmet E.; Ugur, Emre: ACNMP: Skill Transfer and Task Extrapolation through Learning from Demonstration and Reinforcement Learning via Representation Sharing. Proceedings Article, 2020.
Abstract: To equip robots with dexterous skills, an effective approach is to first transfer the desired skill via Learning from Demonstration (LfD), then let the robot improve it by self-exploration via Reinforcement Learning (RL). In this paper, we propose a novel LfD+RL framework, Adaptive Conditional Neural Movement Primitives (ACNMP), that allows efficient policy improvement in novel environments and effective skill transfer between different agents. This is achieved by exploiting the latent representation learned by the underlying Conditional Neural Process (CNP) model, and by simultaneously training the model with supervised learning (SL), to acquire the demonstrated trajectories, and with RL, for new trajectory discovery. Through simulation experiments, we show that (i) ACNMP enables the system to extrapolate to situations where pure LfD fails; (ii) simultaneous training through SL and RL preserves the shape of demonstrations while adapting to novel situations, due to the shared representations used by both learners; (iii) ACNMP enables order-of-magnitude more sample-efficient RL in extrapolation of reaching tasks compared to existing approaches; (iv) ACNMPs can be used to implement skill transfer between robots with different morphologies, with competitive learning speeds and, importantly, with fewer assumptions than state-of-the-art approaches. Finally, we show the real-world suitability of ACNMPs through real-robot experiments that involve obstacle avoidance, pick-and-place, and pouring actions.