
15.02.2023 Meeting Notes

Meeting Details

Date: 15th February 2023

Time : 13:30 – 14:00

Location : Chair of CPS, Montanuniversität Leoben

Participants: Univ.-Prof. Dr. Elmar Rueckert, Vedant Dave

Agenda

  1. Check the formulation of Iterative Empowerment.
  2. Information Bottleneck for Non-Markovian environments.

Topic 1: Iterative Empowerment

  1. Implementation of formulation in gridworld.
  2. Comparison with prior approaches and other curiosity modules.
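
For a deterministic gridworld, the n-step empowerment of a state reduces to the log of the number of distinct states reachable by some length-n action sequence, which makes the formulation directly implementable. A minimal sketch (the grid layout and horizon are illustrative, not from the meeting):

```python
import math
from itertools import product

# 4-connected deterministic gridworld; '#' cells are walls (illustrative layout)
GRID = ["....",
        ".#..",
        "....",
        "..#."]
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply one action; bumping into a wall or the border stays in place."""
    r, c = state
    nr, nc = r + action[0], c + action[1]
    if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != '#':
        return (nr, nc)
    return state

def empowerment(state, n=3):
    """n-step empowerment: with deterministic dynamics, the channel capacity
    between action sequences and final states is log2 of the number of
    distinct reachable states."""
    reachable = set()
    for seq in product(ACTIONS, repeat=n):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return math.log2(len(reachable))

# Open states have higher empowerment than cornered ones
print(empowerment((1, 2), n=3), empowerment((0, 0), n=3))
```

Comparing these values across states gives the intrinsic signal against which other curiosity modules can be benchmarked; stochastic dynamics would instead require the full Blahut-Arimoto channel-capacity computation.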

Topic 2: Information bottleneck for Non-Markovian environments

  1. Idea formulation.
  2. Study the Information Bottleneck and related papers thoroughly.

Literature

To be added

Next Meeting

TBD 




03.10.2022 Meeting Notes

Meeting Details

Date: 3rd October 2022

Time : 08:30 – 09:00

Location : Chair of CPS, Montanuniversität Leoben

Participants: Univ.-Prof. Dr. Elmar Rueckert, Vedant Dave

Agenda

  1. Extension idea formulation for Journal Paper
  2. Work on exploration and curiosity

Topic 1: Science Robotics Paper

  1. Consider bi-level Probabilistic Movement Primitives to handle uneven error propagation across the different stages.
  2. Read the literature [1] and see whether it offers a suitable approach.

Topic 2: Dynamic Exploration and Curiosity

  1. We have our baseline [2] and now try to implement this paper.
  2. Try the same model on more complex environments and more out-of-distribution goals.
  3. As a first step, implement [3].

Literature

  1. R. Lioutikov, G. Maeda, F. Veiga, K. Kersting and J. Peters, “Inducing Probabilistic Context-Free Grammars for the Sequencing of Movement Primitives,” 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 5651-5658, doi: 10.1109/ICRA.2018.8460190.
  2. Mendonca, Russell, et al. “Discovering and achieving goals via world models.” Advances in Neural Information Processing Systems 34 (2021): 24379-24391.
  3. Hafner, Danijar, et al. “Learning latent dynamics for planning from pixels.” International conference on machine learning. PMLR, 2019.

Next Meeting

TBD 




M.Sc. thesis: Benjamin Schödinger on A Framework for Learning Visual and Tactile Correlation

Supervisors: Vedant Dave, M.Sc.; Univ.-Prof. Dr. Elmar Rückert
Start date: 1st May 2022                          Finished: 18th October 2022

Theoretical difficulty: Mid
Practical difficulty: Mid

Abstract

Tactile perception is one of the basic human senses, and we rely on it at almost every instant. Even before touching an object, we predict how it will feel, purely from vision; this holds even for novel objects. The goal of this project is to predict the tactile response that would be experienced if a given grasp were performed on an object. This is achieved by extracting features from the visual data and the tactile data and then learning the mapping between those features.

We use an Intel RealSense D435i depth camera to capture images of the objects and a Seed Robotics RH8D hand with tactile sensors to capture the tactile data in real time (15-dimensional data). The main objective is to perform well on novel objects that share feature representations with previously seen objects.
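The feature-extraction-plus-mapping pipeline described above can be sketched as follows; the layer sizes, image resolution, and network shape are illustrative assumptions rather than the thesis architecture — only the 15-dimensional tactile output follows the text:

```python
import torch
import torch.nn as nn

class VisionToTactile(nn.Module):
    """Sketch: visual feature extractor + learned mapping to tactile features."""
    def __init__(self, tactile_dim=15):   # 15-dimensional tactile data (from the text)
        super().__init__()
        # Visual feature extractor over RGB crops of the object (sizes assumed)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Mapping from visual features to the tactile response
        self.head = nn.Sequential(
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, tactile_dim),
        )

    def forward(self, img):
        return self.head(self.encoder(img))

model = VisionToTactile()
img = torch.randn(4, 3, 64, 64)   # a batch of 4 RGB images stands in for D435i captures
pred = model(img)                 # predicted tactile readings, shape (4, 15)
print(pred.shape)
```

Trained with a regression loss against the recorded tactile readings, the shared feature space is what allows generalization to novel but similar objects.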

Plan

  • Literature Research
  • Architecture Development
  • Dataset Collection from Real Robot.
  • Application in Real Robot.
  • Master Thesis Writing
  • Research Paper Writing (Optional)

Related Work

[1] B. S. Zapata-Impata, P. Gil, Y. Mezouar and F. Torres, “Generation of Tactile Data From 3D Vision and Target Robotic Grasps,” in IEEE Transactions on Haptics, vol. 14, no. 1, pp. 57-67, 1 Jan.-March 2021, doi: 10.1109/TOH.2020.3011899.

[2] Z. Abderrahmane, G. Ganesh, A. Crosnier and A. Cherubini, “A Deep Learning Framework for Tactile Recognition of Known as Well as Novel Objects,” in IEEE Transactions on Industrial Informatics, vol. 16, no. 1, pp. 423-432, Jan. 2020, doi: 10.1109/TII.2019.2898264.

Thesis Document

A Framework for Learning Visual and Tactile Correlation




Digital Competencies – Learning Python and some Mathematics

Getting started with Python

This tutorial gives instructions for installing CUDA and enabling CUDA acceleration with PyTorch on Windows 10. Installation on Linux or macOS is also possible. An additional .py file verifies whether the current computer configuration uses CUDA. The following instructions assume that you have already installed a Python IDE, e.g., Anaconda, PyCharm, Visual Studio…

Step 1: Check which CUDA version is supported by your current GPUs on this website. From the left figure, we can see that the A100 supports CUDA 11.0. Other blogs and forums report that the A100 can also support CUDA 11.1. In this post, we install CUDA 11.1.

Step 2: Download the Nvidia CUDA Toolkit 11.1 (the same version as in Step 1) from the website. On Windows 10, for instance, follow the choices shown on the right. The exe (local) installer is around 3.1 GB. After downloading, run the .exe and perform the installation; it may take a few minutes to complete.


Step 3: On the PyTorch homepage, choose the appropriate options as shown in the left figure. IMPORTANT: the CUDA version must be the same as in Step 1. It is also recommended to use the Stable version. After finishing the selection, copy the generated command into the Anaconda PowerShell Prompt or whichever command prompt you use to install Python packages. Wait for the installation, which may require more than 1 GB of disk space and take a few minutes. You can also find historical versions of PyTorch on that homepage.

Verify your installation with a .py file

You can download a cuda-test.py file and run it. If the result shows ‘cuda’, you can enjoy CUDA acceleration for training neural networks!
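A minimal version of such a cuda-test.py (a sketch, not the exact downloadable file) looks like this:

```python
# cuda-test.py -- minimal check of the CUDA setup (sketch)
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(device)  # 'cuda' means PyTorch can use the GPU

if device == 'cuda':
    print(torch.cuda.get_device_name(0))  # GPU model
    print(torch.version.cuda)             # CUDA version PyTorch was built with
    # Tiny tensor operation on the GPU as a sanity check
    x = torch.ones(3, 3, device=device)
    print((x @ x).sum().item())           # 27.0 on a working setup
```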

Using Multiple GPUs for further acceleration

Running PyTorch with multiple GPUs can further increase efficiency. We have 8 GPU cards, which can be used in parallel for training. Please refer to (1) (2) (3) for details. 
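The simplest entry point is torch.nn.DataParallel, which splits each batch across all visible GPUs (torch.nn.parallel.DistributedDataParallel is the recommended option for serious multi-GPU training). A sketch that also runs unchanged on a CPU-only machine:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                 # any nn.Module works here
n_gpus = torch.cuda.device_count()
if n_gpus > 1:
    print(f"Splitting batches across {n_gpus} GPUs")
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# With no GPUs available, DataParallel simply forwards to the wrapped module
model = nn.DataParallel(model).to(device)

x = torch.randn(32, 10, device=device)   # the batch dimension is what gets split
out = model(x)
print(out.shape)                         # torch.Size([32, 2])
```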




30.09.2022 Meeting Notes

Meeting Details

Date : 30th September 2022

Time : 11:30 – 12:30

Location : Chair of CPS, Montanuniversität Leoben

Participants: Univ.-Prof. Dr. Elmar Rueckert, Vedant Dave

Agenda

  1. Get the Humanoids paper ready based on the reviewers' comments
  2. Extend the Conference paper for the Journal
  3. Active Exploration with Forward and Inverse Model learning

Topic 1: Humanoids Paper

  1. Change the paper according to the reviews.
  2. Add Real-world Experiments.

Topic 2: Science Robotics Paper

  1. Extend the paper for learning objects at different locations.
  2. Conduct experiments with multiple objects on the table.
  3. Enable object tracking and extend it.
  4. Extension to Riemannian Manifold to reduce the Orientation errors.

Topic 3: Active Exploration

  1. Survey on Exploration strategies and Empowerment.
  2. Trying to work on relationships between Maximum Entropy of Latent variables and Tasks.
  3. Trying to find literature on Learning Phase Jumps.
  4. Goal Babbling.

Literature

Inverse Dynamic Predictions

  1. S. Bechtle, B. Hammoud, A. Rai, F. Meier and L. Righetti, “Leveraging Forward Model Prediction Error for Learning Control,” 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 4445-4451, doi: 10.1109/ICRA48506.2021.9561396.
  2. Eysenbach, Benjamin, et al. “Diversity is all you need: Learning skills without a reward function.” arXiv preprint arXiv:1802.06070 (2018).
  3. Klyubin, Alexander S., Daniel Polani, and Chrystopher L. Nehaniv. “All else being equal be empowered.” European Conference on Artificial Life. Springer, Berlin, Heidelberg, 2005.

Next Meeting

TBD 




13.09.2022 Meeting Notes

Meeting Details

Date : 13th September 2022

Time : 12:30 – 13:30

Location : Chair of CPS, Montanuniversität Leoben

Participants: Univ.-Prof. Dr. Elmar Rueckert, Vedant Dave

Agenda

  1. Learning Consistent Forward and Inverse Dynamics.

Topic 1: Idea Development

  1. Thinking in terms of Closed loop systems and feedback controllers.
  2. Regularizing Forward model via Inverse model.
  3. Single-step and Multi-step prediction models.
  4. Comparing Multi-step predictions with Movement Primitives.

Topic 2: Toy Example

  1. Generate a toy dataset (temperature) with just a single parameter.
  2. Try the forward model on out-of-distribution test data.
  3. If it fails, try to regularize it with the inverse model and check whether that helps.
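
The three steps above can be sketched end to end; the dynamics (a normalized 1-D temperature model), network sizes, and loss weight are all illustrative assumptions:

```python
import torch
import torch.nn as nn

# Toy single-parameter system (assumed): T_next = T + 0.1 * (u - T),
# with temperature T and heater input u normalized to [0, 1].
# The forward model f(T, u) -> T_next is regularized by requiring the
# inverse model g(T, T_next) -> u to recover the input (cycle consistency).
torch.manual_seed(0)

def dynamics(T, u):
    return T + 0.1 * (u - T)

def mlp():
    return nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

forward_model, inverse_model = mlp(), mlp()
opt = torch.optim.Adam(
    list(forward_model.parameters()) + list(inverse_model.parameters()), lr=1e-2)

T = torch.rand(256, 1)          # in-distribution training states
u = torch.rand(256, 1)
T_next = dynamics(T, u)

for _ in range(500):
    opt.zero_grad()
    pred_next = forward_model(torch.cat([T, u], dim=1))
    pred_u = inverse_model(torch.cat([T, pred_next], dim=1))
    loss = ((pred_next - T_next) ** 2).mean() \
         + 0.1 * ((pred_u - u) ** 2).mean()   # cycle-consistency regularizer
    loss.backward()
    opt.step()

# In-distribution sanity check; repeating this on out-of-distribution states,
# with and without the regularizer, is what items 2 and 3 propose to test.
with torch.no_grad():
    pred = forward_model(torch.tensor([[0.5, 0.8]])).item()
print(abs(pred - dynamics(0.5, 0.8)))
```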

Literature

Inverse Dynamic Predictions

  1. Cooper, Richard. (2010). Forward and Inverse Models in Motor Control and Cognitive Control. Proceedings of the International Symposium on AI Inspired Biology – A Symposium at the AISB 2010 Convention.
  2. Moore, Andrew. “Fast, robust adaptive control by learning only forward models.” Advances in neural information processing systems 4 (1991).

Next Meeting

TBD 




01.09.2022 Meeting Notes

Meeting Details

Date : 1st September 2022

Time : 11:00 – 12:00

Location : Chair of CPS, Montanuniversität Leoben

Participants: Univ.-Prof. Dr. Elmar Rueckert, Vedant Dave

Agenda

  1. Learning Consistent Forward and Inverse Dynamics.

Topic 1: Learning Forward and Inverse Dynamics with Cycle Consistency

  1. Develop a framework to learn forward and inverse model of the system simultaneously.
  2. Search for tasks where both models are required.
  3. Test on the dataset from [1].

Topic 2: Binding Simulation and Reality Gap

  1. Working with forward model in the simulation and correcting it with inverse model from real system.
  2. Develop tasks for contact 

Literature

Inverse Dynamic Predictions

  1. Elmar Rueckert et al. “Learning inverse dynamics models in O(n) time with LSTM networks”. In: 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids). 2017, pp. 811–816.
  2. Vaisakh Shaj et al. “Action-Conditional Recurrent Kalman Networks For Forward and Inverse Dynamics Learning”. In: Proceedings of the 2020 Conference on Robot Learning. Ed. by Jens Kober, Fabio Ramos, and Claire Tomlin. Vol. 155. Proceedings of Machine Learning Research. PMLR, Nov. 2021, pp. 765–781.
  3. Moritz Reuss et al. “End-to-End Learning of Hybrid Inverse Dynamics Models for Precise and Compliant Impedance Control”. In: Proceedings of Robotics: Science and Systems. New York City, NY, USA, June 2022.

Next Meeting

TBD 




How to use Sensor Glove with Robot Hand

Repository Clone

 

Additional Requirements

Connection with PC

The first step is to make sure that the Sensor Glove is connected to USB0 and the Robot Hand is connected to USB1. If this is not the case, we may have to change it inside the files and also in the rosserial_python library. 

Connecting with ROS

  • Initiate roscore with the command: roscore
  • Run the rosserial_python command to initiate the serial connection with the hand through Python:

          rosrun rosserial_python serial_node.py tcp

           You will see something like this:

  • After this, run the Arduino file to initiate the calibration. Once the serial connection is established, you will see something like this:

Connecting with the Robot Hand

In order to connect with the hand, just run this file:

roslaunch rh8d start_rh8d.launch