
UR3 passwords

Robot serial number: 20225300304

Passwords:

  • safety: 0000



Meeting Notes July 2023

Meeting 06/07

Research

  • Investigating Representation Collapse in Reinforcement Learning Agents from Vision
    • plan/structure?
    • what RL algorithms?
      • visual data
      • Gerhard Neumann, Marc Toussaint, Justus Piater (Innsbruck)
      • Define a research question
      • Focus on some domain
  • Unnormalized Contrastive learning
    • All CL models use l2 normalization of the representation
      • Stability: Normalizing the representations ensures that they all have the same magnitude. This can make the learning process more stable, as it prevents the model from assigning arbitrarily large or small magnitudes to the representations.

      • Focus on direction: By constraining the representations to have a fixed magnitude, the learning process focuses on the direction of the vectors in the embedding space. This is often what we care about in tasks like contrastive learning, where the goal is to make the representations of similar inputs point in similar directions.

      • Computational convenience: As mentioned earlier, many computations, such as the dot product between two vectors, are easier to perform and interpret in normalized spaces.

      • Interpretability: Normalized representations are often more interpretable, as the angle between two vectors can be directly interpreted as a measure of similarity or dissimilarity.

    • BUT, this comes at the expense of:
      • Decreased Capacity: With normalization, the model’s capacity to represent data is reduced since it can only rely on the direction of vectors in the embedding space. This limitation may result in the model being less able to capture complex patterns in the data.
      • Missing Magnitude Information: The absence of magnitude information in normalized vectors removes the ability to convey meaningful data properties such as confidence levels or other relevant characteristics. Normalization discards this information, limiting the model’s understanding of the data.
    • IDEA: remove the l2 normalization
      • Regularize the model to penalize large magnitudes.
      • Scale the representations to a desired range.
      • Design a custom loss function considering both direction and magnitude
  • Breaking Binary: Towards a Continuum of Conceptual Similarities in Self-Supervised Learning
    • will take more time to set up
    • will leave it for later
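The "unnormalized contrastive learning" idea above can be sketched as an InfoNCE-style loss that skips the l2 normalization and instead penalizes large magnitudes. This is only an illustrative sketch of the "regularize the model to penalize large magnitudes" option; the function name and the magnitude_weight coefficient are placeholders, not an agreed design:

```python
import torch
import torch.nn.functional as F

def unnormalized_info_nce(z1, z2, temperature=1.0, magnitude_weight=0.01):
    """InfoNCE-style loss on unnormalized representations (sketch).

    Instead of projecting z1/z2 onto the unit sphere, keep their
    magnitudes and add a penalty that discourages arbitrarily large
    norms, so both direction and magnitude enter the objective.
    """
    batch_size = z1.shape[0]
    # Similarity via raw dot products, i.e. no l2 normalization.
    logits = z1 @ z2.T / temperature
    # Positive pairs sit on the diagonal of the similarity matrix.
    targets = torch.arange(batch_size, device=z1.device)
    contrastive = F.cross_entropy(logits, targets)
    # Penalize large magnitudes to keep training numerically stable.
    magnitude_penalty = (z1.norm(dim=1).pow(2) + z2.norm(dim=1).pow(2)).mean()
    return contrastive + magnitude_weight * magnitude_penalty
```

A custom loss combining direction and magnitude (the third sub-idea) would replace the penalty term with an explicit magnitude-matching term between positives.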

PhD Registration

  • registered

M.Sc. Students/Interns

  • Iye Szin is presenting her work so far next week.

ML Course

  • Publish video tutorial on PyTorch

Miscellaneous

  • Summer School in Cambridge
    • Poster?

Meeting 25/07

Research

  • Goal-oriented working mode:
    • define subgoals and milestones
    • (make sure that you can evaluate them, and define criteria of success, scores, etc.)
    • till 17.08.2023 10:00
  • Define topic, sub-problem, open challenge, your approach, toy task, full experiment
  • RAAD2024, 20.12.2023 concept paper with first results
  • Spring 2024 A+ robotics conference paper on simulation experiments.
  • Summer 2024 A+ robotics paper on real robot experiments

M.Sc. Students/Interns

Miscellaneous

 




Meeting Notes June 2023

Meeting 15/06

Research

  • reviews for ECAI (2/6) (Vedant is working on one of them)
  • Research leads:
    1. Dimensionality Collapse of Visual Representations in Reinforcement Learning
    2. Improve SwAV architecture by creating better latent space clusters with the use of Sparse Autoencoders

PhD Registration

  • waiting for admission office response

M.Sc. Students/Interns

  • Iye Szin has a working prototype

ML Course

  • Tutorial on PyTorch
  • pending grading for assignments 5 and 6

Miscellaneous

  • Summer School
    • Registration done
    • Air tickets booked
    • accommodation booked
  • English course got postponed

 

Meeting 22/06

Research

  • Reviews for ECAI 2023 done.

M.Sc. Students/Interns

ML Course

Miscellaneous

 

Meeting 29/06

Research

M.Sc. Students/Interns

ML Course

Miscellaneous




Meeting Notes May 2023

Meeting 11/05

Research

  • submitted CR-VAE paper to ECAI
  • Research leads:
    1. Dimensionality Collapse of Visual Representations in Reinforcement Learning
    2. Improve SwAV architecture by creating better latent space clusters with the use of Sparse Autoencoders

PhD Registration

  • Signed Application
  • Will hand it over to the Admissions office

M.Sc. Students/Interns

  • Possible PhD position for Iye Szin
  • Early June first draft presentation

ML Course

  • Assignment 5

Miscellaneous

  • Kleinwassertal

 

Meeting 25/05

Research

Literature Review

  1. Dimensionality Collapse of Visual Representations in Reinforcement Learning
  2. Improve SwAV architecture by creating better latent space clusters with the use of Sparse Autoencoders

ECAI Review Papers

  • 6 papers assigned × 16 hours (2 days) per paper = 96 hours (12 days)
  • More feasible to review only 2 papers.
  • deadline 16 June

M.Sc. Students/Interns

ML Course

  • Graded up to assignment 4
  • Assignment 6 out

Miscellaneous

 




Meeting Notes April 2023

Meeting 20/04

Research

  • reviewing paper for IROS
  • working on CR-VAE paper
  • experiment for the SL competition

PhD Registration

  • waiting for Toussaint’s response
  • maybe contact other professors?
    • Rudolf

M.Sc. Students/Interns

  • Iye Szin
    • SL competition; deadline May 1

ML Course

  • grades for assignment 2 out

Miscellaneous

  • Summer Schools
    • ProbAI accepted (registration until 26/04)
    • ETH & RLSS waiting list
  • May-June Leaves
    • 19 May
    • 30 May – 6 June
  • Move May 1 to May 10 vacation

 

Meeting 27/04

Research

  • working on CR-VAE paper
  • image encoder for the SL competition

PhD Registration

  • Mentor: Rudolf Lioutikov
  • Application needs signature from Rudolf

M.Sc. Students/Interns

  • Iye Szin
    • SL competition; deadline May 1

ML Course

  • Assignment 4: Regression

Miscellaneous

  • Summer Schools
    • Accepted:
      • M2LSS registered
      • ProbAI (declined the offer)
    • Rejected
      • ETH
      • RLSS
    • Applied:
      • LxMLS
      • Ellis Recommendation letter
  • internship application



Meeting Notes March 2023

Meeting 10/03

Research

  • Plan to participate in the air hockey challenge
  • Literature review for the right model

PhD Registration

  • todo: prepare Email

M.Sc. Students/Interns

  • Iye Szin work plan
  • Internship will lead to her thesis

ML Assistantship

  • Syllabus
  • Prepare exercises 

ML Course

  • Moodle to upload files (discussed)
  • Link to latex for the report (done)

Miscellaneous

  • No time to attend the research seminar, ML course takes too much of my time. (discussed)
  • 2 days work from home 31.05 & 01.06
  • Vacation 02.06 – 11.06
  • Medium GPUs for WS in the lab (RTX 3060 or 3070)

 

Meeting 13/03

Research

  • Rebuttal

ML Course

  • Assignment 1 preparation

Meeting 23/03

Research

  • respond to ICML Chairs about reviewer 1
  • Searched for alternative conferences
    • ECAI
    • BCCV
  • Literature review on SSL problems
  • RL Revision

M.Sc. Students/Interns

  • Iye Szin steady progress

Ph.D. registration

  • Email sent to Toussaint

ML Course

  • Assignment 1 grades
  • post pdf

Miscellaneous

  • Summer School Applications
  • Paper Review accepted for IROS 2023
  • fill the form for IAS retreat

 

Meeting 30/03

Research

  • waiting for ICML final decision
  • once it is out, compile the reviewer comments
    • data augmentation influence on MI
    • etc
  • submit to:
    • ECAI
    • ICVS (ranking is C)
  • Next on: Dimensionality collapse in representation learning
    • currently reading about it
  • Air hockey challenge
    • start with SAC
    • continue with a model-based RL method, like world models
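As a reminder for the SAC starting point above, the actor minimizes an entropy-regularized objective. A minimal sketch of that actor loss (names are illustrative; a full SAC implementation also tunes the temperature alpha automatically, which is omitted here):

```python
import torch

def sac_actor_loss(log_prob, q_value, alpha=0.2):
    """Entropy-regularized actor objective from SAC (sketch).

    The actor maximizes Q - alpha * log_prob; we return the negated
    mean as a loss to minimize. alpha is the fixed temperature here.
    """
    return (alpha * log_prob - q_value).mean()
```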

M.Sc. Students/Interns

  • Iye Szin struggling with ROS2 but in a logical frame

Ph.D. registration

  • Email sent to Toussaint. Waiting for response.

ML Course

  • Assignment 3 is out

Miscellaneous

  • Summer School Applications
  • Paper Review for IROS 2023
  • submitted the application for IAS retreat

 

Li Jing, Pascal Vincent, Yann LeCun, & Yuandong Tian (2021). Understanding Dimensional Collapse in Contrastive Self-supervised Learning. arXiv preprint arXiv:2110.09348.
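A quick way to check for the dimensional collapse Jing et al. describe is to inspect the singular value spectrum of the embedding covariance: collapsed dimensions show up as singular values dropping to (near) zero. A minimal diagnostic sketch, with the function name chosen for illustration:

```python
import torch

def singular_value_spectrum(representations):
    """Log singular value spectrum of an (N x D) embedding matrix.

    Jing et al. (2021) diagnose dimensional collapse by plotting this
    spectrum; dimensions that collapse have log singular values that
    fall off sharply toward -inf.
    """
    # Center the embeddings and form the D x D covariance matrix.
    z = representations - representations.mean(dim=0, keepdim=True)
    cov = z.T @ z / (z.shape[0] - 1)
    # svdvals returns singular values in descending order.
    svals = torch.linalg.svdvals(cov)
    return torch.log(svals + 1e-12)
```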



Self-supervised Learning for Few-Shot Learning – Internship Position

Start date: Open

Location: Leoben

Job Type: Internship

Duration: 3-6 months, depending on the applicant’s proficiency in the required qualifications.

Keywords: Self-supervised learning, Few-shot learning, Deep learning, PyTorch, Research

Supervisors:

Job Description

We are looking for a highly motivated research intern to work on the development of novel self-supervised learning algorithms to improve few-shot learning. The intern will be responsible for conducting research on self-supervised learning techniques such as contrastive learning and generative models, and their applications to few-shot learning. The intern will also be responsible for implementing and evaluating these algorithms on benchmark datasets.

Responsibilities

  • Conduct research on self-supervised learning techniques for few-shot learning.
  • Develop novel self-supervised learning algorithms and evaluate their performance on benchmark datasets.
  • Implement and fine-tune deep learning models for few-shot learning using self-supervised pre-training.
  • Collaborate with the research team to design and carry out experiments and analyze results.
  • Contribute to writing research papers and technical reports.

Qualifications

  • Currently pursuing a Bachelor’s or Master’s degree in Computer Science,
    Electrical Engineering, Mechanical Engineering, Mathematics or related
    fields.
  • Strong programming skills in Python and experience with deep learning frameworks such as PyTorch or TensorFlow.
  • Familiarity with self-supervised learning techniques such as contrastive learning and generative models.
  • Knowledge of few-shot learning and transfer learning is a plus.
  • Strong problem-solving skills and ability to work independently and collaboratively.
  • Good written and verbal communication skills in English.

Opportunities and Benefits of the Internship

This internship provides an excellent opportunity to gain hands-on experience in cutting-edge research on self-supervised learning for few-shot learning, working with a highly collaborative and supportive team. The intern will also have the opportunity to co-author research papers and technical reports, and participate in conferences and workshops.

Application

Send us your CV accompanied by a letter of motivation at fotios.lygerakis@unileoben.ac.at with the subject: “Internship Application | Self-supervised Learning”

Funding

We will support you during your application for an internship grant. Below we list some relevant grant application details.

CEEPUS grant (European for undergrads and graduates)

Find details on the Central European Exchange Program for University Studies program at https://grants.at/en/ or at https://www.ceepus.info.

In principle, you can apply at any time for a scholarship. However, also your country of origin matters and there exist networks of several countries that have their own contingent.

Ernst Mach Grant (Worldwide for PhDs and Seniors)

Find details on the program at https://grants.at/en/ or at https://oead.at/en/to-austria/grants-and-scholarships/ernst-mach-grant.

Other Funding Resources

Apply online at http://www.scholarships.at/




Meeting Notes February 2023

Meeting 02/02

Research

  • Follow up CR-VAE
    • Files on the papers folder
    • Create simple code to run experiments as described on paper
      • Upload on gitea
    • Create a webpage for CR-VAE paper
    • Wait for reviews (March 13)
    • Rebuttal (March 19)
  • Extend the representation learning work towards disentanglement
    • Literature Review
    • Dig deeper into Transformers
  • Literature Review on SOTA RL algorithms
    • Read and implement basic and SOTA RL algorithms
      • Can be the base of an RL course too.
  • Use CR-VAE with SOTA RL algorithms
    • First experiments with SAC
    • Explore sample efficiency
    • Explore gradient flow ablations
  • Develop an AR-ROS2 framework
    • Create a minimal working example of manipulating a physical robot (UR3) with Hololens2

M.Sc. Students/Interns

  • Melanie
    • Thesis Review
    • Code submission
  • Sign Language project
    • Define the project more clearly
      • Feedback needed
    • Send study details to the applicant
  • AR project
    • Is it within the scope of our research?

ML Assistantship

  • Syllabus
  • Prepare exercises 

Miscellaneous

  • Ph.D. registration
    • Mentor
      • Ortner Ronald?
      • Other UNI?
  • Retreats
    • expectations/requirements
  • Summer School
  • Neural Coffee (ML Reading Group)
    • When: Every Friday 10:00-12:00
    • Where: CPS Kitchen (?)
    • Poster
  • Floor and Desk Lamps

Meeting 16/02

Research

  • create a new research draft
    • implement CURL
    • substitute contrastive learning with CR-VAE representations
  • Literature review on unsupervised learning (Hinton’s work) to find angles that have room for improvement
    • write a journal on that

Summer School

  • CV & motivation letter feedback
  • Applied

M.Sc. Students/Interns

  • Melanie: thesis review done
  • Iye Szin:
    • Gave her resources to study (ML/NN/ROS2)
    • Discussed a plan for internship

Ph.D. registration

  • PhD in Computer Science
    • Not possible
    • probably doesn’t matter(?)
  • Call with Dean of Studies
  • Mentor
    • I would like someone exposed to sample-efficient and robust Reinforcement Learning, and hopefully to Robot Learning too
    • Someone who can also extend my scientific network
    • Can I ask professors from other universities?
  • Mentor Candidates
    • Marc Toussaint, Learning and Intelligent Systems lab, TU Berlin, Germany
    • Abhinav Valada, Robot Learning Lab, University of Freiburg, Germany
    • Georgia Chalvatzaki, IAS, TU Darmstadt, Germany
    • Edward Johns, Robot Learning Lab, Imperial College London, UK
    • Sepp Hochreiter, Institute of Machine Learning, JKU Linz, Austria
  • Write a paper with a mentor

ML Course

  • Jupyter notebooks or old code? If Jupyter notebooks, why not Google Colab?
  • What will the context of lectures be so that I can prepare exercises accordingly?
    • lectures are up
  • 20% of the final exam is from the lab exercises
  • Decide on the lecture format
  • Find an appropriate dataset

Miscellaneous

Science Breakfast @MUL: 14/02 11:00-12:00

Anymal Robot at Mining chair on 15/02?

Effective Communication In Academia Seminar

  • Feedback on CPS presentation template:
    • Size: Make the slide size the same as PowerPoint (more rectangular).
    • Outline (left outline)
      • We could skip the subsections. Keep only the higher sections
      • Make the fonts darker. They are not easily visible on a projector
    • Colors
      • Color of boxes (frames) must become darker, otherwise it is not easily distinguishable from the white background on a projector
  • Idea: Create a Google Slide template
    • Easier to use
    • Can add arrows, circles, etc
    • Easier with tables

Meeting 28/02

Research

  • air-hockey challenge

M.Sc. Students/Interns

  • Iye Szin:
    • starts 2 March
    • Elmar has to sign documents (permanent position)
    • Allocation of HW
    • transponder

Ph.D. registration

  • Mentor can be from anywhere
  • Mentor has to be a recognized scientist (with a “venia docendi” if he/she is from the German-speaking world)
  • No courses or ects needed
  • the mentor must not be a reviewer of your thesis. He can be an examiner, though.
  • Email to Marc Toussaint?
  • Officially: no obligations
  • Unofficially: propose common research

ML Course

  • Google Colab
    • Uses the jupyter format.
    • Runs online
    • Even supports limited access to GPU/TPU
    • Speeds up learning process
  • Do we need latex?
    • yes
  • Update slides for the Lab accordingly
  • Submission at a folder in the cloud
    • ipynb file
    • report
    • zipped and named: firstname_lastname_m00000_assignment1.zip
  • Online lectures -> webex more stable
  • Google slides template
  • Grading
    • 100 pts
    • latex report: +10
    • optional exercise: +20
  • tweetback: 3 questions

Miscellaneous

  • IAS retreat
  • Melanie’s presentation



Self-Supervised Learning Techniques for Improving Unsupervised Representation Learning [M.Sc. Thesis/Int. CPS project]

Abstract

The need for efficient and compact representations of sensory data, such as visual and textual data, has grown significantly due to the exponential growth in the size and complexity of the data. Self-supervised learning techniques, such as autoencoders, contrastive learning, and transformers, have shown significant promise in learning such representations from large unlabeled datasets. This research aims to develop novel self-supervised learning techniques inspired by these approaches to improve the quality and efficiency of unsupervised representation learning.

Description

The study will begin by reviewing the state-of-the-art self-supervised learning techniques and their applications in various domains, including computer vision and natural language processing. Next, a set of experiments will be conducted to develop and evaluate the proposed techniques on standard datasets in these domains.

The experiments will focus on learning compact and efficient representations of sensory data using autoencoder-based techniques, contrastive learning, and transformer-based approaches. The performance of the proposed techniques will be evaluated based on their ability to improve the accuracy and efficiency of unsupervised representation learning tasks.

The research will also investigate the impact of different factors such as the choice of loss functions, model architecture, and hyperparameters on the performance of the proposed techniques. The insights gained from this study will help in developing guidelines for selecting appropriate self-supervised learning techniques for efficient and compact representation learning.

Overall, this research will contribute to the development of novel self-supervised learning techniques for efficient and compact representation learning of sensory data. The proposed techniques will have potential applications in various domains, including computer vision, natural language processing, and other sensory data analysis tasks.
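As a concrete reference point for the autoencoder-based track described above, a compact representation learner can start from something like the following PyTorch sketch. The layer sizes and latent dimension are illustrative placeholders, not project specifications:

```python
import torch
from torch import nn

class TinyAutoencoder(nn.Module):
    """Minimal autoencoder for compact representation learning (sketch).

    The encoder maps an input vector to a low-dimensional latent code;
    the decoder reconstructs the input from that code. Training would
    minimize a reconstruction loss such as MSE between input and output.
    """
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z
```

The study’s comparisons of loss functions, architectures, and hyperparameters would then vary pieces of this baseline (and its contrastive and transformer-based counterparts).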

Qualifications

  • Currently pursuing a Bachelor’s or Master’s degree in Computer Science,
    Electrical Engineering, Mechanical Engineering, Mathematics, or related
    fields.
  • Strong programming skills in Python
  • Experience with deep learning frameworks such as PyTorch or TensorFlow.
  • Good written and verbal communication skills in English.
  • (optional) Familiarity with unsupervised learning techniques such as contrastive learning, self-supervised learning, and generative models

Interested?

If this topic excites you, please contact Fotios Lygerakis by email at fotios.lygerakis@unileoben.ac.at or simply visit us at our chair in the Metallurgie building, 1st floor.




HRI-SL: Human-Robot Interaction with Sign Language

Start date: Open

Location: Leoben

Position Types: Thesis/Internship

Duration: 3-6 months, depending on the applicant’s proficiency in the required qualifications.

 

Keywords: Human-Robot Interaction (HRI), Human Gesture Recognition, Sign Language, Robotics, Computer Vision, Large Language Models (LLMs), Behavior Cloning, Reinforcement Learning, Digital Twin, ROS-2

Supervisor:

You can work on this project either by doing a B.Sc. or M.Sc. thesis or an internship*.

Abstract

As the interaction with robots becomes an integral part of our daily lives, there is an escalating need for more human-like communication methods with these machines. This surge in robotic integration demands innovative approaches to ensure seamless and intuitive communication. Incorporating sign language, a powerful and unique form of communication predominantly used by the deaf and hard-of-hearing community, can be a pivotal step in this direction. 

By doing so, we not only provide an inclusive and accessible mode of interaction but also establish a non-verbal and non-intrusive way for everyone to engage with robots. This evolution in human-robot interaction will undoubtedly pave the way for more holistic and natural engagements in the future.

[Image: DALL·E illustration of a robot hand communicating with sign language]

Project Description

The implementation of sign language in human-robot interaction will not only improve the user experience but will also advance the field of robotics and artificial intelligence.

This project will encompass five crucial elements.

  1. Human Gesture Recognition with CNNs and/or Transformers – Recognizing human gestures in sign language through the development of deep learning methods utilizing a camera.
    • Letter-level
    • Word/Gloss-level
  2. Chat Agent with Large Language Models (LLMs) – Developing a gloss chat agent.
  3. Finger Spelling/Gloss gesture with Robot Hand/Arm-Hand –
    • Human Gesture Imitation
    • Behavior Cloning
    • Offline Reinforcement Learning
  4. Software Engineering – Create a seamless human-robot interaction framework using sign language.
    • Develop a ROS-2 framework
    • Develop a robot digital twin on simulation
  5. Human-Robot Interaction Evaluation – Evaluate and adopt the methods that yield the most human-like interaction with a robotic signer.
Hardware Set-Up for Character-level Human-Robot Interaction with Sign language.
Example of letter-level HRI with sign language: Copying agent
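Element 1 (letter-level gesture recognition) could start from a small CNN classifier along the lines of the sketch below. The input size (64×64 grayscale crops) and the 26-class letter output are assumptions for illustration; real dimensions would come from the project’s camera pipeline and target alphabet:

```python
import torch
from torch import nn

class LetterCNN(nn.Module):
    """Small CNN for letter-level sign classification (illustrative sketch).

    Two conv+pool stages halve the spatial size twice (64 -> 32 -> 16),
    then a linear head maps the flattened features to letter logits.
    """
    def __init__(self, num_classes=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))
```

Word/gloss-level recognition would extend this with a temporal model (e.g. a transformer over per-frame features), as named in the project elements.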

Qualifications

  • Currently pursuing a Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Mechanical Engineering, Mathematics or related fields.
  • Strong programming skills in Python and experience with deep learning frameworks such as PyTorch or TensorFlow.
  • Experience working with robotics hardware.
  • Knowledge of computer vision and image processing techniques
  • Strong problem-solving skills and ability to work independently and collaboratively.
  • Good written and verbal communication skills in English.
  • Passion for creating technology that is accessible and inclusive for everyone
  • Experience in working on research projects or coursework related to robotics or artificial intelligence is a plus

Opportunities

This project provides an excellent opportunity to gain hands-on experience in cutting-edge research, working with a highly collaborative and supportive team. The student/intern will also have the opportunity to co-author research papers and technical reports, and participate in conferences and workshops.

Application/Questions

Send us your CV accompanied by a letter of motivation at fotios.lygerakis@unileoben.ac.at with the subject: “Internship/Thesis Application | Sign Language Robot Hand”

Funding

* This project does not offer a funded position. Below we list some relevant grant application details.

CEEPUS grant (European for undergrads and graduates)

Find details on the Central European Exchange Program for University Studies program at https://grants.at/en/ or at https://www.ceepus.info.

In principle, you can apply at any time for a scholarship. However, also your country of origin matters and there exist networks of several countries that have their own contingent.

Ernst Mach Grant (Worldwide for PhDs and Seniors)

Find details on the program at https://grants.at/en/ or at https://oead.at/en/to-austria/grants-and-scholarships/ernst-mach-grant.

Other Funding Resources

Apply online at http://www.scholarships.at/