
Internship/Thesis in Machine Learning

Do you have a passion for machine learning and want to gain real-world experience? Are you eager to learn from leading researchers in the field? If so, then this internship is for you!

You can work on this project either by doing a B.Sc. or M.Sc. thesis or an internship.

Job Description

We are seeking highly motivated interns to join our team. The internship will focus on applying self-supervised methods (contrastive and non-contrastive) to computer vision, representation learning and data fusion problems. You will have the opportunity to contribute to a research project with the potential to improve the current models employed at our chair.

Start date: Open

Location: Leoben

Duration: 3-6 months

Supervisors:

Keywords:

  • Self-supervised learning
  • Autoencoders
  • Contrastive Learning
  • Energy-based Models
  • Deep learning
  • PyTorch
  • Research

Responsibilities

  • Dive headfirst into the deep learning pipeline, tackling data preparation, model development, training, and evaluation across computer vision, representation learning and data fusion.
  • Conduct in-depth literature reviews, staying at the forefront of advancements in these fields.
  • Craft compelling presentations and reports to effectively communicate your research findings.
  • Collaborate closely with your supervisors and team members, fostering a dynamic learning environment.
  • Gain deeper experience with industry-standard deep learning libraries (e.g., TensorFlow, PyTorch).

Qualifications

  • Currently pursuing a Bachelor’s or Master’s degree in Computer Science,
    Electrical Engineering, Mechanical Engineering, Mathematics or related
    fields.
  • Strong foundation in machine learning concepts (e.g., supervised learning, unsupervised learning, neural networks, etc.)
  • Strong programming skills in Python and experience with deep learning frameworks such as PyTorch or TensorFlow.
  • Excellent analytical and problem-solving skills.
  • Effective communication and collaboration skills to work seamlessly within the research team.
  • Good written and verbal communication skills in English.

Opportunities and Benefits of the Internship

  • Get a taste of a research environment and collaborate with other researchers in the field of machine learning.
  • Gain invaluable hands-on experience at the forefront of deep learning research.
  • Participate in a diverse team of researchers.
  • Explore cutting-edge applications of deep learning in computer vision, representation learning and data fusion.
  • Make a significant contribution to meaningful research projects that advance our Chair’s capabilities.
  • Strengthen your resume and network with leading researchers in the field.

Application

Send us your CV accompanied by a letter of motivation at fotios.lygerakis@unileoben.ac.at with the subject: “Internship Application | Machine Learning”

Funding

We will support you during your application for an internship grant. Below we list some relevant grant application details.

CEEPUS grant (European for undergrads and graduates)

Find details on the Central European Exchange Program for University Studies program at https://grants.at/en/ or at https://www.ceepus.info.

In principle, you can apply for a scholarship at any time. However, your country of origin matters, and there are networks of several countries that have their own contingents.

Ernst Mach Grant (Worldwide for PhDs and Seniors)

Other Funding Resources

Apply online at http://www.scholarships.at/

Meeting Notes February 2023

Meeting 02/02

Research

  • Follow up CR-VAE
    • Files in the papers folder
    • Create simple code to run the experiments as described in the paper
      • Upload to Gitea
    • Create a webpage for CR-VAE paper
    • Wait for reviews (March 13)
    • Rebuttal (March 19)
  • Extend the representation learning work towards disentanglement
    • Literature Review
    • Dig deeper into Transformers
  • Literature Review on SOTA RL algorithms
    • Read and implement basic and SOTA RL algorithms
      • Could also form the basis of an RL course.
  • Use CR-VAE with SOTA RL algorithms
    • First experiments with SAC
    • Explore sample efficiency
    • Explore gradient flow ablations
  • Develop an AR-ROS2 framework
    • Create a minimal working example of manipulating a physical robot (UR3) with Hololens2

M.Sc. Students/Interns

  • Melanie
    • Thesis Review
    • Code submission
  • Sign Language project
    • Define the project more clearly
      • Feedback needed
    • Send study details to the applicant
  • AR project
    • Is it within the scope of our research?

ML Assistantship

  • Syllabus
  • Prepare exercises 

Miscellaneous

  • Ph.D. registration
    • Mentor
      • Ortner Ronald?
      • Other UNI?
  • Retreats
    • expectations/requirements
  • Summer School
  • Neural Coffee (ML Reading Group)
    • When: Every Friday 10:00-12:00
    • Where: CPS Kitchen (?)
    • Poster
  • Floor and Desk Lamps

Meeting 16/02

Research

  • create a new research draft
    • implement CURL
    • substitute contrastive learning with CR-VAE representations
  • Literature review on unsupervised learning (Hinton’s work) to find angles that have room for improvement
    • write a journal paper on that

Summer School

  • CV & motivation letter feedback
  • Applied

M.Sc. Students/Interns

  • Melanie: thesis review done
  • Iye Szin:
    • Gave her resources to study (ML/NN/ROS2)
    • Discussed a plan for internship

Ph.D. registration

  • PhD in Computer Science
    • Not possible
    • probably doesn’t matter(?)
  • Call with Dean of Studies
  • Mentor
    • I would like someone exposed to sample-efficient and robust Reinforcement Learning, and hopefully to Robot Learning too
    • Someone who can also extend my scientific network
    • Can I ask professors from other universities?
  • Mentor Candidates
    • Marc Toussaint, Learning and Intelligent Systems lab, TU Berlin, Germany
    • Abhinav Valada, Robot Learning Lab, University of Freiburg, Germany
    • Georgia Chalvatzaki, IAS, TU Darmstadt, Germany
    • Edward Johns, Robot Learning Lab, Imperial College London, UK
    • Sepp Hochreiter, Institute of Machine Learning, JKU Linz, Austria
  • Write a paper with a mentor

ML Course

  • Jupyter notebooks or old code? If Jupyter notebooks, why not Google Colab?
  • What will the content of the lectures be, so that I can prepare exercises accordingly?
    • lectures are up
  • 20% of the final exam is from the lab exercises
  • Decide on the lecture format
  • Find an appropriate dataset

Miscellaneous

Science Breakfast @MUL: 14/02 11:00-12:00

ANYmal robot at the Mining chair on 15/02?

Effective Communication In Academia Seminar

  • Feedback on CPS presentation template:
    • Size: Make the slide size the same as PowerPoint (more rectangular).
    • Outline (left outline)
      • We could skip the subsections and keep only the top-level sections
      • Make the fonts darker; they are not easily visible on a projector
    • Colors
      • The color of the boxes (frames) must become darker; otherwise they are not easily distinguishable from the white background on a projector
  • Idea: Create a Google Slide template
    • Easier to use
    • Can add arrows, circles, etc
    • Easier with tables

Meeting 28/02

Research

  • air hockey challenge

M.Sc. Students/Interns

  • Iye Szin:
    • starts 2 March
    • Elmar has to sign documents (permanent position)
    • Allocation of HW
    • transponder

Ph.D. registration

  • Mentor can be from anywhere
  • Mentor has to be a recognized scientist (with a “venia docendi” if he/she is from the German-speaking world)
  • No courses or ECTS credits needed
  • The mentor must not be a reviewer of your thesis. He/she can be an examiner, though.
  • Email to Marc Toussaint?
  • Officially: no obligations
  • Unofficially: propose common research

ML Course

  • Google Colab
    • Uses the Jupyter notebook format.
    • Runs online
    • Even supports limited access to GPU/TPU
    • Speeds up learning process
  • Do we need LaTeX?
    • yes
  • Update slides for the Lab accordingly
  • Submission to a folder in the cloud
    • ipynb file
    • report
    • zipped and named: firstname_lastname_m00000_assignment1.zip
  • Online lectures -> Webex is more stable
  • Google slides template
  • Grading
    • 100 pts
    • LaTeX report: +10
    • optional exercise: +20
  • tweetback: 3 questions

Miscellaneous

  • IAS retreat
  • Melanie’s presentation

Self-Supervised Learning Techniques for Improving Unsupervised Representation Learning [M.Sc. Thesis/Int. CPS project]

Abstract

The need for efficient and compact representations of sensory data, such as images and text, has grown significantly due to the exponential growth in the size and complexity of data. Self-supervised learning techniques, such as autoencoders, contrastive learning, and transformers, have shown significant promise in learning such representations from large unlabeled datasets. This research aims to develop novel self-supervised learning techniques inspired by these approaches to improve the quality and efficiency of unsupervised representation learning.

Description

The study will begin by reviewing the state-of-the-art self-supervised learning techniques and their applications in various domains, including computer vision and natural language processing. Next, a set of experiments will be conducted to develop and evaluate the proposed techniques on standard datasets in these domains.

The experiments will focus on learning compact and efficient representations of sensory data using autoencoder-based techniques, contrastive learning, and transformer-based approaches. The performance of the proposed techniques will be evaluated based on their ability to improve the accuracy and efficiency of unsupervised representation learning tasks.
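To make the autoencoder-based direction concrete, below is a minimal sketch of a convolutional autoencoder in PyTorch. The input size (32x32 RGB), latent dimension, and layer sizes are illustrative assumptions, not the architectures that will actually be studied.

```python
# Minimal convolutional autoencoder sketch (assumed 32x32 RGB inputs and a
# 32-dimensional latent code; purely illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, latent_dim),                     # compact code
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)            # compact representation
        return self.decoder(z), z

# Reconstruction objective on a random batch.
model = ConvAutoencoder()
x = torch.rand(8, 3, 32, 32)
x_hat, z = model(x)
loss = F.mse_loss(x_hat, x)
```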

The research will also investigate the impact of different factors such as the choice of loss functions, model architecture, and hyperparameters on the performance of the proposed techniques. The insights gained from this study will help in developing guidelines for selecting appropriate self-supervised learning techniques for efficient and compact representation learning.
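As one concrete example of such a loss-function choice, the sketch below implements an InfoNCE-style contrastive objective in PyTorch; the temperature value and the use of cosine similarity are illustrative assumptions, not the exact objective the thesis will adopt.

```python
# Minimal InfoNCE-style contrastive loss sketch (illustrative only).
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two augmented views of the same inputs.
    Matching rows are treated as positive pairs, all other rows as negatives."""
    z1 = F.normalize(z1, dim=1)                 # cosine similarity via unit norm
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Example: embeddings of two augmented views of a batch of 16 inputs.
loss = info_nce_loss(torch.randn(16, 64), torch.randn(16, 64))
```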

Overall, this research will contribute to the development of novel self-supervised learning techniques for efficient and compact representation learning of sensory data. The proposed techniques will have potential applications in various domains, including computer vision, natural language processing, and other sensory data analysis tasks.

Qualifications

  • Currently pursuing a Bachelor’s or Master’s degree in Computer Science,
    Electrical Engineering, Mechanical Engineering, Mathematics, or related
    fields.
  • Strong programming skills in Python
  • Experience with deep learning frameworks such as PyTorch or TensorFlow.
  • Good written and verbal communication skills in English.
  • (optional) Familiarity with unsupervised learning techniques such as contrastive learning, self-supervised learning, and generative models

Interested?

If this topic excites you, please contact Fotios Lygerakis by email at fotios.lygerakis@unileoben.ac.at or simply visit us at our chair in the Metallurgie building, 1st floor.

HRI-SL: Human-Robot Interaction with Sign Language

Start date: Open

Location: Leoben

Position Types: Thesis/Internship

Duration: 3-6 months, depending on the applicant’s proficiency in the required qualifications.

 

Keywords: Human-Robot Interaction (HRI), Human Gesture Recognition, Sign Language, Robotics, Computer Vision, Large Language Models (LLMs), Behavior Cloning, Reinforcement Learning, Digital Twin, ROS-2

Supervisor:

You can work on this project either by doing a B.Sc. or M.Sc. thesis or an internship*.

Abstract

As the interaction with robots becomes an integral part of our daily lives, there is an escalating need for more human-like communication methods with these machines. This surge in robotic integration demands innovative approaches to ensure seamless and intuitive communication. Incorporating sign language, a powerful and unique form of communication predominantly used by the deaf and hard-of-hearing community, can be a pivotal step in this direction. 

By doing so, we not only provide an inclusive and accessible mode of interaction but also establish a non-verbal and non-intrusive way for everyone to engage with robots. This evolution in human-robot interaction will undoubtedly pave the way for more holistic and natural engagements in the future.

[Image: robot hand communicating with sign language (DALL·E)]

Project Description

The implementation of sign language in human-robot interaction will not only improve the user experience but will also advance the field of robotics and artificial intelligence.

This project will encompass five crucial elements.

  1. Human Gesture Recognition with CNNs and/or Transformers – Recognizing human gestures in sign language through the development of deep learning methods utilizing a camera (a minimal code sketch follows after this list).
    • Letter-level
    • Word/Gloss-level
  2. Chat Agent with Large Language Models (LLMs) – Developing a gloss chat agent.
  3. Finger Spelling/Gloss gesture with Robot Hand/Arm-Hand –
    • Human Gesture Imitation
    • Behavior Cloning
    • Offline Reinforcement Learning
  4. Software Engineering – Create a seamless human-robot interaction framework using sign language.
    • Develop a ROS-2 framework
    • Develop a robot digital twin on simulation
  5. Human-Robot Interaction Evaluation – Evaluate the developed methods and adopt those that enable the most human-like interaction with a robotic signer.
[Figure: Hardware set-up for character-level human-robot interaction with sign language]
[Figure: Example of letter-level HRI with sign language: copying agent]
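As a starting point for element 1, the sketch below shows a small PyTorch CNN for letter-level gesture classification. The assumed input (single-channel 64x64 hand crops) and the 26-letter output are placeholder choices; the project may instead use RGB frames, landmark features, or a Transformer.

```python
# Minimal letter-level fingerspelling classifier sketch (assumptions: 64x64
# grayscale hand crops, 26 static letter classes; purely illustrative).
import torch
import torch.nn as nn

class LetterCNN(nn.Module):
    def __init__(self, num_classes: int = 26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                                      # global pooling
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of 8 hand crops -> per-letter logits.
logits = LetterCNN()(torch.randn(8, 1, 64, 64))
```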

Qualifications

  • Currently pursuing a Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Mechanical Engineering, Mathematics or related fields.
  • Strong programming skills in Python and experience with deep learning frameworks such as PyTorch or TensorFlow.
  • Experience working with robotics hardware.
  • Knowledge of computer vision and image processing techniques
  • Strong problem-solving skills and ability to work independently and collaboratively.
  • Good written and verbal communication skills in English.
  • Passion for creating technology that is accessible and inclusive for everyone
  • Experience in working on research projects or coursework related to robotics or artificial intelligence is a plus

Opportunities

This project provides an excellent opportunity to gain hands-on experience in cutting-edge research, working with a highly collaborative and supportive team. The student/intern will also have the opportunity to co-author research papers and technical reports, and participate in conferences and workshops.

Application/Questions

Send us your CV accompanied by a letter of motivation at fotios.lygerakis@unileoben.ac.at with the subject: “Internship/Thesis Application | Sign Language Robot Hand”

Funding

* This project does not offer a funded position. Below we list some relevant grant application details.

CEEPUS grant (European for undergrads and graduates)

Find details on the Central European Exchange Program for University Studies program at https://grants.at/en/ or at https://www.ceepus.info.

In principle, you can apply for a scholarship at any time. However, your country of origin matters, and there are networks of several countries that have their own contingents.

Ernst Mach Grant (Worldwide for PhDs and Seniors)

Other Funding Resources

Apply online at http://www.scholarships.at/

Sign Language Robot Hand [M.Sc. Thesis/Int. CPS Project]

Abstract

Human-Robot Interaction using Sign Language is a project that aims to revolutionize the way we communicate with machines. With the increasing use of robots in our daily lives, it is important to create a more natural and intuitive way for humans to communicate with them.

Sign language is a unique and powerful form of communication that is widely used by the deaf and hard-of-hearing community. By incorporating sign language into robot interaction, we can create a more inclusive and accessible technology for everyone.

Moreover, sign language will provide a new and innovative way to interact with robots, making it possible for people to control and communicate with them in a way that is both non-verbal and non-intrusive.

Note: This project is also offered as an internship position.

[Image: robot hand communicating with sign language (DALL·E)]

Thesis Description

The implementation of sign language in human-robot interaction will not only improve the user experience but will also advance the field of robotics and artificial intelligence. This project has the potential to bring about a new era of human-robot interaction, where machines and humans can communicate in a more natural and human-like way. Therefore, the Human-Robot Interaction using Sign Language project is a crucial step toward creating a more accessible and user-friendly technology for everyone.

This thesis will encompass three crucial elements. The first part will focus on recognizing human gestures in sign language through the development of deep learning methods utilizing a camera. The second part will involve programming a robotic hand to translate text back into gestures. Finally, the third part will bring together the first two components to create a seamless human-robot interaction framework using sign language.
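To show how the three parts fit together, here is a hypothetical skeleton of the interaction loop; every name in it (recognize_gesture, text_to_gesture, hand.execute) is a placeholder invented for illustration, not an existing API.

```python
# Hypothetical skeleton of the three-part pipeline (all names are placeholders).

def recognize_gesture(frame):
    """Part 1: map a camera frame to text, e.g. with a trained CNN classifier."""
    raise NotImplementedError

def text_to_gesture(text):
    """Part 2: map text to joint trajectories for the robotic hand."""
    raise NotImplementedError

def interaction_loop(camera, hand):
    """Part 3: tie recognition and actuation into one closed loop."""
    for frame in camera:                     # stream of camera frames
        text = recognize_gesture(frame)      # sign language -> text
        hand.execute(text_to_gesture(text))  # text -> signed response
```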

Qualifications

  • Currently pursuing a Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Mechanical Engineering, Mathematics or related fields.
  • Strong programming skills in Python
  • Experience with deep learning frameworks such as PyTorch or TensorFlow.
  • Experience working with robotics hardware
  • Knowledge of computer vision and image processing techniques
  • Good written and verbal communication skills in English.

Interested?

If this project sounds like fun to you, please contact Fotios Lygerakis by email at fotios.lygerakis@unileoben.ac.at or simply visit us at our chair in the Metallurgie building, 1st floor.

Meeting Notes December 2022

Meeting 01/12

Updates

  • experiments with Caltech 101
    • dataset too small; the network needs pretraining
    • pictures too big; problems with GPU memory during training and large storage requirements when saving models
  • refactor code to better scale for more evaluation techniques
  • reviewed XAI methods.
  • Further literature review for representation learning

MS Student Updates

  • Melanie
    • Image Segmentation on Steel Defect dataset
    • Next on: Deep Optical Flow

Other activities

  • Hololens 2 review
  • plan to publish the AR project as internship position
    • LinkedIn -> CPS page?
    • MUL
    • Emails
  • Share Christmas video with the public relations team of MUL
  • LinkedIn account -> page

Next on

  • do experiments for interpolating latent space
  • run experiments with more datasets
  • reconstruct the paper for
  • literature review on representation learning
  • seminar talk on latent space representations, explainability in neural networks, and feature maps; organize meetings (1 paper per week)

Meeting 08/12

Updates

  •  

MS Student Updates

  • Melanie
    •  
    • Next on:

Other activities

  • PhD in Computer Science

Next on

  •  

Meeting Notes November 2022

Meeting 03/11

Done

  • Virtual machine setup & running experiments
    • speed is 3x slower than an RTX 3090

Started working on

  • develop experiments with new dataset (Caltech101)
  • assess experiments
  • test transfer learning capabilities of CR-VAE

MS Students Updates

  • Melanie
    • Object Detection using ResNet
  • Julian
    • basic tutorials on ROS
    • Teleoperation of Turtlebot using PS5 controller
    • UR3 on ROS using MoveIt

Next on

  • description of Caltech101
  • talk with Konrad about the cluster
  • Update paper with correct results and send it to CVPR
  • Hyperparameter grid search experiments
  • literature review on representation learning
  • experiments with artificial datasets
  • develop methods
    • contrastive learning for spiking neural networks.
    • mode-seeking KL divergence
 
 

Meeting 11/11

Progress

  • Updated draft of the CR-VAE paper.
  • Missed author registration deadline for CVPR >_<*
  • Run experiments with simple AE / CR-AE
  • Test if InfoMax objective actually works -> it doesn’t.
  • New implementation of the loss function
  • Tried input normalization and MSE for reconstruction error
  • Assess experiments
  • Hyperparameter grid search experiments for CR-VAE
  • Literature review on representation learning (review paper)
  • Going through ROS 2 documentation
  • Get acquainted with the UR3

MS Students Updates

  • Melanie
    • Object Detection using ResNet
    • Next on: Image Segmentation
  • Julian
    • Teleoperation of Turtlebot using PS5 controller
    • Next on: UR3 on Gazebo using MoveIt

Other activities

  • Discussion with Sahar and Vedant about how Sahar could frame her Reinforcement learning research problem.

Next on

  • assess experiments from the grid search with the new loss function
    • hopefully there will be some distinct difference between the 3 methods
  • further assess the value of MI as an auxiliary task for unsupervised representation learning
    • show that MI in InfoMax actually introduces noise
  • Denoising AE/CR-AE/VAE/CR-VAE with augmented images.
  • develop experiments with new dataset (Caltech101)
  • description of Caltech101
  • test transfer learning capabilities of CR-VAE
  • literature review on representation learning
  • experiments with artificial datasets
  • develop methods
    • contrastive learning for spiking neural networks.
    • mode-seeking KL divergence
     
Journal ideas
  • Find the best CL method for CR-VAE
  • Transfer learning
 

Meeting 17/11

Progress

  • GPU grid setbacks
  • Caltech101 dataset
  • ROS2 refresh
  • new results

MS Students Updates

  • Melanie
    • Image Segmentation on Steel Defect dataset
    • Next on: Contrastive Learning
  • Julian
    • Sick
    • Next on: UR3 on Gazebo using MoveIt

Other activities

  • Study abroad fair speech

Next on

  • study the big performance gap in KLD between CR-VAE and VAE
  • further assess the value of MI as an auxiliary task for unsupervised representation learning
    • show that MI in InfoMax actually introduces noise
  • Denoising AE/CR-AE/VAE/CR-VAE with augmented images.
  • develop experiments with new dataset (Caltech101)
  • description of Caltech101
  • test transfer learning capabilities of CR-VAE
  • literature review on representation learning
  • experiments with artificial datasets
  • develop methods
    • contrastive learning for spiking neural networks.
    • mode-seeking KL divergence
     
Journal ideas
  • Find the best CL method for CR-VAE
  • Transfer learning
  • develop a contrastive regularization layer for NN

Meeting 23/11

Updates

  • AAAI paper submission update
    • received an email that the file was never uploaded, even though I have a verification email. Still in the process of figuring it out
  • new results on smaller architecture -> more distinct results

MS Students Updates

  • Melanie
    • Image Segmentation on Steel Defect dataset
    • Next on: Contrastive Learning
  • Julian
    • Dropped
    • The subject was not aligned with his program
    • Working at the lab did not fit his schedule

Other activities

  • Christmas & HoloLens 2 unboxing videos
  • storage place or display for the PS5 controller, HoloLens, etc.?
  • plan to publish the AR project as internship position
    • LinkedIn -> CPS page?
    • MUL
    • Emails

Next on

  • reconstruct the paper
  • Caltech101 dataset
  • literature review on representation learning
  • Hololens 2 review
  • seminar talk on latent space representations, explainability in neural networks, and feature maps; organize meetings (1 paper per week)

Meeting 30/11

Updates

  • experiments with Caltech 101
    • dataset too small; the network needs pretraining
    • pictures too big; problems with GPU memory during training and large storage requirements when saving models
  • refactor code to better scale for more evaluation techniques
  • reviewed XAI methods.
  • Further literature review for representation learning

MS Student Updates

  • Melanie
    • Image Segmentation on Steel Defect dataset
    • Next on: Deep Optical Flow

Other activities

  • Hololens 2 review
  • plan to publish the AR project as internship position
    • LinkedIn -> CPS page?
    • MUL
    • Emails
  • Share Christmas video with the public relations team of MUL
  • LinkedIn account -> page

Next on

  • do experiments for interpolating latent space
  • run experiments with more datasets
  • reconstruct the paper for
  • literature review on representation learning
  • seminar talk on latent space representations, explainability in neural networks, and feature maps; organize meetings (1 paper per week)

Meeting Notes October 2022

Meeting 21/10

Done

  • experiment assessment with a small custom architecture

Next on

  • find a new controller
  • set up computer for Melanie & Julian
  • Virtual machine setup

Meeting 25/10

Done

  • preliminary experiment assessment with the ResNet architecture
  • schedule new experiments on the ResNet architecture
  • preparation and meetings with M.Sc. students

Next on

  • assess experiments
  • literature review on representation learning
  • Virtual machine setup
  • experiments with artificial datasets
  • develop methods
    • contrastive learning for spiking neural networks.
    • mode-seeking KL divergence


Meeting Notes 14.10.2022

Participants

Niko, Fotis, Linus, Vedant

Agenda

  • First discussion on the project structure of the CPS Hub
  • Initial plan & examples

Notes

  • Components (e.g. UR3, RH8D_hand, Glove, Hololens2) are independent repositories
  • Projects (e.g. TacProMPs, HololensTeleop) are independent repositories that use the above repos.
  • No custom messages without previous team meeting
  • Use ROS 2 Foxy
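For reference, a minimal rclpy node consistent with these conventions (ROS 2 Foxy, standard messages only, no custom messages) could look like the sketch below; the node and topic names are illustrative assumptions.

```python
# Minimal ROS 2 (Foxy) node sketch that publishes a standard std_msgs/String,
# respecting the "no custom messages" convention. Node/topic names are made up.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class StatusPublisher(Node):
    def __init__(self):
        super().__init__('status_publisher')
        self.pub = self.create_publisher(String, 'cps/status', 10)
        self.timer = self.create_timer(1.0, self.tick)  # publish once per second

    def tick(self):
        msg = String()
        msg.data = 'component alive'
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(StatusPublisher())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```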