No time to attend the research seminar; the ML course takes too much of my time. (discussed)
2 days work from home 31.05 & 01.06
Vacation 02.06 – 11.06
Mid-range GPUs for workstations in the lab (RTX 3060 or 3070)
Meeting 13/03
Research
Rebuttal
ML Course
Assignment 1 preparation
Meeting 23/03
Research
respond to ICML Chairs about reviewer 1
Searched for alternative conferences
ECAI
BCCV
Literature review on SSL problems
RL Revision
M.Sc. Students/Interns
Iye Szin steady progress
Ph.D. registration
Email sent to Toussaint
ML Course
Assignment 1 grades
post the PDF
Miscellaneous
Summer School Applications
Paper Review accepted for IROS 2023
fill the form for IAS retreat
Meeting 30/03
Research
waiting for ICML final decision
once the decision is out, I will compile the comments
influence of data augmentation on mutual information (MI)
etc
submit to
ECAI
ICVS ranking is C
Next on: Dimensionality collapse in representation learning
currently reading about it
Air hockey challenge
start with SAC
continue with a model-based RL method, like world models
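As a numerical handle on the SAC starting point above: the quantity that distinguishes SAC from a plain actor-critic is its entropy-regularized value target with clipped double-Q critics. A hedged numpy sketch with toy placeholder numbers (a real agent would use PyTorch networks and the challenge's own environment):

```python
# Entropy-regularized soft value target used in SAC:
#   V(s') = E_{a'~pi}[ min(Q1, Q2)(s', a') - alpha * log pi(a'|s') ]
# All tensors below are toy placeholders, not challenge code.
import numpy as np

def soft_value_target(q1, q2, log_pi, alpha=0.2):
    """Soft state value estimated from sampled actions (mean over samples)."""
    q_min = np.minimum(q1, q2)  # clipped double-Q to curb overestimation
    return float(np.mean(q_min - alpha * log_pi))

# Toy numbers: two critics' Q-values and log-probs for 4 sampled actions.
q1 = np.array([1.0, 2.0, 0.5, 1.5])
q2 = np.array([1.2, 1.8, 0.7, 1.4])
log_pi = np.array([-1.0, -0.5, -2.0, -1.5])
print(soft_value_target(q1, q2, log_pi))  # higher policy entropy raises the target
```

The `alpha` temperature trades off reward against entropy; in practice it is often tuned automatically.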
M.Sc. Students/Interns
Iye Szin is struggling with ROS 2 but making reasonable progress
Ph.D. registration
Email sent to Toussaint. Waiting for response
ML Course
Assignment 3 is out
Miscellaneous
Summer School Applications
Paper Review for IROS 2023
submitted the application for IAS retreat
Li Jing, Pascal Vincent, Yann LeCun, and Yuandong Tian (2021). Understanding Dimensional Collapse in Contrastive Self-supervised Learning. arXiv preprint arXiv:2110.09348.
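As a concrete handle on the dimensional-collapse reading (Jing et al., 2021): collapse can be diagnosed from the singular value spectrum of a batch of embeddings, which flattens to near-zero past the effective rank. A minimal numpy sketch on synthetic data (all names and thresholds illustrative):

```python
# Diagnose dimensional collapse via the singular value spectrum of a
# centered batch of embeddings; synthetic data stands in for a real encoder.
import numpy as np

def singular_value_spectrum(z: np.ndarray) -> np.ndarray:
    """Sorted singular values of the centered embedding matrix z (N x D)."""
    z = z - z.mean(axis=0, keepdims=True)   # center the batch
    return np.linalg.svd(z, compute_uv=False)

rng = np.random.default_rng(0)
healthy = rng.normal(size=(512, 32))        # spans all 32 embedding dimensions
collapsed = rng.normal(size=(512, 4)) @ rng.normal(size=(4, 32))  # rank-4 embeddings

# Effective rank: singular values above a small fraction of the largest.
def effective_rank(s, rel_tol=1e-6):
    return int((s > rel_tol * s[0]).sum())

print(effective_rank(singular_value_spectrum(healthy)))    # ~32: no collapse
print(effective_rank(singular_value_spectrum(collapsed)))  # ~4: collapsed
```

In practice one plots the full log-spectrum rather than a single rank number, but the contrast is the same.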
Internship/Thesis in Machine Learning
Do you have a passion for machine learning and want to gain real-world experience? Are you eager to learn from leading researchers in the field? If so, then this internship is for you!
You can work on this project either by doing a B.Sc. or M.Sc. thesis or an internship.
Job Description
We are seeking highly motivated interns to join our team. The internship will focus on applying self-supervised methods (contrastive and non-contrastive) to computer vision, representation learning, and data fusion problems. You will have the opportunity to contribute to a research project with the potential to improve current models employed in our chair.
Dive headfirst into the deep learning pipeline, tackling data preparation, model development, training, and evaluation across computer vision, representation learning and data fusion.
Conduct in-depth literature reviews, staying on the forefront of advancements in these fields.
Craft compelling presentations and reports to effectively communicate your research findings.
Collaborate closely with your supervisors and team members, fostering a dynamic learning environment.
Gain deeper experience with industry-standard deep learning libraries (e.g., TensorFlow, PyTorch).
Qualifications
Currently pursuing a Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Mechanical Engineering, Mathematics or related fields.
Strong foundation in machine learning concepts (e.g., supervised learning, unsupervised learning, neural networks)
Strong programming skills in Python and experience with deep learning frameworks such as PyTorch or TensorFlow.
Excellent analytical and problem-solving skills.
Effective communication and collaboration skills to work seamlessly within the research team.
Good written and verbal communication skills in English.
Opportunities and Benefits of the Internship
Get a taste of a research environment and collaborate with other researchers in the field of machine learning.
Gain invaluable hands-on experience at the forefront of deep learning research.
Participate in a diverse team of researchers.
Explore the cutting-edge applications of deep learning on computer vision, representation learning and data fusion.
Make a significant contribution to meaningful research projects that advance our Chair’s capabilities.
Strengthen your resume and network with leading researchers in the field.
Application
Send us your CV accompanied by a letter of motivation at fotios.lygerakis@unileoben.ac.at with the subject: “Internship Application | Machine Learning”
In principle, you can apply for a scholarship at any time. Note, however, that your country of origin matters: several countries have networks with their own contingents.
The need for efficient and compact representations of sensory data, such as images and text, has grown significantly due to the exponential growth in the size and complexity of available data. Self-supervised learning techniques, such as autoencoders, contrastive learning, and transformers, have shown significant promise in learning such representations from large unlabeled datasets. This research aims to develop novel self-supervised learning techniques inspired by these approaches to improve the quality and efficiency of unsupervised representation learning.
Description
The study will begin by reviewing the state-of-the-art self-supervised learning techniques and their applications in various domains, including computer vision and natural language processing. Next, a set of experiments will be conducted to develop and evaluate the proposed techniques on standard datasets in these domains.
The experiments will focus on learning compact and efficient representations of sensory data using autoencoder-based techniques, contrastive learning, and transformer-based approaches. The performance of the proposed techniques will be evaluated based on their ability to improve the accuracy and efficiency of unsupervised representation learning tasks.
The research will also investigate the impact of different factors such as the choice of loss functions, model architecture, and hyperparameters on the performance of the proposed techniques. The insights gained from this study will help in developing guidelines for selecting appropriate self-supervised learning techniques for efficient and compact representation learning.
Overall, this research will contribute to the development of novel self-supervised learning techniques for efficient and compact representation learning of sensory data. The proposed techniques will have potential applications in various domains, including computer vision, natural language processing, and other sensory data analysis tasks.
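For the contrastive branch of the techniques described above, the canonical starting point is the InfoNCE loss: two augmented views of the same input should map to similar embeddings, with the other samples in the batch acting as negatives. A minimal numpy sketch with illustrative data (not chair-internal code):

```python
# InfoNCE contrastive loss on paired views; cosine similarities with a
# temperature, cross-entropy against the matching view on the diagonal.
import numpy as np

def info_nce(z1: np.ndarray, z2: np.ndarray, tau: float = 0.1) -> float:
    """InfoNCE loss for paired view embeddings z1, z2 of shape (N, D)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # L2-normalize
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                             # scaled cosine similarities
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))            # positives on the diagonal

rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 16))
aligned = anchor + 0.01 * rng.normal(size=(8, 16))       # near-identical views
shuffled = rng.normal(size=(8, 16))                      # unrelated views

print(info_nce(anchor, aligned))   # low: views agree
print(info_nce(anchor, shuffled))  # high: no shared content
```

A trained encoder minimizes this loss over augmented pairs; the temperature `tau` controls how hard the negatives weigh in.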
Qualifications
Currently pursuing a Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Mechanical Engineering, Mathematics, or related fields.
Strong programming skills in Python
Experience with deep learning frameworks such as PyTorch or TensorFlow.
Good written and verbal communication skills in English.
(optional) Familiarity with unsupervised learning techniques such as contrastive learning, self-supervised learning, and generative models
Interested?
If this topic excites you, please contact Fotios Lygerakis by email at fotios.lygerakis@unileoben.ac.at or simply visit us at our chair in the Metallurgie building, 1st floor.
HRI-SL: Human-Robot Interaction with Sign Language
Start date: Open
Location: Leoben
Position Types: Thesis/Internship
Duration: 3-6 months, depending on the applicant’s proficiency in the required qualifications.
Keywords: Human-Robot Interaction (HRI), Human Gesture Recognition, Sign Language, Robotics, Computer Vision, Large Language Models (LLMs), Behavior Cloning, Reinforcement Learning, Digital Twin, ROS-2
You can work on this project either by doing a B.Sc. or M.Sc. thesis or an internship*.
Abstract
As the interaction with robots becomes an integral part of our daily lives, there is an escalating need for more human-like communication methods with these machines. This surge in robotic integration demands innovative approaches to ensure seamless and intuitive communication. Incorporating sign language, a powerful and unique form of communication predominantly used by the deaf and hard-of-hearing community, can be a pivotal step in this direction.
By doing so, we not only provide an inclusive and accessible mode of interaction but also establish a non-verbal and non-intrusive way for everyone to engage with robots. This evolution in human-robot interaction will undoubtedly pave the way for more holistic and natural engagements in the future.
Project Description
The implementation of sign language in human-robot interaction will not only improve the user experience but will also advance the field of robotics and artificial intelligence.
This project will encompass five crucial elements.
Human Gesture Recognition with CNNs and/or Transformers – Recognizing human gestures in sign language through the development of deep learning methods utilizing a camera.
Letter-level
Word/Gloss-level
Chat Agent with Large Language Models (LLMs) – Developing a gloss chat agent.
Finger Spelling/Gloss gesture with Robot Hand/Arm-Hand –
Human Gesture Imitation
Behavior Cloning
Offline Reinforcement Learning
Software Engineering – Create a seamless human-robot interaction framework using sign language.
Develop a ROS-2 framework
Develop a robot digital twin on simulation
Human-Robot Interaction Evaluation – Evaluate the methods and adopt those that enable the most human-like interaction with a robotic signer.
Hardware set-up for character-level human-robot interaction with sign language.
Example of letter-level HRI with sign language: copying agent.
Qualifications
Currently pursuing a Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Mechanical Engineering, Mathematics or related fields.
Strong programming skills in Python and experience with deep learning frameworks such as PyTorch or TensorFlow.
Experience working with robotics hardware.
Knowledge of computer vision and image processing techniques
Strong problem-solving skills and ability to work independently and collaboratively.
Good written and verbal communication skills in English.
Passion for creating technology that is accessible and inclusive for everyone
Experience in working on research projects or coursework related to robotics or artificial intelligence is a plus
Opportunities
This project provides an excellent opportunity to gain hands-on experience in cutting-edge research, working with a highly collaborative and supportive team. The student/intern will also have the opportunity to co-author research papers and technical reports, and participate in conferences and workshops.
Application/Questions
Send us your CV accompanied by a letter of motivation at fotios.lygerakis@unileoben.ac.at with the subject: “Internship/Thesis Application | Sign Language Robot Hand”
Funding
* This project does not offer a funded position. Below we list some relevant grant application details.
CEEPUS grant (European for undergrads and graduates)
In principle, you can apply for a scholarship at any time. Note, however, that your country of origin matters: several countries have networks with their own contingents.
Sign Language Robot Hand [M.Sc. Thesis/Int. CPS Project]
Abstract
Human-Robot Interaction using Sign Language is a project that aims to revolutionize the way we communicate with machines. With the increasing use of robots in our daily lives, it is important to create a more natural and intuitive way for humans to communicate with them.
Sign language is a unique and powerful form of communication that is widely used by the deaf and hard-of-hearing community. By incorporating sign language into robot interaction, we can create a more inclusive and accessible technology for everyone.
Moreover, sign language will provide a new and innovative way to interact with robots, making it possible for people to control and communicate with them in a way that is both non-verbal and non-intrusive.
The implementation of sign language in human-robot interaction will not only improve the user experience but will also advance the field of robotics and artificial intelligence. This project has the potential to bring about a new era of human-robot interaction, where machines and humans can communicate in a more natural and human-like way. Therefore, the Human-Robot Interaction using Sign Language project is a crucial step toward creating a more accessible and user-friendly technology for everyone.
This thesis will encompass three crucial elements. The first part will focus on recognizing human gestures in sign language through the development of deep learning methods utilizing a camera. The second part will involve programming a robotic hand to translate text back into gestures. Finally, the third part will bring together the first two components to create a seamless human-robot interaction framework using sign language.
Qualifications
Currently pursuing a Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Mechanical Engineering, Mathematics or related fields.
Strong programming skills in Python
Experience with deep learning frameworks such as PyTorch or TensorFlow.
Experience working with robotics hardware
Knowledge of computer vision and image processing techniques
Good written and verbal communication skills in English.
Interested?
If this project sounds like fun to you, please contact Fotios Lygerakis by email at fotios.lygerakis@unileoben.ac.at or simply visit us at our chair in the Metallurgie building, 1st floor.
Meeting Notes December 2022
Meeting 01/12
Updates
experiments with caltech 101
dataset too small; the network needs pretraining
images too big: GPU memory problems during training and large storage requirements when saving models
refactor code to better scale for more evaluation techniques
reviewed XAI methods.
Further literature review for representation learning
MS Student Updates
Melanie
Image Segmentation on Steel Defect dataset
Next on: Deep Optical Flow
Other activities
Hololens 2 review
plan to publish the AR project as internship position
LinkedIn -> CPS page?
MUL
Emails
Share Christmas video with the public relations team of MUL
Seminar talk on latent-space representations and explainability in neural networks (feature maps); organize meetings (1 paper per week)
Meeting Notes October 2022
Meeting 21/10
Done
experiment assessing with small custom architecture
Next on
find a new controller
set up computer for Melanie & Julian
Virtual machine setup
Meeting 25/10
Done
preliminary experiment assessing with resnet architecture
schedule new experiments on resnet architecture
preparation and meetings with MS students
meeting notes per month
Next on
assess experiments
literature review on representation learning
Virtual machine setup
experiments with artificial datasets
develop methods
contrastive learning for spiking neural networks.
mode-seeking KL divergence
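The mode-seeking behavior of the reverse KL divergence noted above can be illustrated with a discrete toy example: fitting a single-mode distribution to a bimodal target, reverse KL(q‖p) prefers latching onto one mode, while forward KL(p‖q) prefers a broad mass-covering fit. Purely illustrative numbers:

```python
# Reverse (mode-seeking) vs. forward (mass-covering) KL on a bimodal target,
# using discretized Gaussians on a 1-D grid.
import numpy as np

def kl(a: np.ndarray, b: np.ndarray) -> float:
    """KL(a || b) for discrete distributions, skipping zero-probability bins."""
    mask = a > 0
    return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

x = np.linspace(-6, 6, 601)

def gauss(mu, sigma=0.5):
    p = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return p / p.sum()

p = 0.5 * gauss(-2) + 0.5 * gauss(2)   # bimodal target
q_mode = gauss(2)                      # single-mode fit sitting on one mode
q_mid = gauss(0, sigma=2.5)            # broad fit covering both modes

print(kl(q_mode, p) < kl(q_mid, p))    # reverse KL favors the single-mode fit
print(kl(p, q_mid) < kl(p, q_mode))    # forward KL favors the broad fit
```

This asymmetry is why reverse KL is called mode-seeking: placing q-mass where p is near zero is heavily penalized, so q concentrates on one mode.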
Introduction to Productivity, Flexibility and Team Work
Increase your Productivity
Schedule your weekly tasks, meetings, courses or activities!
Increase your Flexibility
Access your files from any computer, tablet or phone!
Work as a Team
Edit together in real-time with easy sharing, and use comments, suggestions, and action items to keep things moving. Or use @-mentions to pull relevant people, files, and events into your online files for rich collaboration.