
Mixed Reality Robot Teleoperation with Hololens 2 – Internship Position

Start date: Open

Location: Leoben

Job Type: Internship

Duration: 3-6 months, depending on the applicant’s proficiency in the required qualifications.

Keywords: Mixed Reality, Augmented Reality, Unity 3D, Robotic Manipulators, ROS 2, Hololens 2.

Supervisors:

Job Description

The “Mixed Reality (AR) Interface for Intuitive Programming of Robotic Manipulators” project is a unique opportunity to work on cutting-edge research at the intersection of robotics and mixed reality. The project aims to create an intuitive programming interface for robotic manipulators using Unity 3D and the ROS 2 robotics framework.

By participating in this internship, you will gain valuable skills in mixed reality development, robotics, and software engineering, and contribute to a project with the potential to revolutionize the way we program and control robotic manipulators.

Note: This project is also offered under the Integrated CPS project course or as a B.Sc. or M.Sc. thesis.

Responsibilities

As an intern, you will develop a mixed reality (AR) interface using Unity 3D that enables intuitive programming of robotic manipulators, and you will integrate this interface with the ROS 2 robotics framework to control a robotic manipulator (UR3).

This internship will give you hands-on experience in developing mixed reality applications and working with robotics hardware. You will collaborate with a supportive team of researchers and engineers to solve challenging problems in mixed reality and robotics. You will also have the opportunity to learn about the unique challenges and opportunities involved in creating innovative and intuitive interfaces for programming robotic systems.

Qualifications

  • Currently pursuing a Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Mechanical Engineering or related fields.
  • Strong programming skills in C# and Unity 3D
  • Familiarity with ROS or other robotic frameworks
  • Knowledge of 3D modeling and animation
  • Experience with mixed reality development and programming
  • Good written and verbal communication skills in English.
  • Passion for creating innovative and intuitive interfaces for programming robotic systems
  • Experience in working on research projects or coursework related to robotics or mixed reality is a plus

Opportunities and Benefits of the Internship

This internship provides an excellent opportunity to gain hands-on experience in cutting-edge research on mixed reality and robotics, working with a highly collaborative and supportive team. The intern will also have the opportunity to co-author research papers and technical reports, and to participate in conferences and workshops.

Application

Send us your CV accompanied by a letter of motivation at fotios.lygerakis@unileoben.ac.at with the subject: “Internship Application | Mixed Reality Robot Teleoperation”

Funding

We will support you during your application for an internship grant. Below we list some relevant grant application details.

CEEPUS grant (European; for undergraduates and graduates)

Find details on the Central European Exchange Program for University Studies program at https://grants.at/en/ or at https://www.ceepus.info.

In principle, you can apply for a scholarship at any time. However, your country of origin also matters: several countries form networks that have their own contingents.

Ernst Mach Grant (Worldwide; for PhDs and seniors)

Other Funding Resources

Apply online at http://www.scholarships.at/

Mixed Reality Robot Teleoperation with Hololens 2 [Thesis / Int. CPS Project]

Description

A Mixed Reality (AR) interface based on Unity 3D for intuitive programming of robotic manipulators (UR3). The interface will be implemented on top of the ROS 2 robotics framework.

Note: This project is also offered as an internship position.

https://www.youtube.com/watch?v=-MfNrxHXwow

Abstract

Robots will become a necessity for every business in the near future. Companies that rely heavily on the constant manipulation of objects will especially need to repurpose their robots continually to meet ever-changing demands. Furthermore, with the rise of Machine Learning, human collaborators or “robot teachers” will need a more intuitive interface to communicate with robots, whether interacting with them or teaching them.

In this project we will develop a novel Mixed (Augmented) Reality interface for teleoperating the UR3 robotic manipulator. For this purpose, we will use AR glasses to augment the user’s reality with information about the robot and enable intuitive programming of the robot. The interface will be implemented on the ROS 2 framework for enhanced scalability and better integration with other devices.

Outcomes

This thesis will result in an innovative graphical interface that enables non-experts to program a robotic manipulator.

The student will gain valuable experience with the Robot Operating System (ROS) framework and with developing graphical interfaces in Unity. The student will also gain a good understanding of robotic manipulators (such as the UR3) and complete a full engineering project.

Qualifications

  • Currently pursuing a Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Mechanical Engineering or related fields.
  • Good programming skills in C# and Unity 3D
  • Familiarity with ROS or other robotic frameworks
  • Basic knowledge of 3D modeling and animation
  • Good written and verbal communication skills in English.
  • (optional) Experience with mixed reality development and programming

Interested?

If this project sounds like fun to you, please contact Fotios Lygerakis by email at fotios.lygerakis@unileoben.ac.at or simply visit us at our chair in the Metallurgie building, 1st floor.

Retreat notes and progress until 05.09.2022

Agenda

Next steps after AAAI submission

Upcoming research questions to answer

  1. Normalize total loss
  2. What is the performance of CR-VAE with ResNet architecture on MNIST and CIFAR-10 Datasets?
  3. What is the performance of MoCo on MNIST and CIFAR-10 Datasets?
  4. How does CR-VAE-BIG compare with MoCo?
  5. What is better, SGD or Adam? Why?
  6. What is better, E2E or Modular? Why?
  7. How can we train on ImageNet? Maybe alternative datasets?
  8. New architecture: decoder input -> concatenated latent representations from q and k encoders.
  9. Can we incorporate all representation techniques into one?

Post-paper-submission setbacks

  • KL divergence computation was wrong. When fixed, performance changed
  • With a weight factor of 1 for the KL divergence, the learned features’ performance on the classification task diminishes.
  • This report shows this problem.
  • Same behavior for CR-VAE. Until the reconstruction and contrastive losses are on the same scale as the KLD loss, the performance will continue to deviate, because the KLD term numerically dominates the total loss.
  • Way to mitigate it:
    • Descending beta value
      • currently exploring different scheduling techniques
      • report
  • Note: CR-VAE no longer seems novel
  • Normalizing the total loss (weighting each loss inversely with its magnitude) might lead to better performance
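The mitigation above (a descending beta value so the KLD term stops dominating) can be sketched as follows. The loss structure, reconstruction plus KLD plus contrastive, follows these notes; the linear schedule and its start/end constants are illustrative assumptions, not the project’s actual scheduler:

```python
# Minimal sketch of a descending-beta schedule for a CR-VAE-style total loss
#   L = L_recon + beta * L_KLD + L_contrastive
# The linear annealing and the start/end values are assumptions for illustration.

def descending_beta(epoch: int, total_epochs: int,
                    beta_start: float = 1.0, beta_end: float = 0.01) -> float:
    """Linearly anneal beta from beta_start down to beta_end over training."""
    frac = min(epoch / max(total_epochs - 1, 1), 1.0)
    return beta_start + frac * (beta_end - beta_start)

def total_loss(recon: float, kld: float, contrastive: float,
               epoch: int, total_epochs: int) -> float:
    """Weighted total loss; a small beta late in training keeps a
    numerically large KLD term from dominating the other two losses."""
    return recon + descending_beta(epoch, total_epochs) * kld + contrastive
```

Other schedules (exponential decay, cyclical annealing) drop in by replacing `descending_beta`; normalizing each term inversely by a running estimate of its magnitude is the alternative mentioned in the last bullet.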

Meeting Notes – Melanie / Prof. Rueckert 22.07.2022

Agenda

  1. Data / Visualization framework in Python
    • API framework/usage guidelines
    • input format
    • output format
    • GUI to inspect the data.
      • Original Data view
      • subsection view (using the rotated and subselected image parts).
      • Slider to adjust the time.
      • Play function.
      • Replay speed adjustment.
      • Basic statistics of the shown data (e.g. histograms of the two images, min, max, mean, boxplots, number of blobs [1], …).
  2. Symmetry measurements
    • Develop measures and visualization tools to detect asymmetries between the two images.
    • Find examples of such asymmetries.
    • Analyze them.
  3. Occlusion removal
    • Classical CV approach
    • Learning-based approach(De-Occlusion)
  4. Abnormality detection
  5. Thesis writing

Topic 1: Data / Visualization framework in Python

Deliverables due by September

Use the pandas DataFrame library.
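A minimal sketch of the basic-statistics pane using pandas; the two-image frame format (2D grayscale arrays) and the column names are assumptions, not the project’s actual API:

```python
# Hypothetical sketch: per-frame summary statistics for the two camera images,
# collected into a pandas DataFrame for display in the inspection GUI.
import numpy as np
import pandas as pd

def frame_stats(left: np.ndarray, right: np.ndarray) -> pd.DataFrame:
    """Min, max and mean of each of the two grayscale images of one frame."""
    rows = [{"image": name, "min": img.min(), "max": img.max(), "mean": img.mean()}
            for name, img in (("left", left), ("right", right))]
    return pd.DataFrame(rows).set_index("image")
```

Calling `frame_stats(left_img, right_img)` yields one row per image, indexable as e.g. `.loc["left", "mean"]`; histograms and boxplots follow the same pattern via `np.histogram` or the GUI’s plotting widget.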

Topic 2: Symmetry measurements 

Deliverables due by October:

  • Develop automatic detection methods and selection tools for your guidance
    • e.g., highlight these events in the GUI timeline with red bars, or create a list of events that can be selected
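One way to turn this deliverable into code is a scalar asymmetry score per frame plus a threshold that decides which frames get flagged (red bars or an event list). Comparing one image against the horizontal mirror of the other is only one candidate measure, and the threshold value is an assumption:

```python
# Hypothetical asymmetry measure: mean absolute pixel difference between
# one image and the horizontally mirrored counterpart of the other.
import numpy as np

def asymmetry_score(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """0.0 for a perfectly mirrored pair; grows with left/right asymmetry."""
    return float(np.mean(np.abs(img_a - np.fliplr(img_b))))

def flag_events(scores, threshold=0.1):
    """Frame indices whose score exceeds the threshold (candidates for red bars)."""
    return [i for i, s in enumerate(scores) if s > threshold]
```

Running `asymmetry_score` over every frame and feeding the scores to `flag_events` produces the selectable event list described above.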

Topic 3: Occlusion removal

Concerned about its applicability to this project; we cannot assume it will work with non-face data.

If we don’t know the dynamics of the liquid, we cannot reconstruct the occluded regions while preserving the true underlying information.

Due by November

https://arxiv.org/pdf/1612.08534.pdf

https://github.com/zhaofang0627/face-deocc-lstm

Next Steps

  • Abnormality detection
    • due December
  • Thesis writing
    • due January

Next Meeting: TBA

Meeting Notes – Melanie 19.07.2022

Agenda

  • Presentation on Metallurgy review
  • Next Steps
    • Study dense NN
      • MNIST dataset
    • Study CNNs
      • Classification
      • Bounding Box
      • Segmentation
      • Feature matching
    • Autoencoders
      • Anomaly detection
    • FlowNet 2.0

Topic 1: Presentation on Metallurgy review

Great introductory presentation on the content of each of the three books

No need to study the math on the properties of mixtures in the second book

Next Steps

  1. Present an introduction to NN/CNNs
  2. Small jupyter tutorial on DNN/CNNs
  3. Presentation of FlowNet paper

Next Meeting: Tue 26 July

Meeting Notes 15.07.2022

Agenda

  • Present Paper Concept: Contrastive VAE
    • https://docs.google.com/presentation/d/1zBnog1A9mlHpZ4sFhwS12w6UBMrNYRfKgxCbLvcysUA/edit?usp=sharing

Topic 1: ConVAE

Present concept, math and next steps

Notes

  • Add an intermediate step in the introduction: “Why we want the latent representations”
  • Motivate Unsupervised Learning
  • Consider using a different distribution for the prior: e.g., a Laplace (L1) prior
  • What has changed in the behavior of the ConVAE in comparison with VAE from an Information Theory perspective
  • Mathematical proof that it can work in all datasets
  • Train later on ImageNet
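Since the retreat notes above mention that a wrong KL implementation caused setbacks, a quick sanity check of the standard closed-form term is cheap insurance, especially before swapping in a different prior. This is the textbook KL of a diagonal Gaussian posterior against a standard-normal prior, not code from the project:

```python
# Closed-form KL( N(mu, sigma^2) || N(0, 1) ) per latent dimension,
# parameterized by log-variance as is usual in VAE implementations.
import math

def kl_gauss_std_normal(mu: float, log_var: float) -> float:
    """0.5 * (exp(log_var) + mu^2 - 1 - log_var); zero iff mu = 0 and log_var = 0."""
    return 0.5 * (math.exp(log_var) + mu * mu - 1.0 - log_var)
```

Known special cases make good unit tests: identical distributions give 0, and a pure mean shift gives mu^2 / 2. A Laplace prior would need its own closed form (or a Monte Carlo estimate) rather than this formula.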

Meeting Notes – Melanie 14.07.2022

Agenda

  • Dataset updates
  • Progress
    • Metallurgy review
  • Next Steps
    • Study dense NN
    • Study CNNs
      • Classification
      • Bounding Box
      • Segmentation
      • Feature matching
    • Autoencoders
      • Anomaly detection
    • FlowNet 2.0
  • Tools
    • PyCharm
    • PyTorch
  • meeting schedule
    • day/time
    • place

Topic 1: Dataset updates

On Friday’s meeting with Voest

Topic 2: Progress

Focus on where the data comes from.

Make a presentation: 6-7 slides per book.

Topic 3: Next steps

  • Study dense NN
  • Study CNNs
    • Classification
  • FlowNet 2.0

 

Topic 4: Tools

  • PyCharm
  • PyTorch

 

Topic 5: Meeting Schedule

Next Steps

  1. Presentation of Metallurgy SOTA
  2. Present an introduction to NN/CNNs
  3. Small jupyter tutorial on DNN/CNNs
  4. Presentation of FlowNet paper

Next Meeting: Tue 19 July