
How to build a professional low-cost lightboard for teaching

Making Virtual Lectures Interactive

Giving virtual lectures can be exciting. Inspired by numerous blog posts by colleagues all over the world (e.g., [1], [2]), I decided to turn an ordinary glass desk into a lightboard. The total cost was less than 100 EUR.

Below you can see some snapshots of the individual steps.

Details on the Lightboard Construction

The lightboard construction is based on:

  • A glass pane, 8 mm thick. Hint: do not use acrylic glass or glass panes thinner than 8 mm. I got a used glass/metal desk for 20 EUR.
  • LED strips from YUNBO, 4 mm wide, e.g., from [4] for 13 EUR. Hint: larger LED strips, which you can typically get at DIY markets, are 10 mm wide. These strips do not fit into the transparent U profile.
  • Glass clamps for 8 mm glass, e.g., from onpira-sales [5] for 12 EUR.
  • Transparent U profiles from a DIY store, e.g., item no. 4005011040225 from HORNBACH [6] for 14 EUR.
  • 4 caster wheels with brakes, e.g., HORNBACH item no. 4002350510587 for 21 EUR.

Details on the Markers, the Background and the Lighting

Some remarks are given below on the background, the lighting and the markers.

  • I got well-suited fluorescent markers, e.g., from [6] for 12 EUR. Hint: compared to liquid chalk, these markers do not produce any noise during writing and are far more visible.
  • The background blind is of major importance. I used an old white roller blind from [7] and turned it into a black blind using 0.5 l of black paint. Hint: in the future, I will use a larger blind with a width of 3 m. A larger background blind is required to build larger lightboards (mine is 140 x 70 cm). Additionally, the distance between the glass pane and the blind could be increased (in my current setting, the distance is 55 cm).
  • Lighting is important to illuminate the presenter. I currently use two small LED spots. However, in the future I will use professional LED studio panels with blinds, e.g., [8]. Hint: the blinds are important to prevent illuminating the black background.
  • The LED strips run at 12 V. However, my old glass pane has many scratches, which become fully visible at maximum power. To avoid this distracting effect, I found that 8 V worked best for my old glass pane.

Details on the Software and the Microphone

At the university, we use CISCO's tool WEBEX for our virtual lectures. The tool is suboptimal for interactive lightboard lectures; however, with some additional tools, I converged to a working solution.

  • Camera streaming app, e.g., EPOCCAM for iPhones or IRIUN for Android phones. Hint: the smartphone is mounted on a tripod using a smartphone mount.
  • On the client side, a driver software is required. Details can be found when running the smartphone app.
  • On my Mac, I run the app Quick Camera to get a real-time view of the recording. The viewer is shown on a screen mounted to the ceiling. Hint: the screen has to be placed such that no reflections appear in the recordings.
  • In the WEBEX application, I select the IRIUN (virtual) webcam as the source and share the screen with the Quick Camera viewer app.
  • To ensure a clear, undamped audio signal, I use a lavalier microphone like this one [9].
  • For offline recordings, Apple's QuickTime does a decent job. Video and audio sources can be selected correctly. Hint: I also tested VLC; however, the lag of 2-3 seconds was perceived as suboptimal by the students (a workaround with proper command-line arguments was not tested).

An Example Lecture

And that’s how it looks …




Sai Puneeth Reddy Gottam, M.Sc.

Ph.D. Student at the Montanuniversität Leoben

Short bio: Mr. Sai Puneeth Reddy Gottam started at CPS on the 1st of July 2025.

He received his Master's degree in Automation and Robotics from RWTH Aachen University in 2024 with a study focus on Robotics and Machine Learning. His thesis, entitled “Adaptive feature tracking in visual odometry using self-supervised learning for challenging environments”, was carried out at Space Application Services NV/SA, Brussels. In the thesis, he implemented self-supervised learning to improve feature detection for visual odometry in complex, large-scale environments. Before that, he completed a research internship at Space Application Services, where he worked on synthetic data generation and object detection for vessels.

Research Interests

  • Machine Learning
  • Robotics
  • Computer Vision

Contact & Quick Links

M.Sc. Sai Puneeth Reddy Gottam
Doctoral Student supervised by Univ.-Prof. Dr. Elmar Rueckert.
Montanuniversität Leoben
Roseggerstrasse 11,
8700 Leoben, Austria

Phone: +49 1636348289
Email: sai.gottam@unileoben.ac.at
Web Work: CPS-Page
Chat: WEBEX

Personal Website
GitHub
Google Citations
LinkedIn
ORCID
Research Gate

Publications




B.Sc. Thesis: Richard Marecek on the Development of an Automated Data Acquisition and Monitoring System for Sensor-Based Testing

Supervisor: Univ.-Prof. Dr Elmar Rückert

Project: §27 Hopbas Pipe
Start date: 1st of May 2024

Theoretical difficulty: low
Practical difficulty: high

Topic

The goal of this project is to develop a web interface to record a very large number of various sensors, to visualize the data, and to store it in a database.

To connect and transform the sensor data, a programmable hardware device, the Advantech ADAM 6017-D, is used.
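
As a minimal sketch of how such a recording backend could look (an illustration only, not the project's code): sensor readings published as MQTT messages (see references) are logged into an SQLite database. The broker address, topic layout, and table schema are assumptions.

import sqlite3
import paho.mqtt.client as mqtt

db = sqlite3.connect("sensors.db")
db.execute("""CREATE TABLE IF NOT EXISTS readings (
                  ts TEXT DEFAULT CURRENT_TIMESTAMP,
                  topic TEXT,
                  value REAL)""")

def on_message(client, userdata, msg):
    # Assumes each sensor publishes a plain numeric payload.
    try:
        value = float(msg.payload.decode())
    except ValueError:
        return  # skip non-numeric payloads
    db.execute("INSERT INTO readings (topic, value) VALUES (?, ?)",
               (msg.topic, value))
    db.commit()

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("localhost", 1883)  # broker address is an assumption
client.subscribe("sensors/#")      # e.g., sensors/adam6017/ai0
client.loop_forever()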

Tasks

  • Literature research on the state of the art (see references)
  • Lab prototype environment for recording multiple sensors (different bus protocols, analog and digital)
  • Dataset recording, visualization
  • User interface to add sensors and to start tests
  • Evaluation of different tests (varying numbers of sensors, ADAM devices, etc.).
  • Thesis writing.

References

  • TASMOTA
  • MQTT
  • Modbus

Bachelor Thesis

The final bachelor thesis document can be downloaded here




Innovation Laboratory for Automation, Robotics and AI

Project partners:

  • Prof. Thomas Thurner, Chair of Automation & Measurement Technology
  • Prof. Elmar Rückert, Chair of Cyber-Physical Systems

Research innovations are the driving force behind a modern, sustainable circular economy. To develop these innovations, the Chairs of Cyber-Physical Systems (CPS) and Automation & Measurement Technology (A&M) have set up a new innovation research laboratory for recycling (INFOR) in the Haus der Digitalisierung.

Figure 1: Robot and circular conveyor system of the Innovation Research Laboratory for Recycling (INFOR) in the Haus der Digitalisierung – Digital Science Lab.

Project Goals

The following project goals are addressed:

We regard the circular economy and sustainability as part of our DNA at Montanuniversität, topics that we address with particularly innovative, multidisciplinary approaches from the fields of digitalization and robotics as well as recycling technology and residual material recovery. These topics are investigated hands-on in the innovation research laboratory.

Digitalization is a cross-cutting topic that is particularly relevant across all focus areas of Montanuniversität. Digitalization work at Montanuniversität will also strengthen the competitiveness of national industrial companies in support of local industry, for example through the use of new technologies researched at the university or through directly targeted support in cooperative projects and collaborations.

Robotics is one of the central megatrends in both industry and the consumer sector. Humanoid robots, for example, are a new technology on the starting blocks, with already visible exponential growth and impressive forecasts: according to Bank of America, global sales of humanoid robots are expected to reach 1 million units in 2030, and an impressive 3 billion humanoid robots are predicted for 2060.

In the field of recycling, new concepts are investigated and developed through robotic solutions, through the use of sensor technology and machine learning, and in particular through the combination of these disciplines in joint multidisciplinary work.

Applications and Research Questions

Central research questions that directly concern the competencies and research areas of the two participating chairs:

  • CPS: Use of AI and machine learning in robotics, in particular for humanoid robots
  • CPS: Development of AI-based models for sensor-based detection and classification in real time
  • CPS & A&M: Human-machine interaction via language models and AR/VR systems to support manual tasks at conveyor systems (e.g., marking hazardous or relevant objects)
  • A&M: Tactile sensing and novel gripping systems, including artificial hands for humanoid robotics
  • CPS & A&M: Robotic grasping for industrial applications, with a focus on recycling
  • A&M: Control of robotic systems, in particular for haptic grasping
  • A&M: Sensor technology for the robotic sorting of residual materials in recycling

Collaborations and Public Events

The Innovation Research Laboratory for Recycling offers the opportunity to investigate complex tasks and research questions under realistic conditions. It enables the autonomous and semi-autonomous acquisition of large amounts of data and serves as a test environment for innovative sensing and gripping technologies.

A particular focus is on collaboration with industrial partners to test humanoid robots in industrial settings.

The laboratory is currently used in the following industrial collaborations:

  • Infineon Technologies Austria AG



M.Sc. Thesis – Bernd Burghauser: Benchmarking SLAM and supervised learning methods in challenging real-world environments.

Supervisor: Linus Nwankwo, M.Sc.;
Univ.-Prof. Dr Elmar Rückert
Start date: ASAP, e.g., 1st of October 2021

Theoretical difficulty: low
Practical difficulty: high

Introduction

The SLAM problem as described in [3] is the problem of building a map of the environment while simultaneously estimating the robot’s position relative to the map given noisy sensor observations. Probabilistically, the problem is often approached by leveraging the Bayes formulation due to the uncertainties in the robot’s motions and observations. 
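
Concretely, in the notation of [3], the full SLAM posterior over the robot trajectory x_{0:t} and the map m, given the observations z_{1:t} and the controls u_{1:t}, is

p(x_{0:t}, m \mid z_{1:t}, u_{1:t}),

and the online variant can be estimated recursively with the Bayes filter:

p(x_t, m \mid z_{1:t}, u_{1:t}) \propto p(z_t \mid x_t, m) \int p(x_t \mid x_{t-1}, u_t)\, p(x_{t-1}, m \mid z_{1:t-1}, u_{1:t-1})\, dx_{t-1},

where p(x_t \mid x_{t-1}, u_t) is the motion model and p(z_t \mid x_t, m) is the observation model.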

SLAM has found many applications not only in navigation, augmented reality, and autonomous vehicles (e.g., self-driving cars and drones), but also in indoor and outdoor delivery robots, intelligent warehousing, etc. While many solutions to the SLAM problem have been presented in the literature, in challenging real-world scenarios with few distinct features or geometrically constrained characteristics, the reality is far different.

 

Some of the most common challenges with SLAM are the accumulation of errors over time due to inaccurate pose estimation (localization errors) while the robot moves from the start location to the goal location, and the high computational cost of image and point-cloud processing and optimization [1]. These challenges can cause significant deviations from the actual values and at the same time lead to inaccurate localization if the images and point clouds are not processed at a very high frequency [2]. This also impairs the frequency with which the map is updated and hence affects the overall efficiency of the SLAM algorithm.

For this thesis, we propose to investigate in depth visual and LiDAR SLAM approaches using our state-of-the-art Intel RealSense cameras and light detection and ranging (LiDAR) sensors. For this, the following concrete tasks will be focused on:

Tentative Work Plan

  • Study the concept of visual or LiDAR-based SLAM as well as its application in the survey of an unknown environment.
  • 2D/3D mapping in both static and dynamic environments.
  • Localize the robot in the environment using the adaptive Monte Carlo localization (AMCL) approach.
  • Write a path planning algorithm to navigate the robot from the starting point to the destination while avoiding collisions with obstacles (see the sketch after this list).
  • Real-time experimentation, simulation (MATLAB, ROS & Gazebo, RViz, C/C++, Python, etc.), and validation.
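
As a concrete illustration of the path planning task above, here is a minimal A* planner on a 2D occupancy grid (a sketch for orientation, not the expected thesis solution; in practice, ROS navigation planners would typically be used):

import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None.
    grid: 2D list, 0 = free cell, 1 = occupied cell."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]
    came_from = {}
    g = {start: 0}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        r, c = current
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = (r + dr, c + dc)
            if not (0 <= nbr[0] < rows and 0 <= nbr[1] < cols):
                continue
            if grid[nbr[0]][nbr[1]]:  # skip occupied cells
                continue
            tentative = g[current] + 1
            if tentative < g.get(nbr, float("inf")):
                came_from[nbr] = current
                g[nbr] = tentative
                # Manhattan distance is an admissible heuristic here
                h = abs(goal[0] - nbr[0]) + abs(goal[1] - nbr[1])
                heapq.heappush(open_set, (tentative + h, nbr))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))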

About the Laboratory

The Robotics & AI-Lab of the Chair of Cyber-Physical Systems is an innovative research lab focusing on robotics, artificial intelligence, machine and deep learning, embedded smart sensing systems, and computational models. To support its research and training activities, the laboratory currently has:

  • an additive manufacturing unit (3D and laser printing technologies)
  • a metal production workshop
  • a robotics unit (mobile robots, robotic manipulators, robotic hands, unmanned aerial vehicles (UAVs))
  • a sensors unit (Intel RealSense (LiDAR, depth and tracking cameras), inertial measurement units (IMUs), OptiTrack cameras, etc.)
  • an electronics and embedded systems unit (Arduino, Raspberry Pi, etc.)

Expression of Interest

Students interested in carrying out their Master of Science (M.Sc.) or Bachelor of Science (B.Sc.) thesis on the above topic should immediately contact or visit the Chair of Cyber-Physical Systems.

Phone: +43 3842 402-1901

Map: click here

References

[1] V. Barrile, G. Candela, A. Fotia, ‘Point cloud segmentation using image processing techniques for structural analysis’, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-2/W11, 2019.

[2] Łukasz Sobczak, Katarzyna Filus, Adam Domanski and Joanna Domanska, ‘LiDAR Point Cloud Generation for SLAM Algorithm Evaluation’, Sensors 2021, 21, 3313. https://doi.org/10.3390/s21103313

[3] Wolfram Burgard, Cyrill Stachniss, Kai Arras, and Maren Bennewitz, ‘SLAM: Simultaneous Localization and Mapping’, http://ais.informatik.uni-freiburg.de/teaching/ss12/robotics/slides/12-slam.pdf

Master Thesis

The final master thesis document can be downloaded here




M.Sc. Thesis, Adiole Promise Emeziem: Language-Grounded Robot Autonomy through Large Language Models and Multimodal Perception

Supervisor: Linus Nwankwo, M.Sc.;
Univ.-Prof. Dr Elmar Rückert
Start date:  As soon as possible

 

Theoretical difficulty: mid
Practical difficulty: high

Abstract

The goal of this thesis is to enhance the method proposed in [1] to enable autonomous robots to effectively interpret open-ended language commands, plan actions, and adapt to dynamic environments.

The scope is limited to grounding the semantic understanding of large-scale pre-trained language and multimodal vision-language models in physical sensor data, enabling autonomous agents to execute complex, long-horizon tasks without task-specific programming. The expected outcomes include a unified framework for language-driven autonomy, a method for cross-modal alignment, and real-world validation.

Tentative Work Plan

To achieve the objectives, the following concrete tasks will be focused on:

  • Backgrounds and Setup:
    • Study LLM-for-robotics papers (e.g., ReLI [1], Code-as-Policies [2], ProgPrompt [3]) and vision-language models (CLIP, LLaVA).
    • Set up a ROS/Isaac Sim simulation environment and build a robot model (URDF) for the simulation (optional if you wish to use an existing one).
    • Familiarise with how LLMs and VLMs can be grounded for short-horizon robotic tasks (e.g., “Move towards the {color} block near the {object}”), in static environments.
    • Recommended programming tools: C++, Python, Matlab.
  • Modular Pipeline Design (a minimal sketch is given after this list):
    • Speech/Text (Task Instruction) ⇾ LLM (Task Planning) ⇾ CLIP (Object Grounding) ⇾ Motion Planner (e.g., move towards the {color} block near the {object}) ⇾ Execution (in simulation or in a real-world environment).
  • Intermediate Presentation:
    • Present the results of your background study and of the work done so far.
    • Detailed planning of the next steps.
  • Implementation & Real-World Testing (If Possible):
    • Test the implemented pipeline with a Gazebo-simulated quadruped or differential drive robot.
    • Perform real-world testing of the developed framework with our Unitree Go1 quadruped robot or with our Segway RMP 220 Lite robot.
    • Analyse and compare the model’s performance in real-world scenarios versus simulations with the different LLMs and VLMs pipelines.
    • Validate with 50+ language commands in both simulation and the real world.
  • Optimise the Pipeline for Optimal Performance and Efficiency (Optional):
    • Validate the model to identify bottlenecks within the robot’s task environment.
  • Documentation and Thesis Writing:
    • Document the entire process, methodologies, and tools used.
    • Analyse and interpret the results.
    • Draft the thesis, ensuring that the primary objectives are achieved.
      • Chapters: Introduction, Background (LLMs/VLMs in robotics), Methodology, Results, Conclusion.
    • Deliverables: Code repository, simulation demo video, thesis document.
  • Research Paper Writing (optional)
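
As referenced in the Modular Pipeline Design item above, the following skeleton is a minimal sketch of how the stages could be chained. It is illustrative only: all model calls and robot interfaces are stubbed assumptions, not the thesis implementation.

def plan_subtasks(instruction: str) -> list[str]:
    """Ask an LLM to split a free-form instruction into subtasks.
    (Stub: in practice this would call a hosted or local LLM.)"""
    return [f"locate target in: {instruction}", "move to target"]

def ground_object(subtask: str, image) -> tuple[int, int]:
    """Score image regions against the subtask text with a
    vision-language model such as CLIP and return a pixel goal.
    (Stub: replace with real CLIP similarity over region crops.)"""
    return (320, 240)  # placeholder image coordinates

def plan_motion(goal_px: tuple[int, int]) -> list[str]:
    """Convert the grounded goal into robot motion commands.
    (Stub: a real system would use a planner plus inverse kinematics.)"""
    return [f"drive_towards{goal_px}"]

def execute(commands: list[str]) -> None:
    for cmd in commands:
        print("executing:", cmd)  # stub: send to a ROS action server

if __name__ == "__main__":
    instruction = "Move towards the red block near the chair"
    image = None  # stub: camera frame
    for subtask in plan_subtasks(instruction):
        goal = ground_object(subtask, image)
        execute(plan_motion(goal))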

References

[1] Nwankwo L, Ellensohn B, Özdenizci O, Rueckert E. ReLI: A Language-Agnostic Approach to Human-Robot Interaction. arXiv preprint arXiv:2505.01862. 2025 May 3.

[2] Liang J, Huang W, Xia F, Xu P, Hausman K, Ichter B, Florence P, Zeng A. Code as Policies: Language model programs for embodied control. In 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023 May 29 (pp. 9493-9500). IEEE.

[3] Singh I, Blukis V, Mousavian A, Goyal A, Xu D, Tremblay J, Fox D, Thomason J, Garg A. ProgPrompt: Generating situated robot task plans using large language models. In 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023 May 29 (pp. 11523-11530). IEEE.

[4] Nwankwo L, Rueckert E. The Conversation is the Command: Interacting with Real-World Autonomous Robots Through Natural Language. In Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction 2024 Mar 11 (pp. 808-812).




B.Sc. Thesis, Weiyi Lin – Image augmentation and its impacts on reinforcement learning models

Supervisor: Vedant Dave, M.Sc.
Univ.-Prof. Dr Elmar Rückert
Start date:  3rd April 2025

 

Theoretical difficulty: mid
Practical difficulty: low

Abstract

Due to the tendency of reinforcement learning models to overfit to the training data, data augmentation has become a widely adopted technique in visual reinforcement learning, as it enhances the performance and generalization of agents by increasing the diversity of the training data. Often, different tasks benefit from different types of augmentations, and selecting them requires prior knowledge of the environment. This thesis explores how various augmentation strategies, including visual augmentations and context-aware augmentations, impact the performance and generalization of agents in visual environments.
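
As a concrete example of a visual augmentation in this family, the following minimal sketch (an illustration, not the thesis code) implements the random-shift augmentation commonly used in visual RL pipelines such as DrQ and SVEA: pad the frame, then crop it back at a random offset.

import numpy as np

def random_shift(img: np.ndarray, pad: int = 4) -> np.ndarray:
    """img: (H, W, C) uint8 frame; returns a randomly shifted copy."""
    h, w, _ = img.shape
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    top = np.random.randint(0, 2 * pad + 1)
    left = np.random.randint(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w]

frame = np.random.randint(0, 256, (84, 84, 3), dtype=np.uint8)
augmented = random_shift(frame)
print(augmented.shape)  # (84, 84, 3)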

Tentative Work Plan

  • Literature research.
  • Understanding of concepts of visual RL models (SVEA).
  • Implementing and testing different augmentations.
  • Observation and documentation of results.
  • Thesis writing.

Related Work

[1] N. Hansen and X. Wang, “Generalization in Reinforcement Learning by Soft Data Augmentation,” 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 2021, pp. 13611-13617, doi: 10.1109/ICRA48506.2021.9561103

[2] Hansen, Nicklas, Hao Su, and Xiaolong Wang. “Stabilizing deep q-learning with convnets and vision transformers under data augmentation.” Advances in neural information processing systems 34 (2021): 3680-3693.

[3] Almuzairee, Abdulaziz, Nicklas Hansen, and Henrik I. Christensen. “A Recipe for Unbounded Data Augmentation in Visual Reinforcement Learning.” Reinforcement Learning Conference.




M.Sc. Thesis: Fritze Clemens – A Dexterous Multi-Finger Robotic Manipulator Framework for Intuitive Teleoperation and Contact-Rich Imitation Learning

Supervisor: M.Eng. Fotios Lygerakis, Univ.-Prof. Dr. Elmar Rückert

Theoretical difficulty: mid
Practical difficulty: hard

 

Abstract

Robotic manipulation in dynamic environments requires systems that can adapt to uncertainties and learn from limited human input. This thesis presents a dexterous multi-finger robotic framework that integrates intuitive teleoperation with self-supervised visuotactile representation learning to enable contact-rich imitation learning. Central to the system is a Franka Emika Panda robotic arm paired with a multi-fingered LEAP Hand equipped with high-resolution GelSight Mini tactile sensors. A Meta Quest 3 teleoperation interface captures natural human demonstrations while collecting multimodal data, including visual, tactile, and joint-state inputs, to train the self-supervised encoders.

The study evaluates two representation learning methods, BYOL and MViTac, under low-data conditions. Extensive experiments on complex manipulation tasks, such as pick-and-place, battery insertion, and book opening, demonstrate that BYOL-trained encoders consistently outperform both MViTac and a ResNet18 baseline, achieving a 60% success rate on the challenging spiked cylinder task. Key findings highlight the critical role of tactile feedback quality, with GelSight sensors delivering robust tactile impressions compared to lower-resolution alternatives. Furthermore, parameter studies reveal how system settings (e.g., reject buffers, movement thresholds) and demonstration selection critically influence task performance.

Despite challenges in scenarios requiring precise visual-tactile coordination, the results validate the potential of self-supervised learning to reduce human annotation effort and facilitate a smooth transition from teleoperated control to autonomous execution. This work provides valuable insights into the integration of hardware and software components, as well as control strategies, demonstrating BYOL's potential as a promising approach for tactile representation learning in advancing autonomous robotic manipulation.
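
For readers unfamiliar with BYOL, the following minimal numpy sketch (an illustration, not the thesis implementation) shows the two core ingredients of a BYOL-style objective: the negative cosine similarity between the online network's prediction and the target network's projection, and the exponential moving average (EMA) update that lets the target weights trail the online weights.

import numpy as np

def byol_loss(online_pred: np.ndarray, target_proj: np.ndarray) -> float:
    """Negative cosine similarity, averaged over the batch."""
    p = online_pred / np.linalg.norm(online_pred, axis=1, keepdims=True)
    z = target_proj / np.linalg.norm(target_proj, axis=1, keepdims=True)
    return float(-(p * z).sum(axis=1).mean())

def ema_update(target_w: np.ndarray, online_w: np.ndarray,
               tau: float = 0.996) -> np.ndarray:
    """Target network weights slowly track the online network."""
    return tau * target_w + (1.0 - tau) * online_w

# Toy usage with random 8-sample, 64-dimensional embeddings:
rng = np.random.default_rng(0)
print(byol_loss(rng.normal(size=(8, 64)), rng.normal(size=(8, 64))))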

Milestones

Teleoperation test of the LEAP Hand:

https://cps.unileoben.ac.at/wp/LeapHandTest.mp4

Visual encoder test:

https://cps.unileoben.ac.at/wp/VisualEncoderTest.mp4

First version of the FrankaArm-control test:

https://cps.unileoben.ac.at/wp/FrankaArmTest.mp4

Dataset collection / teleoperation of the whole setup:

https://cps.unileoben.ac.at/wp/CompleteSetup.mp4

Fully autonomous task execution:

https://cps.unileoben.ac.at/wp/AutonomousTaskExecution.mp4




1 PhD Position – Manipulation & Perception in Recycling

The Chair of Cyber-Physical Systems at Montanuniversität Leoben is offering a fully funded PhD position (100% employment) starting as soon as possible.

Employment Type: Full-time doctoral student (40 hours/week)

Salary: €3,714.80/month (14 times per year), Salary Group B1 according to Uni-KV

Duration: The position includes the opportunity to complete a PhD

About the Position

We are at the forefront of developing cutting-edge machine learning algorithms for detecting, tracking, and classifying material flows using various advanced sensing technologies, including:

• RGB cameras
• 3D imaging
• LiDAR
• Hyperspectral cameras
• Raman devices
• Tactile sensors

The resulting model predictions are used for automated data labeling, real-time process monitoring, and autonomous object manipulation.

This PhD research will focus on multiple aspects of these topics, with a special emphasis on multimodal sensing and robotic grasping. The goal is to enhance robotic perception and interaction by integrating machine learning with tactile sensing technologies.

What we offer

• A dynamic and collaborative research environment in artificial intelligence and robotics

• The opportunity to develop your own research ideas and work on cutting-edge projects

• Access to state-of-the-art lab facilities

• International research collaborations and conference travel opportunities

• Targeted career guidance for a successful academic and research career

Plus a great lab space shown in this image.

 

Requirements

Master’s degree in Computer Science, Physics, Telematics, Statistics, Mathematics, Electrical Engineering, Mechanics, Robotics, or a related field

Strong motivation for scientific research and publications

Ability to work independently and collaboratively in an interdisciplinary team

Interest in writing a PhD dissertation

Desired additional qualifications

• Programming experience in C, C++, C#, Java, MATLAB, Python, or a similar language

• Familiarity with AI libraries and frameworks (e.g., TensorFlow, PyTorch)

• Strong English communication skills (written and spoken)

• Willingness to travel for research collaborations and technical presentations

Application & Materials

A complete application includes:

1. Curriculum Vitae (CV) (detailed)

2. Letter of Motivation

3. Master’s Thesis (PDF or link)

4. Academic Certificates (Bachelor’s and Master’s degrees)

Optional but beneficial:

5. Letter(s) of Recommendation

6. Contact Information for References (name, email, phone)

7. Previous Publications (PDFs or links)

Application deadline: Open until the position is filled.

Online Application via Email: Please send your application files to rueckert@unileoben.ac.at

The Montanuniversität Leoben intends to increase the number of women on its faculty and therefore specifically invites applications by women. Among equally qualified applicants women will receive preferential consideration.




Bundesministerium für Landesverteidigung

Research Project No. 991 “#command21 – Joint Environmental Denied Interface (JEDI)”

The project started in January 2025 and investigates AI models for gesture recognition and for the control of autonomous systems.

Multiple systems, such as VR/AR headsets, are used for human-machine interaction.

Ongoing Projects, Bachelor's and Master's Theses

  • Open Bachelor's and Master's theses on “Sensor Fusion for Multimodal Gesture Recognition”
  • Open Bachelor's and Master's theses on “Process and Workflow Modeling for Interaction with Autonomous Systems”



Business Trip – Insurance

Update (as of 5 Feb 2025)

The university provides business trip insurance for all employees. Thus, whenever you have officially applied for a business trip (via MUOnline) and the trip has been granted, you are insured. The insurance covers many aspects, including medical treatment, repatriation, and lost luggage. Note: even for trips to the USA, you therefore do not need private insurance.