
17.10.2022 – Introduction to CAD Software

Why do I need CAD Software?

  • Computer-Aided Design (CAD) software is a cornerstone of how products are designed and built. It allows users to digitally create, visualise, and simulate 2D or 3D models of real-world products before they are manufactured.
  • CAD models allow users to iterate on and optimise designs to meet the design intent.
  • CAD software facilitates testing against real-world conditions, loads, and constraints, which increases product quality.
  • CAD software helps to explore ideas and visualise concepts.
  • It improves the quality and precision of the design, as well as communication in the design process.
  • It enables the analysis of real-world scenarios through computer-aided analysis.
  • It creates a database for product development and manufacturing.

Some Practical Applications of CAD Software

Source: https://learnsolidworks.com/
Source: https://automation.siemens.com/
Source: https://leocad.org/

Automobile parts can be modelled, visualised, revised, and improved on the screen before being manufactured.

Electrical schematics, control circuit diagrams, PCBs, and integrated circuits (ICs) can be designed and developed with ECAD software.

With CAD software, architects can visualise and simulate their entire project using real-world parameters, without needing to build any physical structures or models.

What CAD software do I need?

Something free

  • FreeCAD
  • TinkerCAD
  • Fusion 360
  • Onshape
  • Solid Edge
  • Blender
  • SketchUp

My design goes with me wherever I go (cloud-based)

  • Onshape
  • TinkerCAD
  • AutoCAD Web
  • SelfCAD
  • Vectary
  • SketchUp

Something more advanced and professional

  • AutoCAD
  • Autodesk Inventor
  • SolidWorks
  • Fusion 360
  • Solid Edge
  • CATIA
  • Onshape
  • Shapr3D
  • Creo

Windows OS

  • AutoCAD
  • Autodesk Inventor
  • SolidWorks
  • Fusion 360
  • CATIA
  • Creo
  • Solid Edge
  • Shapr3D
  • Blender

Linux OS

  • NX Advanced Designer
  • Blender

MacOS

  • AutoCAD
  • Autodesk Inventor
  • Fusion 360
  • Shapr3D
  • Blender
  • NX Advanced Designer

iOS, Android

  • AutoCAD
  • Autodesk Inventor
  • Shapr3D

Where can I learn CAD?

  1. Coursera: https://coursera.org/courses?query=cad
  2. Udemy: https://udemy.com/topic/autocad/
  3. MyCADSite: https://mycadsite.com/
  4. Skillshare: https://skillshare.com/search?query=solidworks
  5. CAD-Tutorials.de: https://cad-tutorials.de/
  6. YouTube: https://youtube.com/watch?v=cAgpDFTHxpY
  7. CADTutor: http://cadtutor.net/
  8. PTC Training: https://ptc.com/en/ptc-university/training-catalogs
  9. Autodesk Tinkercad: https://tinkercad.com/



07.10.2022 – Innovative Research Discussion

Meeting notes on the 7th of October, 2022

Location: Chair of CPS

Date & Time: 7th October, 2022, 09:15 am to 10:25 am

Participants: Univ.-Prof. Dr. Elmar Rueckert, Linus Nwankwo, M.Sc.

Agenda

  1. Discussion on ROS-Mobile Control
  2. Discussion on ODrive torque control

ROS-Mobile and ODrive torque control

  1. Re-implement the O2S control approach to accommodate the information in the attached figure.
  2. Write the Arduino code, taking into account the rotation matrices.
  3. Implement the open-loop torque control approach.




04.10.2022 – Innovative Research Discussion

Meeting notes on the 4th of October, 2022

Location: Chair of CPS

Date & Time: 4th October, 2022, 12:15 pm to 1:15 pm

Participants: Univ.-Prof. Dr. Elmar Rueckert, Linus Nwankwo, M.Sc.

Agenda

  1. Discussion of the research progress
  2. Discussion on the Hardware-X journal publication 
  3. Discussion of the next conference publication

Journal Publication: HardwareX

  1. Put the HardwareX manuscript on arXiv to enable us to:
  • cite it in the subsequent article
  • obtain the DOI
  • update the publication in our cloud with the arXiv number

Conference Paper: O2S: Open Source Open Shuttle – A comparison of SLAM algorithms

  1. Start working on the conference paper using the shared LaTeX template provided:
  • Compare the various 2D SLAM algorithms
  • Establish the key metrics for the evaluation
  • Evaluate their performance in the real world with the O2S
  • Evaluate visual SLAM (optional)
  • Check how 2D SLAM can be combined with RGB-D cameras and a deep neural network to improve the map quality (optional)

Future Steps

Intention signalling to improve human-robot interaction (HRI)

Next Meeting

Yet to be defined




29.08.2022 – Innovative Research Discussion

Meeting notes on the 29th of August, 2022

Location: Chair of CPS

Date & Time: 29th August, 2022, 11:30 am to 12:00 noon

Participants: Univ.-Prof. Dr. Elmar Rueckert, Linus Nwankwo, M.Sc.

Agenda

  1. Discussion of the research progress
  2. Discussion on the HardwareX journal publication
  3. Discussion of the next project after the HardwareX submission

Topic 1: Journal Publication – HardwareX

  1. Re-design the robot base for the HardwareX publication
  2. Publish the following files:
    • CAD files, including the URDF
    • ROS nodes and schematics
    • Show some experimental results, e.g., real-time control and monitoring with ROS-Mobile and Android-based devices, remote control from an Arduino and a joystick, etc.
    • Perform SLAM and demonstrate some experiments.

Topic 2: Conference paper for possible publication at IROS, ECMR, etc.

  1. Open Source Open Shuttle (O2S) for SLAM and navigation applications: compare the existing SLAM algorithms for efficient navigation in cluttered and dynamic environments
  2. O2S intention signalling: for safe human-robot interaction (HRI)
  3. Mobile robot teleoperation through a hand-gesture approach (possible for publication at a national or European conference)
  4. Deep navigation with O2S (proposal for Christopher's M.Sc. thesis – subject to changes pending the outcome of the meeting on Friday)

Future Steps

Survey of open shuttles in logistics: possible AI-based journal publication

Next Meeting

Yet to be defined




24.03.2022 – Innovative Research Discussion

Meeting notes on the 23rd of March 2022

Location: Chair of CPS

Date & Time: 23rd March 2022, 12:00 pm to 1:00 pm

Participants: Univ.-Prof. Dr. Elmar Rueckert, Linus Nwankwo, M.Sc.

 

Agenda

  1. Discussion of the research progress
  2. Discussion of related literature for 2D Lidar-based SLAM and path planning for improved autonomous navigation

Topic 1: SLAM + Path Planning Algorithms

  1. Compare LiDAR-based SLAM with RGB-D-based SLAM
  2. Select the appropriate algorithm to build the SLAM system
  3. Evaluate the performance of the chosen SLAM + path planning algorithms in terms of:
    • computational efficiency
    • collision avoidance in a dynamic environment

Topic 2: Real-time Implementation

  1. Compare our two mobile bases
  2. Implement the chosen SLAM algorithm to solve the loop-closure problem that we are currently facing
  3. Send the robot to a specific goal location within the laboratory

Future Steps

Toward improved autonomous navigation in real time with any of our mobile bases.

Next Meeting

Yet to be defined




B.Sc. Thesis: Fritze Clemens on Gesture-Based Mobile Robot Teleoperation for Navigation Application

Supervisor: Linus Nwankwo, M.Sc.;
Univ.-Prof. Dr Elmar Rückert
Start date: 10th January 2022

Theoretical difficulty: mid
Practical difficulty: mid

Abstract

Nowadays, robots used to survey indoor and outdoor environments are operated in one of three modes: fully autonomous mode, where the robot makes decisions by itself and has complete control of its actions; semi-autonomous mode, where the robot’s decisions and actions are controlled both manually (by a human) and autonomously (by the robot); and full manual mode, where the robot’s actions and decisions are controlled manually by a human. In full manual mode, the robot can be operated using a teach pendant, computer keyboard, joystick, mobile device, etc.

Although the Robot Operating System (ROS) provides roboticists with easy and efficient tools to teleoperate or command robots whose hardware and software are compatible with the ROS framework, there is still a need for an alternative approach that encourages non-experts to interact with a robot. The human hand-gesture approach not only enables users to teleoperate the robot by demonstration but also fosters user-friendly interaction between robots and humans.

In the context of this thesis, the application of human hand gestures is proposed to teleoperate our mobile robot using embedded computers and inertial measurement sensors. First, we will teleoperate the robot on the ROS platform and then via hand gestures, leveraging the frameworks developed in [1] and [2].

Tentative Work Plan

To achieve our aim, the following concrete tasks will be focused on:

  • Design and generate the Unified Robot Description Format (URDF) model of the mobile robot
  • Simulate the mobile robot within the ROS framework (Gazebo, Rviz)
  • Set up the interfaces and the serial connection between ROS and the robot’s control devices
  • Develop an algorithm to teleoperate the robot in ROS using hand gestures
  • Use the algorithm to perform Simultaneous Localization and Mapping (SLAM) for indoor applications (simulation only)
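As an illustration of the gesture-teleoperation step above, the following Python sketch maps a hand orientation read from an IMU to linear and angular velocity commands. The dead zone, limits, and scaling are hypothetical placeholders, not the values used in this thesis; in a real system the resulting pair would be published to ROS as a velocity message:

```python
def _scale(angle, dead_zone):
    """Map an angle in degrees to [-1, 1], ignoring the dead zone."""
    if abs(angle) < dead_zone:
        return 0.0
    sign = 1.0 if angle > 0 else -1.0
    magnitude = (abs(angle) - dead_zone) / (90.0 - dead_zone)
    return sign * min(1.0, magnitude)

def gesture_to_cmd(pitch_deg, roll_deg, dead_zone=10.0,
                   max_lin=0.5, max_ang=1.0):
    """Map hand orientation (from a wrist-worn IMU) to velocity commands.

    Tilting the hand forward/backward (pitch) drives the robot, tilting
    it sideways (roll) turns it; angles inside the dead zone keep the
    robot still. Limits are in m/s and rad/s.
    """
    linear = _scale(pitch_deg, dead_zone) * max_lin
    angular = _scale(roll_deg, dead_zone) * max_ang
    return linear, angular

# A resting hand produces no motion; a strong forward tilt drives forward.
print(gesture_to_cmd(0.0, 0.0))   # (0.0, 0.0)
print(gesture_to_cmd(90.0, 0.0))  # (0.5, 0.0)
```

The dead zone is the important design choice here: without it, IMU noise around a resting hand would make the robot creep.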

References

[1] Nils Rottmann et al., https://github.com/ROS-Mobile/ROS-Mobile-Android, 2020

[2] Wei Zhang, Hongtai Cheng, Liang Zhao, Lina Hao, Manli Tao and Chaoqun Xiang, “A Gesture-based Teleoperation System for Compliant Robot Motion”, Appl. Sci. 2019, 9(24), 5290; https://doi.org/10.3390/app9245290

Thesis Document

Gesture Based Mobile Robot Teleoperation for Navigation Application




09.12.2021 – Innovative Research Discussion

Meeting on the 9th of December 2021

Location: Chair of CPS

Date & Time: 9th Dec. 2021, 1pm to 1.30pm

Participants: Univ.-Prof. Dr. Elmar Rueckert, Linus Nwankwo, M.Sc.

Agenda

  1. Discussion of the research progress
  2. Discussion of related literature on robot intention communication and human-robot interaction

Topic 1: Robot Intention Communication and Human-Robot Interaction

  1. Study the given literature to identify areas of possible improvement
  2. Investigate the integration of an LED projector for communicating the robot’s intention to humans
  3. Implement a visual signalling algorithm to communicate the robot’s intention and direction of motion to humans
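A minimal sketch of such a visual signalling algorithm: derive the direction of the projected arrow from the commanded velocities. The look-ahead horizon and the convention (0° = straight ahead, positive = left) are illustrative assumptions, not the implemented system:

```python
import math

def intent_arrow_angle(v, omega, horizon=1.0):
    """Angle (degrees) at which to project the intention arrow.

    Approximates the robot's heading change over a short look-ahead
    horizon from the commanded linear velocity v (m/s) and angular
    velocity omega (rad/s). Returns None when the robot is stopped
    or reversing, i.e. there is no forward intention to display.
    """
    if v <= 0.0:
        return None
    return math.degrees(omega * horizon)

print(intent_arrow_angle(0.5, 0.0))  # 0.0 -> arrow points straight ahead
```

A real implementation would then render this arrow via the LED projector, but the mapping from command to displayed direction is the core of the signal.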

Topic 2: Probabilistic SLAM

  1. Review the AMCL approach to track the pose of our robot against the map of the environment while mapping.
  2. Design a SLAM algorithm based on the probabilistic framework and evaluate its performance in terms of loop-closure detection, accurate pose estimation, and computational cost.
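The AMCL-style correction step in point 1 can be sketched as a toy one-dimensional particle filter. The world (a single wall at x = 5), the noise value, and the particle count are illustrative assumptions:

```python
import math
import random

def update_particles(particles, measurement, expected, noise=0.2):
    """One AMCL-style correction step (1-D toy version).

    Each particle is a hypothesised robot position. Particles whose
    predicted range reading expected(p) matches the actual measurement
    get a higher Gaussian weight; importance resampling then
    concentrates the particle set around likely poses.
    """
    weights = [math.exp(-0.5 * ((expected(p) - measurement) / noise) ** 2)
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Draw a new particle set proportionally to the weights.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
# Wall at x = 5: a robot at position p would measure range 5 - p.
particles = [random.uniform(0.0, 5.0) for _ in range(500)]
for _ in range(3):  # repeated identical measurements sharpen the belief
    particles = update_particles(particles, 3.0, lambda p: 5.0 - p)
estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # close to the true position x = 2.0
```

The real AMCL package adds a motion-model prediction step and adaptive particle counts, but this weight-and-resample loop is the core of the correction.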

Future Steps

Integrate a deep learning approach with the Faster R-CNN model to:

  • detect, for example, whether the person interacting with the robot is wearing a face mask
  • classify whether the worn face mask is of the recommended type

Next Meeting

Yet to be scheduled




13.10.2021 – Innovative Research Discussion

Meeting on the 13th of October 2021

Location: Chair of CPS

Date & Time: 13th October 2021, 1pm to 2pm

Participants: Univ.-Prof. Dr. Elmar Rueckert, Linus Nwankwo, M.Sc.

 

Agenda

  1. Review of the previous meeting action points
  2. Discussion of the research progress and possible improvement

Topic 1: Review of the SLAM state-of-the-art

  1. Review the current literature to identify areas of possible improvement
  2. Investigate the possible integration of RGB-D sensors with thermal sensors for indoor or outdoor SLAM
  3. Evaluate the impact of semantic segmentation and pose estimation on the quality of the map in dynamic environments

Topic 2: Existing Algorithms Performance Evaluation

  1. Implement a loop-closure trigger for possible use with any of the SLAM algorithms
  2. Implement the following SLAM algorithms and evaluate their performance in terms of optimality, accurate pose estimation, computational cost, etc.:
  • RTABMap
  • Hector SLAM
  • GMapping
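One common way to quantify "accurate pose estimation" when comparing these algorithms is the absolute trajectory error (ATE) between the estimated and ground-truth trajectories. A minimal sketch, omitting the rigid-body alignment step that a full evaluation would include:

```python
import math

def absolute_trajectory_error(estimated, ground_truth):
    """Root-mean-square translational error between two trajectories.

    Both arguments are equal-length lists of (x, y) positions, assumed
    to be time-synchronised and expressed in the same frame.
    """
    assert len(estimated) == len(ground_truth)
    squared = [(ex - gx) ** 2 + (ey - gy) ** 2
               for (ex, ey), (gx, gy) in zip(estimated, ground_truth)]
    return math.sqrt(sum(squared) / len(squared))

# Toy trajectories: the estimate wobbles 0.1 m around a straight line.
gt  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
est = [(0.0, 0.1), (1.0, -0.1), (2.0, 0.1)]
print(absolute_trajectory_error(est, gt))  # ~0.1 m RMS error
```

Combined with wall-clock timing of each algorithm, this gives the accuracy and computational-cost axes of the planned comparison.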

Next Steps

Implement our own SLAM algorithm, robust against:

  • odometry loss
  • variations in weather and illumination
  • weak dynamic-object detection

Next Meeting

Yet to be scheduled




M.Sc. Thesis – Bernd Burghauser: Simultaneous Localization and Mapping (SLAM) in challenging real-world environments.

Supervisor: Linus Nwankwo, M.Sc.;
Univ.-Prof. Dr Elmar Rückert
Start date: ASAP, e.g., 1st of October 2021

Theoretical difficulty: low
Practical difficulty: high

Introduction

The SLAM problem as described in [3] is the problem of building a map of the environment while simultaneously estimating the robot’s position relative to the map given noisy sensor observations. Probabilistically, the problem is often approached by leveraging the Bayes formulation due to the uncertainties in the robot’s motions and observations. 
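Written out, the Bayes formulation factorises the full SLAM posterior over the robot trajectory and the map into a motion model and an observation model (following the standard derivation in, e.g., [3]):

```latex
% Joint belief over the trajectory x_{0:t} and the map m,
% given observations z_{1:t} and controls u_{1:t}:
p(x_{0:t}, m \mid z_{1:t}, u_{1:t})
  \;\propto\; p(x_0) \prod_{k=1}^{t}
      \underbrace{p(x_k \mid x_{k-1}, u_k)}_{\text{motion model}}\,
      \underbrace{p(z_k \mid x_k, m)}_{\text{observation model}}
```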

SLAM has found many applications not only in navigation, augmented reality, and autonomous vehicles, e.g., self-driving cars and drones, but also in indoor and outdoor delivery robots, intelligent warehousing, etc. While many solutions to the SLAM problem have been presented in the literature, the reality is far different in challenging real-world scenarios with few distinct features or geometrically constrained characteristics.

 

Some of the most common challenges in SLAM are the accumulation of errors over time due to inaccurate pose estimation (localization errors) while the robot moves from the start location to the goal location, and the high computational cost of image and point-cloud processing and optimization [1]. These challenges can cause significant deviations from the actual values and lead to inaccurate localization if the images and point clouds are not processed at a very high frequency [2]. This in turn impairs the frequency with which the map is updated, and hence the overall efficiency of the SLAM algorithm.

For this thesis, we propose to investigate in depth the visual or LiDAR SLAM approach using our state-of-the-art Intel Realsense cameras and light detection and ranging (LiDAR) sensors. For this, the following concrete tasks will be focused on:

Tentative Work Plan

  • study the concept of visual or LiDAR-based SLAM as well as its application in the survey of an unknown environment.
  • perform 2D/3D mapping in both static and dynamic environments.
  • localise the robot in the environment using the adaptive Monte Carlo localization (AMCL) approach.
  • write a path planning algorithm to navigate the robot from the starting point to the destination while avoiding collisions with obstacles.
  • perform real-time experimentation, simulation (MATLAB, ROS & Gazebo, Rviz, C/C++, Python, etc.) and validation.
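As a minimal stand-in for the path planning task above, a breadth-first search over an occupancy grid already illustrates the idea; the grid, start, and goal here are toy assumptions, and a real system would use a ROS planner (e.g. A* on a costmap) instead:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid.

    grid: 2-D list where 0 = free cell and 1 = obstacle.
    Returns the shortest list of (row, col) cells from start to goal,
    or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []  # walk the parent links back to the start
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],   # obstacle row blocking the direct route
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
print(path)  # routes around the obstacle row to reach (2, 0)
```

On an unweighted grid, BFS already returns a shortest collision-free path; A* adds a heuristic to explore fewer cells on larger maps.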

About the Laboratory

The Robotics & AI-Lab of the Chair of Cyber-Physical Systems is an innovative research lab focusing on robotics, artificial intelligence, machine and deep learning, embedded smart sensing systems, and computational models. To support its research and training activities, the laboratory currently has:

  • an additive manufacturing unit (3D and laser printing technologies)
  • a metallic production workshop
  • a robotics unit (mobile robots, robotic manipulators, robotic hands, unmanned aerial vehicles (UAVs))
  • a sensors unit (Intel Realsense LiDAR, depth and tracking cameras, inertial measurement units (IMUs), OptiTrack cameras, etc.)
  • an electronics and embedded systems unit (Arduino, Raspberry Pi, etc.)

Expression of Interest

Students interested in carrying out their Master of Science (M.Sc.) or Bachelor of Science (B.Sc.) thesis on the above topic should contact or visit the Chair of Cyber-Physical Systems.

Phone: +43 3842 402-1901


References

[1] V. Barrile, G. Candela, A. Fotia, ‘Point cloud segmentation using image processing techniques for structural analysis’, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-2/W11, 2019

[2] Łukasz Sobczak, Katarzyna Filus, Adam Domanski and Joanna Domanska, ‘LiDAR Point Cloud Generation for SLAM Algorithm Evaluation’, Sensors 2021, 21, 3313. https://doi.org/10.3390/s21103313

[3] Wolfram Burgard, Cyrill Stachniss, Kai Arras, and Maren Bennewitz, ‘SLAM: Simultaneous Localization and Mapping’, http://ais.informatik.uni-freiburg.de/teaching/ss12/robotics/slides/12-slam.pdf




18.08.2021 – Innovative Research Discussion

Meeting on the 18th of August 2021

Location: Chair of CPS

Date & Time: 18th Aug. 2021, 1pm to 2pm

Participants: Univ.-Prof. Dr. Elmar Rueckert, Linus Nwankwo, M.Sc.

Agenda

  1. Review of the previous meeting action points
  2. Discussion of the first research topic
  3. Robot for Control, SLAM, Navigation & Motion Planning Research

Topic 1: Design and Implementation of an Environment Survey Robot

Considerations

Safe human-robot interaction: The robot must project its direction of motion into the working environment (enabling humans to note the robot’s direction of motion).

Navigation and motion planning: Move around the environment with smooth (non-jerky) turning and gradual stopping motions. At the same time, the robot must detect and avoid obstacles in the workspace.

Localization and mapping: Update the map of the unknown environment while keeping track of the robot’s current location with respect to the map.

Topic 2: Robots under consideration for the tasks

  1. Segway RMP Mobile Platform – RMP Lite 220/400 
  2. Segway RMP Mobile Platform – RMP Smart 260 – request for technical details sent (awaiting reply)

Next Steps

  1. Investigate the interconnection of Realsense cameras with a Kalman filter
  2. Experimentation with the Lego EV3

Next Meeting

The next meeting was scheduled for Thursday, 26th August 2021, 11:00 am – 12:00 pm.