Fusion of Event and Optical Flow Camera Data for Visual Inertial Odometry
Visual Inertial Odometry (VIO) describes the process of determining the movement trajectory of a mobile agent (e.g. a drone, car, or VR headset) from image data recorded by a camera, combined with inertial measurements from an IMU. It has been shown that VIO estimation benefits from additional event camera data. The goal of this project is to implement a fast and energy-efficient fusion strategy for event data and classical camera data on either an FPGA or a System-on-Chip (GAP9).
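Since an event camera emits asynchronous per-pixel events rather than frames, fusing it with a classical camera typically involves converting events into a frame-like representation first. As a purely illustrative sketch (function names, the event tuple layout, and the fixed time window are assumptions, not part of the project description):

```python
# Illustrative sketch: accumulate asynchronous events (x, y, t, polarity)
# into a fixed-size "event frame" over a time window, so it can be
# combined with a synchronous camera frame downstream.

def accumulate_events(events, width, height, t_start, t_end):
    """Sum signed event polarities per pixel inside [t_start, t_end)."""
    frame = [[0] * width for _ in range(height)]
    for x, y, t, polarity in events:
        if t_start <= t < t_end:
            frame[y][x] += 1 if polarity else -1
    return frame

# Three hypothetical events: two positive at pixel (1, 0), one negative at (2, 1).
events = [(1, 0, 0.001, True), (1, 0, 0.002, True), (2, 1, 0.003, False)]
frame = accumulate_events(events, width=4, height=2, t_start=0.0, t_end=0.01)
```

On GAP9 or an FPGA the same accumulation would be done in fixed-point C or RTL; this Python version only shows the data-flow idea.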
Keywords: Visual Inertial Odometry, System on Chip, Software, Computer Vision, Embedded Systems, Event Camera, Sensor Fusion, FPGA
In this project, you will evaluate different sensor fusion algorithms for use in a VIO pipeline. You will select the most promising algorithm and implement it on low-level hardware such as an FPGA or a multi-core SoC. In a subsequent step, you will evaluate the tracking accuracy and resource utilization against ground truth (motion capture system) and against implementations from related work.
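Tracking accuracy against motion-capture ground truth is commonly summarized by the absolute trajectory error (ATE). A minimal sketch of the RMSE variant, assuming time-aligned 3-D positions and omitting the usual rigid-body alignment step for brevity (names are illustrative):

```python
import math

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error over time-aligned
    3-D positions (trajectory alignment/registration omitted)."""
    assert len(estimated) == len(ground_truth) and estimated
    sq_sum = 0.0
    for (ex, ey, ez), (gx, gy, gz) in zip(estimated, ground_truth):
        sq_sum += (ex - gx) ** 2 + (ey - gy) ** 2 + (ez - gz) ** 2
    return math.sqrt(sq_sum / len(estimated))

est = [(0.0, 0.0, 0.0), (1.0, 0.1, 0.0)]  # hypothetical VIO output
gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]  # hypothetical mocap ground truth
print(ate_rmse(est, gt))  # ≈ 0.0707
```

In practice one would also interpolate the mocap samples to the VIO timestamps and align the two trajectories before computing the error.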
**Prerequisites**
- Interest in embedded systems.
- Solid programming experience in C/C++.
- Optional: Prior low-level coding experience or experience with hardware description languages.
- Optional: Background knowledge in computer vision.
**Character**
- 10% Theory
- 10% Algorithm Evaluation
- 50% Algorithm Implementation on GAP9 or an FPGA
- 20% Measurement and Analysis of Implementation
- 10% Documentation
- Review different fusion algorithms for image and event data.
- Get familiarized with the GAP9 SoC or the selected FPGA.
- Implement the fused VIO pipeline on GAP9 or on the FPGA.
- Evaluate the VIO implementation under various conditions.
Jonas Kühne (kuehnej@ethz.ch), Dr. Christian Vogt (christian.vogt@pbl.ee.ethz.ch)