Institute for Visual Computing – Open Opportunities

3D hand pose forecasting is a new benchmark introduced by HoloAssist [1]. Existing action forecasting work mostly provides semantic labels of future actions and offers no explicit 3D guidance on hand poses. Predicting 3D hand poses is useful for many applications: it can augment instructions and spatially guide users through different tasks. In this benchmark, following other 3D body location forecasting literature, we take 3-second inputs and forecast continuous 3D hand poses for the next 0.5, 1.0, and 1.5 seconds. The evaluation metric is the mean per joint position error against ground truth, averaged over time, in centimeters. To make the metric suitable for 3D action guidance, we remove mistakes from the action sequences and forecast 3D hand poses only for correctly labeled actions.
[1] Wang, X., Kwon, T., Rad, M., Pan, B., Chakraborty, I., Andrist, S., ... & Pollefeys, M. (2023). HoloAssist: An egocentric human interaction dataset for interactive AI assistants in the real world. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 20270-20281). - Computer Vision, Virtual Reality and Related Simulation
- ETH Zurich (ETHZ), Master Thesis, Semester Project
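The benchmark metric above can be sketched as follows; this is a minimal, hypothetical implementation assuming predictions and ground truth come as arrays of shape (T frames, J joints, 3) in centimeters, not the official evaluation code:

```python
import numpy as np

def mpjpe_cm(pred, gt):
    """Mean per joint position error in cm: Euclidean distance per joint
    per frame, averaged over all joints and all forecast frames."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Toy example (shapes are assumptions): 15 frames, 21 hand joints,
# prediction offset from ground truth by 2 cm along x for every joint.
gt = np.zeros((15, 21, 3))
pred = gt.copy()
pred[..., 0] += 2.0
print(mpjpe_cm(pred, gt))  # 2.0
```

In practice the error would be reported separately for the 0.5, 1.0, and 1.5 second horizons by slicing the frame axis before averaging.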
| Action recognition is an essential task in computer vision with numerous applications in various fields, including robotics, surveillance, and healthcare. Recognizing actions involves analyzing temporal and spatial information within a video sequence. Current state-of-the-art methods use 3D hand and object poses for action recognition, where the object is commonly represented by its bounding-box corners. However, this approach has limitations in accurately modeling the hand-object interaction. In [1], we show that a hand-object contact-map representation helps improve action recognition. However, this representation could also be learned implicitly for the task of action recognition.
[1] https://arxiv.org/pdf/2309.10001.pdf - Computer Vision, Virtual Reality and Related Simulation
- ETH Zurich (ETHZ), Master Thesis, Semester Project
| The recent development of LLMs (Large Language Models), such as ChatGPT and Llama, opens up new possibilities for understanding procedural actions. In the past, action recognition was restricted to classifying visual frames. With LLMs, a model can observe the whole action sequence more effectively and even predict future actions [1]. In this project, students will explore how LLMs can improve action recognition in procedural tasks. Specifically, given a high-level procedural task (e.g., making coffee, copying a paper), students will use existing pretrained action recognition models to predict the top 5 actions for each clip and feed them into the LLMs to refine and correct the predicted actions. As a comparison, students will also establish a baseline that corrects actions using simple machine learning and statistical methods.
[1] Palm: Predicting Actions through Language Models @ Ego4D Long-Term Action Anticipation Challenge 2023, CVPR'23 workshop
- Computer Vision, Text Processing
- ETH Zurich (ETHZ), Master Thesis, Semester Project
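The statistical baseline mentioned above could look like the following sketch: re-rank each clip's top-5 candidate actions using bigram transition counts learned from training sequences. The action names, sequences, and function names here are invented for illustration, not part of the project specification:

```python
from collections import defaultdict

def learn_transitions(train_sequences):
    """Count how often action `nxt` follows action `prev` in training data."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in train_sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def correct(top5_per_clip, transitions):
    """Greedily pick, from each clip's candidates, the action most
    consistent with the previously chosen action."""
    corrected, prev = [], None
    for candidates in top5_per_clip:
        if prev is None:
            choice = candidates[0]  # trust the recognizer for the first clip
        else:
            choice = max(candidates, key=lambda a: transitions[prev].get(a, 0))
        corrected.append(choice)
        prev = choice
    return corrected

train = [["grind beans", "boil water", "pour water"],
         ["grind beans", "boil water", "pour water"]]
trans = learn_transitions(train)
top5 = [["grind beans", "pour water"],
        ["pour water", "boil water"],   # recognizer mis-ranks this clip
        ["pour water", "grind beans"]]
print(correct(top5, trans))  # ['grind beans', 'boil water', 'pour water']
```

The LLM-based variant would replace the bigram statistics with a prompt containing the task description and per-clip candidate lists, asking the model to output a corrected sequence.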
| Reading text manuals to set up and manipulate devices takes a lot of time and is not intuitive when it comes to 3D instruction. Despite the advent of Mixed Reality (MR) devices, 3D instruction is still limited and expensive to set up. In this project, we will develop an adaptive 3D hand guidance app that projects instructional 3D hand poses, derived from pre-recorded instructional videos, in MR devices. - Computer Vision, Virtual Reality and Related Simulation
- ETH Zurich (ETHZ), Master Thesis, Semester Project
| The goal of this project is to use language prompts to help find object parts in 3D. - Computer Vision
- Master Thesis, Semester Project
| The objective of this project is to determine the metric relative pose between two images using object-to-object matches. - Computer Vision
- Master Thesis, Semester Project
| We extend the lamar.ethz.ch benchmark to develop accurate SLAM methods that can co-register drones, legged robots, wheeled robots, smartphones, and mixed reality headsets based on visual SLAM. - Computer Vision, Intelligent Robotics
- Bachelor Thesis, Master Thesis, Semester Project
| Fast moving objects are defined as objects that move over significant distances during the exposure time of a single image or video frame. Thus, they appear significantly blurred. Detection, tracking, and deblurring of such objects have been studied in recent years. However, there are still no methods for robust retrieval of such objects in large image collections. - Computer Graphics, Computer Vision, Image Processing, Neural Networks, Genetic Algorithms and Fuzzy Logic, Pattern Recognition
- Master Thesis
| Extend the recent Marigold model in several directions. - Computer Vision
- Master Thesis
| The goal of this project is to implement a 6DoF object pose estimation method that utilizes the embedded sensors of head-mounted devices, such as the Microsoft HoloLens, to improve pose estimation accuracy. The proposed method will be thoroughly evaluated and compared against single-view, stereo, and multi-view baselines. - Computer Vision
- ETH Zurich (ETHZ), Master Thesis