Foundation Models for Event-based Segmentation
Design, implement, and validate well-known foundation models (CLIP and the Segment Anything Model, SAM) in the context of event-based segmentation.
In the field of event-based vision, the key challenge lies in efficiently processing the asynchronous stream of data generated by event-based sensors. These sensors, inspired by the biological mechanisms of the human retina, capture the dynamics of a scene with high temporal resolution and low latency. The project proposes to work on foundation models for Event-based Segmentation. This approach is aimed at mitigating the challenges posed by the scarcity of labeled data in event-based vision. The project will focus on creating models capable of understanding and segmenting complex visual scenes by using novel learning methodologies. This innovative methodology has the potential to significantly expand the capabilities of event-based vision systems, particularly in dynamic and unstructured environments.
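Before an image-based foundation model can be applied, the asynchronous event stream is typically converted into a dense, grid-like representation. The sketch below shows one common and simple choice, a two-channel event histogram; it is an illustration only, and the project may use richer representations (voxel grids, time surfaces). The event layout `[x, y, t, polarity]` is an assumption about the sensor output format.

```python
import numpy as np

def events_to_frame(events: np.ndarray, height: int, width: int) -> np.ndarray:
    """Accumulate an (N, 4) event array [x, y, t, polarity] into a
    2-channel histogram: channel 0 counts positive events, channel 1
    counts negative events. The resulting dense frame can be fed to
    image-based models (after normalization/channel replication)."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    pos = events[:, 3] > 0  # positive-polarity mask
    # np.add.at handles repeated (y, x) coordinates correctly
    np.add.at(frame[0], (y[pos], x[pos]), 1.0)
    np.add.at(frame[1], (y[~pos], x[~pos]), 1.0)
    return frame

# Example: three events on a hypothetical 4x4 sensor
events = np.array([[1, 2, 0.0, 1],
                   [1, 2, 0.1, 1],
                   [3, 0, 0.2, -1]])
frame = events_to_frame(events, height=4, width=4)
```

Because the histogram discards exact timestamps, it trades the sensor's high temporal resolution for compatibility with standard image backbones, which is one of the design tensions this project would explore.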
The primary goal of this project is to design, implement, and validate foundation models (CLIP, SAM) for event-based segmentation. The joint use of both foundation models will also be explored. Applicants should have a solid machine learning background, strong programming skills (Python, C++), and experience with frameworks such as PyTorch or JAX.
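One natural way to combine the two models, sketched below under assumptions not stated in the posting, is to let SAM propose class-agnostic masks and let CLIP score each masked region against text prompts. The model calls themselves are omitted; only the hypothetical fusion step is shown, with `similarities` standing in for CLIP image-text cosine similarities.

```python
import numpy as np

def label_masks(similarities: np.ndarray, labels: list[str]) -> list[str]:
    """Assign each SAM mask the text label with the highest CLIP score.

    `similarities` is an (M, K) matrix: M masks (e.g. from an automatic
    mask generator) scored against K text prompts. This is a conceptual
    sketch of one possible CLIP+SAM fusion, not the project's method.
    """
    best = similarities.argmax(axis=1)  # best-matching prompt per mask
    return [labels[k] for k in best]

# Example with made-up similarity scores for two masks and three prompts
sims = np.array([[0.31, 0.12, 0.05],
                 [0.08, 0.09, 0.27]])
print(label_masks(sims, ["car", "person", "tree"]))  # → ['car', 'tree']
```

Such open-vocabulary labeling of class-agnostic masks is one way to sidestep the scarcity of labeled event-based data mentioned above.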
Nikola Zubic (zubic@ifi.uzh.ch), Manasi Muglikar (muglikar@ifi.uzh.ch)