Domain Transfer between Events and Frames for Motor Policies
The goal of this project is to develop a shared embedding space for events and frames, enabling the training of a motor policy on simulated frames and deployment on real-world event data.
Recent robotics breakthroughs mainly use motor policies trained in simulation to perform impressive maneuvers in the real world. This project seeks to capitalize on the high temporal resolution of event cameras to enhance the robustness of motor policies by integrating event data as a sensor modality. However, current methods for generating events in simulation are inefficient, requiring the rendering of multiple frames at a high frame rate. The primary goal of this project is therefore to develop a shared embedding space for events and frames, enabling training on simulated frames and deployment on real-world event data. Depending on the project's progress, there will be opportunities to test the proposed approach on various robotic platforms, such as quadrotors and miniature cars.
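To see why rendering-based event generation is expensive, consider a naive simulator in the spirit of tools like ESIM: an event fires whenever the per-pixel log intensity changes by more than a contrast threshold between consecutive rendered frames, so faithful simulation requires rendering at a very high frame rate. The sketch below is illustrative only (function name, threshold value, and the one-reference-per-pixel update rule are our simplifications, not the project's method):

```python
import numpy as np

def events_from_frames(frames, threshold=0.2, eps=1e-6):
    """Naive frame-based event simulation (illustrative sketch).

    frames: array of shape (T, H, W) with non-negative intensities.
    Emits events (t, x, y, polarity) whenever the log intensity at a
    pixel crosses the contrast threshold relative to a per-pixel
    reference. Accuracy hinges on a very high frame rate, which is
    what makes rendering-based event generation costly.
    """
    log_frames = np.log(frames.astype(np.float64) + eps)
    ref = log_frames[0].copy()              # per-pixel reference log intensity
    events = []
    for t in range(1, len(log_frames)):
        diff = log_frames[t] - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            n = int(abs(diff[y, x]) // threshold)       # crossings since ref
            polarity = 1 if diff[y, x] > 0 else -1
            events.extend((t, x, y, polarity) for _ in range(n))
            ref[y, x] += polarity * n * threshold       # move reference forward
    return events
```

A real simulator additionally interpolates event timestamps between frames; the sketch simply tags every event with the frame index, which is enough to show where the frame-rate bottleneck comes from.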
Participants will build upon the foundations laid by previous student projects (published at ECCV 2022) and leverage insights from the Unsupervised Domain Adaptation (UDA) literature to transfer motor policies from frames to events. The project will involve validating the approach in simulation, with potential real-world experiments conducted in our drone arena. Emphasis will be placed on demonstrating the advantages of event cameras in challenging conditions, such as low-light and highly dynamic scenes. Given the use of various deep learning methods for task transfer, a strong background in deep learning is essential for prospective participants. If you are interested, we are happy to provide more details.
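The core idea of a shared embedding space can be sketched in a few lines: a modality-specific encoder per input (frames vs. events) maps into a common space, and a UDA-style alignment objective pulls the two embedding distributions together so that a policy trained on frame embeddings also works on event embeddings. The sketch below uses linear encoders and a simple mean-embedding discrepancy; all names, dimensions, and the choice of loss are illustrative assumptions, not the project's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes: flattened frame patches and event
# histograms projected into a shared embedding space.
FRAME_DIM, EVENT_DIM, EMBED_DIM = 64, 32, 16

# Modality-specific linear encoders (stand-ins for deep networks).
W_frame = rng.normal(0, 0.1, (FRAME_DIM, EMBED_DIM))
W_event = rng.normal(0, 0.1, (EVENT_DIM, EMBED_DIM))

def embed(x, W):
    """Project modality features into the shared space, L2-normalized."""
    z = x @ W
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-8)

def alignment_loss(z_frames, z_events):
    """Squared distance between modality means: one simple UDA-style
    objective; minimizing it aligns the two distributions in the
    shared embedding space."""
    return float(np.sum((z_frames.mean(axis=0) - z_events.mean(axis=0)) ** 2))

frames = rng.normal(size=(8, FRAME_DIM))   # batch of simulated frame features
events = rng.normal(size=(8, EVENT_DIM))   # batch of real event features

z_f = embed(frames, W_frame)
z_e = embed(events, W_event)
loss = alignment_loss(z_f, z_e)
```

In practice the encoders would be deep networks, the alignment term would be something stronger (e.g. adversarial or MMD-based, as common in the UDA literature), and the motor policy would consume the shared embeddings; the point of the sketch is only the train-on-frames, deploy-on-events structure.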