Summary of Project
Robotic dexterous manipulation relies on two primary sensing modalities: vision and touch. Visual feedback can guide the robot as it approaches target objects, while tactile feedback provides useful information about the interactions between objects and the environment. The two modalities are complementary for contact-rich tasks. To this end, this project aims to explore multimodal representation learning as a tool for developing robust, task-relevant representations that enable sample-efficient reinforcement learning.
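As a rough illustration of the idea, a common baseline for multimodal representation learning is late fusion: each modality is passed through its own encoder and the resulting latent vectors are concatenated into a single state for the policy. The sketch below is purely illustrative; the feature dimensions, the linear encoders, and the random weights are assumptions for demonstration, not part of the project specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions (illustrative only).
VISION_DIM, TACTILE_DIM, LATENT_DIM = 8, 4, 6

def encode(x, weights):
    """Toy linear encoder followed by a tanh nonlinearity.
    In practice this would be a learned deep network per modality."""
    return np.tanh(x @ weights)

# Randomly initialised weights stand in for learned encoder parameters.
w_vision = rng.standard_normal((VISION_DIM, LATENT_DIM))
w_tactile = rng.standard_normal((TACTILE_DIM, LATENT_DIM))

def fuse(vision_obs, tactile_obs):
    """Late fusion: encode each modality separately, then concatenate
    the latent vectors into one representation for an RL policy."""
    z_vision = encode(vision_obs, w_vision)
    z_tactile = encode(tactile_obs, w_tactile)
    return np.concatenate([z_vision, z_tactile])

# Simulated observations from the two sensors.
state = fuse(rng.standard_normal(VISION_DIM),
             rng.standard_normal(TACTILE_DIM))
print(state.shape)  # fused representation: (2 * LATENT_DIM,)
```

More sophisticated approaches (e.g. cross-modal attention or self-supervised alignment objectives) replace the simple concatenation, but the fused-latent structure is the same.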
Academic criteria: A first-class undergraduate degree or a master's degree.
Applicants will also need to meet the University's English Language requirements by obtaining an IELTS score of at least 6.5 overall, with a minimum of 6.0 in each skill component.
It is desirable for applicants to have:
• Strong programming skills
• Solid skills in theoretical analysis
• Strong communication skills in oral and written English
• Interest in autonomous robotics, machine learning, computer vision, and multi-sensor fusion
Supervisors: Dr. Dandan Zhang, Prof. Nathan Lepora