Multi-Scale Embodied Intelligence Lab
Multimodal Perception (Sensing)
Interactive Learning (Decision-Making)
We aim to embed embodied intelligence in robots across scales: at the large scale, in systems ranging from medical robots for surgery to assistive robots for homecare; at the cellular scale, in micro-robotic innovations such as microsurgical tools for precise micro-manipulation. Beyond individual scales, we explore how these multi-scale robotic systems can integrate and interface with one another, ensuring seamless operation across dimensions.
Our ultimate goal is to develop next-generation, AI-empowered robots with superhuman capabilities. We envision that intelligent robots will reshape our world by providing tangible benefits in daily life, contributing to healthcare, and assisting humans in hazardous environments.
Embodied intelligence is crucial for the autonomy and adaptivity of systems operating in a physical world that is inherently uncertain and complex. Although advances in Artificial Intelligence (AI) have given rise to general-purpose intelligent agents, the intricate and unpredictable nature of the real world demands robots that do more than just 'think' based on computational intelligence. Robots must also precisely 'perceive' and dynamically 'act', adapting to different scenarios through direct interaction with their environments. To this end, our work focuses on creating and manipulating intelligent robots that function efficiently in varied environments.
Enable seamless human-robot shared control
Enhance the level of autonomy of multi-scale robotic systems
Research on intelligent robotics lies at the intersection of machine learning and robotics.
The Multi-Scale Embodied Intelligence Lab focuses on developing embodied intelligence across multiple scales, ranging from micro/nanobots to large-scale robotic systems, primarily targeting healthcare applications.
My research interests lie at the intersection of robotics and machine learning. I envision that intelligent robots will reshape our world by providing tangible benefits in daily life, contributing to healthcare, or assisting humans in hazardous environments. I also work on micro-robotics, teleoperation, and human-robot shared control.
Medical robots and instruments are expected to become smaller and smarter for wider clinical uptake. Intelligent robotic platforms and accurate micromanipulation techniques are worth developing because they can empower operators with superhuman capabilities and reduce the physical workload of repetitive, tedious microsurgical tasks. Despite continuous technical evolution in this area, many challenges remain. For micromanipulation platforms that conduct surgery at the microscale, precise perception of the micro-tools and dexterous manipulation in 3D space are difficult to achieve, so new control strategies and vision techniques are worth exploring. To build smarter robotic platforms, machine learning techniques can be incorporated into the control scheme. It is therefore worthwhile to combine control commands generated by human operators with machine learning-based autonomous control, enabling efficient human-robot shared control for surgical operations. This leads to intelligent surgical robots that assist human operators in robotic surgery.
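As a minimal illustration of how human and autonomous commands can be combined, one common shared-control scheme (a generic linear arbitration sketch, not necessarily the specific method used in this lab) blends the operator's command with the autonomous controller's command via an authority weight:

```python
import numpy as np

def blend_commands(u_human, u_auto, alpha):
    """Linear arbitration for human-robot shared control.

    u_human: velocity command from the human operator
    u_auto:  velocity command from the autonomous controller
    alpha:   authority weight in [0, 1]; 1.0 means full human control
    """
    u_human = np.asarray(u_human, dtype=float)
    u_auto = np.asarray(u_auto, dtype=float)
    alpha = float(np.clip(alpha, 0.0, 1.0))
    # Weighted sum of the two commands; alpha sets the balance of authority.
    return alpha * u_human + (1.0 - alpha) * u_auto

# Hypothetical example: the operator pushes straight ahead while the
# autonomous controller adds a lateral correction; equal authority.
u = blend_commands([1.0, 0.0], [0.8, 0.4], alpha=0.5)  # -> [0.9, 0.2]
```

In practice the authority weight is often adapted online, e.g. increasing autonomous authority near a predicted surgical target, but the fixed-weight version above captures the core idea.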
Dandan Zhang* (first author), et al., “Micro/Nano-Objects Pose Estimation via a Sim-to-Real Learning-to-Match Approach Using Small Dataset”,
Communications Physics (Nature Portfolio), accepted in early 2022.
Dandan Zhang (first author), et al., “Explainable Hierarchical Imitation Learning for Robotic Drink Pouring”,
IEEE Transactions on Automation Science and Engineering, 2021.
“Progress in robotics for combating infectious diseases”,
Science Robotics, 2021.