
This page presents our work related to Robot Learning.


Research Overview

Robot learning techniques, such as reinforcement learning and imitation learning, will be investigated for the development of intelligent robots. To improve the efficiency of training reinforcement learning models, effective representation learning and multi-modal sensor fusion algorithms will be explored. To enable robots to learn unknown dynamics and acquire complex visuomotor skills from visual observation with good generalizability, one/few-shot imitation learning will be investigated so that robots can learn from a small dataset of human demonstrations.


In addition, explainable AI will be developed to mitigate the black-box effect of deep learning-based algorithms, which can enhance humans' trust when interacting with robots. Applications include domestic robots, industrial robots, medical robots and warehouse robots.

XAI for Intelligent Robotics


Target

  • Enhance User's Trust with Explainable Interface

  • Enhance User's Trust with Model Explanation

  • Enhance User's Trust with Uncertainty Prediction

  • Enhance User's Trust by Tracing Failures

Cross-embodiment imitation learning: This approach focuses on developing artificial agents (robots) capable of learning policies and skills by observing other agents, regardless of differences in embodiment. However, Learning from Demonstration (LfD), also known as Imitation Learning, can be challenging because the human demonstrator and the robot have inherent differences, which may lead to systematic domain shifts. For example, there may be a mismatch between the robot's observations and actions and the recorded human demonstration, while the actions underlying human demonstrations are sometimes difficult to obtain. This inherent domain gap can lead to poor performance when deploying a pre-trained neural network model. To this end, cross-embodiment imitation learning is essential for agents to adapt skills across a variety of embodiments. This research addresses the challenges stemming from the physical differences between the learner and the demonstrator.
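
As an illustration of one common way to mitigate such embodiment-induced domain shift (not the specific method used in our work), the sketch below combines a behavior cloning loss on human demonstrations with a gradient-reversal domain classifier, so that human and robot observations map to indistinguishable features. All network sizes, data shapes and the loss weight are placeholder assumptions.

```python
# Minimal sketch (assumed shapes and losses): align human and robot observation
# embeddings so a policy trained on human demonstrations transfers to the robot.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses gradients so the encoder becomes domain-invariant."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None


encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
policy_head = nn.Linear(32, 7)   # e.g. a 7-DoF action (assumed)
domain_head = nn.Linear(32, 2)   # human vs. robot domain

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(policy_head.parameters()) + list(domain_head.parameters()),
    lr=1e-3,
)

# Toy batches standing in for human demonstrations (with actions) and unlabeled robot observations.
human_obs, human_act = torch.randn(16, 64), torch.randn(16, 7)
robot_obs = torch.randn(16, 64)

for step in range(100):
    z_h, z_r = encoder(human_obs), encoder(robot_obs)
    bc_loss = nn.functional.mse_loss(policy_head(z_h), human_act)

    # The domain classifier receives reversed gradients, pushing the encoder
    # towards embeddings that look the same for both embodiments.
    z_all = GradReverse.apply(torch.cat([z_h, z_r]), 0.1)
    labels = torch.cat([torch.zeros(16, dtype=torch.long), torch.ones(16, dtype=torch.long)])
    da_loss = nn.functional.cross_entropy(domain_head(z_all), labels)

    opt.zero_grad()
    (bc_loss + da_loss).backward()
    opt.step()
```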


Data-Efficient Robot Learning: Most existing deep imitation learning approaches require a large amount of human demonstration data, while deep reinforcement learning methods typically require extensive trial and error before the robot obtains the desired policy. This time-consuming and effort-demanding learning process limits the deployment of robot learning methods in real-world applications. To enable robots to learn unknown dynamics and acquire complex visuomotor skills with high efficiency, data-efficient robot learning techniques will be investigated. For example, one/few-shot learning will be used so that robots can learn from a small dataset of human demonstrations, or to enhance the efficiency of reinforcement learning.
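
As a minimal illustration of the simplest data-efficient setting, the sketch below performs behavior cloning on a handful of demonstrations; the image size, action dimension and network are placeholder assumptions rather than a description of any specific system used here.

```python
# Illustrative sketch (assumed data shapes): behavior cloning from a small set of
# demonstrations, the simplest instance of learning a policy from human examples.
import torch
import torch.nn as nn

# A few demonstrations: (image observation, action) pairs. Here 5 demos of 20 steps
# with 3x64x64 images and 7-dim actions stand in for real recorded data.
obs = torch.randn(5 * 20, 3, 64, 64)
act = torch.randn(5 * 20, 7)

policy = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 7),
)

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(obs, act), batch_size=32, shuffle=True
)

for epoch in range(10):
    for o, a in loader:
        loss = nn.functional.mse_loss(policy(o), a)   # imitate the demonstrated action
        opt.zero_grad()
        loss.backward()
        opt.step()
```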


XAI for Intelligent Robotics: In most intelligent robotic systems, the decision-making process of the robot is opaque and not readily understandable by humans. It is important to embed interpretable intelligence into robotic systems so that robots can become trustworthy assistants in daily life, for example in home-care scenarios. Therefore, we investigate explainable AI (XAI) for intelligent robotic systems to ensure safety and enhance humans' trust in their robot partners. Interpretable interfaces will be developed to foster human trust in robotic systems and to make the decision-making process of the robots transparent.
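
For instance, one lightweight way to expose uncertainty to the user (as in the "Enhance User's Trust with Uncertainty Prediction" target above) is Monte Carlo dropout; the sketch below is only a hypothetical illustration of that idea, with assumed feature sizes and an illustrative confidence threshold.

```python
# Hypothetical sketch: Monte Carlo dropout attaches an uncertainty estimate to each
# predicted action, so low-confidence decisions can be flagged to the user.
import torch
import torch.nn as nn

policy = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(128, 7),
)


def predict_with_uncertainty(obs, n_samples=30):
    """Keep dropout active at inference and report the spread of the sampled actions."""
    policy.train()  # dropout stays on
    with torch.no_grad():
        samples = torch.stack([policy(obs) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)


obs = torch.randn(1, 64)          # placeholder observation features
action, sigma = predict_with_uncertainty(obs)
if sigma.max() > 0.5:             # illustrative threshold
    print("Low confidence: ask the user to confirm before executing.")
```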

  • A. Explainable Intelligent Robotics

  • B. Domain Adaptation for Robotic Manipulation

  • C. Data-Efficient Robot Learning for Robotic Manipulation

  • D. Continual Learning for Robotic Manipulation

  • E. Sim-to-Real Transfer Learning for Robotic Manipulation

  • F. Multi-modality Representation Learning for Contact-Rich Tasks

Ongoing Projects

01

Explainable Hierarchical Imitation Learning

Dandan Zhang; Qiang Li; Yu Zheng; Dongsheng Zhang; Zhengyou Zhang


Accurately pouring drinks into various containers is an essential skill for service robots. However, drink pouring is a dynamic process that is difficult to model.
Traditional deep imitation learning techniques for autonomous robotic pouring have an inherent black-box effect and require a large amount of demonstration data for model training. To address these issues, an Explainable Hierarchical Imitation Learning (EHIL) method is proposed in this work so that a robot can learn high-level general knowledge and execute low-level actions across multiple drink pouring scenarios. Moreover, with the EHIL method, a logical graph can be constructed for task execution, through which the decision-making process for action generation can be made explainable to users and the causes of failure can be traced. Based on the logical graph, the framework is manipulable to achieve different targets, and adaptability to unseen scenarios can be achieved in an explainable manner.
A series of experiments has been conducted to verify the effectiveness of the proposed method. Results indicate that EHIL outperforms the traditional behavior cloning method in terms of success rate, adaptability, manipulability and explainability.
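
To convey the idea of a logical graph for task execution and failure tracing, the schematic sketch below encodes hypothetical pouring sub-goals as graph nodes with symbolic checks; the node names and checks are invented for illustration and are not the graph learned by EHIL.

```python
# Schematic illustration (hypothetical node names and checks): a logical graph whose
# nodes are high-level sub-goals; walking the graph exposes which step failed.
POURING_GRAPH = {
    "locate_container": {"next": "align_bottle",  "check": lambda s: s["container_visible"]},
    "align_bottle":     {"next": "tilt_and_pour", "check": lambda s: s["bottle_over_container"]},
    "tilt_and_pour":    {"next": "stop_pouring",  "check": lambda s: s["liquid_flowing"]},
    "stop_pouring":     {"next": None,            "check": lambda s: s["target_level_reached"]},
}


def execute(graph, state, start="locate_container"):
    node = start
    while node is not None:
        if not graph[node]["check"](state):
            # The failing node names the high-level cause, which can be reported to the user.
            return f"Failure traced to step '{node}'"
        node = graph[node]["next"]
    return "Task completed"


state = {"container_visible": True, "bottle_over_container": True,
         "liquid_flowing": False, "target_level_reached": False}
print(execute(POURING_GRAPH, state))   # -> Failure traced to step 'tilt_and_pour'
```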


Full paper link:

https://ieeexplore.ieee.org/document/9667114


02

One-Shot Domain-Adaptive Imitation Learning via Progressive Learning

Dandan Zhang; Wen Fan; John Lloyd; Chenguang Yang; Nathan Lepora


Traditional deep learning-based visual imitation learning techniques require a large amount of demonstration data for model training, and the pre-trained models are difficult to adapt to new scenarios. To address these limitations, we propose a unified framework using a novel progressive learning approach comprised of three phases: i) a coarse learning phase for concept representation, ii) a fine learning phase for action generation, and iii) an imaginary learning phase for domain adaptation. Overall, this approach leads to a one-shot domain-adaptive imitation learning framework. We use a robotic pouring task as an example to evaluate its effectiveness. Our results show that the method has several advantages over contemporary end-to-end imitation learning approaches, including an improved success rate for task execution and more efficient training for deep imitation learning. In addition, the generalizability to new domains is improved, as demonstrated with novel background, target container and granule combinations. We believe that the proposed method is broadly applicable to industrial or domestic applications that involve deep imitation learning for robotic manipulation, where the target scenarios have high diversity while the human demonstration data are limited.
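
The skeleton below mirrors only the three-phase structure described above (coarse, fine and imaginary learning); the modules, losses, supervision signals and data are placeholder assumptions rather than the published implementation.

```python
# Skeleton of the three-phase structure described above; all modules, losses and data
# here are placeholders (our assumptions), not the published implementation.
import torch
import torch.nn as nn

concept_encoder = nn.Linear(64, 16)   # phase i: coarse concept representation
action_decoder = nn.Linear(16, 7)     # phase ii: fine action generation
augmenter = nn.Linear(64, 64)         # phase iii: "imagined" target-domain observations

demo_obs, demo_concepts, demo_act = torch.randn(32, 64), torch.randn(32, 16), torch.randn(32, 7)

# Phase i: coarse learning of concept representations from the demonstration.
opt = torch.optim.Adam(concept_encoder.parameters(), lr=1e-3)
for _ in range(100):
    loss = nn.functional.mse_loss(concept_encoder(demo_obs), demo_concepts)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase ii: fine learning of actions conditioned on the (frozen) concepts.
opt = torch.optim.Adam(action_decoder.parameters(), lr=1e-3)
for _ in range(100):
    with torch.no_grad():
        z = concept_encoder(demo_obs)
    loss = nn.functional.mse_loss(action_decoder(z), demo_act)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase iii: imaginary learning, fine-tuning on synthesized target-domain observations
# so a single recorded demonstration can cover new backgrounds and containers.
opt = torch.optim.Adam(list(concept_encoder.parameters()) + list(action_decoder.parameters()), lr=1e-4)
for _ in range(100):
    imagined_obs = augmenter(demo_obs).detach()
    loss = nn.functional.mse_loss(action_decoder(concept_encoder(imagined_obs)), demo_act)
    opt.zero_grad()
    loss.backward()
    opt.step()
```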


Full paper link:

https://ieeexplore.ieee.org/document/9972847

Robot Learning for Service Robots

Learning from Demonstration for Automatic Tea Preparation for Service Robots

Aaron Asamoah; Virginia Ruiz Garate; Dandan Zhang
