
Human-Robot Interaction

Research Overview:

We envision humans and robots collaborating within enhanced digital and physical spaces. More specifically, we work on integrating HRI with advanced digital technologies such as Augmented Reality, Mixed Reality, and Cloud-based Computing.

 

Objectives:

We aim to create enriched interactive environments that enhance human-robot collaboration, improving both efficiency and versatility.


Research Theme 1: Advanced Digital Technology


Through Augmented Reality (AR), robots can overlay critical information onto real-world scenes, providing users with contextual assistance. In a shared AR view, robots can indicate their intentions or highlight objects of interest, promoting more cohesive and predictable collaboration. Robots can also be programmed to facilitate Virtual Reality (VR)-based training modules, which is particularly helpful for surgical practice.
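As a rough illustration of the shared-AR-view idea, the sketch below publishes a robot's intended grasp target as an overlay annotation for a headset to render. The annotation schema, topic, and UDP/JSON transport are placeholders for whichever AR middleware is actually used, not a description of our system.

# Hypothetical sketch: a robot advertises its next intended action as an AR overlay
# so a human wearing an AR headset can see what the robot is about to do.
# The annotation schema and transport (plain UDP/JSON here) are illustrative only.

import json
import socket
from dataclasses import dataclass, asdict


@dataclass
class ARAnnotation:
    """One overlay element rendered in the shared AR view."""
    label: str                                  # text shown next to the highlighted object
    position_m: tuple                           # (x, y, z) of the object in the shared world frame
    color_rgba: tuple = (0.0, 1.0, 0.0, 0.6)    # translucent green = "robot intends to grasp"
    ttl_s: float = 2.0                          # how long the headset should keep the overlay


def publish_intention(annotation: ARAnnotation,
                      headset_addr=("127.0.0.1", 9870)) -> None:
    """Send the annotation to the AR headset as a JSON datagram (illustrative transport)."""
    payload = json.dumps(asdict(annotation)).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, headset_addr)


if __name__ == "__main__":
    # Highlight the object the robot plans to pick up next.
    publish_intention(ARAnnotation(label="next grasp target",
                                   position_m=(0.42, -0.10, 0.85)))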


Robots can use Mixed Reality (MR) to help engineers and designers visualize and manipulate both virtual and physical elements, fostering a comprehensive design process. MR interfaces allow users to control robots in distant or hazardous locations by immersing them in a real-time virtual representation of the robot's surroundings, enabling precise operations. MR also allows robots and humans to work on tasks together by blending physical and digital elements, bridging the gap between the virtual and real worlds.


In addition, with the support of Cloud-based Computing, robots can share experiences and learning instances, drawing from vast databases to enhance their capabilities. With cloud integration, robots can be operated, monitored, and even updated remotely, ensuring flexibility and immediate adaptability to dynamic tasks. Moreover, connecting to the cloud allows robots to utilize vast computational resources when needed, enhancing their problem-solving abilities without being limited by onboard hardware.
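A minimal sketch of the cloud-offloading pattern described above: the robot first tries to solve a task onboard and, if local resources cannot meet the time budget, forwards it to a cloud endpoint. The endpoint URL, payload schema, and time budget are assumptions for illustration only.

# Illustrative sketch of onboard-vs-cloud task execution.
# The cloud endpoint, payload schema, and time budget are hypothetical.

import json
import time
import urllib.request

CLOUD_ENDPOINT = "http://example.com/plan"   # placeholder URL, not a real service
ONBOARD_TIME_BUDGET_S = 0.05                 # deadline for solving the task locally


def solve_onboard(task: dict) -> dict | None:
    """Try to solve the task with onboard compute; give up if the budget is exceeded."""
    start = time.monotonic()
    # ... run a lightweight local planner here ...
    if time.monotonic() - start > ONBOARD_TIME_BUDGET_S:
        return None
    return {"plan": "local", "task": task}


def solve_in_cloud(task: dict) -> dict:
    """Offload the task to the (hypothetical) cloud planning service."""
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(task).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=5.0) as response:
        return json.loads(response.read())


def solve(task: dict) -> dict:
    """Prefer onboard execution; fall back to the cloud when local compute is not enough."""
    result = solve_onboard(task)
    return result if result is not None else solve_in_cloud(task)


if __name__ == "__main__":
    print(solve({"goal": "pick object A"}))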


Research Theme 2: Interaction Interfaces


Beyond verbal interactions, we are developing versatile communication channels, including gestures, haptic feedback, and visual cues, that allow robots to convey information in diverse and intuitive ways. We aim to design robots capable of delivering tactile feedback through haptic interfaces, enhancing the user experience, particularly in tasks requiring precision or sensory validation.
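As a hedged example of the haptic-feedback idea, the sketch below maps a measured contact force to a drive level for a fingertip actuator. The force range, actuator resolution, and driver stub are assumptions, not a description of a specific device.

# Hypothetical mapping from sensed contact force to a fingertip haptic command.
# Force range, actuator resolution, and the write_actuator() stub are illustrative.

MAX_FORCE_N = 10.0          # assumed saturation force of the contact sensor
ACTUATOR_LEVELS = 255       # assumed 8-bit drive resolution of the haptic actuator


def force_to_command(force_n: float) -> int:
    """Clamp the contact force and scale it linearly to an actuator drive level."""
    clamped = min(max(force_n, 0.0), MAX_FORCE_N)
    return round(clamped / MAX_FORCE_N * ACTUATOR_LEVELS)


def write_actuator(level: int) -> None:
    """Stand-in for the real actuator driver (e.g. a PWM or pneumatic valve command)."""
    print(f"haptic drive level: {level}")


if __name__ == "__main__":
    for sensed_force in (0.0, 2.5, 7.0, 12.0):   # example readings; the last one saturates
        write_actuator(force_to_command(sensed_force))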

 

We will also explore the frontier of direct communication between the brain and computers. Our Brain-Computer Interface (BCI) research focuses on decoding neural signals to facilitate seamless human-machine interaction. We will harness the latest advances in neural networks and machine learning to create algorithms capable of accurately interpreting neural signals from BCIs, and we will implement systems in which robots provide real-time feedback to ensure user safety and refine the interaction process based on user responses.
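A minimal sketch, assuming windowed EEG features and a small scikit-learn classifier, of how neural signals might be decoded into discrete robot commands. The channel count, window length, command set, features, and synthetic training data are placeholders, not our actual pipeline.

# Illustrative BCI decoding pipeline: crude bandpower-style features from windowed EEG,
# classified into discrete commands. Synthetic data stands in for real recordings;
# channel count, window size, and the command set are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

N_CHANNELS, WINDOW_SAMPLES = 8, 250             # e.g. 8 electrodes, 1 s at 250 Hz
COMMANDS = ["rest", "move_left", "move_right"]  # placeholder command set


def extract_features(window: np.ndarray) -> np.ndarray:
    """Per-channel log-variance features for one EEG window."""
    return np.log(window.var(axis=1) + 1e-12)


# --- train on synthetic data (stand-in for a recorded, labelled calibration session) ---
rng = np.random.default_rng(0)
windows = rng.normal(size=(300, N_CHANNELS, WINDOW_SAMPLES))
labels = rng.integers(0, len(COMMANDS), size=300)
X = np.stack([extract_features(w) for w in windows])
decoder = LogisticRegression(max_iter=1000).fit(X, labels)

# --- decode one new window into a robot command ---
new_window = rng.normal(size=(N_CHANNELS, WINDOW_SAMPLES))
command = COMMANDS[int(decoder.predict(extract_features(new_window)[None, :])[0])]
print("decoded command:", command)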

01

Digital Twin-Driven Mixed Reality Framework for
Immersive Teleoperation With Haptic Rendering

Wen Fan, Xiaoqing Guo, Enyang Feng, Jialin Lin, Yuanyi Wang, Jiaming Liang, Martin Garrad, Jonathan Rossiter,
Zhengyou Zhang, Nathan Lepora, Lei Wei, Dandan Zhang

[Figure: DTMR.png]
[Figure: Tencent-Haptics.png]

Teleoperation contributes to a wide range of applications, making the design of intuitive and ergonomic control interfaces crucial. The rapid advancement of Mixed Reality (MR) has yielded tangible benefits in human-robot interaction: MR provides an immersive environment for interacting with robots, effectively reducing the mental and physical workload of operators during teleoperation. In addition, incorporating haptic rendering, including kinaesthetic and tactile rendering, can further amplify the intuitiveness and efficiency of MR-based immersive teleoperation. In this study, we developed an immersive bilateral teleoperation system integrating Digital Twin-driven Mixed Reality (DTMR) manipulation with haptic rendering. The system comprises a commercial remote controller with a kinaesthetic rendering feature and a wearable, cost-effective tactile rendering interface called the Soft Pneumatic Tactile Array (SPTA). We carried out two user studies to assess the system's effectiveness, including a performance evaluation of key components within DTMR and a quantitative assessment of the newly developed SPTA. The results demonstrate an enhancement in both the human-robot interaction experience and teleoperation performance.

Main Contributions:

  1. Integrating the DT technique into an MR-based immersive teleoperation system, leading to the development of the Digital Twin-driven Mixed Reality (DTMR) framework.

  2. Building a novel haptic rendering system based on i) pneumatically-driven actuators for tactile feedback on human fingertips, and ii) a commercial haptic controller that enables kinaesthetic rendering and real-time motion tracking.

  3. Constructing a seamless bilateral teleoperation framework through the integration of immersive manipulation via DTMR and the novel haptic rendering device, which includes both kinaesthetic and tactile rendering (a simplified sketch of one such control cycle follows below).
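As a hedged illustration of the bilateral loop in contribution 3, the sketch below shows one control cycle: read the operator's controller pose, command the robot, read the fingertip tactile sensor, and convert the contact map into per-chamber pressures for a pneumatic tactile array. All device interfaces here are stubs, and the 3x3 chamber layout is an assumption, not the actual DTMR or SPTA implementation.

# Hypothetical single cycle of a bilateral teleoperation loop with tactile rendering.
# All device APIs (controller, robot, tactile sensor, pneumatic valve array) are
# stand-in stubs; the 3x3 chamber layout is an assumption, not the real SPTA design.

import numpy as np

MAX_PRESSURE_KPA = 30.0      # assumed safe upper bound for the pneumatic chambers


def read_controller_pose() -> np.ndarray:
    """Stub: 6-DoF pose of the hand-held controller (x, y, z, roll, pitch, yaw)."""
    return np.zeros(6)


def command_robot(pose: np.ndarray) -> None:
    """Stub: forward the operator's pose to the remote robot end-effector."""
    pass


def read_tactile_image() -> np.ndarray:
    """Stub: normalized contact map from the robot's fingertip sensor (values in [0, 1])."""
    return np.random.default_rng().random((12, 12))


def tactile_to_chamber_pressures(contact: np.ndarray, grid=(3, 3)) -> np.ndarray:
    """Downsample the contact map to one mean value per pneumatic chamber, scaled to kPa."""
    rows = np.array_split(contact, grid[0], axis=0)
    cells = [np.array_split(r, grid[1], axis=1) for r in rows]
    means = np.array([[c.mean() for c in row] for row in cells])
    return means * MAX_PRESSURE_KPA


def control_cycle() -> np.ndarray:
    """Master-to-slave motion, slave-to-master touch: one pass of the bilateral loop."""
    command_robot(read_controller_pose())            # forward channel: motion
    pressures = tactile_to_chamber_pressures(read_tactile_image())
    # send `pressures` to the valve drivers here (hardware-specific, omitted)
    return pressures


if __name__ == "__main__":
    print(control_cycle())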

[Figure: experiment-DTMR.png]