
ViTacTip
Design and Benchmarking
This paper presents ViTacTip, a compact multi-modal sensor for robotic perception. Its see-through skin enables vision and proximity sensing, while embedded biomimetic tips improve tactile and force perception. We validate its capabilities through multi-task learning for hardness, material, and texture recognition, together with benchmarking on object recognition, contact point detection, pose regression, and grating identification. A GAN-based framework further enables cross-modality interpretation and flexible switching between sensing modalities.
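As a rough illustration of the cross-modality idea, the sketch below shows a minimal pix2pix-style encoder-decoder generator in PyTorch that translates a tactile-style image into a visual-style image. The architecture, layer sizes, and tensor shapes are illustrative assumptions, not the network used in the paper.

import torch
import torch.nn as nn

# Minimal sketch of a pix2pix-style generator for cross-modality
# translation (e.g. tactile -> visual); layer sizes are assumptions,
# not the architecture from the paper.
class CrossModalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

tactile = torch.randn(1, 3, 256, 256)    # dummy tactile image
visual = CrossModalGenerator()(tactile)  # translated visual-style image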
MagicTac
3D Multi-Layer Grid-Based Tactile Sensor
This paper presents MagicTac, a high-resolution grid-based tactile sensor for robotic contact perception. Its 3D multi-layer design, inspired by the Magic Cube structure, improves spatial resolution, while multi-material additive manufacturing enables low-cost, repeatable, and assembly-friendly fabrication. Experiments on tactile reconstruction using deformation fields and optical flow show that MagicTac captures fine textures and dynamic contact information effectively. The sensor can be manufactured for as little as £4.76 in 24.6 minutes.
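One common way to capture the dynamic contact information mentioned above is dense optical flow between consecutive tactile frames. The sketch below uses OpenCV's Farneback method; the frame file names are placeholders, and this is a generic illustration rather than MagicTac's actual reconstruction pipeline.

import cv2

# Minimal sketch: dense optical flow between two consecutive tactile
# frames as a proxy for dynamic contact motion; file names are
# placeholders, not data from the paper.
prev = cv2.cvtColor(cv2.imread("frame_prev.png"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame_curr.png"), cv2.COLOR_BGR2GRAY)

# Farneback parameters: pyramid scale, levels, window size,
# iterations, polynomial neighbourhood size, Gaussian sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean contact motion (pixels):", magnitude.mean())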
Design and Benchmarking of a Multi-Modality Sensor for
Robotic Manipulation with GAN-Based Cross-Modality Interpretation
Accepted by IEEE Transactions on Robotics



Poster Presentation at 2024 ICRA

MagicTac: A Novel High-Resolution 3D Multi-layer Grid-Based Tactile Sensor
Wen Fan, Haoran Li, Dandan Zhang
Paper Link: https://ieeexplore.ieee.org/document/10610615
Accepted by 2024 ICRA


Tac-VGNN: A Voronoi Graph Neural Network for Pose-Based Tactile Servoing
Wen Fan, Max Yang, Yifan Xing, Nathan F. Lepora, Dandan Zhang*
Paper Link: https://arxiv-export3.library.cornell.edu/abs/2303.02708
Accepted by 2023 ICRA
Tactile pose estimation and tactile servoing are fundamental capabilities of robot touch. Reliable and precise pose estimation can be obtained by applying deep learning models to high-resolution optical tactile sensors. Motivated by the recent successes of Graph Neural Networks (GNNs) and the effectiveness of Voronoi features, we developed the Tactile Voronoi Graph Neural Network (Tac-VGNN) to achieve reliable pose-based tactile servoing with a biomimetic optical tactile sensor (the TacTip). The GNN is well suited to modeling the distribution of shear motions across the tactile markers, while the Voronoi diagram supplements this with area-based tactile features related to contact depth. Experimental results showed that Tac-VGNN improves data interpretability during graph generation and trains significantly more efficiently than CNN-based methods. It also improved pose estimation accuracy along the vertical depth axis by 28.57% over a vanilla GNN without Voronoi features, and achieved better performance on real surface-following tasks with smoother robot control trajectories.
For more project details, please visit our website: https://sites.google.com/view/tac-vgnn/home
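To make the Voronoi-graph construction concrete, the sketch below builds a graph over 2D tactile marker positions: edges come from the Delaunay triangulation (the dual of the Voronoi diagram), and each marker's Voronoi cell area serves as the area-based node feature tied to contact depth. The random marker layout and SciPy-based construction are illustrative assumptions, not the exact Tac-VGNN implementation.

import numpy as np
from scipy.spatial import ConvexHull, Delaunay, Voronoi

# Minimal sketch: Voronoi-augmented graph from 2D tactile marker
# positions; the random layout stands in for real TacTip markers.
markers = np.random.rand(30, 2)

# Graph edges from the Delaunay triangulation, the dual of the
# Voronoi diagram.
tri = Delaunay(markers)
edges = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

# Voronoi cell area as a per-node feature; unbounded boundary
# cells are left as NaN.
vor = Voronoi(markers)
areas = np.full(len(markers), np.nan)
for point_idx, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if region and -1 not in region:
        areas[point_idx] = ConvexHull(vor.vertices[region]).volume  # area in 2D

print(len(edges), "edges,", np.nanmean(areas), "mean bounded cell area")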
