Tactile perception is one of the fundamental human senses, one we rely on at almost every instant. Through vision alone, we can anticipate how an object will feel before touching it, even when the object is novel. The goal of this project is to predict the tactile response that would be experienced if a given grasp were performed on an object. This is achieved by extracting features from the visual data and the tactile data and then learning the mapping between those features.
We use an Intel RealSense D435i depth camera to capture images of the objects and a Seed RH8D Hand equipped with tactile sensors to capture 15-dimensional tactile data in real time. The main objective is to generalize well to novel objects that share feature representations with previously seen objects.
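The visual-to-tactile mapping described above can be sketched in a minimal form. The snippet below is illustrative only, not the project's actual implementation: it assumes precomputed visual feature vectors (e.g. from an image encoder run on the D435i frames) and learns a linear ridge-regression mapping to the 15-dimensional tactile readings; all dimensions, data, and the `predict_tactile` helper are hypothetical.

```python
import numpy as np

# Hypothetical setup: each grasp example pairs a visual feature vector
# (assumed to come from some image encoder) with a 15-D tactile reading.
rng = np.random.default_rng(0)
n_samples, n_visual_feats, n_tactile = 200, 64, 15

X = rng.standard_normal((n_samples, n_visual_feats))       # visual features
W_true = rng.standard_normal((n_visual_feats, n_tactile))  # unknown mapping
Y = X @ W_true + 0.01 * rng.standard_normal((n_samples, n_tactile))

# Learn the mapping with ridge regression: W = (X^T X + lam I)^{-1} X^T Y
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(n_visual_feats), X.T @ Y)

def predict_tactile(visual_feats: np.ndarray) -> np.ndarray:
    """Predict the 15-dimensional tactile response from visual features."""
    return visual_feats @ W

pred = predict_tactile(X)  # each row is one predicted 15-D tactile reading
```

In practice a nonlinear model (e.g. a deep network, as in the referenced work) would replace the linear map, but the input/output structure, visual features in, a 15-dimensional tactile vector out, stays the same.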
 B. S. Zapata-Impata, P. Gil, Y. Mezouar and F. Torres, “Generation of Tactile Data From 3D Vision and Target Robotic Grasps,” in IEEE Transactions on Haptics, vol. 14, no. 1, pp. 57-67, 1 Jan.-March 2021, doi: 10.1109/TOH.2020.3011899.
 Z. Abderrahmane, G. Ganesh, A. Crosnier and A. Cherubini, “A Deep Learning Framework for Tactile Recognition of Known as Well as Novel Objects,” in IEEE Transactions on Industrial Informatics, vol. 16, no. 1, pp. 423-432, Jan. 2020, doi: 10.1109/TII.2019.2898264.