BiGS: Biotac Grasp Stability Dataset


Autonomous grasping of unknown objects is a fundamental requirement for robots performing manipulation tasks in real-world environments. Even though there has been substantial progress in the area of grasping, it is still considered an open challenge, and even state-of-the-art grasping methods may result in failures [1]. A reliable prediction of grasp stability helps to avoid such failures and provides an option to re-grasp the object safely. Since the majority of grasping failures happen at the contact points, which are occluded from vision systems, tactile feedback plays a major role in predicting grasp stability.

The human-inspired biomimetic tactile sensor (BioTac) [2] is equipped with a 19-electrode array and a hydro-acoustic sensor surrounded by a silicone skin inflated with an incompressible, conductive liquid. This design provides rich tactile feedback similar to the slowly-adapting and fast-adapting afferents present in human skin [3].

Recent developments in classification algorithms [4] allow us to exploit the potential of large amounts of data from these sensors. Our goal is to provide a publicly accessible grasp-stability dataset collected with BioTac sensors and thus enable further development of algorithms capable of reliable grasp stability prediction.

[1] J. Bohg, A. Morales, T. Asfour and D. Kragic. Data-driven grasp synthesis - a survey. CoRR, abs/1309.2660, 2013.
[2] N. Wettels, V.J. Santos, R.S. Johansson and G.E. Loeb. Biomimetic tactile sensor array. Advanced Robotics, 22(8):829–849, 2008.
[3] R.S. Johansson and J.R. Flanagan. Coding and use of tactile signals from the fingertips in object manipulation tasks. Nature Reviews Neuroscience, 10(5):345–359, April 2009.
[4] M. Madry, L. Bo, D. Kragic and D. Fox. ST-HMP: Unsupervised spatio-temporal feature learning for tactile data. In Robotics and Automation (ICRA), 2014 IEEE International Conference on, pages 2262–2269, May 2014.

To get the data, follow these steps:

  1. Fill out the registration form below
  2. You will receive an email with a link to the dataset and instructions on how to use it
  3. Enjoy the dataset!

The dataset consists of sensory and robot state information, described in more detail below.




All the data was gathered on a haptically-enabled robot with the Barrett arm/hand system whose three fingers are equipped with novel biomimetic tactile sensors (BioTacs). Each BioTac consists of a rigid core housing an array of 19 electrodes surrounded by an elastic skin. The skin is inflated with an incompressible and conductive liquid.

The BioTac provides three complementary sensory modalities: force, pressure and temperature. When the skin is in contact with an object, the liquid is displaced, resulting in distributed impedance changes across the electrode array on the surface of the rigid core. The impedance of each electrode tends to be dominated by the thickness of the liquid between the electrode and the immediately overlying skin. Slip-related micro-vibrations in the skin propagate through the fluid and are detected as AC signals by the hydro-acoustic pressure sensor. Temperature and heat flow are transduced by a thermistor near the surface of the rigid core.


In order to apply feature extraction methods from computer vision, as described in the next section, the BioTac electrode readings have to be represented as tactile images. To form such a 2D tactile image, the BioTac electrode values are laid out according to their spatial arrangement on the sensor as depicted below.


The schematic of the electrode arrangements on the BioTac sensor (left). Tactile image used for computing features using computer vision techniques (right). The X values are the reference electrodes. The 19 BioTac electrodes are measured relative to these 4 reference electrodes. V1 and V2 are created by taking an average response of the neighboring electrodes: V1 = avg(E17, E18, E12, E2, E13, E3) and V2 = avg(E17, E18, E15, E5, E13, E3).
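
As a rough illustration of this construction, the Python sketch below computes the two virtual electrodes V1 and V2 from the caption above and places all values into a 2D array. The (row, col) grid coordinates used here are hypothetical placeholders, not the actual arrangement shown in the schematic; the real layout should be taken from the schematic that accompanies the dataset.

```python
import numpy as np

# Minimal sketch of turning the 19 BioTac electrode readings into a 2D
# tactile image. The V1/V2 virtual electrodes follow the formulas in the
# caption above; the grid coordinates below are hypothetical placeholders.

ELECTRODE_GRID = {
    'E1': (0, 0), 'E2': (0, 1), 'E3': (0, 2),  # placeholder coordinates
    'V1': (1, 1), 'V2': (1, 2),
    # ... remaining electrodes would be placed according to the schematic
}

def tactile_image(readings, shape=(4, 6)):
    """readings: dict {'E1': value, ..., 'E19': value} of electrode values."""
    r = dict(readings)
    # Virtual electrodes average the neighbouring electrodes (see caption).
    r['V1'] = np.mean([r[k] for k in ('E17', 'E18', 'E12', 'E2', 'E13', 'E3')])
    r['V2'] = np.mean([r[k] for k in ('E17', 'E18', 'E15', 'E5', 'E13', 'E3')])

    img = np.zeros(shape)
    for name, (row, col) in ELECTRODE_GRID.items():
        img[row, col] = r[name]
    return img

# Example usage with dummy readings for all 19 electrodes.
dummy = {f'E{i}': float(i) for i in range(1, 20)}
print(tactile_image(dummy))
```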




So far, the objects in the database are a cylindrical box of wipes, a cube-shaped box of candy and a ball.

For the experiments, we use objects whose positions are known in advance. In addition, we introduce a bowl that is firmly attached to the table. The bowl brings the object back upright if it falls out of the gripper during the extensive shaking motions performed later in the experiment. This simple modification enables us to fully automate the learning process and let the robot run for more than 20 hours, collecting data for grasp stability prediction without human intervention.




The experiment proceeds as follows. The robot reaches for the object to perform a top grasp, which we refer to as the nominal grasp. In order to generate more variety in the data collected for the grasp stability predictor, we generate subsequent grasps by adding white noise to the nominal grasp; we refer to these as random grasps. After approaching and grasping the object using the force grip controller described in [5], the robot picks the object up and performs a range of extensive shaking motions in all directions to test whether the grasp is stable. If the object is still in the hand after the shaking motions, we consider the grasp successful. The wrist-mounted force-torque sensor is used to determine whether the object is still in the hand at the end of the experiment.
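
As a rough illustration of this sampling step, the sketch below perturbs a nominal grasp pose with zero-mean Gaussian (white) noise. The 6-DOF pose representation, the nominal values and the noise standard deviations are illustrative assumptions, not the parameters used to collect the dataset.

```python
import numpy as np

# Sketch of deriving random grasps from the nominal top grasp by adding
# white (Gaussian) noise. Pose representation (x, y, z, roll, pitch, yaw)
# and noise magnitudes are hypothetical.

rng = np.random.default_rng(seed=0)

nominal_grasp = np.array([0.50, 0.00, 0.30, 0.0, 0.0, 0.0])  # hypothetical pose
noise_std = np.array([0.01, 0.01, 0.005, 0.05, 0.05, 0.05])  # metres / radians

def sample_random_grasp():
    """Perturb the nominal grasp pose with zero-mean Gaussian noise."""
    return nominal_grasp + rng.normal(0.0, noise_std)

random_grasps = [sample_random_grasp() for _ in range(10)]
```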

For the cylindrical object we collected 1000 grasps, of which 54% succeeded and 46% failed. For the box object we collected 500 grasps, of which 31% succeeded and 69% failed. For the ball we collected 500 grasps, of which 52.6% succeeded and 47.4% failed.
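
As a quick sanity check, the per-object figures above can be aggregated as follows (simple arithmetic on the reported percentages, rounded to whole grasps):

```python
# Aggregate the per-object numbers reported above.
objects = {
    'cylinder': (1000, 0.54),   # (grasps collected, success rate)
    'box':      (500,  0.31),
    'ball':     (500,  0.526),
}

total = sum(n for n, _ in objects.values())
successes = sum(round(n * rate) for n, rate in objects.values())
print(f"{total} grasps total, {successes} successes "
      f"({successes / total:.1%} overall success rate)")
# -> 2000 grasps total, 958 successes (47.9% overall success rate)
```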

[5] Z. Su, K. Hausman, Y. Chebotar, A. Molchanov, G.E. Loeb, G.S. Sukhatme and S. Schaal. Force estimation and slip detection/classification for grip control using a biomimetic tactile sensor. In Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on, pages 297–303, 2015.

Yevgen Chebotar
Karol Hausman
Zhe Su
Artem Molchanov
Oliver Kroemer
Should you have any questions, please contact the authors via email.