Abstract

Additionally, we show the advantages of our new hardware design and improved calibration methods for accurate depth-map generation compared to the alternative lighting schemes commonly used in previous camera-based tactile sensors. With these advancements, we make tactile sensing more accessible to roboticists, giving them the flexibility to easily customize, fabricate, and calibrate camera-based tactile sensors to best fit the needs of their robotic systems.
Our Approach
Sensor Exploded View

Rainbow Illumination
We introduce a novel rainbow illumination scheme that uses a shiny, semi-specular coating to produce the gradual rainbow color gradient needed for photometric stereo techniques. This illumination method further simplifies our fabrication process without sacrificing the sensor's ability to provide depth reconstructions of surface deformations.
The rainbow illumination approach broadens the shape and size customizability introduced by GelSight360, letting us build a wider variety of sensors without the need for precise illumination or color tuning.
This simplified ray-casting visualization demonstrates how light rays from two LEDs are refracted and reflected through the sensor's different materials.
When an object is pressed into the soft elastomer, the light ray is reflected into the camera, giving us an RGB intensity value at that pixel. Using the color gradient and light intensities around the circumference of the sensor, a neural network can map the RGB images to surface normals.
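To make that last step concrete, here is a minimal sketch of such a pixel-wise network in PyTorch. The five-dimensional input (RGB plus pixel coordinates) and the layer widths are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RGBToNormalMLP(nn.Module):
    """Pixel-wise MLP: (R, G, B, u, v) -> unit surface normal (nx, ny, nz).

    Layer widths and the 5-D input are illustrative assumptions, not the
    paper's exact architecture.
    """
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n = self.net(x)
        # Normalize so each output is a valid unit normal.
        return n / n.norm(dim=-1, keepdim=True).clamp_min(1e-8)

# Example: predict normals for a batch of pixels.
pixels = torch.rand(1024, 5)            # rows of (R, G, B, u, v) in [0, 1]
normals = RGBToNormalMLP()(pixels)      # (1024, 3) unit normals
```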
Rainbow LED Circuitry

The rainbow illumination strategy lets us imitate the rainbow-like gradients seen in past Lambertian GelSight sensors.
This is done by packing as many RGB LEDs as will fit onto the PCB. Each discrete LED is assigned a specific hue, saturation, and value, so that with enough LEDs we begin to mimic a continuous rainbow. Depending on the shape of the board, we use different LED package sizes in the design; in general, anywhere from 21 to 28 LEDs fit on each board.
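As a rough illustration of this color assignment, the sketch below spaces LEDs evenly around a circular board and assigns each one a hue sampled from the rainbow; the LED count and the fixed saturation and value are assumptions for illustration.

```python
import colorsys
import math

def rainbow_led_colors(n_leds: int = 24, saturation: float = 1.0, value: float = 1.0):
    """Assign each discrete LED an (R, G, B) color so that, around the
    board's circumference, the LEDs approximate a continuous rainbow.

    n_leds, saturation, and value are illustrative; boards in the paper
    hold roughly 21-28 LEDs depending on their shape.
    """
    colors = []
    for i in range(n_leds):
        hue = i / n_leds                  # sweep hue once around the ring
        r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
        angle = 2 * math.pi * i / n_leds  # LED position on the circumference
        colors.append((angle, (int(255 * r), int(255 * g), int(255 * b))))
    return colors

for angle, rgb in rainbow_led_colors(24):
    print(f"theta={angle:.2f} rad -> RGB {rgb}")
```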
Additionally, we designed a custom LED controller board that houses an Adafruit Trinket 5V Pro. It is small enough to fit at the base of an end-effector or in the palm of a robotic hand and ensures signal integrity for the sensors. The Trinket board has enough current capacity to run up to five LED boards off an external power supply, which is important when outfitting a robotic hand with multiple sensors.
Example Raw Sensor Outputs
Manufacturing Process
Both the rigid epoxy shell and soft silicone elastomer are produced in-house. The process takes ~2 days, including curing time.
Alternate Geometries


Different shapes and sizes of sensors are now possible without the need to redesign optical schemes or fine-tune LED colors for curved configurations. The RGB LEDs can simply be arranged on a PCB around the circumference of the sensor footprint, as shown above.

Some of the alternate geometries were designed to mimic the dimensions of SynTouch's BioTac sensors. Unlike with GelSight360, the rainbow illumination scheme even let us shrink the sensors down to ~20 mm in diameter (about the diameter of a human index finger).
Depth Reconstruction
Method
The sensor is first calibrated by using a CNC machine to collect about 5,000 points across the sensor's surface. Using the camera's intrinsic and extrinsic matrices, we simulate the corresponding normal maps and train an MLP to produce the surface normals of the sensor surface.
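A minimal training sketch under assumed data shapes, mirroring the pixel-wise network sketched earlier: `calib_inputs` holds (R, G, B, u, v) rows from the ~5,000 probed points and `calib_normals` the corresponding simulated unit normals; the network size, loss, and optimizer are illustrative choices, not the paper's exact setup.

```python
import torch
import torch.nn as nn

# Assumed calibration data: ~5,000 CNC-probed points, each contributing
# a (R, G, B, u, v) input row and a simulated ground-truth unit normal.
calib_inputs = torch.rand(5000, 5)                 # placeholder data
calib_normals = torch.randn(5000, 3)
calib_normals = calib_normals / calib_normals.norm(dim=-1, keepdim=True)

model = nn.Sequential(                             # illustrative MLP
    nn.Linear(5, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):                           # epoch count is illustrative
    opt.zero_grad()
    pred = model(calib_inputs)
    pred = pred / pred.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    # Cosine loss: push predicted normals toward the ground-truth normals.
    loss = (1.0 - (pred * calib_normals).sum(dim=-1)).mean()
    loss.backward()
    opt.step()
```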
To reconstruct the surface deformation, the RGB values and pixel coordinates of the contact area are fed into the network, producing the surface gradients. These gradients are then integrated using the Bilateral Normal Integration method to produce a depth map and point cloud.
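The paper integrates the gradients with Bilateral Normal Integration; as a simpler stand-in, the sketch below uses the classic Frankot-Chellappa Fourier-domain integrator to turn gradient maps p = dz/dx and q = dz/dy into a depth map.

```python
import numpy as np

def frankot_chellappa(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    """Integrate surface gradients p = dz/dx, q = dz/dy into a depth map.

    This is the classic Fourier-domain least-squares integrator, used here
    as a simple stand-in for the Bilateral Normal Integration method the
    paper actually employs.
    """
    h, w = p.shape
    wx = np.fft.fftfreq(w) * 2 * np.pi   # angular spatial frequencies, x
    wy = np.fft.fftfreq(h) * 2 * np.pi   # angular spatial frequencies, y
    u, v = np.meshgrid(wx, wy)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                    # avoid division by zero at DC
    # Least-squares solution for Z from the gradient field in Fourier space.
    Z = (-1j * u * np.fft.fft2(p) - 1j * v * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                        # depth is recovered up to a constant
    return np.real(np.fft.ifft2(Z))

# Gradients from predicted normals n = (nx, ny, nz): p = -nx/nz, q = -ny/nz.
normals = np.dstack([np.zeros((64, 64)), np.zeros((64, 64)), np.ones((64, 64))])
p = -normals[..., 0] / normals[..., 2]
q = -normals[..., 1] / normals[..., 2]
depth = frankot_chellappa(p, q)          # (64, 64) depth map
```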

Results

Example tactile signals collected when different objects are pressed at different locations of the sensor surface. Top Row: Objects pressed into the sensor surface. Middle Row: Tactile difference images of the contact regions. Bottom Row: Estimated depth map of the imprinted object in the sensor skin.
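The difference images in the middle row are, in the usual GelSight pipeline, the current frame minus a no-contact reference frame; below is a minimal sketch of that step, with array shapes and display scaling assumed rather than taken from the paper.

```python
import numpy as np

def tactile_difference(frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Difference image: signed change from the no-contact reference frame.

    frame and reference are HxWx3 uint8 RGB arrays; the output is centered
    at 0.5 for display. The exact scaling used in the paper is assumed.
    """
    diff = frame.astype(np.float32) - reference.astype(np.float32)
    return np.clip(diff / 255.0 + 0.5, 0.0, 1.0)
```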