I’m interested in different ways of displaying information to our bodies, and particularly to our skin. So, in my June visits to the Washington DC area and to Switzerland (Zurich and Lausanne), I made a point of trying to see as many people working with tactile and haptic displays as possible. I had the opportunity to try three very different devices, which made me realize just how difficult a problem this is.
The first of the three, shown right, was at Johns Hopkins University in Baltimore, where I met with Steve Hsiao and Takashi Yoshioka, both members of the Mind/Brain Institute there. This stimulator is designed to present varying shapes and textures to the fingertips. If you click on the image, you’ll see that this was a very large and expensive piece of equipment, with individual motors controlling each one of 400 pins. Not exactly portable. Plus, it’s designed to show force images, rather than vibration images, so although you do get some sense of a texture or shape, it doesn’t quite feel real.
Of course, it’s designed for research purposes, to see how we deal with stimuli, not as a practical display. And it’s also relatively new: the demo they did with me was fairly rudimentary. So I’m not criticizing it at all. I’m just showing how much machinery it takes to be able to apply a decent amount of force to a very small area in an arbitrary way.
The second system, right, was also large. Renaud Ott helped me into the Haptic Workstation when I visited him at the EPFL Virtual Reality Laboratory. As you may be able to see from the picture, the system basically consists of three elements. First you put on gloves, which only sense the movement of your hand. Then you put little rings just under the first and second knuckle of each finger. (You’ll have to click to get the bigger version to see these, but you can see one pretty clearly on my left thumb.) These provide the force feedback to the hand itself. Finally, the long arms both sense your position in 3D space and also apply force. The screen shows the world you’re interacting with.
This was interesting: unlike anything I’d tried before. But not wholly satisfying. (Again, that’s not meant as a criticism; it’s a hard problem!) First, the fact that there was no way of putting pressure on the palm or fingertips was a big issue for me when trying to grasp the items on the virtual table. From what I can tell, we have two major ways of grasping things (I’m sure there are more, but these are the most obvious to me). One is to put your flat hand on the object, and when you feel it on your palm you curl your fingers to conform to it. Another is to use the tips of your thumb and forefinger to grab something. With this workstation, neither of these is possible, because you don’t feel the pressure in the right place. So it didn’t feel real.
Another issue was the fact that the display is so soft. It’s not strong enough to push you around, so, for instance, you couldn’t bang your hand on the table. You would just bounce into or through it. This was actually one of the problems the team were working on: trying to find a way so that the difference between what you felt and what you saw didn’t confuse you. (Also, a tiny thing. You’ll see that all the objects on the table are round, except for a box that I’d pushed to the floor. Anyway, they kept rolling off. Annoying!)
I guess the problem here is that the task does not seem suited to the display, an issue that will no doubt come up again later.
I’ll get on to the third display in the next post.
Photo, top: This tactile stimulator developed at Johns Hopkins University has servo motors that control the force exerted by each pin.
Photo, bottom: Using the Haptic Workstation I try, generally unsuccessfully, to pick up objects from the virtual table.
Originally posted on Brains and Machines.