The ShadowSense system uses an ordinary, off-the-shelf USB webcam to capture the shadows produced by hand gestures on a robot's skin, then uses classification algorithms to identify the user's specific interaction from those shadows.
According to the study's lead author, Guy Hoffman, the method offers a more natural way of interacting with a robot, and it does so without relying on large, expensive arrays of sensors.
Touch is a vital channel of communication for nearly all organisms, yet it has been almost entirely absent from human-robot interaction. One reason is that full-body touch sensing has historically required so many sensors that it was impractical to implement. This research finally offers a low-cost alternative.
The researchers hooked the system up to an inflatable robot that had a camera underneath its skin.
They trained AI classification algorithms on shadow images of six gestures: touching with a palm, punching, touching with two hands, hugging, pointing, and not touching.
Depending on the lighting conditions, the system distinguished between the gestures with 87.5% to 96% accuracy.
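The paper does not spell out its classifier here, but the idea of recognizing gestures from shadow images can be illustrated with a minimal sketch. The snippet below is a hypothetical nearest-centroid classifier on synthetic stand-in "shadow" images; the gesture names match the six in the study, while everything else (image size, the patterns, the model choice) is an assumption for illustration only.

```python
import numpy as np

# The six gestures from the study; all other details below are illustrative.
GESTURES = ["palm", "punch", "two_hands", "hug", "point", "no_touch"]

def train_centroids(images, labels):
    """Average the training shadow images per gesture (nearest-centroid model)."""
    return {g: np.mean([img for img, l in zip(images, labels) if l == g], axis=0)
            for g in set(labels)}

def classify(img, centroids):
    """Return the gesture whose mean shadow image is closest to the input."""
    return min(centroids, key=lambda g: np.linalg.norm(img - centroids[g]))

# Synthetic 8x8 "shadow" images: each gesture darkens a different row,
# a crude stand-in for the distinct shadow shape each touch produces.
rng = np.random.default_rng(0)
images, labels = [], []
for i, g in enumerate(GESTURES):
    for _ in range(10):
        img = rng.random((8, 8)) * 0.1   # faint background noise
        img[i, :] = 1.0                  # this gesture's shadow pattern
        images.append(img)
        labels.append(g)

model = train_centroids(images, labels)

# A probe image whose shadow pattern matches the "hug" class.
probe = np.zeros((8, 8))
probe[3, :] = 1.0
print(classify(probe, model))  # → hug
```

A real system would replace the synthetic rows with camera frames and the centroid model with a trained neural network, but the pipeline shape (capture shadow, compare against learned gesture representations, output a label) is the same.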
This sort of research is paving the way for mobile guide robots that respond to different gestures, such as turning to face a person when it detects a poke, or backing away when it feels a tap on the back.
The technique could also turn the skin of inflatable robots into interactive touch screens, making home-assistant droids more privacy-friendly.
“If the robot can only see you in the form of your shadow, it can detect what you’re doing without taking high-fidelity images of your appearance. That gives you a physical filter and protection, and provides psychological comfort,” Hoffman said.
If you are interested in the full paper, you can check it out here.