We’ve long been promised that we’ll be able to control our smart devices with gesture commands, and it is becoming a reality: you can wiggle a finger to turn up the volume in BMW’s infotainment system, wake your smart bracelet by turning your wrist, or silence your phone by flipping it over. Yet people still don’t trust these inputs much.
While gesture input is growing in popularity, users remain suspicious of how computers interpret their gestures. Mostly that is because gesture commands often fail: computers get confused about what we actually want them to do. Our gestures are never entirely precise, and the machine just shrugs its shoulders – “I don’t know what you want from me, man”.
And what do we do then? We try again, because we’ve been taught that giving up is for losers. Now scientists from the University of Waterloo have developed a strategy that harnesses this inclination to retry in order to ease the frustration of gesture control. The basic principle is to let the machine interpret repeated gestures together: most people repeat a gesture just as imprecisely the second time, but reading both the first and the second attempt and picking the command they are jointly closest to makes for quicker, more reliable recognition.
The scientists looked at free-space gesture input and assessed the potential of the bi-level thresholding strategy in three stages. The first stage showed that bi-level thresholding actually works; the second revealed that users perceived it as more accurate; the third showed that it boosts accuracy even in free-space gesture input.
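To make the idea concrete, here is a minimal sketch of bi-level thresholding. The gesture "templates", threshold values, and cosine-similarity matching are all illustrative assumptions, not details from the Waterloo study: a confident first attempt is accepted immediately, while an ambiguous one is pooled with the user's repeat before classifying.

```python
import math

# Hypothetical gesture templates: simplified feature vectors standing in
# for real gesture models. All names and threshold values below are
# illustrative assumptions, not taken from the study.
TEMPLATES = {
    "volume_up":   (1.0, 0.0, 0.0),
    "volume_down": (-1.0, 0.0, 0.0),
    "mute":        (0.0, 1.0, 0.0),
}

HIGH = 0.90  # accept a single attempt above this similarity
LOW = 0.60   # below this, reject the attempt outright


def similarity(a, b):
    """Cosine similarity between a gesture sample and a template."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))


def classify(sample):
    """Return (best_label, best_score) for one gesture sample."""
    scores = {label: similarity(sample, t) for label, t in TEMPLATES.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]


def bilevel_recognize(first, second=None):
    """Bi-level thresholding: accept a confident first attempt;
    if it lands in the ambiguous band between LOW and HIGH, pool it
    with the repeated attempt and classify the combined sample."""
    label, score = classify(first)
    if score >= HIGH:
        return label                     # confident on the first try
    if score < LOW or second is None:
        return None                      # reject, or wait for a repeat
    pooled = tuple((x + y) / 2.0 for x, y in zip(first, second))
    label, score = classify(pooled)
    return label if score >= LOW else None
```

In this toy setup an imprecise attempt like `(0.8, 0.4, 0.0)` is ambiguous on its own, but pooled with an equally imprecise repeat it resolves to `"volume_up"` without asking the user to start over.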
Keiko Katsuragawa, one of the authors of the study, said: “Our studies demonstrated that bi-level thresholding has the potential for improving gesture recognition. We also found that when users give commands that are not recognized, they view the first instance of failure as relatively minor and the person will simply try again, but persistent failure is what really frustrates users”.
Users are people, and people are not perfect. We cannot reproduce the exact gestures computers expect from us, which results in a tonne of frustration that no one can really do much about. Hopefully, the bi-level thresholding strategy will improve gesture control and reduce user dissatisfaction with technology.
Source: University of Waterloo