Deep in the basement of MIT’s Building 3, a two-legged robot named HERMES is wreaking controlled havoc: punching through drywall, smashing soda cans, kicking over trash buckets, and karate-chopping boards in half. Its actions, however, are not its own.
Just a few feet away, PhD student Joao Ramos stands on a platform, wearing an exoskeleton of wires and motors. Ramos’ every move is translated instantly to HERMES, much like a puppeteer controlling his marionette. As Ramos mimes punching through a wall, the robot does the same. When the robot’s fist hits the wall, Ramos feels a jolt at his waist. By reflex, he leans back against the jolt, causing the robot to rock back, effectively balancing the robot against the force of its punch.
The exercises are meant to demonstrate the robot’s unique balance-feedback interface. Without this interface, while the robot may successfully punch through a wall, it would also fall headlong into that wall. The interface allows a human to remotely feel the robot’s shifting weight, and quickly adjust the robot’s balance by shifting his own weight. As a result, the robot can carry out momentum-driven tasks — like punching through walls, or swinging a bat — while maintaining its balance.
Ramos says the interface takes advantage of a human’s split-second reflexes, which give the robot much faster reaction times than robots that adjust their balance based on visual feedback from onboard cameras.
“The processing of images is typically very slow, so a robot has difficulty reacting in time,” says Ramos, of MIT’s Department of Mechanical Engineering. “Instead, we’d like to use the human’s natural reflexes and coordination. An example is walking, which is just a process of falling and catching yourself. That’s something that feels effortless to us, but it’s challenging to program a robot to do it both dynamically and efficiently. We want to explore how humans can take over complex actions for the robot.”
Ultimately, Ramos and his colleagues envision deploying HERMES to a disaster site, where the robot would explore the area, guided by a human operator from a remote location.
“We’d eventually have someone wearing a full-body suit and goggles, so he can feel and see everything the robot does, and vice versa,” Ramos says. “We plan to have the robot walk as a quadruped, then stand up on two feet to do difficult manipulation tasks such as open a door or clear an obstacle.”
Ramos and his colleagues, including PhD student Albert Wang and Sangbae Kim, the Esther and Harold E. Edgerton Center Career Development Assistant Professor of Mechanical Engineering, will present a paper on the interface at the IEEE/RSJ International Conference on Intelligent Robots and Systems in September.
Balance and feedback
To give the human operator a sense of the robot’s balance, the team first looked for a way to measure the robot’s center of pressure, or weight distribution, which indicates its balance and stability. The researchers worked with HERMES, a 100-pound biped robot that the team designed, along with the interface, for disaster response. They outfitted the robot’s feet with load sensors that measure the force exerted by each foot on the ground.
From the measured forces, the researchers calculated the robot’s center of pressure, the point toward which it was shifting its weight. They then mapped out a polygonal area bounded by the robot’s feet. They determined that if the robot’s center of pressure strayed toward the edges of this support polygon, the robot was in danger of falling.
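The idea can be sketched in a few lines of code. This is not the team’s actual software; the sensor layout, foot positions, and polygon handling below are illustrative assumptions. The first function computes the force-weighted average of the sensor positions (the center of pressure); the second measures how far that point sits from the nearest edge of a convex support polygon.

```python
def center_of_pressure(contacts):
    """contacts: list of (x, y, normal_force) tuples, one per loaded
    foot sensor. Returns the force-weighted average position, i.e. the
    center of pressure (CoP)."""
    total = sum(f for _, _, f in contacts)
    if total <= 0:
        raise ValueError("no ground contact - CoP undefined")
    x = sum(px * f for px, _, f in contacts) / total
    y = sum(py * f for _, py, f in contacts) / total
    return x, y

def stability_margin(cop, polygon):
    """Signed distance from the CoP to the nearest edge of a convex
    support polygon (vertices listed counter-clockwise). Small or
    negative values mean the robot is about to tip."""
    margins = []
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        length = (ex * ex + ey * ey) ** 0.5
        # positive cross product => CoP is on the inside of this edge
        cross = ex * (cop[1] - y1) - ey * (cop[0] - x1)
        margins.append(cross / length)
    return min(margins)
```

For two feet each pressing with equal force at symmetric positions, the CoP lands midway between them, and the margin shrinks as the robot leans toward either foot.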
Ramos and Wang then built the balance-feedback interface: a large polygonal platform equipped with motors, and an exoskeleton of metal bars and wires that attaches to a person’s waist — essentially, the human body’s center of mass. With computer software, the researchers translated the robot’s center of pressure to the platform’s motors, which apply comparable force to the exoskeleton, pushing a person back and forth as the robot shifts its weight.
“The interface works by pushing harder on the operator as the robot’s center of pressure approaches the edge of the support polygon,” Wang explains. “If the robot is leaning too far forward, the interface will push the operator in the opposite direction, to convey that the robot is in danger of falling.”
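The feedback law Wang describes can also be sketched: the platform’s push grows as the center of pressure approaches the polygon boundary, and it points opposite to the robot’s lean. The gains and saturation limit below are made-up placeholders, not values from the paper.

```python
def feedback_force(cop, centroid, margin, margin_max, f_max=200.0):
    """cop, centroid: (x, y) positions of the center of pressure and of
    the support polygon's center; margin: current distance from the CoP
    to the nearest polygon edge; margin_max: that distance when the CoP
    is centered. Returns an (fx, fy) force, in newtons, to apply to the
    operator's waist."""
    # urgency ramps from 0 (CoP centered) to 1 (CoP at the boundary)
    urgency = max(0.0, min(1.0, 1.0 - margin / margin_max))
    dx, dy = cop[0] - centroid[0], cop[1] - centroid[1]
    norm = (dx * dx + dy * dy) ** 0.5
    if norm == 0.0:
        return (0.0, 0.0)
    # push opposite to the CoP offset, scaled by urgency
    scale = -f_max * urgency / norm
    return (scale * dx, scale * dy)
```

If the robot leans fully forward (margin near zero), the operator is pushed backward at full force; a centered robot produces no push at all, so the operator is free to focus on the task.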
In experiments to test the interface, Wang repeatedly struck the robot’s torso with a hammer. Ramos, standing on the platform, was unaware of when the hammer would strike. As Wang struck the robot, the platform exerted a similar jolt on Ramos, who reflexively shifted his weight to regain his balance, causing the robot to also catch itself.
The team also tested whether the robot kept its balance while punching through drywall. Ramos, in the exoskeleton, mimed the action, and the robot simultaneously carried it out. The platform pushed forward on Ramos as the robot made contact with the wall. In response, Ramos rocked back on his heels, causing the robot to do the same.
“These experiments show the versatility of the human operator. In one test, the robot unexpectedly got its arm stuck in the wall. But, because the human was in the loop, the operator could arrive at a creative solution which was translated directly to the robot,” Wang says. “Our next goal is to try more complex coordinated movements such as swinging an axe or opening a spring-loaded door. These actions are difficult for many robots. If the robot stands stiff while pushing on a door, it tends to tip over. You have to lean your body weight into it and catch yourself as the door swings open. Because it’s so natural to humans, you can have the human do it.”
Jonathan Hurst, associate professor of mechanical, industrial, and manufacturing engineering at Oregon State University, says the new balance interface is intuitive enough that an operator can use it without thinking.
“This interface likely won’t even distract a person,” says Hurst, who was not involved in the research. “It’s normal to keep your balance while focusing on a task. But perhaps more important than just a way to control a robot in the absence of knowing how to do it autonomously is being able to observe and collect data from the robot. Given hours of data recording the details of human strategies for balance and pose adjustment, I’d be willing to bet they will discover some relatively simple approaches for autonomous strategies.”
Source: MIT, written by Jennifer Chu