When a Soldier trips over a rock, he picks himself up, dusts himself off, and presses on. Bomb-defusing robots, for the moment, are not so good at recovering on their own.
Chad Kessens, a robot manipulation research engineer with the Army Research Laboratory, or ARL, part of the Research, Development and Engineering Command at Aberdeen Proving Ground, or APG, Maryland, is working to ensure that the autonomous vehicles Soldiers use to investigate the inside of a room, or to defuse an improvised explosive device, can turn themselves right side up if they ever get flipped over.
At his lab at APG, Kessens had a robot used for defusing improvised explosive devices, or IEDs, sitting atop a piece of plywood propped up slightly on one edge to create an incline. He flipped the robot onto its back. A nearby researcher initiated a sequence of instructions, and within seconds the machine had flipped itself upright.
His research, he said, will mean less time manipulating the sometimes complex controls of an autonomous vehicle to make it right itself, and fewer situations where a Soldier has to make the tough decision to either leave a robot behind or go into what may be a dangerous area to retrieve it.
Kessens said he embarked on his work after having attended the Army’s Route Reconnaissance and Clearance Course.
“Soldiers take it to learn to use robots for finding improvised explosive devices by the roadside in theater,” he said. “Through my interactions with the Soldiers and the trainers, who had been in theater using these robots, I learned that these robots turn over surprisingly often. And when they do, it can be difficult for the Soldier to return it to its upright state and continue the mission.”
One Soldier, he said, relayed to him a story about exactly the kind of scenario that would demand a robot perform on its own what now requires the intervention of an operator. An autonomous robot had flipped over, and the Soldier found himself spending an inordinate amount of time manipulating the controls trying to recover it.
“After 20 minutes of trying, he couldn’t do it,” Kessens said. “He valued his robot so much that he got out of the safety of the vehicle and went over and saved the robot. And that is exactly the kind of situation that we don’t want to put the Soldier in.”
When Kessens returned home, he looked into the scientific literature on what had already been done with self-righting robots.
“I found several solutions, each for a specific robot,” he said. “But the Army has several types of systems, and new systems will come out. I wanted to be able to develop a general framework for creating a self-righting solution for any robot. That includes tracked robots, legged robots, flying robots, and also very small robots that don’t have a lot of memory or processing power. My work has been aimed at developing a framework that can be applied to any robot. You give me a robot, and I give you a self-righting solution for the robot, assuming it is physically possible.”
Kessens said that many times when a robot flips over in an operational environment, the user – the Soldier – can’t see it, and so has no way of knowing which way the robot is actually sitting on the ground.
“It can be really disorienting when the robot flips over and the camera is staring straight at the sky or the ground, and the operator might not have a good idea of how the robot is configured, which could make it challenging to make the robot return to its upright state,” Kessens said.
So Kessens has developed software that, when coupled with information about how a specific robot is designed, generates a set of instructions the robot can use to flip itself back upright.
The software Kessens has designed does not run on the robot. Rather, the software runs on a separate computer, and develops an array of solutions the robot can use to flip itself upright, based on what orientation it might find itself in. Those solutions are then loaded into the robot, and it takes that set of instructions with it wherever it goes.
“One of the nice things about the framework I’ve been developing is that it takes pre-processed plans and distills them down to something that doesn’t take much memory or processing power,” he said. “It runs before the robot ever hits the field.”
The smallest robots might not have the onboard processing power to calculate their own self-righting solutions on the fly. But with Kessens’ idea, even small robots with limited memory and processing power could carry a set of already-developed self-righting solutions to get themselves back in the game.
When a robot flips over, it can assess its orientation, reference the set of instructions it has for that particular situation, and then use its own flippers, wheels or arms to turn itself upright again and get on with its mission.
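The precompute-then-look-up approach described above can be sketched in a few lines. Everything here is illustrative: the orientation classes, action names, and accelerometer convention are hypothetical stand-ins, not taken from any actual ARL system.

```python
# Offline planner output, keyed by which side the robot is resting on.
# The robot only stores and replays this table; no planning runs onboard.
PLAN_TABLE = {
    "upside_down": ["extend_flippers", "rotate_flippers_180", "retract_flippers"],
    "left_side":   ["swing_arm_right", "brake_arm"],
    "right_side":  ["swing_arm_left", "brake_arm"],
    "upright":     [],  # nothing to do
}

def classify_orientation(gravity_vec):
    """Coarsely classify pose from an accelerometer gravity vector (x, y, z).
    With the robot level and upright, gravity reads roughly (0, 0, -1)."""
    x, y, z = gravity_vec
    if z < -0.7:
        return "upright"
    if z > 0.7:
        return "upside_down"
    return "left_side" if y > 0 else "right_side"

def self_right(gravity_vec):
    """Onboard routine: classify the pose, then look up the stored plan."""
    return PLAN_TABLE[classify_orientation(gravity_vec)]
```

The point of the table is exactly the distillation Kessens describes: all the expensive computation happens before fielding, and the robot carries only a small lookup keyed by how it is lying.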
Kessens’ work is fairly math-intensive. His software is meant to develop solutions for any robot, but to do that, he first needs to give it specific information about the robot: its size and weight, how many arms it has, its wheels and flippers, and how its mass is distributed. If it has a mechanical arm, the software must know how long each segment of that arm is, how much the arm weighs, and whether the arm’s weight sits at the base, near the robot’s body, or out at the end of the arm.
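A robot description of the kind listed above might be modeled as a small data structure. The field names and the moment calculation are hypothetical illustrations of the inputs such a planner would consume, not an actual ARL interface.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Link:
    """One segment of a mechanical arm."""
    length_m: float    # segment length
    mass_kg: float     # segment mass
    com_offset: float  # where the mass centers along the link (0 = base, 1 = tip)

@dataclass
class RobotModel:
    body_mass_kg: float
    body_size_m: Tuple[float, float, float]  # length, width, height
    arm_links: List[Link]                    # arm segments, base to tip
    has_flippers: bool

    def arm_moment(self) -> float:
        """Mass moment of the arm about its base (kg*m). A tip-heavy arm
        produces a larger moment, hence more leverage for self-righting."""
        moment, reach = 0.0, 0.0
        for link in self.arm_links:
            moment += link.mass_kg * (reach + link.com_offset * link.length_m)
            reach += link.length_m
        return moment
```

Comparing `arm_moment()` for two mass distributions shows why the software cares where the weight sits: the same arm mass placed at the tip generates more leverage than the same mass concentrated at the base.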
Each moving part on a particular robot, he said, could potentially be moved or manipulated in a way that helps the robot right itself.
On a robot with an arm, for instance, moving that arm in one direction could create the momentum needed to flip it back over. But that only works if there is enough weight on the end of that arm, if the arm is of the right length, and if the arm is moved quickly enough – and stopped quickly enough.
“If I use a dynamic motion, where I drop the mass quickly and then make it stop suddenly, now we are injecting momentum into the system and we can use the momentum to make the robot right itself,” he said. “It’s a total physics problem.”
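The “drop the mass quickly, then stop it suddenly” trick can be checked with back-of-the-envelope physics. The model below is a deliberately crude sketch under hypothetical assumptions: the arm is treated as a point mass, its angular momentum is assumed to transfer entirely to the robot body when the arm brakes, and the robot tips if the resulting kinetic energy can lift its center of mass over the pivot edge.

```python
G = 9.81  # gravitational acceleration, m/s^2

def can_self_right(arm_mass, arm_radius, arm_speed,
                   body_mass, body_inertia, com_rise):
    """Rough tipping check. arm_speed is the swing rate in rad/s;
    com_rise is how far the center of mass must rise (m) to clear
    the pivot edge. All inputs are illustrative."""
    # Angular momentum of the point-mass arm about the pivot.
    L = arm_mass * arm_radius**2 * arm_speed
    # After the sudden stop, assume the whole robot rotates about the edge.
    total_inertia = body_inertia + arm_mass * arm_radius**2
    omega = L / total_inertia
    kinetic = 0.5 * total_inertia * omega**2
    # Energy needed to raise the combined center of mass over the edge.
    barrier = (body_mass + arm_mass) * G * com_rise
    return kinetic >= barrier
```

The comparison captures the quote’s point: the same arm swung slowly injects too little momentum, while a fast swing followed by a hard stop can clear the energy barrier.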
Within ARL’s Autonomous Systems Division, Kessens said, researchers are working to “transform tools into teammates.”
“We want to take these robots and give them enough autonomy that they act more like a well-trained dog, where the Soldier can send the robot on a mission where it operates on its own for a couple of minutes, where the Soldier doesn’t have to manage every joint motion and every single activity that the robot is doing,” he said.
If robots can be provided with a “higher level of cognitive ability,” he said, then instead of multiple Soldiers needing to deploy and operate and retrieve robots, “maybe we can flip that ratio and have one Soldier command four robots, where each of those robots is doing something, and it acts more like a teammate.”
Kessens said that kind of relationship between a team of Soldiers and the tools they use is “a ways down the line. But self-righting is one technology that is a part of that, one step toward that vision. We want to give Soldiers a robot that has more self-reliance.”