NASA’s Jet Propulsion Laboratory (JPL) sent Chris Yahnker, group supervisor for Extreme Environments Robotics, to MD&M West in Anaheim this year to talk about how NASA’s robots are revolutionizing design and manufacturing.
Yahnker discussed how JPL focuses on designing robotic systems that explore a wide range of environments, such as the deep ocean, outer space, and other planets.
“How do we tackle the really tough problems out there?” he asked. “How do we go to Mars, Saturn, etc.?”
To accomplish these tasks, Yahnker says they have to put a lot of research and design into every single element.
“In order to do this, we need to design really high-reliability systems,” says Yahnker. “It requires a lot of testing, which has enabled us to put robots on other planets.”
Currently, some of the key robotic system technologies JPL uses include machine vision, sensor-processing algorithms, advanced electro-mechanical systems, integrated simulation of landing and mobility, and human-robot interfaces.
Yahnker first homed in on how the team looks at the mobility and manipulation of their robots.
“We have to understand the space around the robot and how to interact with it,” he says. “How do we not put ourselves in a situation where if we hit a rock, we can’t get out of it?”
Because of these concerns, JPL can’t simply place a robot on a planet. The robots it designs must endure very harsh conditions, which requires extensive simulation beforehand.
“We don’t just drop a parachute and put a rover on Mars,” Yahnker says.
Instead, the spacecraft determines the exact spot where the rover will land in real time, and can then autonomously plan the landing so the rover can complete its mission.
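As a rough illustration of that kind of autonomous site selection, the sketch below picks the safest reachable cell from a grid of hazard scores built up during descent. The function, map, and scoring here are invented for illustration; they are not JPL’s actual entry, descent, and landing software.

```python
import numpy as np

def select_landing_site(hazard_map: np.ndarray,
                        reachable: np.ndarray) -> tuple[int, int]:
    """Return the (row, col) of the safest cell the lander can still reach.

    hazard_map: estimated hazard score per terrain cell (lower is safer).
    reachable:  boolean mask of cells still reachable given fuel/divert limits.
    """
    scores = np.where(reachable, hazard_map, np.inf)  # mask unreachable cells
    idx = np.unravel_index(np.argmin(scores), scores.shape)
    return int(idx[0]), int(idx[1])

# Example: a 5x5 hazard map with all cells reachable (illustrative values).
rng = np.random.default_rng(0)
hazards = rng.random((5, 5))
reachable = np.ones((5, 5), dtype=bool)
print(select_landing_site(hazards, reachable))
```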
Deep-sea missions, on the other hand, require less autonomy, he added. Deep-sea robots allow more direct interaction: researchers can take real-time data, see what the robot is seeing in its environment, and select anything it finds that could be interesting or pertinent to the research.
Simulation Importance
To successfully land an actual robot on a planet or in the sea, JPL uses physics-based simulation. This lets the team build a dynamic analysis of what could happen to each robot or spacecraft when it journeys to a specific environment.
“We don’t want the first time we use this hardware to be the first time it comes into contact with Mars,” says Yahnker. “We can look at the terramechanics. Mars is really dry, so the particles stick together. We need to be able to test these things before actually putting them on Mars.”
The simulation is a high-fidelity, physics-based model with hardware-in-the-loop capabilities, including large high-resolution terrain models, contact dynamics, terramechanics, aerial, surface, and sub-surface models, and parametric analysis.
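To make the terramechanics and parametric-analysis point concrete, here is a minimal sweep built on the classic Bekker pressure-sinkage relation, p = (k_c/b + k_phi)·z^n. The soil and wheel parameters below are illustrative assumptions, not JPL’s actual Mars soil models.

```python
# Minimal parametric terramechanics sketch (assumed parameters throughout).

def sinkage(load_pressure: float, b: float, k_c: float,
            k_phi: float, n: float) -> float:
    """Static wheel sinkage z (m) from the Bekker pressure-sinkage relation.

    load_pressure: contact pressure under the wheel (Pa)
    b:             wheel contact width (m)
    k_c, k_phi, n: empirical soil parameters
    """
    return (load_pressure / (k_c / b + k_phi)) ** (1.0 / n)

# Parametric sweep: how does sinkage vary across assumed soil stiffnesses?
b = 0.25  # wheel contact width (m), assumed
for k_phi in (5e5, 1e6, 2e6):  # soil stiffness values, assumed
    z = sinkage(load_pressure=7e3, b=b, k_c=1e4, k_phi=k_phi, n=1.1)
    print(f"k_phi={k_phi:.0e}: sinkage = {z * 100:.1f} cm")
```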
This type of simulation gives the designers and researchers a sense of what the robot will be seeing, doing, and feeling. To do this, they develop models from sensor data in real time, and they are always looking to build more autonomy into their systems.
“The big thing is to give it vision with stereo vision,” adds Yahnker. “That way we can develop a 3D view of the world around it. This enables us to do target tracking, aerial surveillance, object recognition, activity recognition, and differentiate shapes from shadows.”
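The geometry behind that stereo view is straightforward: once two calibrated cameras yield a per-pixel disparity, depth follows from depth = focal length × baseline / disparity. The short sketch below shows that conversion; the camera parameters are assumed for illustration and are not those of any JPL flight system.

```python
import numpy as np

def disparity_to_depth(disparity: np.ndarray,
                       focal_px: float, baseline_m: float) -> np.ndarray:
    """Convert a stereo disparity map (pixels) to a depth map (meters)."""
    depth = np.full_like(disparity, np.inf, dtype=float)
    valid = disparity > 0  # zero disparity means no match / infinite range
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example: 0.3 m baseline, 1000 px focal length (both assumed).
# A feature with 40 px of disparity comes out at 7.5 m away.
d = np.array([[40.0, 0.0], [20.0, 80.0]])
print(disparity_to_depth(d, focal_px=1000.0, baseline_m=0.3))
```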
An interesting aspect of this approach is that most robots have a robotic twin, so JPL can test one robot under similar conditions on Earth before sending the other to a foreign environment. Then, once one of the robots is physically in a place like Mars, the team can “pre-test” commands on the Earth-bound twin before executing them on Mars.
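A hypothetical sketch of that pre-test workflow might look like the following, where a command sequence is checked on an Earth-bound twin before being cleared for uplink. The rover interface, command names, and limits here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Result:
    command: str
    ok: bool
    note: str = ""

class TestbedRover:
    """Stand-in for the Earth-bound twin; checks commands against limits."""
    MAX_DRIVE_M = 50.0  # assumed single-sol drive limit, for illustration

    def execute(self, command: str, arg: float) -> Result:
        if command == "drive" and arg > self.MAX_DRIVE_M:
            return Result(command, False, f"drive of {arg} m exceeds limit")
        return Result(command, True)

def pretest(sequence: list[tuple[str, float]]) -> bool:
    """Run the whole sequence on the twin; pass only if every step succeeds."""
    twin = TestbedRover()
    results = [twin.execute(cmd, arg) for cmd, arg in sequence]
    for r in results:
        print(f"{r.command}: {'OK' if r.ok else 'FAIL ' + r.note}")
    return all(r.ok for r in results)

sequence = [("drive", 12.0), ("image", 0.0), ("drive", 75.0)]
if pretest(sequence):
    print("Sequence cleared for uplink.")
else:
    print("Sequence rejected; revise before sending to Mars.")
```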
In Part 2, Yahnker addresses how a robot’s movement is executed while visiting Mars and delves into the future of robotic systems for JPL, while highlighting a few current projects.