While Tesla is rolling out self-driving car technology to its user base, researchers are working on how to make autonomous systems safer and more reliable. On Friday, we talked to Michael Wagner, Senior Commercialization Specialist at the Robotics Institute at Carnegie Mellon’s School of Computer Science, about the difference between designing for the forefront of consumer technology and designing for highly robust research projects.
Wagner says that a “robot uprising” is unlikely, and extremely preventable, because of the kind of safety and oversight techniques he works with.
“Folks who are saying we have to be concerned about autonomy, they’re absolutely right,” he said. “The good news is there are techniques we can apply.”
Of course, it’s not necessarily easy to apply those techniques.
In his work, Wagner interrogates what machine learning systems actually know once they have learned. Researchers use black-box testing to get an idea of how these systems behave from the outside – not examining the code, just feeding the system inputs and observing what happens. This helps uncover unexpected bugs. For example, a robotic algorithm they tested had a problem where the marker representing the robot itself would disappear; without that awareness of its own position, the safety systems wouldn’t trigger when the robot got near something.
Wagner’s team can then send feedback to the developers, pointing out those unexpected problems. In another example, they found that a self-driving car algorithm had speed limitations when it was moving forward, but none when it was in reverse.
They didn’t have to build a car to test that, though. Wagner’s team tests pieces of software systems in isolation, which is much more efficient than physically building a robot or even creating a complete simulation of the system.
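The general shape of such a check can be sketched quickly. The snippet below is a hypothetical illustration rather than the team’s actual harness: it treats a motion-planning module as an opaque function, feeds it randomized scenarios, and asserts that a safety property – here, the speed limit from the reverse-gear example – holds regardless of gear. The function `plan_motion`, its interface, and the limit value are all assumptions made for the sake of the example.

```python
import random

SPEED_LIMIT_MPS = 2.0  # hypothetical hard speed limit for the system under test

def plan_motion(obstacle_distance_m, gear):
    """Stand-in for the module under test. In a real harness this would be the
    actual planning component, exercised only through its public interface."""
    # Placeholder behavior mirroring the article's example:
    # the forward path clamps speed, the reverse path forgets to.
    speed = min(obstacle_distance_m * 0.5, SPEED_LIMIT_MPS)
    if gear == "reverse":
        speed = obstacle_distance_m * 0.5  # no clamp: the kind of bug black-box testing can surface
    return speed

def test_speed_limit_holds_in_both_gears(trials=10_000):
    """Black-box property test: for random inputs, commanded speed never
    exceeds the limit, regardless of gear."""
    failures = []
    for _ in range(trials):
        distance = random.uniform(0.0, 50.0)
        gear = random.choice(["forward", "reverse"])
        speed = plan_motion(distance, gear)
        if speed > SPEED_LIMIT_MPS:
            failures.append((distance, gear, speed))
    return failures

if __name__ == "__main__":
    bad_cases = test_speed_limit_holds_in_both_gears()
    print(f"{len(bad_cases)} violations found; first few: {bad_cases[:3]}")
```

Because a check like this only exercises the module’s interface, it can run thousands of scenarios in seconds, with no vehicle and no full simulation.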
“I also develop ways to test those machine learning systems and find where the problems in them are rapidly so we can start to understand where the risks are,” Wagner said.
He said he thinks a lot about questions like whether a super-smart AI could deactivate its own “off” switch, but that it’s certainly possible to plan for technical failures – and from an engineering perspective, a robot uprising is just another type of software failure.
The Robotics Institute built a robot for the U.S. Army called the Autonomous Platform Demonstrator (APD), or “Crusher,” which could have been dangerous to its testers. “We worried a lot about losing control of it … So [professor of electrical and computer engineering Philip Koopman] and I spent a lot of effort building this stand-alone component that we were able to prove out.”
Because that component is stand-alone, there is less worry that a failure in the autonomous systems it supervises could compromise the component itself.
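In rough terms, the pattern looks something like the following. This is a minimal sketch of the general idea, not the component Wagner and Koopman built for the APD: a small, independently verifiable gate sits between the autonomy software and the actuators and vetoes any command that violates hard limits, whatever the rest of the system believes. The class names and limit values here are assumptions.

```python
from dataclasses import dataclass

@dataclass
class DriveCommand:
    speed_mps: float      # requested speed
    steering_rad: float   # requested steering angle

class SafetyMonitor:
    """Stand-alone gate between the autonomy stack and the actuators.
    Because it shares no state with the planner, a fault in the autonomy
    software cannot silently disable these checks."""

    MAX_SPEED_MPS = 5.0       # hypothetical hard limit
    MAX_STEERING_RAD = 0.6    # hypothetical hard limit

    def vet(self, cmd: DriveCommand, nearest_obstacle_m: float) -> DriveCommand:
        # Clamp to absolute limits no matter what the planner requested.
        speed = max(-self.MAX_SPEED_MPS, min(cmd.speed_mps, self.MAX_SPEED_MPS))
        steering = max(-self.MAX_STEERING_RAD, min(cmd.steering_rad, self.MAX_STEERING_RAD))
        # Independent proximity check: stop if anything is too close.
        if nearest_obstacle_m < 1.0:
            speed = 0.0
        return DriveCommand(speed, steering)

# Usage: every command passes through the monitor before reaching hardware.
monitor = SafetyMonitor()
safe_cmd = monitor.vet(DriveCommand(speed_mps=12.0, steering_rad=0.1), nearest_obstacle_m=4.2)
print(safe_cmd)  # speed clamped to 5.0
```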
This bolt-on monitor idea solves some problems, but trickier challenges remain, such as making sure a self-driving system expands its parameters for pedestrians to include people in wheelchairs or using canes, not just the typical bipedal silhouette. These things are easier to do in the lab than in a system that receives a lot of upgrades in the field.
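One way to keep that kind of coverage from quietly regressing is a check that runs after every model update shipped to the field. The sketch below is purely illustrative – the detector stand-in, the variant list, and the image format are assumptions – but it shows the shape of such a test: one canonical case per pedestrian variant, with the update failing if any variant is no longer flagged.

```python
REQUIRED_VARIANTS = ["bipedal", "wheelchair", "cane"]

def detect_pedestrian(image):
    """Stand-in for the deployed perception model's pedestrian check.
    A real harness would call the actual model here; this placeholder
    imitates the narrow behavior described in the article."""
    return image.get("silhouette") == "bipedal"

def pedestrian_coverage_regression(test_images):
    """Return every required variant the detector fails to flag as a pedestrian.
    Intended to run after each model update pushed to the fleet."""
    return [v for v in REQUIRED_VARIANTS if not detect_pedestrian(test_images[v])]

if __name__ == "__main__":
    images = {v: {"silhouette": v} for v in REQUIRED_VARIANTS}
    print("missed variants:", pedestrian_coverage_regression(images))
    # -> missed variants: ['wheelchair', 'cane']
```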
Wagner said that he believes both fields – cutting-edge deployment and lab work – are valuable and are bringing important things to human understanding of autonomous machines. However, the two fields operate in what are essentially different social circles.
“The reason why we’re maybe not having the conversation yet is that these two worlds aren’t talking to one another yet, and they have to.”
Getting both circles to talk is “the natural next step,” Wagner said, and the best way to assuage some of the “generalized anxiety” around autonomous systems.