The future of human-robot partnerships could be revolutionized by child’s play—specifically, the play of babies.
A team of researchers led by Dr. Rajesh Rao, a professor of computer science and engineering at the University of Washington, recently published a paper showing how robots can learn much like children—amassing data by watching adults do something, determining the goal of the action and then deciding how to perform it on their own. Rao’s work is sponsored by the Office of Naval Research (ONR).
“This is a major step in designing robots that can learn from watching humans,” said Dr. Micah Clark, a program officer in ONR’s Warfighter Performance Department who oversees Rao’s research. “It could one day result in truly intelligent machines that understand the intent and goals behind certain tasks, and help humans achieve those goals.”
For decades, scientists, writers and filmmakers have envisioned a future where robots make human life safer and easier—doing mundane household chores or helping troops in battle.
Rao believes this type of artificial intelligence might be achieved with inspiration from the most adorable and inquisitive of humans—babies.
“Babies learn about the world around them through play,” said Rao, “grabbing toys, pulling them apart, banging them on the floor or pushing them off tables. This self-exploration helps babies learn the physics of their environments, and how their actions influence objects.”
Rao collaborated with Dr. Andrew Meltzoff, a respected child psychologist and co-director of the Institute for Learning and Brain Sciences at the University of Washington. Meltzoff’s work (not sponsored by ONR) shows that children as young as 18 months can infer the goal of an adult’s actions and develop ways of reaching that goal themselves.
Using data from behavioral tests conducted by Meltzoff involving babies, Rao’s team designed a machine-learning model to allow robots to explore how their actions result in diverse outcomes.
They tested the model in two types of experiments. The first was a “gaze” computer simulation where the robot learned to track the head movements of others to determine where they were looking. The second involved the robot watching humans move toys around on a tabletop, and then being left to play with the toys on its own.
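The gaze experiment can be illustrated with a toy sketch: given a head position and facing direction, the robot picks the object whose direction best matches the gaze. The geometry below is a simplified illustration under assumed names and 2-D coordinates; the actual simulation in the paper is more involved.

```python
import math

# Illustrative gaze-following sketch (not the authors' actual model).
# Assumes gaze_dir is a unit vector and positions are 2-D points.
def follow_gaze(head_pos, gaze_dir, objects):
    """Return the name of the object most closely aligned with gaze_dir."""
    def alignment(obj_pos):
        dx = obj_pos[0] - head_pos[0]
        dy = obj_pos[1] - head_pos[1]
        norm = math.hypot(dx, dy) or 1.0
        # Cosine of the angle between the gaze direction and the
        # direction from the head to the object.
        return (dx * gaze_dir[0] + dy * gaze_dir[1]) / norm

    return max(objects, key=lambda name: alignment(objects[name]))

toys = {"ball": (2.0, 0.1), "block": (0.0, 2.0)}
print(follow_gaze((0.0, 0.0), (1.0, 0.0), toys))  # ball
```

Here the robot "looking along the x-axis" selects the ball, since it lies almost directly along the gaze direction while the block sits off to the side.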
Rao’s team observed several patterns. After trial and error, the robot was able to figure out the consequences of its actions on the toys. It learned, for example, that a particular toy was harder to pick up than push, and that pushing a toy too close to the edge would make it fall.
The robot could observe a human act on a toy, infer the goal of that action, and then achieve the same goal with a different action it considered more reliable. For example, instead of picking up a toy to place it at a particular spot on the table, it could push the toy there. It could even signal for human help when it judged its own actions too unreliable.
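The decision described above—pick the most reliable action for an inferred goal, and ask for help when even the best option looks too unreliable—can be sketched in a few lines. Everything here (action names, success rates, the threshold) is an illustrative assumption, not the team's actual machine-learning model.

```python
# Illustrative reliability-based action selection (assumed names and numbers).
def choose_action(goal, reliabilities, threshold=0.5):
    """Pick the action most likely to achieve `goal`; signal for human
    help if even the best option falls below `threshold`."""
    candidates = reliabilities[goal]  # e.g. {"pick_up_and_place": 0.3, "push": 0.8}
    best = max(candidates, key=candidates.get)
    if candidates[best] < threshold:
        return "ask_human_for_help"
    return best

# Success rates the robot might estimate from its own trial and error:
learned = {
    "move_toy_to_spot": {"pick_up_and_place": 0.3, "push": 0.8},
    "move_toy_to_edge": {"pick_up_and_place": 0.2, "push": 0.4},
}

print(choose_action("move_toy_to_spot", learned))  # push
print(choose_action("move_toy_to_edge", learned))  # ask_human_for_help
```

Under these assumed estimates, the robot pushes rather than grasps when pushing is more likely to succeed, and defers to a human when no action clears the threshold—mirroring the behaviors reported in the experiments.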
“To get a robot to perform a task like picking up a toy, you normally have to code instructions or physically move a robotic limb with a joystick or other controller,” said Rao. “Our research might make it possible for people to eventually train and program robots through demonstration and speech, much like parents teach their children. This would be useful to our military in jobs like disarming explosive devices, fighting fires, transporting heavy equipment or going into combat zones, where there is a premium on teaching robots new skills on the fly.”
Rao and his team plan to scale up their learning model and design more sophisticated robots that can perform more complex tasks. His work is part of ONR’s Science of Autonomy Program.