Researchers at the Korea Advanced Institute of Science and Technology (KAIST), the University of Cambridge, Japan’s National Institute for Information and Communications Technology (NICT), and Google DeepMind have argued that our understanding of how humans make intelligent decisions has now reached a critical point. Robot intelligence can be significantly enhanced by mimicking strategies that the human brain uses when we make decisions in our everyday lives, they said last week.
In our rapidly changing world, both humans and autonomous robots constantly need to learn and adapt to new environments. The difference is that humans can tailor their decisions to each unique situation, whereas robots still rely on predetermined data to make decisions.
Rapid progress has been made in strengthening the physical capability of robots. However, their central control systems, which govern how robots decide what to do at any one time, are still inferior to those of humans. In particular, they often rely on pre-programmed instructions to direct their behavior, and lack the hallmark of human behavior: the flexibility and capacity to learn and adapt quickly.
Applying neuroscience to the robot brain
Applying neuroscience in robotics, Prof. Sang Wan Lee from the Department of Bio and Brain Engineering at KAIST and Prof. Ben Seymour from the University of Cambridge and NICT proposed a case in which robots should be designed based on the principles of the human brain. They argue that robot intelligence can be significantly enhanced by mimicking strategies that the human brain uses during decision-making processes in everyday life.
Importing human-like intelligence into robots has always been difficult without knowing the computational principles for how the human brain makes decisions: in other words, how to translate brain activity into computer code for the robots' "brains."
However, the researchers now argue that, following a series of recent discoveries in computational neuroscience, enough of this code exists to effectively write it into robots. One such example is the human brain's "meta-controller": a mechanism by which the brain decides how to switch between different subsystems to carry out complex tasks.
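The article does not spell out how such a meta-controller would work in software, but the idea can be sketched as an arbiter that tracks how reliably each subsystem has been predicting the world and hands control to the currently more trustworthy one. The class name, the two subsystem labels, and the update rule below are illustrative assumptions, not the researchers' published model.

```python
# Hypothetical sketch of a brain-inspired "meta-controller" that arbitrates
# between two decision subsystems based on how reliable each one's recent
# prediction errors have been. All names and constants are illustrative.

class MetaController:
    def __init__(self, decay=0.9):
        self.decay = decay
        # Running reliability estimate per subsystem (higher = more trusted).
        self.reliability = {"habitual": 0.5, "goal_directed": 0.5}

    def update(self, subsystem, prediction_error):
        # A small |prediction error| means the subsystem modeled the world
        # well, so its reliability drifts upward; a large one drags it down.
        evidence = 1.0 - min(abs(prediction_error), 1.0)
        old = self.reliability[subsystem]
        self.reliability[subsystem] = self.decay * old + (1 - self.decay) * evidence

    def choose(self):
        # Delegate control to whichever subsystem is currently more reliable.
        return max(self.reliability, key=self.reliability.get)

mc = MetaController()
for err in [0.05, 0.1, 0.02]:   # the habitual controller predicts well
    mc.update("habitual", err)
for err in [0.8, 0.9, 0.7]:     # the goal-directed controller predicts poorly
    mc.update("goal_directed", err)

print(mc.choose())  # -> habitual
```

The design choice here mirrors the article's framing: rather than hard-coding which subsystem runs, the switch itself is learned from experience.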
Another example is the human pain system, which allows humans to protect themselves in potentially hazardous environments.
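A robot analogue of this protective role can be sketched as a pain-like interrupt: a damage proxy (for example, motor temperature or force spikes) accumulates into a "pain" value that, past a threshold, pre-empts the current task with a withdrawal behavior. The sensor, baseline, and thresholds below are illustrative assumptions, not anything specified in the article.

```python
# Hypothetical sketch of a pain-like protective signal for a robot.
# Readings above a safe baseline accumulate into a pain value; past a
# threshold, a withdrawal behavior overrides whatever the task wanted.

def pain_level(readings, baseline=40.0, gain=0.1):
    """Accumulate how far each sensor reading exceeds a safe baseline."""
    return sum(gain * max(r - baseline, 0.0) for r in readings)

def select_action(task_action, readings, threshold=1.0):
    # Pain acts as an interrupt: protective withdrawal pre-empts the task.
    if pain_level(readings) > threshold:
        return "withdraw"
    return task_action

print(select_action("grasp", [41, 42, 43]))   # mild readings -> grasp
print(select_action("grasp", [55, 60, 58]))   # hot readings  -> withdraw
```

The point of the sketch is the safety property the article highlights: the protective response does not depend on the task planner anticipating every hazard in advance.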
“Copying the brain’s code for these could greatly enhance the flexibility, efficiency, and safety of robots,” said Prof. Lee.
An interdisciplinary approach
The team argued that this interdisciplinary approach will provide just as many benefits to neuroscience as to robotics. The recent explosion of interest in what lies behind psychiatric disorders such as anxiety, depression, and addiction has given rise to a set of sophisticated theories that are complex and difficult to test without some sort of advanced simulation platform.
“We need a way of modeling the human brain to find how it interacts with the world in real-life to test whether and how different abnormalities in these models give rise to certain disorders,” explained Prof. Seymour. “For instance, if we could reproduce anxiety behavior or obsessive-compulsive disorder in a robot, we could then predict what we need to do to treat it in humans.”
The team expects that producing robot models of different psychiatric disorders, in a similar way to how researchers use animal models now, will become a key future technology in clinical research.
Sympathy for the robot
The team also stated that there may also be other benefits to humans and intelligent robots learning, acting, and behaving in the same way. In future societies in which humans and robots live and work amongst each other, the ability to cooperate and empathize with robots might be much greater if we feel they think like us.
“We might think that having robots with the human traits of being a bit impulsive or overcautious would be a detriment, but these traits are an unavoidable by-product of human-like intelligence,” said Prof. Seymour. “And it turns out that this is helping us to understand human behavior as human.”
Filed Under: AI • machine learning, The Robot Report, Robotics • robotic grippers • end effectors