“The next frontier for the robotics industry is to build machines that think like humans. Scientists have pursued that elusive goal for decades, and they believe they are now just inches away from the finish line.”
That’s what we found in a story published recently in National Defense Magazine. We like to keep our eyes and ears open, but we usually don’t find interesting things in military review publications. This time, though, it’s pretty big stuff. A group of researchers in Malibu, California, and at the University of California, Los Angeles has built a tiny machine that would allow robots to act independently. This sounds very much like Isaac Asimov’s conception of a “positronic” brain, the foundation of artificial intelligence in his popular series of “robot” novels.
Unlike traditional artificial intelligence systems that rely on conventional computer programming, this one “looks and ‘thinks’ like a human brain,” said James K. Gimzewski. He’s the leader of the team at UCLA and a Professor of Chemistry there. The fact that he’s a chemistry professor and not a computer science wonk says something very interesting about the approach being used. Unfortunately, details of exactly how this thing works are a little beyond the scope of this article, but standard computer electronics this isn’t.
Early Steps – Or Lack Thereof
Artificial intelligence has had a rocky history, to say the least. AI researchers have been working on the problem since the 1960s and before. Learning machines were the first step: a computer made entirely of matchboxes and little beads plays the game of Nim, and after playing the game a few times it becomes unbeatable. A small mouse-sized robot called a Terrapin rolls around on the floor, drawing or following lines, controlled by a language called Logo – which in turn became the basis of LEGO Mindstorms, whose robotic devices are now assembled and programmed by children to interact with their environments and respond to stimuli.
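For flavor, here’s a minimal sketch of how such a matchbox-and-beads learner works, written in Python. Everything in it – the class name, the training loop, the particular Nim variant (one pile, take one to three sticks, whoever takes the last stick loses) – is invented for illustration; it’s the general idea, not a reconstruction of any particular historical machine.

```python
import random

class MatchboxLearner:
    def __init__(self, max_take=3):
        self.max_take = max_take
        self.boxes = {}    # pile size -> list of "beads" (candidate moves)
        self.history = []  # (pile, move) pairs chosen during this game

    def choose(self, pile):
        # One "matchbox" per game state, seeded with one bead per legal move.
        box = self.boxes.setdefault(
            pile, list(range(1, min(self.max_take, pile) + 1))
        )
        move = random.choice(box)
        self.history.append((pile, move))
        return move

    def learn(self, won):
        # Add a bead for every move made in a winning game; after a loss,
        # remove one bead per move, but never empty a box entirely.
        for pile, move in self.history:
            box = self.boxes[pile]
            if won:
                box.append(move)
            elif box.count(move) > 1:
                box.remove(move)
        self.history = []

# Train against a random opponent: whoever takes the last stick loses.
learner = MatchboxLearner()
for _ in range(5000):
    pile, learners_turn = 11, True
    while pile > 0:
        if learners_turn:
            take = learner.choose(pile)
        else:
            take = random.randint(1, min(3, pile))
        pile -= take
        if pile == 0:
            # The player who just moved took the last stick and lost.
            learner.learn(won=not learners_turn)
        learners_turn = not learners_turn
```

After a few thousand games, the boxes for losing positions have been drained of bad beads, and the machine plays essentially perfectly – learning without a single line of explicit strategy.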
The search for true artificial intelligence has been defined mainly by discovering what it isn’t. Early attempts included languages like LISP and Prolog, both designed to express responses to finite lists of predefined conditions; both also made it possible to write self-modifying (and therefore potentially unmaintainable) code. At the same time, roboticists were struggling with something simple for us but very hard for robots: walking across a room.
The approaches all boiled down to two basic ones: either try to predict every possible stimulus/response pair in advance, or evaluate progress toward a goal in real time to attain a desired result. The first lacks flexibility; the second lacks speed. This is why, in the past five decades of earnest research in artificial intelligence, no one has yet been able to produce anything resembling human-like reasoning or cognitive function.
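To make that tradeoff concrete, here’s a toy Python sketch – everything in it is invented for illustration – contrasting the two: a reflex agent that looks answers up in a pre-built stimulus/response table, and a deliberative agent that scores its options against a goal at decision time, using a robot in a one-dimensional corridor.

```python
# Approach 1: enumerate stimulus/response pairs in advance.
# Fast, but it only handles the situations someone thought to list.
REFLEX_TABLE = {
    "target_left": "step_left",
    "target_right": "step_right",
    "at_target": "stop",
}

def reflex_agent(stimulus):
    # Anything outside the table is an unhandled situation.
    return REFLEX_TABLE.get(stimulus, "freeze")

# Approach 2: evaluate goal progress in real time.
# Flexible, but every single decision costs a search over actions.
def deliberative_agent(position, target):
    actions = {"step_left": -1, "step_right": +1, "stop": 0}
    # Score each action by how close it would leave us to the goal.
    def badness(action):
        return abs((position + actions[action]) - target)
    return min(actions, key=badness)

print(reflex_agent("target_left"))   # step_left, instantly
print(reflex_agent("target_above"))  # freeze: never enumerated
print(deliberative_agent(3, 7))      # step_right, computed on the fly
```

The reflex agent answers in constant time but falls over the moment the world presents a stimulus its designers didn’t anticipate; the deliberative agent handles anything it can score, but pays for every decision with computation – and real brains seem to do neither, or both.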
Before robots even as sophisticated as DARPA’s LS3 “Big Dog” would be possible, robotics researchers had to get used to the idea that it was okay to cheat.
Sony’s Aibo
Though not the first robot to use this approach, the Sony Aibo solved the walking problem by using a pre-established algorithm for moving the legs to create the walking motion. The walking routine was then modified on the fly according to incoming data from its sensors. As it turns out, this is how most activities undertaken by living creatures happen. Learning to walk is pretty tough, but once learned, it’s a stored algorithm, modified only when needed in order to accomplish a goal. If we had to recompute how to walk every time we did it, we’d never manage. Modern robotics approaches the problem of coordinated motion by learning patterns and using a master control program to decide which ones to use and when to apply them, as the sketch below illustrates. The Aibo was more an expensive toy than anything else, but it did demonstrate that lifelike behavior was possible in a mobile platform.
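Here’s a hedged Python sketch of that pattern: a stored rhythmic gait that is never recomputed from scratch, only nudged each control tick by sensor feedback. The numbers, names, and trot phasing below are all assumptions chosen for illustration – this is the general idea, not Sony’s actual controller.

```python
import math

GAIT_FREQ_HZ = 1.0   # stored gait: one stride per second
STRIDE_DEG = 20.0    # nominal hip swing amplitude
PHASE = [0.0, math.pi, math.pi, 0.0]  # trot: diagonal legs move in phase

def hip_angle(leg, t, tilt_deg):
    """Target hip angle for one leg at time t, corrected by body tilt."""
    # The stored algorithm: a fixed rhythmic trajectory per leg.
    base = STRIDE_DEG * math.sin(2 * math.pi * GAIT_FREQ_HZ * t + PHASE[leg])
    # The on-the-fly modification: lean the stride against measured tilt,
    # so the robot keeps walking instead of replanning from scratch.
    correction = -0.5 * tilt_deg
    return base + correction

# One 10 ms control tick: read a (pretend) tilt sensor, adjust all legs.
tilt = 4.0  # degrees of body tilt, e.g. walking onto a slope
targets = [hip_angle(leg, t=0.37, tilt_deg=tilt) for leg in range(4)]
print(targets)
```

The point of the design is the division of labor: the expensive part (what a walk looks like) is computed once and stored, while the cheap part (small corrections) runs hundreds of times per second.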
DARPA’s “Big Dog”
By 2007, DARPA was beginning to demonstrate some pretty impressive results with its robotics program. Its contests to produce self-driving cars yielded some interesting results, the technology became the core of Google’s self-driving car effort, and in 2012 one of those cars was issued a license plate in Nevada.
But as remarkable as these are, DARPA’s LS3 “Big Dog” is even more so. Designed and built by Boston Dynamics, it’s able to function in extremely rugged terrain: it carries large payloads, walks over fallen trees, climbs hills, stumbles through stream beds, and even follows a human leader. Its brain is still a conventional computer, and yet it’s capable of some extremely lifelike behavior. It can even recover from rolling completely over if it has to, as you’ll see in the video.
Back To Basics – Really Basic
And yet, the most advanced as well. We’re talking about the human brain itself. Here are some clues as to what’s going on: the participants in this project include Malibu-based HRL (formerly Hughes Research Laboratories) and the University of California, Berkeley’s Freeman Laboratory for Nonlinear Neurodynamics. The latter is named after Walter J. Freeman, who has been working for 50 years on a mathematical model of the brain based on electroencephalography data. EEG is the recording of electrical activity in the brain.
So what this means, in plain English, is that Freeman has spent most of his life reverse engineering how the human mind works from the electrical signals it emits – a task once dismissed as impossible, but now accepted as a viable avenue of research. It’s something like reconstructing the sound from your stereo by watching a movie of the ripples the sound makes in a glass of water.
What sets this new device apart from any others is that it has nanoscale interconnected wires that form billions of connections, like a human brain, and it is capable of remembering information, Gimzewski said. Each connection is a synthetic synapse. A synapse is what allows a neuron to pass an electrical or chemical signal to another cell. Because that structure is so complex, most artificial intelligence projects so far have been unable to replicate it.
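The real device is physical nanowire hardware, so any code can only be an analogy – but here’s a toy Python model of the behavior being claimed: a synapse-like connection whose strength changes with use, so that the connection itself stores the memory. Every name and constant below is invented for illustration.

```python
class SyntheticSynapse:
    def __init__(self, weight=0.1, rate=0.05, decay=0.001):
        self.weight = weight  # connection strength ("conductance")
        self.rate = rate      # strengthening from correlated activity
        self.decay = decay    # slow forgetting when the link sits idle

    def step(self, pre, post):
        """Pass a signal through the link and update it in the same step."""
        out = self.weight * pre
        # Hebbian-style rule: activity that fires together wires together.
        self.weight += self.rate * pre * post - self.decay
        self.weight = min(1.0, max(0.0, self.weight))
        return out

syn = SyntheticSynapse()
for _ in range(50):          # repeated paired activity...
    syn.step(pre=1.0, post=1.0)
print(round(syn.weight, 2))  # ...leaves a strong, persistent connection
for _ in range(50):          # idle steps only erode it slowly
    syn.step(pre=0.0, post=0.0)
print(round(syn.weight, 2))
```

Notice what’s absent: there is no separate memory bank and no processor. Transmitting the signal and storing the memory are the same physical event, which is exactly the contrast with conventional computers that Gimzewski draws below.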
A “physical intelligence” device would not require a human controller the way a robot does, said Gimzewski. The applications of this technology for the military would be far reaching, he said. An aircraft, for example, would be able to learn and explore the terrain and work its way through the environment without human intervention, he said. These machines would be able to process information in ways that would be unimaginable with current computers.
Studies of the brain have shown that one of its key traits is self-organization. “That seems to be a prerequisite for autonomous behavior,” he said. “Rather than move information from memory to processor, like conventional computers, this device processes information in a totally new way.” This could represent a revolutionary breakthrough in robotic systems, said Gimzewski.
And he’d be right.
What They Might Be Thinking Of Putting It In
While a synthetic brain that thinks the same way a human does is cool – scary, mind you, but cool nonetheless – it’s not really quintessentially scary until you start talking about putting it into a humanoid body.
Like this one. Boston Dynamics again – except this time they’re making a humanoid called “Atlas”. It’s around six feet tall, weighs a bit over three hundred pounds, and has a reach like an orangutan. It also has stereoscopic vision and LIDAR (a real-time laser scanner that helps it understand the geometry of the environment around it), and it can navigate obstacles that would make your average human think a minute before trying. It has 28 motorized joints and tremendous strength, but so far it’s not strong enough to carry its own power supply. They’re still working on that, and I don’t think anybody has any illusions that they’re not hard at work solving it.
These machines will likely be used as surrogate humans that can go into environments that would kill a flesh-and-bone creature.
Nobody knows what they’re likely to do with this new technology if they can get it working reliably. It’s not likely to be made into a Terminator-style soldier: these machines are too expensive to risk being shot at, let alone blown up, and they’re likely to stay that way for a long time to come.
Could a self-directing robot with an artificial brain achieve self-awareness? And if it did, what would it think about itself? About its creators? What would it do? Science fiction has long posited these questions, and we’ve never derived any answers that gave us that feeling of certainty we’ve been looking for. It would seem that now it’s even more important to figure these answers out, because we may get the chance for practical application sooner rather than later.
– 30 –