
by Gene Turnbow

He’s definitely cute. For a robot, he’s certainly smart. Is he self-aware? That depends on how you define it.

For a very short time, for the purposes of the test, possibly so.

The Nao robot is an experimenter’s platform. At $6,000 to $7,000 at full retail, it’s not exactly an impulse purchase. He is, however, a fully functioning robot in the true sense of the word, i.e., he solves problems on his own according to his programming. What, then, can we infer from this video, which apparently shows one of these little guys demonstrating self-awareness by understanding that the voice he hears speaking is his own?

The test involves three Nao robots. Two have been given “dumbing pills” that render them mute. The third robot has also received a “pill,” but it’s a placebo: it can still speak, though it doesn’t know this at the beginning of the test. The “pill” itself is actually just a tap on the head, and only one of the robots is able to ignore the tap that would otherwise have rendered it mute. A human experimenter then asks the three robots, “Which pill did you receive?”

All three robots attempt to respond, but none has enough information to answer the question. Failing that, each tries to stand up and say, “I don’t know.” Only one can: the one that was given the placebo. Upon hearing its own voice, that robot now has the last piece of information it needs to solve the puzzle. It raises its hand and says, “Sorry, I know now: I was able to prove that I was not given the dumbing pill.”
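The placebo robot’s inference can be sketched in a few lines of code. This is purely illustrative and is not the RAIR Lab’s actual implementation; the function name and structure are my own invention:

```python
# A minimal sketch (not the RAIR Lab's actual code) of the reasoning the
# placebo robot performs in the test described above.

def run_test(robots_muted):
    """robots_muted: one boolean per robot, True if it got the real pill."""
    answers = []
    for muted in robots_muted:
        # Step 1: no robot knows which pill it received, so each attempts
        # to say "I don't know." A muted robot produces only silence.
        heard_own_voice = not muted
        if heard_own_voice:
            # Step 2: hearing its own voice is new evidence. Only a robot
            # that did NOT receive the dumbing pill can still speak, so it
            # can now answer the original question.
            answers.append("Sorry, I know now: I was not given the dumbing pill.")
        else:
            answers.append(None)  # silence
    return answers

print(run_test([True, True, False]))
```

Only the third entry carries an answer, mirroring the one robot that stood up and spoke.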

This little guy has just passed one of the hardest tests for artificial intelligence out there. It’s an updated version of an old logical puzzle in which three men are given colored hats to wear, and none of them can see his own hat. They have all been told that at least one of the hats is blue, and that there may be zero, one, or two white hats. Unbeknownst to them, all are wearing blue hats. The first two men answer “I don’t know,” because each sees two blue hats and cannot rule out his own hat being white. The third man, however, can reason from their answers: if the other two hats he saw were both white, the first man would have known his own hat was blue, so they aren’t both white. And if the third man’s own hat were white, the second man, seeing it, would have used that same fact to conclude that his own hat must be blue. Since the second man didn’t know either, the third man can deduce that his own hat is blue.
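The hat puzzle can be worked mechanically with a possible-worlds argument: start with every hat assignment consistent with the public rule, then eliminate the worlds ruled out by each “I don’t know.” The sketch below is my own illustration of that elimination, not anything from the original experiment:

```python
from itertools import product

# Hats are 'B' (blue) or 'W' (white); the one public rule is that at
# least one hat is blue, so the all-white world is excluded up front.
worlds = [w for w in product("BW", repeat=3) if "B" in w]

def would_know(person, world, candidate_worlds):
    """True if, in `world`, `person` could deduce his own hat color from
    the two hats he can see on the other men."""
    visible = [w for w in candidate_worlds
               if all(w[j] == world[j] for j in range(3) if j != person)]
    return len({w[person] for w in visible}) == 1

# Man 1 says "I don't know": drop every world where he would have known.
worlds = [w for w in worlds if not would_know(0, w, worlds)]
# Man 2 says "I don't know": repeat the elimination on the smaller set.
worlds = [w for w in worlds if not would_know(1, w, worlds)]

# In the actual world (all blue), man 3 can now name his own hat:
actual = ("B", "B", "B")
print(would_know(2, actual, worlds))   # True
print({w[2] for w in worlds})          # {'B'}: every surviving world gives him blue
```

After the two eliminations, every remaining world assigns the third man a blue hat, which is exactly the deduction described above.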

At this point you’re probably drawing the conclusion that a robot can be programmed to solve one specific problem, and that this really doesn’t prove very much about machine consciousness. You’d be right.

“This is a fundamental question that I hope people are increasingly understanding about dangerous machines,” said Selmer Bringsjord, chair of the department of cognitive science at the Rensselaer Polytechnic Institute and one of the test’s administrators. “All the structures and all the processes, informationally speaking, that are associated with performing actions out of malice could be present in the robot.” Bringsjord believes that while machines may never be truly conscious, they may be able to emulate logic and decision-making that would otherwise indicate consciousness so well that the distinction might not matter.

This is one of the fundamental problems with AI research: no matter how clever a computer program you create, no matter how closely it emulates the behavior of an intelligent, self-aware being, it will have limitations that a real conscious mind would not have. Each time you solve a problem, you discover that you have emulated a conscious action, without any way to prove that there was any real conscious thought involved. Researchers in the field of artificial intelligence are playing a game in which the goalposts are constantly moving.

Fascinating as this step forward is, robots are not about to take our thrones as top predator any time soon. It is possible to build machines that can make autonomous decisions, but self-awareness in a machine is still a technological will-o’-the-wisp.

You can exhale now.

For more information on this experiment, visit the Rensselaer AI and Reasoning (RAIR) Lab’s project page.

– 30 –

Gene Turnbow

President of Krypton Media Group, Inc., radio personality and station manager of SCIFI.radio. Part writer, part animator, part musician, part illustrator, part programmer, part entrepreneur – all geek.
