Member: American Society of Journalists and Authors

Latest Book: Telecommuting for Dummies

Featured Article
Thinking Machines? Inside the offbeat world of the 14th annual artificial intelligence conference.

By Minda Zetlin

Day One:

I'm standing on the concrete exhibit floor of the Rhode Island Convention Center. Nearby is a cardboard box with a small door on each side of it. In front of me is a robot, small, round and black, about two feet high, that seems to be rolling along towards the box. Suddenly it stops, pivots to the left and heads off in a completely new direction.
"Where are you going?" I ask it.

"It's going around you," says a roboticist standing behind me. Apparently, the fact that I was at least four feet away doesn't matter.

This robot, I learn later, is Lois, or maybe Clark, one of a two-robot team from Georgia Tech. Another thing I will learn: robots don't see very well.

It's the first day of the American Association for Artificial Intelligence conference. Upstairs, computer experts from all over the world are holding sessions whose very titles strike me as incomprehensible: "Knowledge Representation: Ontologies" and "Constraint Satisfaction Problems: Symmetry," for example. Beyond the very basic idea of reproducing--or at least imitating--what humans experience as "thought," I have no idea what artificial intelligence is.

But I do know where I want to be: down here in the exhibit hall. On one side, some 20 robots from across the country are competing at a variety of simple tasks. On the other side, some of the world's best game players are facing down some of the world's best game-playing software to see which is smarter. Arrayed among them are exhibitors who have brought robots for display. The whole place is a mad jumble of strange creatures, funny flashing lights and odd robot sounds. 

Whatever preconceptions I might have had about the size of a robot quickly go by the wayside: they range from a tiny circular robot, about two inches in diameter, to solid machines big enough for their roboticists to ride. In fact, "Wheelesley," a robotic wheelchair that can find its own way around, is designed for precisely this purpose. The two-inch one turns out to be one of my favorites: it's a learning robot that bumps its way through a maze on the first day of the conference, but will learn to navigate it near-perfectly by the end.

What Color Is That Squiggle Ball?

What Lois or Clark was preparing for when I first encountered him or her was a robot event titled "Find Life on Mars." Here's how it works: an enclosed area of the exhibit floor is meant to represent the Martian surface. Styrofoam Martian "rocks" are piled here and there in the enclosure. In the center is the "spaceship"--the cardboard box I'd seen earlier. 
Arrayed through this environment are brightly colored cubes, balls and Squiggle balls (battery-operated balls that roll around erratically) which respectively represent dead and living Martians. The robots are supposed to navigate through, avoid the rocks, pick up the various Martians and, depending on their color, deliver them back to one of the spaceship doors. Capturing a live Martian confers a huge bonus of extra points. Destroying one, according to contest rules, would violate the "prime directive"--and carries a stiff penalty.

But none of the robots ever actually catches a moving ball: they keep getting into trouble. One has its sensors set too high, so that it looks over the tops of the rocks and crashes into them. Others get Martians caught in their grabbers. At one point, Clark suffers a broken arm. Nevertheless, with Lois still going strong, Georgia Tech will wind up winning this event--proving the value of teamwork, even for robots.

It also proves that robots are not half so well developed in real life as they are in the popular imagination. Obviously, an android like Star Trek's Data is centuries into the future, but even a primitive being like the one in the old Lost In Space ("Danger, Will Robinson!") is way ahead of anything that's been built so far.

The problem, explains Alan Schultz, a roboticist from the Naval Research Laboratory, is the one I noticed when I first arrived: robots don't see too well. "People take eyesight and hearing for granted, but they're incredibly complex," he explains. "Eyeballs and a brain would be great on a robot, but we don't have that. They need to have sensors to interact with the world, and the sensors we have are not that good." He adds that his lab's next project is to build a robot that can understand what he calls "multilevel" communications--a combination of words and gestures, such as "Go over there." Simple enough for any dog to understand, but way over the head of most existing robots.

Backgammon: The Human Gets Lucky?

While robots on one end of the hall struggle to understand the difference between a yellow ball and a blue cube, game-playing programs at the other end are giving the best human players a run for their money. This year's conference is the first to include an event called the Hall of Champions where top-rated human players and the best game software can compete in exhibition games.

The first contest pits backgammon World Cup holder Malcolm Davis against a program called TD-Gammon, created by IBM researcher Gerry Tesauro. The TD-Gammon/Davis match is unusual in two ways. First, it is the only game besides Scrabble where neither human nor machine is heavily favored. And second, TD-Gammon is the only game-playing software here that relies on learning rather than a technology called "Deep Search."

Deep Search refers to a computer's ability to look many moves into the future, predicting the result of each possible move and choosing the best one accordingly. Tic-tac-toe is a good illustration: it is never possible to beat the computer, because it can predict all possible results of every move. When a game can be predicted in this manner from beginning to end, software researchers say it is "solved," and the best result anyone playing the computer can hope for is a draw.
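For readers who want to see what this kind of exhaustive look-ahead actually amounts to, here is a minimal sketch of Deep Search applied to tic-tac-toe. It is an illustrative toy, not code from any program at the conference; all names in it are invented for this example.

```python
# Deep Search on tic-tac-toe: expand every possible continuation to the
# end of the game and score it, so the program can never be beaten.
# A board is a tuple of 9 cells, each 'X', 'O', or ' ' (empty).

LINES = [(0,1,2), (3,4,5), (6,7,8),   # rows
         (0,3,6), (1,4,7), (2,5,8),   # columns
         (0,4,8), (2,4,6)]            # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from X's point of view:
    +1 means X wins, -1 means O wins, 0 means a draw."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                # board full: draw
    results = []
    for m in moves:
        child = board[:m] + (player,) + board[m+1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        results.append((score, m))
    # X picks the highest score, O the lowest
    return max(results) if player == 'X' else min(results)

score, move = minimax((' ',) * 9, 'X')
print(score)  # 0: tic-tac-toe is "solved" -- perfect play always draws
```

Because the whole game tree (a few hundred thousand positions) fits comfortably in memory and time, the search reaches every ending. Backgammon's dice make the equivalent tree astronomically larger, which is exactly the point made below.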

Backgammon is deeply unsuited to Deep Search, because each move depends on an unpredictable roll of the dice, yielding too many possibilities for even the fastest of today's computers. This is why Tesauro, who works with neural net technology that mirrors human learning patterns, was interested in programming it.

He created TD-Gammon without knowing much about backgammon. Instead, he let the software teach itself how to play, by playing against itself. It started, he recalls, from a random strategy where it would only win accidentally, and improved its play over millions and millions of games. 
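The core idea Tesauro used, temporal-difference learning, can be shown on a toy problem: after each move, nudge the current position's estimated value toward the value of the position that follows it. The sketch below learns values for a simple random walk; it is a hedged illustration of the technique only, since TD-Gammon itself used a neural network evaluating backgammon positions.

```python
# Temporal-difference (TD) learning on a toy "game": a random walk over
# states A..E. Falling off the right edge scores 1, the left edge 0.
# Like TD-Gammon, the learner starts knowing nothing and improves its
# value estimates purely from the outcomes of its own play.
import random

random.seed(0)
states = ['A', 'B', 'C', 'D', 'E']
V = {s: 0.5 for s in states}   # initial value estimates: total ignorance
alpha = 0.1                    # learning rate

for _ in range(5000):          # thousands of self-played episodes
    i = 2                      # every episode starts in the middle, at C
    while True:
        j = i + random.choice([-1, 1])
        if j < 0:              # left edge reached: reward 0
            V[states[i]] += alpha * (0 - V[states[i]])
            break
        if j >= len(states):   # right edge reached: reward 1
            V[states[i]] += alpha * (1 - V[states[i]])
            break
        # nonterminal step: move the estimate toward the next state's value
        V[states[i]] += alpha * (V[states[j]] - V[states[i]])
        i = j

print({s: round(V[s], 2) for s in states})
```

After enough episodes the estimates settle near the true winning probabilities (which rise steadily from A to E), even though the learner was never told the rules, only the outcomes.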

Tesauro believes TD-Gammon has slightly better odds of winning, but this match goes to Davis--who graciously credits luck as a factor in his victory. "I've never ever played a computer before," he adds. Asked how the computer played, he replies, "I'd say it was a very solid player with no noticeable weaknesses. Some great players might get scared, they might get mad or they might get embarrassed. With the computer, I couldn't take advantage of any of those things."

Bridge: Beaten from Afar

At the time of the conference, the North American Bridge Championships are taking place in Albuquerque. Jeff Meckstroth and Eric Rodwell, who some believe make up the strongest pair in bridge today, take time out from the event to play the program Goren-in-a-Box over the Internet.

Goren-in-a-Box, or GIB (named for bridge great Charles Goren), is the creation of Matt Ginsberg, founder of the Computational Intelligence Research Laboratory at the University of Oregon, Eugene, and also the organizer of the Hall of Champions. As expected, the program is beaten badly by its distant human opponents. "It's not in the top 100 players in the United States or anything like that," Ginsberg says. "Maybe in the top 5,000."

One big difference between GIB and most expert human players is that the program doesn't signal. During play, the defensive partnership can use cards to signal each other about what their holdings are and which suits to play. The problem, Ginsberg points out, is that these signals can also be understood by the opponent, so that in each instance, the defensive players must weigh whether giving a signal is worthwhile or not. 

GIB would have trouble making this judgment. "I would rather not have GIB do anything badly," he adds, sounding very much the protective father. And that's a feeling he freely admits to. "I'm devastated when GIB does something stupid," he says.

Checkers: Time Runs Out

One program that went into its match heavily favored was the checkers-playing Chinook, created by Jonathan Schaeffer, professor of computing science at the University of Alberta. According to Schaeffer, Chinook has solved checkers from the point where only eight pieces remain on the board, and has earned the right to play for the World Championship (a special World Man-Machine Championship was created to accommodate it). Its opponent at the conference is the current world checkers champion, Ronald "Suki" King from St. George, Barbados.

But if Schaeffer expected an easy win for Chinook, he hadn't counted on technical difficulties. Chinook wins the first game of the two-game match, but because of networking problems, its moves must be relayed via a cumbersome system--and the time this takes is, naturally enough, counted against the computer. In the second game, Chinook seems headed for a win when its time runs out, leaving the match a draw.

"I have to take that computer a lot more seriously than I am taking it now," King comments after the game. Next time we play, I'm going to beat it."

King further claims that checkers is more difficult (though less complicated) than chess, because in checkers, he says, it's all but impossible to recover from a mistake. "In chess, you can move forward, and you can come back," he adds. "In checkers, once you go forward you can't come back, which makes checkers more like life."

Movie with Your Snack?

That evening, conference attendees are invited to a cocktail party with a difference: the robots will serve the hors d'oeuvres. This event, called "Hors D'oeuvres, Anyone?" is one of the four events in which robots will be judged. Humans are each given a token with which to vote for their favorite robot servers.

Personality counts in this event, and so the robots are all dolled up, one with a bow tie, another with a sign that reads "Will demo for food." 

And they speak. "Have an hors d'oeuvre: it's your destiny," suggests Coyote, a robot from the Naval Research Lab. "This looks like the beginning of a beautiful friendship," it adds, for those who partake.

Another robot is set up with both a tray of hors d'oeuvres and a small screen, where clips from various current movies are running--an "in-snack movie," as one roboticist jokes. 
But most of the robots suffer from the same impediment: immobility. Not that they're unable to move, but the humans naturally gather around each contestant, and the robots find themselves locked in.

"It's a conflict because you're not allowed to run into anyone, and yet they want you to explore," complains Schultz. "Coyote would beep, sound a klaxon, honk a horn, say 'pardon,' but people still wouldn't get out of the way."

Day Two:

The second day of the exhibition gets underway with a chess match between Gabriel Schwartzman, the US champion, and a chess-playing program that runs on a home computer. Schwartzman wins, but it seems insignificant in the face of Garry Kasparov's recent and dramatic loss to the IBM program Deep Blue--something that keeps the game-programming community buzzing throughout the conference.

The next event is Scrabble, another evenly seeded match, pitting 21-year-old Adam Logan, the top-rated North American player, against the computer program Maven. Logan wins the match. But Maven can claim the distinction of having profoundly changed its game. "Players used to think defensive moves were important--like blocking an opponent's access to a triple word score," says Brian Sheppard, who created the program. "Maven has proved an aggressive strategy is more effective. Go for as many points as you can from the first."

Go: Treating the Computer with Contempt

The most dramatic gaming moment of the conference comes when Janice Kim, the only Westerner ever to enter the professional dan ranks at go, plays the computer.

Go is a game in which Deep Search does little good, and, compared to the other games here, computers are notoriously ineffective at it. So it is no surprise that Kim beats the computer; the only surprise is how badly. During the first of two games, she not only defeats the computer by a humiliating margin; the software also displays its own limited understanding of "life" and "death." In go, a group of stones is said to be dead if the board position is such that it cannot be protected from being surrounded by the opposing stones. Understanding the difference between life and death is essential to go, and so the software has a feature for identifying live and dead sections. The only problem is that it is often wrong. Worse, it wastes moves trying to defend dead sections.

At the start of the second game, with Kim playing white, the audience gasps as a starburst of black stones spreads over the board. Kim has given the computer a 25-stone advantage--a move Schaeffer describes as "the perfect expression of contempt for the machine." As play progresses, she makes inroads here and there, but a 25-stone lead seems impossible to overcome. Ginsberg wanders over to the curtained booth where the players sit to offer her the option of scrapping that game and starting a more reasonably balanced one. She elects to keep playing.

"I asked her whether she had any chance of winning, and she said it depends on what the machine does," he reports to the audience. Sure enough, she manages to squeak out a narrow victory.

"Twenty-five points was way too much, and I did things that were completely unethical--and wouldn't have worked against any DNA-based opponent," she says cheerfully, after emerging from the booth to a solid round of applause.

When I ask her to elaborate, she explains: "It was like a dawning realization that I could get away with anything. There were times when my group was dead if I didn't make a certain play, and I would look at the score and think: I should defend, anyone else would kill this. But then I'd think: It's never going to figure out how to kill me. And it didn't, in fact."

Day Three:

Othello: Defending Silicon Honor


By the last event of the Hall of Champions, computers have not prevailed in a single match. The only draw they've managed was in checkers, where the computer had been expected to win. It's time, Ginsberg says, for a program to "defend silicon honor."

Logistello is just the program to do it. Of all the games included in the Hall of Champions, Othello is the best suited to the Deep Search method of looking ahead and calculating moves. But because each move causes one or more Othello disks to flip to the opponent's color, it's difficult for humans to look far ahead with much accuracy.
"A good human player can look ahead a maximum of 10 play (five moves for each side), " notes Michael Buro the creator of Logistello. "The computer can look 20 moves ahead, and without making mistakes. That's a big difference."

What's more, Logistello has solved Othello for the last 25 moves. That means the only way to beat the program is to gain a solid lead within the first 40 moves--and make only perfect moves thereafter. This is why Logistello has so far proved impossible for humans to beat; indeed, Buro says his most challenging competition comes from other Othello software.
During the Hall of Champions competition, Logistello's opponent is Tetsuya Nakajima, the Japanese Student Othello Champion, who plays from Japan, via the Internet. Nakajima loses the first game of the match by a wide margin, and the second by a narrower margin.

The following week, Logistello is scheduled to play the world champion, Takeshi Murakami (also from Japan). Murakami had asked for double each player's usual one-hour time limit because of the need to play his last 13 moves perfectly if he hopes to beat Logistello. Even so, he will lose all six games of the match.

Poker: See You Next Year?

Ginsberg had high hopes of including poker in this year's Hall of Champions. A poker program is currently being developed by Daphne Koller, assistant professor in the Computer Science Department at Stanford University. Though Ginsberg had a human poker champion ready to play, Koller declined, saying her work was not ready yet.
Koller's poker program differs from some of the others in that it uses game theory rather than search. And, she says, with this technique, the software chooses to bluff on the same kinds of hands an expert human player would.
As for Ginsberg, he has his own reasons to hope poker will be included next time around. "I've spoken to both camps individually," he explains. "Daphne is extremely confident that when she is ready, she will beat whoever she plays. And the poker people are completely confident that no computer is going to come close. So that'll be a lot of fun when it happens. Somebody is in for a big surprise."

But for this year, the event is over, the exhibitors take down their booths, and the roboticists gather up their awards. I've learned that robots are way behind what the public imagines them to be. And computer game software is way ahead: despite its resounding defeats this year, there's a sense that it's only a matter of time before the best players at most of these games are computers.

But that in itself is misleading. For when human champions play computers, they're playing machines that have been programmed by people. And when computers play human champions, those champions are backed by...computers. Backgammon champ Davis may never have played against a computer, but he does use one to calculate "roll outs" as he studies his game. Othello expert David Parsons says he has memorized 600 computer-generated openings. "And don't you think Kasparov would have loved to have a computer like Deep Blue on his team?" he asks. Then there's Maven, which has changed the shape of Scrabble.

So whether the champion is a human-programmed computer, or a computer-aided human, one thing seems clear: when it comes to games, artificial intelligence and human intelligence need each other.


Copyright © 2000 [Minda Zetlin]. All rights reserved.