ABC News
Don’t Forget Humans Created The Computer Program That Can Beat Humans At Go

Humankind is licking its wounds after its latest defeat at the “hands” of artificial intelligence. Over the past week, AlphaGo, a computer program created by Google’s DeepMind, defeated human Lee Sedol, an 18-time international titlist, 4-1 in a five-game Go match in Seoul, South Korea. Go, an ancient Chinese board game, is far more complex than chess and was seen as a kind of final frontier in AI research — a moonshot to spur technological progress. Before this week, machine dominance in Go had been seen as an achievement still decades away.

We assembled some experts in game-playing AI and the game of Go to discuss what this means and what’s next — for AI and for us.

Our participants:

  • Oliver Roeder: Senior writer at FiveThirtyEight. All too human.
  • David Doshay: Archivist for the American Go Association, co-creator of SlugGo, a Go-playing computer program.
  • Matt Ginsberg: Businessman, astrophysicist, creator of a former computer bridge champion called GIB and an expert-level AI crossword puzzle solver called Dr. Fill. FiveThirtyEight wrote about Matt and his new basketball prediction technology in October.
  • Andy Okun: President of the American Go Association and a 1 dan Go player. He attended the match in Seoul.
  • Jonathan Schaeffer: Computer science professor at the University of Alberta and the man who solved checkers.

This conversation has been edited for length and clarity.


Oliver: One human commentator, on witnessing Lee Sedol losing a game to AlphaGo, said he “felt physically unwell.” What were your reactions to this match?

Andy: Similar to that. It was a shock. Not a surprise, because I think we had no way to predict the outcome. It just felt bad.

Jonathan: Excited. An amazing result for technology. And a compliment to the incredible capabilities of the human brain.

Matt: I think that I found it curious more than anything else. The work seemed to me to be more engineering than science, and I had expected there to be more science needed.

Jonathan: Matt is right — engineering, sure. But the science was well-known. It is a mystery to me why the program plays as well as it does.

Andy: It makes moves that no human, including the team who made it, understands.

Oliver: Why feel bad? AlphaGo didn’t rise from the primordial goo. It was the creation of humans!

David: That is a very important point: This is the work of some very dedicated people.

Andy: AlphaGo is a creation of humans, but the way it plays Go is not.

Matt: The point that I think is important is that AlphaGo plays Go differently than we do. I think that in general, the natural domains of competence of man and machines are different, which is good news for both of us. Only games appear (currently) to be approximately equally amenable to both sets of skills.

David: I think this is great for Go. I am getting at least five times the number of requests to join our Go club. The Western world is paying attention to Go in a way that it previously did not. It will open up parts of the game that humans have thought unimportant.

Oliver: After the match, Lee said: “When I saw AlphaGo’s moves, I wondered whether the Go moves I have known were the right ones.” Forgive the provocation, but is it possible that humans just aren’t very good at Go to begin with?

Matt: Oliver, can you tell me what it means to be “very good” at Go? We are better than dogs, by a lot. Does the fact that chess programs now beat us easily mean that we are bad chess players? I don’t think “very good” can really be defined.

Jonathan: Humans may not be as good as they think. The game of Awari was solved roughly 10 years ago. [Editor’s note: A game is “solved” when there is an algorithm that can play perfect moves from any position.] The perfect computer solution was used to analyze human games. The result was that many human moves were in fact mistakes.
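Jonathan's definition of a solved game can be made concrete with a few lines of Python. The sketch below exhaustively labels every position of a toy game, a Nim variant where players alternately take 1 to 3 stones and whoever takes the last stone wins, as a win or loss for the player to move. The game is chosen purely for brevity; solving Awari or checkers applies the same idea at enormously larger scale.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_win(stones: int) -> bool:
    """True if the player to move wins with perfect play."""
    # A position is a win if some legal move leaves the
    # opponent in a losing position.
    return any(not is_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

def perfect_move(stones: int):
    """Return a winning move (stones to take) if one exists, else None."""
    for take in (1, 2, 3):
        if take <= stones and not is_win(stones - take):
            return take
    return None
```

With a table like this, every human move can be checked against the perfect solution; positions whose stone count is a multiple of four turn out to be losses no matter what the player does, which is exactly the kind of fact such an analysis reveals.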

Oliver: I guess I’m asking about a sort of tension here: Go is thought of as a victory for computers, but is victory over humans the way to claim that? If it’s such a complex game, it must be incredibly difficult for humans to play well compared to whatever the Platonic ideal is.

Matt: Oliver, the “Platonic ideal” is to play perfectly. In Go, I guess you could ask how many stones a perfect player could give away against AlphaGo. Or how many pawns Deep Blue (or whatever) could give away against a grandmaster. Until a game is solved, though, we tend not to know that answer very well.

Andy: If you were to ask a Go player, quite separate from the question of computers, whether there is a higher level of Go than humans have reached, just better moves and strategies we haven’t found yet, he’d say yes. Now there is a box that might show that next level to us.

Matt: Because Go is more “intuitive,” it’s not obvious to me that AlphaGo will teach us much. What can a race horse learn from a Porsche?

Andy: A Go pro gives up much of the rest of life to become one. Lee Sedol is the third of the great world-beating Go talents to come along. It is hard to hear him say maybe we don’t know much about the game. I think it is understandable that pros don’t find it easy to respond to this as if it were a neat science fair project.

Jonathan: Kasparov did not respond well when he was beaten by Deep Blue. But today it is standard practice for all grandmasters to have an electronic sparring partner. Human players are stronger today as a result of technology.

David: And exactly the same will happen in Go.

Oliver: Games — backgammon, chess, checkers, bridge, poker — have long been testing grounds for AI. Are we out of games now? Where can AI flex its muscles, and what’s the next grand challenge?

Matt: Crossword puzzles. :)

Andy: Games with vaguer boundaries and goals. Many-player games, games with diplomacy and alliance, games with contracts and negotiations, games where the goals are public perception. Game representations of politics.

Jonathan: There are many challenges remaining. Multiplayer poker research is one that is quite active. Also video games — real-time strategy games, à la StarCraft. Politics is a game. Environmental strategy is a game. Military strategy, of course, is a game.

Oliver: So what I’m hearing is: There are plenty of games to … go.

Jonathan: Yes, the research in this area will not GO away. The next “big” AI challenge is to build a general game player. This program would take the rules of a game, go off and learn, and then come back and play at a high level of skill. There already is an annual competition in this area. This is an application where the AlphaGo technology might excel.

Oliver: Can we also talk about the tech and the data a bit? AlphaGo relies on Monte Carlo tree search with two deep neural networks, or so I’m told, and analyzed 30 million moves from human-played games.

Jonathan: Learning from human games helps accelerate the program’s learning. AlphaGo could learn to become a strong player on its own, without using the human games. The learning process would just take longer.

David: Computers just do things differently than people. So the Monte Carlo methods — which depend upon generating huge quantities of random numbers, something computers do very well with modern algorithms — at least give the machine a task at which it could possibly excel.
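David's point about random numbers can be sketched with the simplest form of Monte Carlo search, "flat" Monte Carlo: estimate each candidate move's strength by playing many random games to the end and counting wins. The toy game below is a Nim variant (take 1 to 3 stones; whoever takes the last stone wins), chosen for brevity; AlphaGo's actual method layers a search tree and neural networks on top of this basic idea.

```python
import random

def random_playout(stones: int) -> bool:
    """Play random moves to the end; True if the side to move wins."""
    side = 0  # 0 = original side to move, 1 = opponent
    winner_is_mover = False
    while stones > 0:
        take = random.randint(1, min(3, stones))
        stones -= take
        if stones == 0:
            winner_is_mover = (side == 0)  # taker of last stone wins
        side ^= 1
    return winner_is_mover

def monte_carlo_move(stones: int, playouts: int = 2000) -> int:
    """Pick the move whose random playouts win most often for us."""
    best_take, best_rate = 1, -1.0
    for take in range(1, min(3, stones) + 1):
        # After our move the opponent is to move, so we win the
        # playout exactly when the side to move loses it.
        wins = sum(not random_playout(stones - take)
                   for _ in range(playouts))
        rate = wins / playouts
        if rate > best_rate:
            best_take, best_rate = take, rate
    return best_take
```

Even with purely random playouts, the statistics point toward good moves: from five stones, taking one (leaving the opponent a lost position of four) wins about two-thirds of random continuations, versus half for the alternatives, so the sampler reliably finds the perfect move.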

Oliver: I assume that Google didn’t buy DeepMind just to win at Go. Demis Hassabis, the head of DeepMind, said AlphaGo’s algorithms will one day “be used in all sorts of problems, from health care to science.” Is that likely, and what’s the, ahem, endgame here?

Jonathan: The technology used in AlphaGo is quite general. In contrast, Deep Blue was hard-wired for chess. Thus the long-term impact of AlphaGo will be much more significant.

Matt: Jonathan, I think I would disagree. Deep Blue had a huge impact because of what it told people in AI (search works) and what it told people outside of AI (chess has fallen). I don’t know that AlphaGo can do much better. It is not obvious to me that AlphaGo reflects how humans think. I would be surprised if that were the case.

David: But beyond Go, there is the combination of two new techniques, Monte Carlo tree search and deep convolutional neural nets, which have shown great promise in learning to do a very difficult task. These techniques will without question be applied to other hard problems. A faster computer will be able to discern things that we did not see first.

Jonathan: AlphaGo demonstrates the effectiveness of search, yes. The major contribution is the “knowledge” of AlphaGo — how it learned so quickly to do well in a massive search space. That dimension has been poorly understood, at least in games. Now we have a way of doing something that is fundamental to most AI systems. The lesson from Deep Blue was important, but the methods could not be directly applied. In AlphaGo, they can be applied.

David: The important thing about AlphaGo is that we figured out a way to transform the problem from one based upon handed-down lore (supported by deep analysis) to one that our machines can sit and crank upon, searching for answers.

Matt: … and Deep Blue did much the same thing. I think that is the whole point — machines inevitably perform well when we find algorithms that exploit the fact that they are operating at picosecond speeds and can do a lot more search, or computation, or whatever, than we can. It’s the lesson in Deep Blue, or AlphaGo, or Jonathan, or GIB, or Dr. Fill.

Jonathan: Do any of the Go players online yet understand what AlphaGo played in these games that was so different from human play? Have we learned any new strategies?

Andy: Not yet. It will be weeks or months before these game records are understood at that level. Probably never by me.

Jonathan: Do we yet know whether there will be any more public AlphaGo games? After defeating Kasparov, IBM ended the project and dismantled the machine. Has DeepMind indicated anything about its future plans? I have not heard anything.

Andy: I am sure they won’t dismantle it. But they’ve been clear they won’t discuss next matches or plans until after digesting this match.

Oliver: One thing here really interests me, and I’m hoping y’all can help me understand it. There seem to be some really deep philosophical — or at least linguistic — issues swirling around here. AlphaGo has been called creative, especially after a rare move it made in Game 2. But we don’t really mean creative, do we? What do we mean when we call a computer creative?

Matt: I don’t know if that is a meaningful question. I think that creative means, “If a human did this, it would be an indication that the human was likely to do interesting and surprising things in the future.” I think that calling a computer program creative may be sort of meaningless.

Jonathan: Computers are not “creative.” They maximize some numeric function. Calling a computer creative is a form of anthropomorphism.

Andy: The philosophical problem is we don’t know what we mean by humans being creative either.

David: I think that “creative” is the word that people are using when something they expect to be simple-minded does something that they do not understand but have to admit is good.

Matt: That AlphaGo is doing interesting things when playing Go is no indication at all that it will do anything interesting in any other domain.

David: Of course it is unlikely that AlphaGo will do much of anything else well. It was trained to do this one thing. That is today. We are gaining experience getting a variety of algorithms to work together over a number of processors. As the number of processors in the box grows, some will do one thing, others do their things, and eventually it will not be easy to distinguish if “the computer” is doing one or many things well. We have simple multitasking running now — it will only grow in scale.

Oliver: What aren’t I asking about? What should a Go novice like me take away from this? What isn’t the media thinking about here?

Andy: The rewards to a person of playing face-to-face Go. Self-control and discipline, determination and struggle, acceptance of loss, respect and, most of all in my experience of the Go world, friendship. Happily, I think this tech will be incredibly useful in medical research, diagnosis, complex treatment planning and so on. Unhappily, I think it will be just as amazing in law enforcement.

David: The media seem to want this to be “man vs. machine,” a big theme since John Henry, and the result is eventually always the same. What the media misses is the people driven to build and improve the machine until it is our new workhorse. The machine is helping itself learn, but it is still people who build and program it.

Matt: Watson was going to revolutionize medical research as well. Has it?

Oliver: Not that I’ve heard.

Jonathan: Not sure if Watson has revolutionized anything, but IBM is making a lot of money selling the technology.

Andy: Watson was like three or four years ago, wasn’t it? Give it a sec.

Oliver: It’s game night at the Roeder residence: What can I go play in full confidence that I am better than the best machine? Mouse Trap?

David: Very little. Poker is still up for grabs, I believe.

Matt: Oliver, you can go play War. No machine is better at that than you are. :)

Andy: Two games, Oliver. Mornington Crescent and Calvinball.

David: It should not be about being better than any machine out there. It is all the things Andy said and the willingness to open up your mind and have some fun. I doubt that AlphaGo has any real fun.

David: Oh, yes, definitely Calvinball!

Oliver Roeder was a senior writer for FiveThirtyEight. He holds a Ph.D. in economics from the University of Texas at Austin, where he studied game theory and political competition.
