
Chinese Go champion Ke Jie suffers second defeat against Google’s DeepMind AlphaGo artificial intelligence

AlphaGo beats Ke Jie again. The match took place in Wuzhen, China. Ke Jie, the Chinese prodigy Go champion, has lost the second game of a three-game match against Google’s DeepMind AlphaGo artificial intelligence. The prize money for a win against Google’s DeepMind is 1.5 million dollars. But will Ke Jie be up for the challenge?

DeepMind artificial intelligence

DeepMind was co-founded in 2010 by Demis Hassabis, and its mission, according to him, is to “solve intelligence” and then use intelligence “to solve everything else”.

Hassabis is a former chess master, and DeepMind was bought by Google in 2014 for 625 million dollars.

It is not the first attempt by DeepMind’s AlphaGo at beating a Go champion. Last year, in March 2016, AlphaGo defeated Lee Sedol 4-1 in a match in Seoul.

This was the first time that a computer program had defeated a top professional player in the full-sized game of Go, something many had previously thought to be at least a decade away.

For those who are not yet familiar with it, Go is a very ancient traditional game, probably the most complex game humans play. Many say Go is more complex than chess, for instance. There are more possible configurations of the board than there are atoms in the universe! The number of moves available to a player at each turn in Go is of the order of ten times that in chess.

Go is a game for two players on a 19-by-19 grid board. Players take turns placing black or white stones on the board. Once the stones are on the board they cannot move, but they can be captured by completely surrounding them. The ultimate goal of the game is to control more than 50% of the board.
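The capture rule can be made concrete with a short sketch (a toy illustration in Python, not a real Go engine; the board representation is an assumption for brevity): a connected group of stones is captured when it has no liberties, that is, no empty points adjacent to the group.

```python
# Toy sketch of the Go capture rule: a group of connected same-coloured
# stones is captured when it has zero liberties (no adjacent empty points).
# The board is a dict mapping (row, col) -> "B" or "W"; absent keys are empty.

def group_and_liberties(board, start, size=19):
    """Flood-fill the group containing `start`; return (group, liberties)."""
    color = board[start]
    group, liberties, todo = set(), set(), [start]
    while todo:
        r, c = todo.pop()
        if (r, c) in group:
            continue
        group.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= nr < size and 0 <= nc < size):
                continue  # off the board
            if (nr, nc) not in board:
                liberties.add((nr, nc))       # empty neighbour = liberty
            elif board[(nr, nc)] == color:
                todo.append((nr, nc))          # same colour = part of group
    return group, liberties

# A black stone in the corner, surrounded by white on both neighbours:
board = {(0, 0): "B", (0, 1): "W", (1, 0): "W"}
group, libs = group_and_liberties(board, (0, 0))
print(len(libs))  # 0 liberties -> the black stone is captured
```

A flood fill over same-coloured neighbours collects the group; any adjacent empty point counts as a liberty, and the group is removed when the count hits zero.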

David Silver, the lead researcher on the Go team at DeepMind, explains it in the video below:

When you look at the board, there are hundreds of different places where the stones can be put down, and hundreds of different ways the white stones can respond to those moves, and hundreds of different ways black can respond in turn to white’s moves. So you get this enormous search tree with hundreds times hundreds times hundreds of possibilities. In fact, the search space of Go is so enormous and so vast that a brute-force approach has no chance of succeeding.

The DeepMind team therefore had to adopt a different approach, one that deals with the position of the stones in a more human-like way.
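Silver’s point can be checked with quick back-of-the-envelope arithmetic, using the commonly cited average branching factors of roughly 250 legal moves per turn in Go versus roughly 35 in chess:

```python
# How fast the game trees explode with depth: ~250 options per Go move
# versus ~35 per chess move (commonly cited average branching factors).
for depth in range(1, 7):
    go_lines, chess_lines = 250 ** depth, 35 ** depth
    print(f"depth {depth}: ~{go_lines:.1e} Go lines vs ~{chess_lines:.1e} chess lines")
```

After only six moves there are already hundreds of trillions of Go lines to consider, which is why brute force stalls almost immediately.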

What is AlphaGo made of?

In software like Deep Blue for chess or AlphaGo for Go, the key to success is an algorithm that examines each possible sequence of consecutive moves for each player and evaluates which player has the advantage at the end of each sequence. This structure is referred to as a “search tree”.
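The idea can be illustrated with a minimal minimax sketch (a toy example, not the actual Deep Blue or AlphaGo code): each player in turn picks the move that is best for them, and the scores at the leaves propagate back up the tree.

```python
# Minimal minimax over a hand-made search tree (toy illustration).
# A node is either a numeric leaf (a position evaluation from the
# maximizing player's point of view) or a list of child nodes.

def minimax(node, maximizing=True):
    """Return the best achievable score by searching the whole tree."""
    if isinstance(node, (int, float)):   # leaf: evaluated position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Three candidate moves; for each, the opponent picks the worst leaf for us:
tree = [
    [3, 5],   # branch A: opponent answers with min(3, 5) = 3
    [2, 9],   # branch B: opponent answers with min(2, 9) = 2
    [6, 4],   # branch C: opponent answers with min(6, 4) = 4
]
print(minimax(tree))  # best branch is C -> 4
```

The catch, as the quote above explains, is that a real Go tree has hundreds of children at every node, so this exhaustive walk is hopeless without pruning.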

There are two different neural networks in the AlphaGo software. They work in tandem to reduce the enormous complexity of the tree search.

  • The policy network:

Used to select moves. It reduces the breadth of the game by limiting the number of candidate moves for a particular board position: the network learns to propose the most promising moves for that position. In other words, it serves as a ranking of moves in the tree search.

  • The value network

The value network is used to evaluate the board position. It predicts the expected outcome, reducing the effective depth of the search by estimating how likely a given board position is to lead to a win, without chasing down every node of the search tree.
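Put together, the two roles can be sketched roughly as follows (a toy illustration, not AlphaGo’s code; both “networks” here are deterministic fakes): the policy stand-in prunes the breadth of the tree by keeping only the top-ranked moves, and the value stand-in cuts off its depth by estimating the win chance at the frontier.

```python
# Toy sketch of policy + value pruning in a tree search (not AlphaGo's code).
# `policy` stands in for the policy network (scores moves, so we expand only
# the top few); `value` stands in for the value network (estimates the win
# chance at the depth limit instead of searching deeper).

def fake_score(s):
    # Stand-in for a neural network: a deterministic number in [0, 1).
    return (sum(map(ord, s)) % 100) / 100

def policy(position, moves):
    """Pretend policy network: probability-like score for each move."""
    return {m: fake_score(position + m) for m in moves}

def value(position):
    """Pretend value network: estimated win chance for the player to move."""
    return fake_score(position)

def search(position, moves, depth, breadth=3):
    if depth == 0:
        return value(position)   # value network replaces the deeper search
    scores = policy(position, moves)
    # Policy network reduces breadth: expand only the most promising moves.
    top = sorted(moves, key=scores.get, reverse=True)[:breadth]
    # Negamax: my best score is 1 minus the opponent's best reply.
    return max(1 - search(position + m, moves, depth - 1, breadth) for m in top)

best = search("", list("abcdef"), depth=2)
print(f"estimated win chance of best line: {best:.2f}")
```

With six candidate moves, a breadth cap of three, and a depth cap of two, the sketch visits at most 3 × 3 positions instead of 6 × 6 (and, in a real game, instead of hundreds times hundreds).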

AlphaGo combines deep learning (learning, for instance, from transcripts of games between human Go champions) with tree search and reinforcement learning. Reinforcement learning is when the algorithm plays and continuously updates its policy model by learning from the rewards, positive or negative, it gets after each move.
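That reward loop can be illustrated with a minimal sketch (a toy example, not AlphaGo’s training code; the two-move “game” and its win probabilities are made up): an agent with a softmax policy nudges its preference for a move up after a positive reward and down after a negative one.

```python
import math
import random

# Toy reinforcement-learning loop (not AlphaGo's code): move "a" secretly
# wins 80% of the time and move "b" only 20%; the agent discovers this
# purely from the +1 / -1 rewards it receives after each move.

random.seed(0)
prefs = {"a": 0.0, "b": 0.0}   # learned preference per move
ALPHA = 0.1                     # learning rate

def pick(prefs):
    """Sample a move from a softmax over the current preferences."""
    exps = {m: math.exp(p) for m, p in prefs.items()}
    r = random.random() * sum(exps.values())
    for m, e in exps.items():
        r -= e
        if r <= 0:
            return m
    return m

def reward(move):
    win_prob = {"a": 0.8, "b": 0.2}[move]
    return 1 if random.random() < win_prob else -1

for _ in range(2000):
    move = pick(prefs)
    prefs[move] += ALPHA * reward(move)  # positive reward reinforces the move

print(prefs)  # "a" should end up with a clearly higher preference
```

After a couple of thousand self-played “moves”, the preference for the better move dominates, with no human ever telling the agent which move was good.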

See how AlphaGo works in detail by checking out the research paper published by the Google DeepMind team on the Nature.com website.

Many Go experts did not really expect AlphaGo to win against a Go champion when it was first tested last year against Lee Sedol. AlphaGo did better than many expected.

What is the future of AlphaGo’s technology?

The DeepMind team is hoping to use AlphaGo’s technology in areas other than the game of Go. They hope to use it alongside Google applications, optimizing the way Google interacts with its users.

They also hope that one day we will be able to use AI in medicine. AI could help patients personalize their treatment, using reinforcement learning to understand which treatments lead to the best outcomes.

But no doubt, extending what has been accomplished here with AlphaGo in the world of Go to other areas of the real world will require a lot of work and the intervention of many human brains.

Watch the AlphaGo live event broadcast.
