"The game of Go is not just a sport that one needs to win; it is a form of art, just like music or painting, with which one expresses one's unique individuality. In order for it to be a work of art, it needs to have a creative and unique aspect that can speak to us. Go is not just about winning. More importantly, it is about expressing oneself."
Master Shuko, in a letter to Lee Changho, in "Go With the Flow"
I became infatuated with Go in 2016 during the AlphaGo vs. Lee Sedol match.
It is quite a simple game. Players take turns placing a piece (a stone) on the intersections of a 19x19 grid (for a full-size match). If you surround your opponent's stones so that no empty intersections remain adjacent to them, you capture those stones and remove them from the board. At the end of the match you count how many empty intersections are surrounded by your stones, add the number of stones you have captured, and the player with the higher score wins.
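The capture rule is easy to state in code. Here is a minimal sketch using a hand-made 5x5 position for illustration (a real board is 19x19); the flood-fill over same-coloured neighbours is a standard way to find a group and count its liberties, not anything specific to AlphaGo:

```python
# '.' is an empty intersection, 'B' and 'W' are black and white stones.
# A toy 5x5 position: the white stone is hemmed in on three sides.
board = [
    list(".B..."),
    list("BWB.."),
    list("....."),
    list("....."),
    list("....."),
]

def liberties(board, row, col):
    """Count the empty intersections adjacent to the group at (row, col)."""
    colour = board[row][col]
    seen, libs, stack = {(row, col)}, set(), [(row, col)]
    while stack:
        r, c = stack.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < len(board) and 0 <= nc < len(board[0]):
                if board[nr][nc] == ".":
                    libs.add((nr, nc))        # an empty neighbour is a liberty
                elif board[nr][nc] == colour and (nr, nc) not in seen:
                    seen.add((nr, nc))        # same-colour stones form one group
                    stack.append((nr, nc))
    return len(libs)

print(liberties(board, 1, 1))  # the white stone has one liberty left at (2, 1)
```

If black now plays at (2, 1), the white group's liberty count drops to zero and it is captured.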
There are a couple of other rules for special cases, but that is all there is to it. Chess, on the other hand, has far fewer possible moves each turn: on White's first move there are 8 pawns with 1 or 2 advances each, plus 2 knights with 2 options each, giving 20 first moves; Black then has 20 replies, and thereafter there are generally only perhaps 15-20 options per move. This makes chess amenable to a brute-force search: consider each move and evaluate the strength of the resulting position, then for each of those moves consider every possible reply, evaluate each board, and so on. Looking just three rounds ahead (six moves) is perhaps 20^6 = 64 million positions to consider as the match clock ticks away a few minutes. Even needing to assess a few more rounds ahead for a Grandmaster than for a competent amateur, the Deep Blue computer had a tough but tractable task competing against Kasparov.
With Go, firstly, the number of options becomes excessive: 361 moves for Black, then 360 for White, then 359 for Black, and so on, so looking just 13 moves ahead gives 361 x 360 x ... x 349, roughly 1.4 x 10^33 possibilities (over a billion trillion trillion). Secondly, it can be very difficult, even for professional players, to assess overall board strength. A move might be locally strong or weak, but it can also influence the player's or the opponent's other groups much later in the game. If humans have difficulty with this, how is an AI to assess which possible moves are worth considering further?
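The arithmetic behind those figures is easy to check. This is a rough sketch: real legal-move counts are slightly lower than the raw products because of captures and the ko rule, but the orders of magnitude are what matter:

```python
import math

# Chess: roughly 20 options per move, so three rounds ahead (six moves)
# is already 20^6 = 64 million lines to examine.
chess_lines = 20 ** 6
print(f"chess, 6 moves ahead: {chess_lines:,}")

# Go: the first move has 361 choices, the next 360, and so on.
# Thirteen moves in is already ~1.4e33 sequences.
go_lines = math.prod(range(349, 362))  # 361 * 360 * ... * 349
print(f"go, 13 moves ahead: {go_lines:.2e}")
```

The ratio between the two is around 10^25, which is why a search strategy that was tractable for Deep Blue is hopeless for Go.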
AlphaGo has several parts, and it is the combination of approaches that has worked so well. First, experience: AlphaGo was given thousands of professional matches to study, learning common patterns and responses. This helps AlphaGo narrow down the number of moves it needs to consider. Second, AlphaGo includes a module that tries to assess the strength of a board position. This it learned by playing millions of games against itself, adapting its neural network to recognise what a stronger board position looks like. This is AI, as opposed to merely clever programming, because there is no specific coding of what a stronger board looks like; instead the neural network has to adapt to see what works best. AlphaGo also played against previous versions of itself to get stronger. Yes, humans get stronger by playing lots of games, but a committed amateur on an internet gaming site might only play several thousand games as they rise up the amateur rankings, and a professional who has attended Go school from young childhood, playing a few games a day and studying Go problems, still accumulates only a few thousand games a year over 10 or 20 years; that does not compare to the millions of games AlphaGo has already played against itself. The neural network's sense of a strong board position is not enough to play Go well on its own: one also needs past experience (the deep learning that gives rise to the company name, DeepMind) of which patterns of response are likely to be strong, to limit the number of sensible moves to decide between. Equally, the deep learning is not enough without the learnt expertise of assessing the merits of a position.
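That division of labour can be sketched in a few lines. Here `policy_prior` and `value_estimate` are hand-made stand-ins, not the real networks, and real AlphaGo interleaves them inside a tree search rather than a single pass; the point is only the shape of the step: prune the candidates with the pattern-learned policy, then rank the survivors with the position-strength estimate:

```python
# Toy stand-ins for the two learned components. The real networks are
# trained, respectively, on professional games and on millions of
# self-play games; these fixed tables are purely illustrative.

def policy_prior(moves):
    """Stand-in for the policy network: prior probability per legal move."""
    table = {"D4": 0.40, "Q16": 0.35, "K10": 0.20, "A1": 0.05}
    return {m: table.get(m, 0.0) for m in moves}

def value_estimate(move):
    """Stand-in for the value network: estimated win rate after `move`."""
    table = {"D4": 0.52, "Q16": 0.55, "K10": 0.49, "A1": 0.30}
    return table.get(move, 0.5)

def select_move(moves, top_k=3):
    """Prune with the policy prior, then rank survivors by estimated value."""
    priors = policy_prior(moves)
    candidates = sorted(moves, key=lambda m: priors[m], reverse=True)[:top_k]
    return max(candidates, key=value_estimate)

print(select_move(["D4", "Q16", "K10", "A1"]))  # -> Q16
```

Note that "A1" never even reaches the value comparison: the policy's pattern knowledge discards it first, which is exactly how the intractable 10^33 search space gets cut down to something searchable.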
DeepMind's AlphaGo uses a brute-force approach, albeit a different one from Deep Blue's, as well as a different one from Deep Throat, which was a different one from the original Deep Throat. But it is brute force nonetheless, as it relies on being able to sample millions of variations before it makes a move. So what we have learned from the last couple of days is not that Go, unlike chess, requires an AI solution, but that Go is more like chess than we used to think.
However, there is another side to this: even without doing any look-ahead at all, an AlphaGo-style "artificial neural net" can play a respectable amateur-level game just by itself, which demonstrates that such an artifice is capable of learning to "see" patterns in an image and to "know" what they "mean" (i.e., what to do about what it sees, i.e., which move to make). This pattern-recognition side of things is what the DeepMind company and the Google Brain project are focused on, and even if they overhype what they are doing, their technology still represents a kind of intelligence that is indeed artificial, as it is not told what to do, but how to learn what to do.
Nevertheless, the so-called singularity is, as Noam Chomsky adroitly put it, science fiction. Sure, it may happen one day, but that day will be long after sea levels have risen substantially and London, Tokyo and New York have become Waterworld.
It is a trivial matter for Alpha to spit out what I figure to be the most probable sequence variation, but the next giant leap in AI will be when someone figures out how to map convolution configurations into symbolic expressions that relate to a hierarchical description of the position in terms of things like group safety, influence, and potential territory, so that Alpha's daughters will be able to explain what they are thinking about in something resembling natural language.

