
AI Reaches A Long-Sought Milestone

Revoltingest

Pragmatic Libertarian
Premium Member
A software program by Alphabet has beaten a professional go player in a non-handicap game.
http://bits.blogs.nytimes.com/2016/01/27/alphabet-program-beats-the-european-human-go-champion/
In March, it will play the strongest pro, Lee Sedol, for a $1,000,000 prize.

Since it's an AI program, not one of those brute force calculating algorithms with a massive playbook, we can look forward to novel fuseki & joseki (aspects of the opening). I also wonder how it will compare with mathematical approaches to the endgame. Will it figure out these things on its own?

Hey, @icehorse!
The article says the March game will be streamed live on YouTube.
Such games are over my head, but I plan to watch anyway.
 

LegionOnomaMoi

Veteran Member
Premium Member
Since it's an AI program, not one of those brute force calculating algorithms with a massive playbook
Of the type created decades ago, when such methods were used to enable a computer to "learn" the rules of checkers and become adept at it. We have indeed become incredibly adept at creating systems which engage in pattern recognition, allowing them to "learn" not only the rules of games but to select ideal strategies as well. The problem is that we are completely ignorant when it comes to even beginning to understand how we might create systems capable of the kind of associative intelligence that rats and pigeons have. The problem with these kinds of demonstrations of AI capabilities (namely, those that involve games) is that games are rule-based. Computational intelligence paradigms/AI systems/soft computing methods are particularly good at navigating rule-governed state/phase space (i.e., figuring out optimal "moves" in an environment in which the ideal moves can be computed via exact rules and exact conditions, such as those that determine moves in a game and how a game is won). They suck when it comes to much of anything else, as demonstrated by the embarrassing effectiveness of CAPTCHAs.

For those interested, the link to the Nature article.
 

Revoltingest

Pragmatic Libertarian
Premium Member
Of the type created decades ago, when such methods were used to enable a computer to "learn" the rules of checkers and become adept at it. We have indeed become incredibly adept at creating systems which engage in pattern recognition, allowing them to "learn" not only the rules of games but to select ideal strategies as well. The problem is that we are completely ignorant when it comes to even beginning to understand how we might create systems capable of the kind of associative intelligence that rats and pigeons have. The problem with these kinds of demonstrations of AI capabilities (namely, those that involve games) is that games are rule-based. Computational intelligence paradigms/AI systems/soft computing methods are particularly good at navigating rule-governed state/phase space (i.e., figuring out optimal "moves" in an environment in which the ideal moves can be computed via exact rules and exact conditions, such as those that determine moves in a game and how a game is won). They suck when it comes to much of anything else, as demonstrated by the embarrassing effectiveness of CAPTCHAs.

For those interested, the link to the Nature article.
A problem facing programmers was computational power.
Chess is simple (relatively) enuf to calculate decision trees farther ahead than for go.
This is aided by a playbook of games, to see a winning path.
In go, players typically read more moves ahead than in chess.
This is aided by a couple things....
- Patterns are easier to visualize.
- The greater complexity gave humans an advantage in grokking which lines of play to ignore or pursue.
 

LegionOnomaMoi

Veteran Member
Premium Member
A problem facing programmers was computational power.
True. Still is, actually.
Chess is simple (relatively) enuf to calculate decision trees farther ahead than for go.
Chess is unbelievably complex. Far more complicated than Go (actually, that's not true; complicated in this case depends upon what one demands the program to "learn" vs. what rules are given/programmed; however, in general checkers can be and has been incredibly complicated compared to chess simply by virtue of requiring that the program "learn" to play the game, which isn't generally the method used for systems like Deep Blue or Watson). The decision trees in chess are, for all practical purposes, infinite, meaning that given any initial starting configuration (first move by opponent), it is impossible to determine a deterministic winning strategy (at least in general). This is also largely true of checkers. Winning paths are generally either probabilistic (thousands or tens of thousands of paths are processed by algorithms that weight them via some mechanism programmed to select superior ones) or exploit the deterministic nature of the game's rules.
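For anyone curious what "weighting paths" looks like in practice, here is a rough toy of my own (nothing to do with the actual Go program, which is vastly more sophisticated): it rates each legal move in a simple take-1-2-or-3-stones game by playing the rest of the game out at random a few thousand times and counting wins.

```python
import random

def playout(stones, my_turn):
    """Finish a take-1-2-or-3 game with random moves; True if 'my' side takes the last stone."""
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return my_turn          # whoever just moved took the last stone and wins
        my_turn = not my_turn

def weight_moves(stones, n_playouts=2000):
    """Estimate a win rate for each legal move by sampling random continuations."""
    weights = {}
    for take in range(1, min(3, stones) + 1):
        left = stones - take
        if left == 0:
            weights[take] = 1.0     # taking the last stone wins outright
            continue
        wins = sum(playout(left, my_turn=False) for _ in range(n_playouts))
        weights[take] = wins / n_playouts
    return weights

print(weight_moves(9))   # taking 1 (leaving 8, a multiple of 4) should score highest
```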

Patterns are easier to visualize.
Here you hit upon an absolutely central issue. The program in question (like all others) can't visualize or otherwise even represent anything akin to the kind of patterns that humans see everywhere (often erroneously). Computer programs do not tolerate ambiguities, while the ability to visualize abstract "patterns" (relations among similar outcomes, distributions, configurations, etc.) is based upon such ambiguities. It is no easier for a program to "visualize" the 300,000,000th digit of pi or solve a system of 10,000 equations than it is to recognize a winning path. The differences are simply a matter of computational power and the ability of a human to sufficiently reduce the problem to mere computations.
The greater complexity gave humans an advantage in grokking which lines of play to ignore or pursue.
And the triumph is the problem. The achievement here isn't one that shows our ability to do anything qualitatively different or one that indicates we have made progress in the field of AI. Merely that we once again have found a way to bypass understanding in order to reduce the mechanisms mammals (and humans) use readily to identify patterns, strategies, etc., all of which are conceptual/semantic, into purely formulaic, mindless, syntactic manipulations.
 

Brickjectivity

Veteran Member
Staff member
Premium Member
It represents the ability of a single-use neural network. You can almost call it disposable, because once trained it is useless for anything else. Next they should try to make one neural network simultaneously play chess, go, checkers and still be able to learn new games.
 

Thief

Rogue Theologian
A problem facing programmers was computational power.
Chess is simple (relatively) enuf to calculate decision trees farther ahead than for go.
This is aided by a playbook of games, to see a winning path.
In go, players typically read more moves ahead than in chess.
This is aided by a couple things....
- Patterns are easier to visualize.
- The greater complexity gave humans an advantage in grokking which lines of play to ignore or pursue.
but I won't believe it until it does impromptu stand-up comedy
 

Revoltingest

Pragmatic Libertarian
Premium Member
It represents the ability of a single-use neural network. You can almost call it disposable, because once trained it is useless for anything else. Next they should try to make one neural network simultaneously play chess, go, checkers and still be able to learn new games.
That's too easy....just playing games.
I want it to also come here to post limericks.
 

Revoltingest

Pragmatic Libertarian
Premium Member
True. Still is, actually.
Were that true, then we wouldn't have seen Deep Blue beat the world champ, Kasparov
(at which time, even this kyu level janitor might've beaten the best go playing program).
Chess is unbelievably complex. Far more complicated than Go (actually, that's not true; complicated in this case depends upon what one demands the program to "learn" vs. what rules are given/programmed; however, in general checkers can be and has been incredibly complicated compared to chess simply by virtue of requiring that the program "learn" to play the game, which isn't generally the method used for systems like Deep Blue or Watson). The decision trees in chess are, for all practical purposes, infinite, meaning that given any initial starting configuration (first move by opponent), it is impossible to determine a deterministic winning strategy (at least in general). This is also largely true of checkers. Winning paths are generally either probabilistic (thousands or tens of thousands of paths are processed by algorithms that weight them via some mechanism programmed to select superior ones) or exploit the deterministic nature of the game's rules.
Let's look at some basic differences between chess & go....
http://users.eniinternet.com/bradleym/Compare.html
Board:
Chess is played on 8x8 = 64 squares.
Go is played on 19x19 = 361 intersections.
Rule limitations:
Chess greatly restricts the movement of individual pieces.
Go has simpler rules, resulting in fewer movement restrictions (no self-capture, stones may only be placed on unoccupied intersections, and no repeated board position, aka "ko").
Typical game length (in moves):
Chess = 80 (max ???)
Go = 150 (max 400+)
This suggests many orders of magnitude greater complexity for go (see the rough arithmetic after the link below).
A smarter confirmation....
https://www.quora.com/How-does-the-complexity-of-Go-compare-with-Chess
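For a rough back-of-the-envelope version of the same comparison, using the commonly cited average branching factors and game lengths (both of which are only approximations):

```python
import math

# Commonly cited rough averages: ~35 legal moves and ~80 plies for chess,
# ~250 legal moves and ~150 plies for go. Both figures are approximations.
chess_log10 = 80 * math.log10(35)     # ~124, i.e. a game tree around 10^123
go_log10 = 150 * math.log10(250)      # ~360, i.e. a game tree around 10^360

print(f"chess ~ 10^{chess_log10:.0f}, go ~ 10^{go_log10:.0f}")
```

Both numbers dwarf anything that can be searched exhaustively, but go's is larger by a couple hundred orders of magnitude.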
Here you hit upon an absolutely central issue. The program in question (like all others) can't visualize or otherwise even represent anything akin to the kind of patterns that humans see everywhere (often erroneously). Computer programs do not tolerate ambiguities, while the ability to visualize abstract "patterns" (relations among similar outcomes, distributions, configurations, etc.) is based upon such ambiguities. It is no easier for a program to "visualize" the 300,000,000th digit of pi or solve a system of 10,000 equations than it is to recognize a winning path. The differences are simply a matter of computational power and the ability of a human to sufficiently reduce the problem to mere computations.

And the triumph is the problem. The achievement here isn't one that shows our ability to do anything qualitatively different or one that indicates we have made progress in the field of AI. Merely that we once again have found a way to bypass understanding in order to reduce the mechanisms mammals (and humans) use readily to identify patterns, strategies, etc., all of which are conceptual/semantic, into purely formulaic, mindless, syntactic manipulations.
Pattern recognition is the AI development which interests me most.
Humans fare better at go than old-school software because players rely upon a preference for some patterns over others to limit the number of decision trees they consider. Essentially, one must have a sense of aesthetics in addition to analytical skill. (Btw, I lack both.) How soon will computers have this sense of elegance?
 

LegionOnomaMoi

Veteran Member
Premium Member
Were that true, then we wouldn't have seen Deep Blue beat the world champ, Kasparov
(at which time, even this kyu level janitor might've beaten the best go playing program).
I seem to have glossed over an important point (or at least not adequately addressed it).

A long time ago in a galaxy far away (from some other galaxy), the introduction of computers and the advent of the cognitive sciences resulted in enormous interest in, and research into, AI. For a long time, the dominant paradigm in computational intelligence/AI was algorithmic. So, for example, researchers believed that there was nothing really different between, say, addition via a human brain, a calculator, or an abacus. The hardware is irrelevant, because all you need is to determine the rules or steps that enable you to carry out the required procedure.

Deep Blue is an example of this kind of programming. Deep Blue didn't learn chess. Deep Blue didn't learn anything. The challenge was trying to get a computer already "fitted" with the "rules" of chess to weight the optimal moves. Everything was built in (and the approach was basically brute-force).
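As a toy of what "everything built in" means, here is a sketch (tic-tac-toe rather than chess, and of course nothing like Deep Blue's actual code or scale): the rules and the evaluation heuristic are both hand-coded, and the program simply searches to a fixed depth. Nothing is learned anywhere.

```python
# Hand-built player in the "program it all in" spirit, shrunk to tic-tac-toe.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != "." and b[i] == b[j] == b[k]:
            return b[i]
    return None

def evaluate(b, me):
    """Hand-coded heuristic: lines still open to me minus lines still open to the opponent."""
    opp = "O" if me == "X" else "X"
    mine = sum(1 for i, j, k in LINES if opp not in (b[i], b[j], b[k]))
    theirs = sum(1 for i, j, k in LINES if me not in (b[i], b[j], b[k]))
    return mine - theirs

def search(b, me, to_move, depth):
    """Fixed-depth minimax from 'me's point of view; wins found sooner score a bit higher."""
    w = winner(b)
    if w == me:
        return 100 + depth
    if w is not None:
        return -100 - depth
    if depth == 0 or "." not in b:
        return evaluate(b, me)
    nxt = "O" if to_move == "X" else "X"
    scores = [search(b[:i] + to_move + b[i + 1:], me, nxt, depth - 1)
              for i, c in enumerate(b) if c == "."]
    return max(scores) if to_move == me else min(scores)

def best_move(b, player, depth=4):
    opp = "O" if player == "X" else "X"
    empties = [i for i, c in enumerate(b) if c == "."]
    return max(empties, key=lambda i: search(b[:i] + player + b[i + 1:], player, opp, depth - 1))

board = "X.O.X...."                 # X on the long diagonal (0 and 4), O in a corner (2)
print(best_move(board, "X"))        # -> 8: completes the 0-4-8 diagonal for an immediate win
```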

Another early paradigm that didn't gain real prominence until the '80s was that of learning, beginning with artificial neural networks (the foundational paper and ANN model was actually published by McCulloch and Pitts in 1943; it was followed by Hebb's learning rule and, later, Rosenblatt's perceptron). After the algorithmic approach failed to generate natural language, failed to exhibit anything like intelligence, and generally failed for problems where solutions weren't rigidly defined by rules (the way chess is and games are more generally), people began to think that maybe the rules/steps aren't what mattered. There was a massive increase in bio-inspired computing and in algorithms which mimicked learning in living systems.
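For contrast, here is the learning paradigm in miniature: Rosenblatt's perceptron update rule. The program is never told the target rule (here, logical AND); it adjusts its weights from labelled examples until it behaves correctly.

```python
import random

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # inputs -> AND of the inputs
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)
lr = 0.1

for epoch in range(100):
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        # Rosenblatt's rule: nudge each weight in proportion to its input and the error.
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data])
```

The same shape of idea, scaled up enormously and stacked in layers, sits behind the supervised learning described in the Nature paper.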

So, for example, the article you linked to concerns the use of supervised learning via neural networks. The program "learns" to play the game and to master it. This is MUCH, MUCH more difficult than programming the rules into the software to begin with (as you can imagine). Systems like Deep Blue don't use anything like the methods used in AI research with games like Go, because 1) Go is "simple" in many ways and (relatedly) 2) games like chess which involve more complex rules and multiple pieces pose greater problems for humans: "We can contrast Go with other popular games such as Othello, checkers or chess: Humans often get lost in the ‘combinatorial chaos’ of the game and miss a tactical combination, while machines are able to exploit their superior computing power."
Müller, M. (2002). Computer Go. Artificial Intelligence 134: pp. 145-179.
Humans fare better at go than old-school software because players rely upon a preference for some patterns over others to limit the number of decision trees they consider.
Exactly!!! Well put.
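That is roughly what the new program does with its learned "policy": a trained preference that narrows each position to a handful of plausible moves before any deeper search. A crude cartoon in code (my own sketch; score_move here is a hypothetical stand-in for that learned sense of good shape):

```python
def suggest_moves(board, legal_moves, score_move, keep=5):
    """Rank legal moves by a learned preference and keep only the best few for deeper search."""
    ranked = sorted(legal_moves, key=lambda m: score_move(board, m), reverse=True)
    return ranked[:keep]

# Dummy demo: 250 "legal moves", and a made-up preference that likes moves near 180.
moves = list(range(250))
score = lambda board, m: -abs(m - 180)
print(suggest_moves(None, moves, score))   # [180, 179, 181, 178, 182]

# Searching 3 plies over ~250 moves naively means ~15.6 million lines of play;
# keeping only 5 candidates per ply leaves 125.
```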

Let's look at some basic differences between chess & go....
"Generalized chess is thus provably intractable, which is a stronger result than the complexity results for board games such as Checkers, Go, Gobalng, and Hex"
Fraenkel, A. S., & Lichtenstein, D. (1981). Computing a perfect strategy for n× n chess requires time exponential in n. In S. Even & O. Kariv(Eds.). Automata, Language and Programming (pp. 278-293). Springer.

The problem for Go programs is that Go isn't complex enough (or rather, it is only complex in a very specific, limited sense). Humans are experts at pattern recognition. Computers suck at it. Computers are great at calculating lots of different outcome scenarios and weighing them one at a time. Humans suck at this. Thus humans have a hard time with the complexity of chess, and computer systems have been built to exploit this. Go is too simple: the rules are more limited and the pieces homogeneous. Humans can exploit natural pattern recognition that is so difficult to program. Brute-force doesn't work, not because of the state-space complexity (which isn't a good measure unless the configuration space is limited to positions of basically equivalent pieces), but because there is only state-space complexity. The strategy is only the pattern (well, not exactly, but for all intents and purposes).

Chess greatly restricts the movement of individual pieces.
Every restriction is a rule that increases complexity. It constrains the state space but increases "decision complexity" and other complexity measures (e.g., combinatorial).

Pattern recognition is the AI development which interests me most.
I'm with you there. The thing is that this hasn't really been applied to chess games. It's too difficult, especially when other methods (brute-force and built-in rules) work as well as they do.
How soon will computers have this sense of elegance?
Well, by most estimates, many decades ago (it's always "just around the corner"). I'm probably at least as bad a person to ask as any hardcore AI proponent: I'm too jaded by our failures and by how much of our progress has consisted of realizing how much farther we have to go than we thought.
 

Revoltingest

Pragmatic Libertarian
Premium Member
Go is too simple: the rules are more limited and the pieces homogeneous.
Actually, the rules of go being far simpler is a sign of their being less restrictive, thereby imposing fewer limitations on each move.
This creates more possibilities per move.
This creates greater complexity when combined with other factors.....
- More total moves per game.
- The global nature of the game, i.e., local battles affect non-local battles, from the opening (fuseki) to the endgame (yose). Even the yose can become global because of the ko rule, i.e., no board position can be repeated. This generates incredible complexity for humans by bringing settled & even dead groups back into play. It sometimes requires considering a great many possible consequences for each move.

How to measure complexity?
We could go by how complicated a game is for humans, but this suffers because humans playing complex, unsolvable games will play to their maximum ability, so such games will tend to seem equally complex to us. Moreover, chess & go are very different in how they tax us. Attacking the problem with a computer using an algorithm & a massive playbook is another measure. Chess has been much easier than go using this method, making the latter more complex from this perspective.

At what point does this game playing become AI?
I don't know. It's above me pay grade to say.
 

LegionOnomaMoi

Veteran Member
Premium Member
Actually, the rules of go being far simpler is a sign of their being less restrictive, thereby imposing fewer limitations on each move.
But the restrictions on the number of moves are partitioned and integral to strategy. In Go, the pieces can all be treated as essentially identical (this is somewhat related to, but by no means precisely analogous to, Bose-Einstein vs. Fermi-Dirac statistics). Each piece's move is restricted by a rule governing how it can move, meaning that the overall configuration space of winning moves must treat pieces as distinct during the entire evolution of the game. Each kind of piece plays a particular role in the overall strategy that no other type of piece can occupy regardless of position or time. Thus combinatorial, decision, and other measures of complexity are increased AND (more importantly), unlike Go, we can show absolutely that there exists no strategy, method, or algorithm that can possibly be found for the generalized Chess game:
"Generalized chess is thus provably intractable, which is a stronger result than the complexity results for board games such as Checkers, Go, Gobalng, and Hex"
Fraenkel, A. S., & Lichtenstein, D. (1981). Computing a perfect strategy for n× n chess requires time exponential in n. In S. Even & O. Kariv(Eds.). Automata, Language and Programming (pp. 278-293). Springer.
(I thought this quote bore repeating).
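To put a toy number on the point about distinct piece types: placing a handful of identical stones on a board gives far fewer distinct configurations than placing the same number of mutually distinct pieces, and any search that must respect piece identity pays roughly that factorial factor. (Purely an illustration; neither number is either game's actual state count.)

```python
from math import comb, factorial

# Place 8 pieces on a 19x19 board (361 points).
identical = comb(361, 8)                # 8 indistinguishable stones of one colour
distinct = comb(361, 8) * factorial(8)  # 8 mutually distinct piece types

print(f"{identical:.2e}")   # ~6.6e15 placements
print(f"{distinct:.2e}")    # ~2.7e20, i.e. 8! = 40,320 times as many
```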
This creates more possibilities per move.
It creates more possible moves per game, true. And, what's more, MANY more possible moves. But
1) The majority of those possible moves are never actually made and aren't even probable
&
2) It severely limits what computations should determine how given pieces are used because it makes them indistinguishable in general (they are distinguished only by their position at particular times).
This creates greater complexity when combined with other factors.....
- More total moves per game.
This generates incredible complexity for humans by bringing settled & even dead groups back into play.
Actually, one of the reasons Go has been of such great interest for AI/computational intelligence is because it isn't complex for humans. The best algorithms have historically performed extremely poorly because humans are able to recognize strategic or "winning" patterns easily, but creating code to do this is incredibly difficult.
This requires sometimes considering a great many possible consequences for each move.
Not as many as in Chess. Pieces can be treated identically globally for every move, and only the overall pattern is really important as the pieces are for all intents and purposes indistinguishable.
How to measure complexity?
This is an EXCELLENT question. Personally, I find that algorithmic complexity (perhaps the most used metric) is woefully inadequate and preferred only because of the focus on computability in AI and similar fields, as well as the difficulty of formulating a measure of complexity that takes into account that which isn't "syntactic" or purely meaningless (e.g., conceptual measures or those which involve decisions that can't yet be formalized). I also believe, and have been trying to develop a mathematical model of, the inverse relationship between the ability of systems (ants, bees, fish, neurons, etc.) to engage in nearly zero-lag synchronous collective behavior and the uniqueness (in particular, intelligence) of the constituents/components of those systems. In other words, the complex behavior of e.g., ant colonies is possible only because ants are stupid, while social animals like chimps or humans can't engage in such complex coordinated activity because they are intelligent. One problem is that this means comparing the complexity of intelligent systems composed of stupid, simple "parts" vs. the simplicity of stupid systems composed of intelligent, complex parts.
The overall problem of characterizing complexity is far more difficult, and too often it is defined operationally and pragmatically.
It's above me pay grade to say.
I don't know either, and it IS my pay grade (and, FYI, doesn't pay much).
 

Revoltingest

Pragmatic Libertarian
Premium Member
But the restrictions on the number of moves are partitioned and integral to strategy. In Go, the pieces can all be treated as essentially identical (this is somewhat related to, but by no means precisely analogous to, Bose-Einstein vs. Fermi-Dirac statistics).
Hey!
No fair bringing up something I can't even pronounce!
Each piece's move is restricted by a rule governing how it can move, meaning that the overall configuration space of winning moves must treat pieces as distinct during the entire evolution of the game. Each kind of piece plays a particular role in the overall strategy that no other type of piece can occupy regardless of position or time.
Actually, go pieces can do this. Many games have pieces played in areas formerly occupied, sometimes multiple times. This comes up frequently in tesuji & eye-space filling.
Thus combinatorial, decision, and other measures of complexity are increased AND (more importantly), unlike Go, we can show absolutely that there exists no strategy, method, or algorithm that can possibly be found for the generalized Chess game:
"Generalized chess is thus provably intractable, which is a stronger result than the complexity results for board games such as Checkers, Go, Gobalng, and Hex"
Fraenkel, A. S., & Lichtenstein, D. (1981). Computing a perfect strategy for n× n chess requires time exponential in n. In S. Even & O. Kariv(Eds.). Automata, Language and Programming (pp. 278-293). Springer.
(I thought this quote bore repeating).
If this were true, then the fact that Deep Blue can beat any & all chess players would mean that it is AI in full flower.
But were AI so powerful, then why has it not beaten any top go players yet?
(The Europistanian champ is not in the same league as top Japanese, Korean & Chinese pros.)
It creates more possible moves per game, true. And, what's more, MANY more possible moves. But
1) The majority of those possible moves are never actually made and aren't even probable
&
2) It severely limits what computations should determine how given pieces are used because it makes them indistinguishable in general (they are distinguished only by their position at particular times).
This creates greater complexity when combined with other factors.....
- More total moves per game.

Actually, one of the reasons Go has been of such great interest for AI/computational intelligence is because it isn't complex for humans. The best algorithms have historically performed extremely poorly because humans are able to recognize strategic or "winning" patterns easily, but creating code to do this is incredibly difficult.
Aye, it points out how humans are adept at certain kinds of complexity.
The calculations I've seen for the total number of moves for both chess & go have addressed the fact that many possible games wouldn't be played, & discounted the total accordingly (by many many orders of magnitude).
Not as many as in Chess. Pieces can be treated identically globally for every move, and only the overall pattern is really important as the pieces are for all intents and purposes indistinguishable.
I played chess once.
It was a fun little puzzle.
But once I tried go, I gave up officious kings, drama queens, swishy bishops, naughty knights, bricky rooks, & feckless little pawns.
I could go on about the aesthetics & the more interesting strategies, but that pales in comparison to what's most important.
The stones & boards are prettier than chess pieces!
I've tried many different kinds, but I've settled on a few.
My portable board is slate & shell stones with a katsura board & Chinese quince bowls.
My home set is a kaya (6" thick) board with thick slate & shell stones, with camphor wood bowls.
It feels so sinful to have something too good for me!
If you haven't yet tried go, I recommend it.
Then we could bore people with our lame discussions....even more than I already do!
This is an EXCELLENT question. Personally, I find that algorithmic complexity (perhaps the most used metric) is woefully inadequate and preferred only because of the focus on computability in AI and similar fields, as well as the difficulty of formulating a measure of complexity that takes into account that which isn't "syntactic" or purely meaningless (e.g., conceptual measures or those which involve decisions that can't yet be formalized). I also believe, and have been trying to develop a mathematical model of, the inverse relationship between the ability of systems (ants, bees, fish, neurons, etc.) to engage in nearly zero-lag synchronous collective behavior and the uniqueness (in particular, intelligence) of the constituents/components of those systems. In other words, the complex behavior of e.g., ant colonies is possible only because ants are stupid, while social animals like chimps or humans can't engage in such complex coordinated activity because they are intelligent. One problem is that this means comparing the complexity of intelligent systems composed of stupid, simple "parts" vs. the simplicity of stupid systems composed of intelligent, complex parts.
The overall problem of characterizing complexity is far more difficult, and too often it is defined operationally and pragmatically.

I don't know either, and it IS my pay grade (and, FYI, doesn't pay much).
I'll recommend you for a raise.....& what we call an "Uncle Frank" parking space (right in front by the entrance).
 

LegionOnomaMoi

Veteran Member
Premium Member
Actually, go pieces can do this.
I think I'm probably failing to explain what I mean. The important point about the nature of chess pieces is that they fall into fundamentally different types regardless of any game, any moment of any game, etc. Thus any program which seeks to find winning strategies must, from the beginning, somehow address the fundamentally different nature of the types/kinds of pieces. In short, the strategy must incorporate the nature of the pieces at play, not just the discrete configuration states of a particular game or even of games in general.

Many games have pieces in areas formerly occupied, sometimes multiple times.
Right. But the point is that this is a matter of how games evolve. With chess, the importance for strategy of the ways in which particular pieces move exists independently of any game.

If this were true, then the fact that Deep Blue can beat any & all chess players would mean that it is AI in full flower.
This goes back to my attempt at making my initial point (or one of them). Deep Blue and other chess programs don't exploit what we have for years thought of as the methods used in AI, computational intelligence, machine learning, etc. Neural networks, swarm intelligence, fuzzy sets, evolutionary algorithms, gene expression, cellular automata, and so on, all revolve around creating a program that will "learn" to adapt to particular situations. In short, instead of programming in the strategy, the programmers create code designed to allow the program to "learn" the ideal strategy as fast as possible.
Chess is harder for humans because in addition to the plethora of possible moves, we have to take into account how each kind/type of piece is able to move and the general values of particular pieces. This means taking into account a lot of calculations/computations regardless of how any particular game proceeds. So programmers can exploit our limited ability to calculate/compute huge numbers of outcomes based on the values of pieces and the ways in which they can move. The "rules" of chess are much more complicated than those of "Go" (mostly) because of the pieces, and so programming these "rules" into code exploits our weaknesses and allows computers to win against chess masters. It doesn't allow chess programs to actually win by learning or computing the ideal strategies.
In short, systems like Deep Blue have been more successful than systems that seek to outperform humans at e.g., Go because humans are far worse at chess (or because chess involves complexities that humans aren't particularly well adept at handling but which computers can be programmed to handle far better).
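Here is the "learn it rather than program it" idea in miniature, using the little take-1-2-or-3-stones game again (my own toy, nothing like how the serious systems are built): the code never encodes the known winning rule (leave your opponent a multiple of 4); a simple tabular self-play learner discovers it from play alone.

```python
import random

N, EPS, LR = 21, 0.2, 0.5
Q = {(s, a): 0.0 for s in range(1, N + 1) for a in range(1, min(3, s) + 1)}

def moves(s):
    return range(1, min(3, s) + 1)

def greedy(s):
    return max(moves(s), key=lambda a: Q[(s, a)])

for _ in range(20000):                       # self-play training games
    s = random.randint(1, N)
    while s > 0:
        a = random.choice(list(moves(s))) if random.random() < EPS else greedy(s)
        s2 = s - a
        # Value to the mover: a win now, or one minus the best the opponent can then do.
        target = 1.0 if s2 == 0 else 1.0 - max(Q[(s2, a2)] for a2 in moves(s2))
        Q[(s, a)] += LR * (target - Q[(s, a)])
        s = s2

# The learned policy leaves the opponent a multiple of 4 whenever it can
# (positions 8 and 12 are lost anyway, so the choice there is arbitrary).
print({s: greedy(s) for s in range(5, 13)})
```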

But were AI so powerful, then why has it not beaten any top go players yet?
Because Go is too simple. Winning strategies don't depend upon calculating the ideal ways to use the most and least valuable pieces in addition to the configuration and evolution of particular games. It's just the configuration and evolution of games that matter. Which means that it is just the patterns that matter, and computers suck at pattern recognition while humans are excellent here.

The calculations I've seen for the total number of moves for both chess & go have addressed the fact that many possible games wouldn't be played, & discounted the total accordingly (by many many orders of magnitude).
Which calculations would these be? I've generally seen only the state-space or search-space, not a reduced configuration space that discounts improbable "boards." In fact, often enough the total number of possible moves includes illegal ones.

If you haven't yet tried go, I recommend it.
I have. I'm not a big fan, but I like it more than chess. Actually, I'm currently into Othello/Reversi. I like cellular automata, and I suppose that is related to why I like games in which the moves and rules are locally very simple but allow for global complexity.
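If you have never played with them, here is about the smallest possible example of that "locally simple, globally complex" point: an elementary cellular automaton (Rule 110), whose update rule is nothing but a lookup table over three neighbouring cells, yet whose global behaviour is famously complex (Rule 110 is even Turing-complete).

```python
RULE = 110                                   # the rule number doubles as the lookup table
WIDTH, STEPS = 64, 32

cells = [0] * WIDTH
cells[WIDTH // 2] = 1                        # start from a single live cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = [(RULE >> ((cells[(i - 1) % WIDTH] << 2)
                       | (cells[i] << 1)
                       | cells[(i + 1) % WIDTH])) & 1
             for i in range(WIDTH)]
```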
I'll recommend you for a raise
That would be great. Better yet, I should have just followed a career in engineering. Do you know of any job openings for engineering work by those whose training involves mostly useless theoretical research and a general avoidance of pragmatic, practical results?
 

Revoltingest

Pragmatic Libertarian
Premium Member
That would be great. Better yet, I should have just followed a career in engineering. Do you know of any job openings for engineering work by those whose training involves mostly useless theoretical research and a general avoidance of pragmatic, practical results?
I can't help with the job search....I've been out of touch with the real world for too long now.
And our argument is fizzling, as I become increasingly aware that we agree too much.
 