Actually, the rules of Go being far simpler is a sign of their being less restrictive, thereby imposing fewer limitations on each move.
But in Chess the restrictions on moves are partitioned by piece type and integral to strategy. In Go, the pieces can all be treated as essentially identical (this is somewhat related to, but by no means precisely analogous to, Bose-Einstein vs. Fermi-Dirac statistics). In Chess, each piece's movement is restricted by a rule governing how that kind of piece can move, meaning that the overall configuration space of winning moves must treat pieces as distinct during the entire evolution of the game. Each kind of piece plays a particular role in the overall strategy that no other type of piece can occupy, regardless of position or time. Thus combinatorial, decision, and other measures of complexity are all increased AND (more importantly), unlike Go, we can show absolutely that no tractable strategy, method, or algorithm can possibly be found for the generalized Chess game:
"Generalized chess is thus provably intractable, which is a stronger result than the complexity results for board games such as Checkers, Go, Gobalng, and Hex"
Fraenkel, A. S., & Lichtenstein, D. (1981). Computing a perfect strategy for n×n chess requires time exponential in n. In S. Even & O. Kariv (Eds.), Automata, Languages and Programming (pp. 278-293). Springer.
(I thought this quote bore repeating).
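To put a toy number on the combinatorial point (my own back-of-the-envelope illustration, not from the paper): treat every piece as unique, which slightly overstates Chess (same-colour pawns are interchangeable) but shows the direction of the effect. The board size and piece count below are arbitrary.

```python
# A back-of-the-envelope illustration (mine, not Fraenkel & Lichtenstein's)
# of how piece identity inflates raw configuration counts.
from math import comb, perm

points = 64   # arbitrary board size for the toy count
pieces = 16   # arbitrary number of pieces/stones to place

# Go-like: identical stones, so only WHICH points are occupied matters.
go_like = comb(points, pieces)

# Chess-like: every piece unique, so WHO sits WHERE matters as well.
chess_like = perm(points, pieces)

print(f"identical pieces: {go_like:.3e}")                # ~4.9e14
print(f"distinct pieces:  {chess_like:.3e}")             # ~1.0e28
print(f"ratio (= 16!):    {chess_like / go_like:.3e}")   # ~2.1e13
```

Real counts are of course constrained by legality, captures, and the fact that Go has two colours; the only point is the 16! factor that piece identity contributes all by itself.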
This creates more possibilities per move.
It creates more possible moves per game, true. And, what's more, MANY more possible moves. But
1) The vast majority of those possible moves are never made and never even probable
&
2) It severely limits the computations that determine how given pieces are used, because it makes them indistinguishable in general (they are distinguished only by their position at particular times); see the sketch just below.
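To sketch point 2 (again my own toy code, not anyone's engine): a Go move generator applies one uniform rule to every point, while a Chess generator must dispatch on the identity of whatever sits on each square.

```python
# Toy contrast (a sketch, not a real engine): move generation in Go is one
# uniform rule; in Chess it must dispatch on what KIND of piece is where.

def go_moves(board):
    """Every empty point is a candidate move for either player.
    (Real Go also filters suicide and ko, but the rule stays uniform.)"""
    return [(r, c) for r, row in enumerate(board)
                   for c, point in enumerate(row) if point == "."]

def chess_moves(board, piece_rules):
    """Each occupied square generates moves via its piece's own rule.
    `piece_rules` is a hypothetical map like {"N": knight_rule, ...}."""
    moves = []
    for r, row in enumerate(board):
        for c, piece in enumerate(row):
            if piece in piece_rules:
                moves.extend(piece_rules[piece](board, r, c))
    return moves
```

The dict of per-piece rules is the unavoidable part: in Chess, identity determines computation; in Go, position alone does.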
This creates greater complexity when combined with other factors.....
- More total moves per game.
This generates incredible complexity for humans by bringing settled & even dead groups back into play.
Actually, one of the reasons Go has been of such great interest for AI/computational intelligence is that it isn't complex for humans. The best algorithms have historically performed extremely poorly: humans can recognize strategic or "winning" patterns easily, but writing code to do the same is incredibly difficult.
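As a taste of why hand-coding that recognition is hard, here is a minimal sketch of my own (assuming a board encoded as a list of lists with 0 = empty, 1 = black, 2 = white): even the most trivial "eye" shape takes explicit code, and everything realistic beyond it gets brittle fast.

```python
# Minimal sketch of the hand-coded pattern matching classical Go programs
# leaned on: find the most trivial "eye", an empty point whose four
# orthogonal neighbours are all stones of one colour.
# Board encoding (an assumption): 0 = empty, 1 = black, 2 = white.

def find_simple_eyes(board, color=1):
    n = len(board)
    eyes = []
    for r in range(n):
        for c in range(n):
            if board[r][c] != 0:
                continue
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if all(0 <= nr < n and 0 <= nc < n and board[nr][nc] == color
                   for nr, nc in neighbours):
                eyes.append((r, c))
    return eyes

board = [[0, 1, 0, 0],
         [1, 0, 1, 0],
         [0, 1, 0, 0],
         [0, 0, 0, 0]]
print(find_simple_eyes(board))  # [(1, 1)]
```

A human sees that eye instantly; the code above already misses eyes on the edge, false eyes, and whether the surrounding group is even alive, and each of those fixes multiplies the special cases.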
This sometimes requires considering a great many possible consequences for each move.
Not as many as in Chess. Pieces can be treated identically, globally, for every move; only the overall pattern really matters, because the pieces are for all intents and purposes indistinguishable.
How to measure complexity?
This is an EXCELLENT question. Personally, I find that algorithmic complexity (perhaps the most used metric) is woefully inadequate, and preferred only because of the focus on computability in AI and similar fields, as well as the difficulty of formulating a measure of complexity that takes into account that which isn't "syntactic" or purely meaningless (e.g., conceptual measures, or those which involve decisions that can't yet be formalized).

I also believe in, and have been trying to develop a mathematical model of, an inverse relationship between the ability of systems (ants, bees, fish, neurons, etc.) to engage in nearly-zero-lag synchronous collective behavior and the uniqueness (in particular, the intelligence) of the constituents/components of those systems. In other words, the complex behavior of, e.g., ant colonies is possible only because ants are stupid, while social animals like chimps or humans can't engage in such complex coordinated activity because they are intelligent. One problem is that this means comparing the complexity of intelligent systems composed of stupid, simple "parts" vs. the simplicity of stupid systems composed of intelligent, complex parts.
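For what the algorithmic-complexity metric looks like in practice (a standard textbook trick, not part of my model above): Kolmogorov complexity is uncomputable, so one common proxy is compressed size, say via Python's zlib. The sketch below also shows why the metric feels so inadequate: it rates pure noise as maximally complex.

```python
# Crude but standard proxy for algorithmic (Kolmogorov) complexity:
# approximate it by compressed length, since K(x) itself is uncomputable.
import os
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size; lower = more regularity found."""
    return len(zlib.compress(data, 9)) / len(data)

samples = {
    "ordered":   b"ab" * 5000,                                    # highly regular
    "patterned": bytes(random.choice(b"abcd") for _ in range(10000)),
    "noise":     os.urandom(10000),                               # incompressible
}
for name, data in samples.items():
    print(f"{name:9s} {compression_ratio(data):.3f}")
# Typically: ordered << patterned < noise (close to 1.0). By this measure a
# burst of white noise is maximally "complex", which is precisely the sort of
# purely syntactic verdict criticised above.
```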
The overall problem of characterizing complexity is far more difficult, and too often it is defined operationally and pragmatically.
It's above my pay grade to say.
I don't know either, and it IS my pay grade (and, FYI, doesn't pay much).