Possible positions
By Tom Brown
The ideas and arguments described here are not unique to Chess: they can be applied to many board games, Checkers for instance, and probably even more generally, for example to complicated puzzles such as Rubik's Cube.
A legal game g is a "path", or "trail", of distinct positions on a chessboard, evolving move after move under the official rules of the game of Chess. Any position that can "arise" in, or be "visited" by, some legal game(s) will be called a possible position. Games and positions thus form two distinct sets.
We will call G the set of all legal games, and P the set of all possible positions. Both sets are bounded, as can be shown with elementary combinatorics, and are therefore finite. It then follows that each of the two counts is a fixed (yet not known!) integer.
In principle one could determine whether there are as many possible positions as legal games, or say which number is larger, with an essentially simple algorithm that actually counts them. In practice the exercise is futile and hopelessly doomed: the orders of magnitude involved are practically infinite.
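The "essentially simple algorithm" for counting possible positions can be sketched as a breadth-first search from the starting position. The sketch below runs it on a hypothetical toy game (positions are integers 0 to 10, a move adds 1 or 2); the function names and the toy rules are illustrative assumptions, and for real Chess the search space makes this computation hopeless.

```python
from collections import deque

def count_reachable(start, legal_moves):
    """Breadth-first search counting every position reachable
    from `start` via the supplied move generator."""
    seen = {start}
    queue = deque([start])
    while queue:
        pos = queue.popleft()
        for nxt in legal_moves(pos):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen)

# Hypothetical toy "game": positions are 0..10; a move adds 1 or 2.
def toy_moves(pos):
    return [p for p in (pos + 1, pos + 2) if p <= 10]

print(count_reachable(0, toy_moves))  # -> 11 (positions 0 through 10)
```

Counting legal games would require enumerating paths rather than positions, which grows far faster still.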
Please note that in this problem one may not assume, directly or indirectly, that for each possible position there is a unique legal game, nor that a game contains only one position. That would be assuming, and starting out from, exactly what we want to prove! You must realise that a given position might arise from different games (paths), and also, obviously, that a game consists of many positions.
The proof could serve as a problem for a good university undergraduate, using only concepts from very basic set theory. It proceeds quite indirectly, however, and is not obvious. Unfortunately the word processing here must remain very crude, with only standard text.
The result is surprising: one can demonstrate that there are exactly as many legal games as there are possible positions.
Proof
Since we know that both G and P are finite, we may enumerate (and count) their members. This means we may write each set out as a numbered list of its elements, thereby associating a distinct number with every element. The actual order, or any other relationship, is of no importance.
A function f from G into P is defined by f(g_i) := p_i, which in effect says that the i-th element in our enumeration of G is mapped to the i-th element of P (provided the latter exists!).
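The index mapping f(g_i) := p_i can be sketched concretely by pairing two enumerated lists. The element names below are stand-ins of my own, not actual games or positions:

```python
# Two enumerated finite sets, written out as numbered lists.
G = ["game_1", "game_2", "game_3"]   # stand-ins for legal games
P = ["pos_1", "pos_2", "pos_3"]      # stand-ins for possible positions

# f pairs elements that share the same enumeration index,
# mirroring the definition f(g_i) := p_i.
f = dict(zip(G, P))
print(f["game_2"])  # -> pos_2
```

zip stops at the shorter list, so f is only defined where the matching index exists, just as the definition requires.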
Suppose G has m elements and P has n. There are two cases: m ≥ n and m ≤ n.
For the first case, note that f is defined on the whole of G, since each i ≤ m. From the enumerations and the construction of f, it is clear that f is one-to-one. That f is onto follows simply from the definition of P itself. Thus G and P have the same number of elements, namely m.
The argument for the other case, i.e. when m ≤ n, is symmetrical. A function k from P into G is defined by k(p_j) := g_j. It is one-to-one and onto G, again because of the definitions of P and G, and once more n = m.
QED