Monday, 2 April 2012

Michael de la Maza Statistics

It has been brought to my attention that the results of all Michael de la Maza’s games are available on the USCF website:

Judging from the names of the tournaments, these results appear to be sorted into date order, with the latest first.  I copied the results into a spreadsheet, maintaining that order. I deleted the games against unrated opponents, which left me with 190 games.  I divided these games into batches of 19 games, in their assumed time order.  I worked out the average of the opponents’ pre-event ratings and de la Maza’s scores, for each of these batches.  I used these values to calculate a rating for de la Maza, using the Elo formula:

d = -400*log10(1/s - 1)

where s is de la Maza’s score and d is the difference between his calculated rating and the average rating of his opponents.  The results are summarised in the table:

    Oppo  Score   MdlM
10  1920  0.7895  2149
 9  1824  0.7368  2003
 8  1801  0.7105  1957
 7  1600  0.8947  1972
 6  1599  0.6579  1713
 5  1477  0.6842  1612
 4  1579  0.6053  1653
 3  1494  0.6579  1608
 2  1378  0.5263  1396
 1  1496  0.3158  1362
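
As a check, the MdlM column can be reproduced from the other two columns (a short Python sketch; because the tabulated averages are rounded, the final digit can differ by a point):

```python
import math

def performance(opp_avg, s):
    """Elo performance rating: opponents' average plus
    d = -400 * log10(1/s - 1), where s is the fractional score."""
    return opp_avg - 400 * math.log10(1 / s - 1)

# (Oppo, Score) pairs from the table above, latest batch first.
batches = [
    (1920, 0.7895), (1824, 0.7368), (1801, 0.7105), (1600, 0.8947),
    (1599, 0.6579), (1477, 0.6842), (1579, 0.6053), (1494, 0.6579),
    (1378, 0.5263), (1496, 0.3158),
]

for opp, s in batches:
    print(f"{opp}  {s:.4f}  {performance(opp, s):.0f}")
```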

I fitted a least squares line to the calculated ratings and plotted them on a graph:

(N.B. You can click on this graph to enlarge it.)  The red dots are the calculated ratings, and the green dots are the corresponding points on the least squares line.  The standard deviation of the slope of this line (calculated from the least squares residuals) is nearly twelve times smaller than the slope itself, so it is extremely unlikely that the improvement was caused by chance alone.  Five sigma is good enough for CERN!
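
The slope and its standard deviation can be recomputed from the ten batch ratings in the table, using the standard least squares formulas (a sketch; the batch numbers 1-10 serve as the x values):

```python
import math

# Calculated batch ratings in time order (batch 1 first), from the table above.
ratings = [1362, 1396, 1608, 1653, 1612, 1713, 1972, 1957, 2003, 2149]
n = len(ratings)
xs = range(1, n + 1)

x_bar = sum(xs) / n
y_bar = sum(ratings) / n
sxx = sum((x - x_bar) ** 2 for x in xs)
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ratings))

slope = sxy / sxx                         # Elo points per 19-game batch
residuals = [y - (y_bar + slope * (x - x_bar)) for x, y in zip(xs, ratings)]
sse = sum(r * r for r in residuals)

rms = math.sqrt(sse / n)                  # root-mean-square residual
se_slope = math.sqrt(sse / (n - 2)) / math.sqrt(sxx)

print(round(slope, 1), round(rms), round(slope / se_slope, 1))
```

The slope works out at about 86 points per batch, and slope/se comes to about 11.7, which is the "nearly twelve" sigma ratio quoted above.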

(The square root of the mean of the squares of the least squares residuals is 59 Elo points.  I estimated this value to be 112 Elo points using my coin tossing model.  Since the proportion of draws in de la Maza’s games was only about 10%, I assumed a single random selection between a win and a loss for each game.  I used the fractional score s as the success probability for this random selection.  I calculated the standard deviation of the points score as sqrt[19s(1-s)], and found the corresponding change in the Elo rating.  The square root of the sum of the squares of these rating changes is likely to be an overestimate, but probably not by almost a factor of two.)  [See my later article Rating by Maximum Likelihood for a more accurate rating calculation, which gives the performance for the last 19 games as 2189.]
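
The coin-tossing estimate can be sketched as follows. This version converts the score deviation into rating points through the local slope of the Elo curve (a linearisation, so it gives roughly 90 points rather than the 112 quoted above; the exact figure depends on how that conversion is done):

```python
import math

scores = [0.7895, 0.7368, 0.7105, 0.8947, 0.6579,
          0.6842, 0.6053, 0.6579, 0.5263, 0.3158]
n_games = 19

deltas = []
for s in scores:
    sigma_points = math.sqrt(n_games * s * (1 - s))  # sd of the points score
    sigma_s = sigma_points / n_games                 # sd of the fractional score
    # d(s) = -400*log10(1/s - 1), so dd/ds = (400/ln 10) / (s*(1-s)).
    dd_ds = (400 / math.log(10)) / (s * (1 - s))
    deltas.append(dd_ds * sigma_s)

rms = math.sqrt(sum(d * d for d in deltas) / len(deltas))
print(round(rms))  # about 92 with this linearised conversion
```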

The graph shows that de la Maza progressed at a constant rate, improving by over 800 points in three years, with no loss of steam at the end.  Here is his USCF Rating History Graph:

On this graph, de la Maza's final rating is 2041, whereas on mine it is 2149 - over 100 points higher.  This is because the final point on my graph is derived from just the results of his last 19 games, whereas the final point on the USCF graph includes earlier (less good) results.  If he had continued to play at the same level, these earlier results would have dropped out of the calculation, and his rating would have risen to my value - or more if he had continued to improve.

In his second Chess Cafe article de la Maza quoted Fischer in Chess Life: “Getting from Expert to Master is a difficult transition, but getting to Expert is about grasping tactics.” In the US, you become a National Master when you reach a USCF rating of 2200.  De la Maza was almost there.  By his account, he achieved that success by just playing chess and practising tactics.

See my next article for a comparative performance by the youngest US National Master.


  1. De la Maza's tournament activity and training regimen have not been duplicated by his followers, I suspect. It has been asserted that he was unemployed during his ascent. However, tournament entry fees and travel require some sort of income.

    His score of 31% against opponents rated over 2000, although based on too few games to be statistically significant, is low enough to raise doubts that he would have been able to maintain his rating if he had continued playing.
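
For reference, the Elo formula used in the article puts a 31% score against 2000-rated opposition at a performance of only about 1861 (a rough check; as noted, the sample is too small to be significant):

```python
import math

s = 0.31                            # score against 2000+ rated opposition
d = -400 * math.log10(1 / s - 1)    # rating difference implied by s
print(round(2000 + d))              # performance against a nominal 2000 average
```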

  2. I do not expect that anyone has duplicated de la Maza’s full training regimen very closely. I expect that there have been young adults who have trained as much as him, and played as much chess, but without the same results. For his ideas to have value for others, it is necessary for a practical implementation to work better than the traditional alternatives. Setting impossibly high standards and saying “you did not get all the details right” is no use!

    Without a European social security system, de la Maza would have needed his own money. Even in Europe, most unemployed people would train for a paying job rather than chess. Unless he was very wealthy, his motivation is difficult to understand.

    Using pre-event ratings (i.e. those before the =>), I make his score against 2,000+ players 2 out of 11, i.e. 18%. I expect that he would soon have been in the next section up, and struggling. I know how it feels!

  3. Bright Knight, you are correct. I excluded those over 2500 by mistake. My interest concerns how he would have fared against those in his rating class. There should be no question that he would not have fared well against 2500+ players without opening preparation and positional knowledge. It also seems probable that he would have found fewer opportunities to profit from tactics against opponents over 2000. If someone could produce the game scores from his games against 1900+ opposition, it would prove interesting.

  4. I found a list of the 16 best improvers among the 14,580 members of the Dutch chess federation from September 2000 to September 2001.
    I will post this list within a few days (it's a picture).
    No. 1 improved from 1488 to 1879 (391 points) within a year.
    The highest rated (no. 15) improved from 1992 to 2250 (258 points) within a year.

  5. What we want to know here is whether doing X achieves Y, not whether doing X plus being in the top 0.001% achieves Z. Unfortunately, we all know that the norm for adults is to spend a lot of time on chess and gradually get worse!

  6. I aggregated the results for the last two points on the graph, and sorted them into rating order. For these results, his rating was 1996 against the weaker 50% of his opponents (averaging 2038), and 2173 against the stronger 50% (averaging 2038).

    The average rating of his stronger opponents was pushed up by three games against a 2500 player. However, leaving out these results bumps up his score, and actually increases his performance to 2199.

  7. I have added a mug shot and improved the summary and conclusions.

  8. I look at Sevian's statistics, and I look at De La Maza's statistics.

    Sevian's statistics tell me that he has "played up" a level a lot during his time in chess. He has virtually as many games against 1400 players as he has against 2400.

    I would assert that I would jump 200 points in about 4-5 months, if I could trot around the USA and only play in Class A sections or better - 6 months max.

    DeLaMaza's statistics interestingly enough tell me a different story. He was winning at all levels and not playing as much as I had thought. Therefore, I will argue that his specialized studies and inherent "brains" going into rated chess (as well as being able to play against lots of higher-rated players) meant that he could jump up levels of strength very quickly, and his record IMHO is MUCH more impressive than Sevian's. His record tells me that he was improving mostly away from the board, by his studies/method/intense studying, and not by playing in tournaments as much as I had thought.

  9. Here is the corresponding table for Sevian:

        Oppo  Score  Sevian
    10  2022  0.5909  2085
     9  2018  0.5758  2071
     8  1943  0.5758  1997
     7  1875  0.6364  1972
     6  1807  0.5303  1828
     5  1787  0.5909  1851
     4  1683  0.5606  1726
     3  1528  0.5303  1549
     2  1409  0.4848  1398
     1   948  0.7273  1118

    He was only a little stronger than his competition. I have read that there is no strong correlation between rating and experience. Plenty of people play lots of chess and do not improve.

    De la Maza’s results look too good to be true. His improvement was very nearly as fast as that of the fastest juniors, but unlike them he did not show any signs of slowing down. He says he got to very nearly Master strength just by practicing tactics, which Fischer said was not possible, even with his talent. Others have not reproduced MdlM's results.

    1. To me the most suspicious are two of MDLM's chess improvement periods:

      1) #7 (1600) ---> 0.8947 RPF = 1972
      2) #10 (1920) ---> 0.7895 RPF = 2149

      I have tested some of these rating changes, and how hard it is to beat players with a 75, 80, 85 or 90% winning ratio.

      For example look here (at the table)

      It means that in the first case, a 0.89 winning ratio means he played 351 rating points stronger than his opponents' average; and in the second case, a 0.79 ratio means he played 230 rating points stronger than his opponents' average.

      I might be able to replicate his first "sky-rocket" result, but it would be an extremely demanding task. In other words, you have to be two or three classes (taking a class as a 100-point band of rating) stronger than your opponents. If your opponents are strong class D or weak class C players, you have to play like a very strong B or medium A class player to outplay them at a 90% ratio. And what if your opponents are very strong B or weak A class players? To beat them beyond doubt (as MDLM did in winning the World Open), scoring an 80% ratio, you would have to play like a very strong expert or a medium master! Is it possible to achieve such skill just with an "extensive tactical program" (like the seven circles or something similar)? You probably know the answer. If any of you believe that masters know just tactics, you are simply naive. It is the same as thinking that (strong amateur or semi-professional) mathematicians just know calculation very well (and nothing else).
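
For reference, the standard Elo formula converts these winning ratios into rating advantages as follows (a quick check; the figures quoted above appear to come from a slightly different table or model):

```python
import math

def elo_diff(score):
    """Rating advantage implied by an expected score, per the Elo formula."""
    return -400 * math.log10(1 / score - 1)

for s in (0.75, 0.80, 0.85, 0.90):
    print(f"{s:.0%} -> +{elo_diff(s):.0f} Elo")
```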

      When I was younger, I achieved two 100% scores. The first was in 1998. I played against kids and young teenagers (the average rating of my opponents was about 1200-1300) in an 11-round tournament. I won all the games, but guess what: I had previously played many games and a few tournaments at faster time controls, and in most of them I scored about 5.0 or 5.5 points out of 9 rounds against players rated 1500-1900. Most often my rating performance was in the range 1550-1700 (at the G/15 time control).

      In reality I was about a 1500-1550 rated player in those days, and that is why I won all the games in my first classical time control OTB tournament. I was simply two or three classes better than those kids and teenagers. My only disadvantage was the lack of longer time control OTB tournament games. And after I scored 4/4 (and later 6/6 and 8/8), all the participants in my group (it was the weakest one) knew that I was going to win the group. The only question was: is there anyone who can take a point off me?

      In those days I was "unrated" (i.e. rated as 1000), and my final rating performance (RPF) after 11 rounds was 1543. It may sound really impressive when you take into account that it was 543 points more than my initial rating and 300-350 points more than most of the participants in this group. However, the truth is that I was the strongest player in the group but did not have the proper rating (I should have had a 1400 rating - the 4th category). I have never repeated such an outstanding result (even though I once scored 9/9 about 8 years later, that was against really quite weak opposition and with quite a lot of luck)...

    2. Now please try to imagine how strong MDLM must have been to make an impressive 8 points out of 9 against players under 2000. He had to be rated at least 2150-2200 (in real strength)... or he had amazing LUCK at such a prestigious event! If any of you believes there is luck in chess, you should think about taking up another game to master ;) :).

      To sum up: I am more than sure that MDLM's method of "circles" works only for geniuses like him. I strongly believe he published the book because he was "on top" after winning the prestigious World Open. If you could earn more money without any real effort (writing a book of that quality is no more than a month of work), would you pass it up? Take note that lucky stars may not shine for long - especially when you are the centre of media attention.

      The bad news (but honest) is: most of us mortals have to search for other methods - most often by working hard and testing which ones work for us and which do not.

  10. People play lots of chess and do the same things over and over again. If you go to work and do the same thing over and over again, you collect a reliable source of income. In chess, reliability means that one is reliably stuck with the same rating.

    Tebow got kicked out and rejected by other QBs and ex-QBs because he was not "What a quarterback should look like", not a "real QB" even though he was better than most of them.

    MDLM got the same knock: he made that rating jump and did not bother to tell us how he did it. But then we had to have Masters and such jump out and say that that is not what "a real chessplayer" should look like.

    There is always a crowd ready to get the torches and pitchforks, and down a "messiah" as being a heretic, because that person doesn't fit the mold.

  11. He did tell us in great detail how he did it. The problem is that his story does not look credible. Perhaps there is such a thing as the talent to be a World Champion and he had it. László Polgár thought not, and appears to have proved his point.

    Why does a PhD in Computer Science from MIT have trouble getting a job?

    What was under the coat of the specialist in Artificial Intelligence?

    I recalled that the first time we played was a Tuesday evening at the Metrowest Chess Club, and that he had come to play dressed in a hooded black cloak. It was mysterious. The hood stayed up during our game, and to add to the mystique, his left arm never moved from under its cover.

    It seemed as though I was playing the Headless Horseman, using his hidden arm to hold on to his horse under the table. As I recalled this, de la Maza leaned back in his chair, and laughed out loud like a little boy. It took him a few moments to settle down. When he did, he explained.

  13. What is the temperature in this picture:
    and here?

  14. Read here about the first instance of using a wearable computer in a casino, by a professor at MIT.
    Read here about an MIT team hacking Las Vegas in the 90s (and here).

    And here you may see what such a counting computer (George) looked like:

    At least 70 people have been members of the MIT Blackjack Team (see: )

  15. I did not know that the prizes for winning rated sections in US tournaments were as much as $10,000. A silicon friend would certainly have been a feasible method of achieving a large and rapid adult improvement with a very tactical style.

    People would believe that you just practised tactics on a computer - because that is exactly what they want to hear! None of that hard work studying openings, strategy, endgames and countless master games. Just a thousand tactics problems. Do not bother analysing the position completely from the diagram. Just guess the moves one by one, and see what the computer accepts. Repeat seven times. What could be easier?


  17. Clearly, he had experience of computer chess programming.

    I do not doubt that he would have been able to put together a workable package. He could probably just have cross-compiled an open source program onto suitable hardware and added the necessary interface. The Blackjack people used toe-operated keys. A miniature solenoid could have provided a touch-sensitive Morse code style output. I take your point about the temperature in the tournament room. He was wearing a big jacket when everyone else was in T-shirts.

    I do not believe his story, for the reasons that I have given. Furthermore, the description of his CT-ART training does not look right for a rapidly improving player nearing 2200.

    I am sceptical about the theory that he was near master strength when he started. Nobody has come forward and said that he was a strong junior under another name or in another country. He could have learned from books and playing against computers. Internet chess has also been available since 1992. Nonetheless, I have difficulty in imagining a kid doing that.

    The computer theory looks the most likely one to me, and also explains his playing style.

  18. I expect that he could have got away with four keys for four fingers in one pocket and four solenoids for four fingers in another. That would not work in a casino, but perhaps it would for a chess tournament in those days. Would the organisers have reserved the right to disqualify anyone they suspected of cheating? Would the players have accepted that? If not, what could the tournament organisers do? They have no right of search. Would the police search someone just because a tournament organiser said that a player put his hands in his pockets before and after every move?

  19. "One of the earliest known cases of using technology to cheat was in 1993 at the World Open. An unrated newcomer wearing headphones used the name "John von Neumann" (matching the name of a famous computer science pioneer), and scored 4½/9 in the Open Section, including a draw with a grandmaster and a win over a 2350-rated player. This player seemed to have a suspicious bulge in one of his pockets, which appeared to make a soft humming or buzzing sound at important points in the game. When he was quizzed by the tournament director, he was unable to demonstrate even a rudimentary knowledge of some simple chess concepts, and he was disqualified."


    For chorded keyboards see:

  20. If I had written a learning chess program, I would be very interested in how it performed "for real". Maybe a friend has a nice device which makes it possible to use a computer at a chess club without the opponents recognising that they are playing an engine. This would be a "perfect" Turing Test ( ). The Turing Test is an important test, well known in the field of AI.
    Then it would be possible to see how good this engine might get... and this experiment might develop the wrong way...

  21. The cheats who were caught were either very inept, or not serious about hiding what they were doing. A computer program should do better if its opponents do not know they are playing a computer, so the program does not have to be particularly sophisticated to reach 2200.

    As far as your Turing Test is concerned, that is easy to do. Use a computer for Internet chess, keeping the playing strength well down, and see how long it takes for you to be found out. If you want to avoid being caught at a higher level, make your own moves, unless the computer says there is something much better. A boost of a few hundred points would be difficult to detect.

    This is how cheap and easy it is with current technology:

    I expect that similar kits were available 15 years ago. ARM chips have been around for a long time.