Friday 1 July 2011

The CHP Experiment

Following on from my success with the Bain and Woolum Experiments, I decided to conduct a similar experiment with the next three books on Dan Heisman’s list:

Chess Strategy for Kids - Jeff Coakley
Back to Basics: Tactics - Dan Heisman
The Winning Way - Bruce Pandolfini

Since Coakley-Heisman-Pandolfini is a bit of a mouthful, I have abbreviated it to CHP - not to be confused with Combined Heat and Power!  I split the problems into six batches, which I labelled A to F.  I took the books in the order Heisman, Coakley, Pandolfini, and made batch A the 1st, 7th, 13th problem and so on.  Batch B was the 2nd, 8th, 14th problem and so on.  The other batches were constructed in the same way.  Coakley presented some difficulties here.  I used only the problems that were clearly numbered as exercises, and discovered too late that some of the problems at the end could be used both with White to move and with Black to move.  I used only the White to move versions.  I missed out the first problem in Heisman to make the number of problems divisible by six.  I used 433 problems from Heisman, 185 problems from Coakley and 150 problems from Pandolfini.  There were 768 problems in all, i.e. 128 problems per batch.  In Pandolfini, the page header gives the solution, so I had to make a mask to cover it up.  See the previous section for the modifications that I made to the Empirical Rabbit Timer to cope with multiple books, irregular numbering systems, and problems scattered about within the books.  The early part of my schedule was the same as for the Woolum Experiment, and is shown in the table after the sketch below:
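
As an illustration, here is a minimal Python sketch of the batch construction just described; the problem identifiers and the numbering within each book are my own and purely illustrative.

# Split the 768 CHP problems into six interleaved batches A to F.
# The books are taken in the order Heisman, Coakley, Pandolfini.
problems = ([("Heisman", n) for n in range(1, 434)]        # 433 problems used
            + [("Coakley", n) for n in range(1, 186)]      # 185 problems used
            + [("Pandolfini", n) for n in range(1, 151)])  # 150 problems used

# Batch A takes the 1st, 7th, 13th, ... problems; batch B the 2nd, 8th, 14th, ...
batches = {label: problems[offset::6] for offset, label in enumerate("ABCDEF")}

for label, batch in batches.items():
    print(label, len(batch))   # each of the six batches contains 128 problems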

         Sat  Mon  Wed  Fri  Fri  Wed  Mon
Week 1:  A1,  A2,  A3,  A4                  Days: 1-7
Week 2:  B1,  B2,  B3,  B4,  A5             Days: 8-14
Week 3:  C1,  C2,  C3,  C4,  B5             Days: 15-21
Week 4:  D1,  D2,  D3,  D4,  C5,  A6        Days: 22-28
Week 5:  E1,  E2,  E3,  E4,  D5,  B6        Days: 29-35
Week 6:  F1,  F2,  F3,  F4,  E5,  C6        Days: 36-42
Week 7:                      F5,  D6        Days: 43-49
Week 8:                           E6,  A7   Days: 50-56
Week 9:                           F6,  B7   Days: 57-63

Here A1, A2, A3, ... are passes 1, 2, 3, ... of batch A, and similarly for the other batches.  As with Woolum, I did the first four passes of each batch at two-day intervals on a Saturday, Monday, Wednesday and Friday.  I again did the fifth pass of each batch on the following Friday, and the sixth pass two Wednesdays later.  For the first ten passes, the day on which each pass took place was again given by this table:

Pass: 1  2  3  4   5   6   7   8    9   10
Day:  1  3  5  7  14  26  50  96  185  355
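
(As a quick check, the short Python sketch below regenerates this row of days from the spacing rule described in the next paragraph; the function name is my own.)

def pass_day(pass_number):
    # Passes 1 to 3 follow at two-day intervals: days 1, 3 and 5.
    if pass_number <= 3:
        return 2 * pass_number - 1
    # From Pass 4 onwards, the day is 1.92^(Pass - 1), rounded to the nearest whole number.
    return round(1.92 ** (pass_number - 1))

print([pass_day(p) for p in range(1, 11)])
# -> [1, 3, 5, 7, 14, 26, 50, 96, 185, 355]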

From Pass 4 onwards, the pass takes place on Day = 1.92^(Pass - 1), rounded to the nearest whole number.  I had a heavy schedule on a Monday at this point, and decided to make Monday easier by repeating on the Sunday the 25% of the problems that had given me the most trouble on the Saturday.  (This did make Monday easier, but I do not believe that it was a good idea.  Although I did better with CHP than with Woolum on Pass 4, this advantage had almost completely evaporated a week later.  This result confirms that repetitions that are close together do very little to improve long term performance.  It is also possible that having to work a little harder on a repetition increases its instructional value.)  As with the previous experiments, incorrect solution times were counted as more than 30 seconds, irrespective of the actual time spent.  For the first three batches, I also repeated the 25% of the problems at which I had done worst on Pass 6, halfway between Passes 6 and 7.  These problems roughly equated to those that I got wrong or took me more than 5 seconds to solve.  I had already done this in the Woolum Experiment, where it seemed to help.  Here is a comparison of my performances on my first passes through the first and last batches of Bain and Woolum, and on my first batch of CHP:

[Chart: solution-time distributions for the first passes through the first and last batches of Bain and Woolum, and through CHP batch A.]

(0-5 denotes 0-4.999... seconds, and similarly for the other “buckets”.)   I did better on CHP A1 than I did on Woolum A1, but worse than on Woolum F1.  I believe that the reason for this is simply that CHP was harder overall.  Here is my performance on my first passes through batches A-F:

[Chart: solution-time distributions for my first passes through CHP batches A to F.]

The overall picture was still one of improvement, but this chart looks less convincing than that for Woolum.  However, the results do look more convincing when we look at the individual books separately - see below.  With batch F, I omitted the partial pass on the Sunday, and restricted Pass 3 to the 25% of the problems at which I did worst on Pass 2, which roughly equated to those that I got wrong or took more than 10 seconds to solve. This experiment proved to be successful in that my performance on batch F was essentially the same as that for the earlier batches:

[Chart: first-pass results for CHP batch F compared with the earlier batches.]

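For reference, here is a minimal sketch of how the "worst 25%" selection for the partial passes described above (the Sunday repeats, the repeats between Passes 6 and 7, and the restricted Pass 3 for batch F) might be computed; the function and field names are my own, and an incorrect solution is simply treated as taking more than 30 seconds.

def worst_quarter(results):
    # results: a list of (problem_id, seconds, correct) tuples for one pass.
    def effective_time(result):
        problem_id, seconds, correct = result
        # An incorrect solution counts as more than 30 seconds,
        # however quickly the wrong answer was given.
        return seconds if correct else max(seconds, 31.0)
    ranked = sorted(results, key=effective_time, reverse=True)
    quarter = max(1, len(ranked) // 4)
    return [problem_id for problem_id, seconds, correct in ranked[:quarter]]
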
Could I have omitted Pass 3 entirely?  Probably.  Pass 3 may not have a significant effect after I have completed several passes at wider intervals.  Here are the results of my first passes through each of the batches of CHP, for the Coakley problems:

[Cumulative chart: first-pass results for the Coakley problems, batches A to F.]

(The cumulative chart presents a clearer picture here with the smaller number of problems per batch.)  I bettered my best performance on Bain from the outset, but subsequently made no discernible improvement.  Here are the results of my first passes through each of the batches of CHP, for the Heisman problems:

[Chart: first-pass results for the Heisman problems, batches A to F.]

This chart presents a more favourable picture than that for CHP as a whole.  Here are the results of my first passes through each of the batches of CHP for the Pandolfini problems:

[Chart: first-pass results for the Pandolfini problems, batches A to F.]

Again, this chart looks more favourable than that for CHP as a whole.  Here are the results for Heisman + Pandolfini:

[Chart: first-pass results for the Heisman and Pandolfini problems combined, batches A to F.]

This chart too is more favourable than that for CHP as a whole, but perhaps not quite as favourable as for Heisman alone.  Coakley appears to have been too easy for me at this stage, and masked my progress by adding random variation to the results.  I have a good case here for discounting the Coakley results, but not the Pandolfini results, so the Heisman + Pandolfini chart above is probably the best indication of my progress.

For an update, see my later article: Basic Tactics Revision.

4 comments:

  1. Have you considered the approach suggested by Mnemosyne, http://www.mnemosyne-proj.org/? I am considering a project like yours using that (free) software. It would seem that its reinforcement model is more logical. Measuring its effectiveness is more difficult, however.

  2. Mnemosyne is based on an old version of the SuperMemo algorithm. In my article Lessons from SuperMemo, I conclude that SuperMemo is not suitable for learning chess tactics. More frequent scheduling of chess problems that I took longer to solve does not appear to help. These problems are often the least instructive (i.e. because they are too difficult for that stage in the learning process, too unlikely to occur in practice, or simply faulty). There is also huge variability in my solution times. The problems that take me longer on one occasion will not be the same as those that take me longer on another occasion. The reason for this is that my solution time depends on what catches my eye first, which can depend on which problems I have been looking at recently.

  3. Do you see any progress in your OTB or blitz games?
    I would expect that a blunder check of your games by Fritz would now show fewer blunders.

  4. Initially, I did not see a subjective difference. I still did not see things that I thought I should see, and being a few seconds faster on average is not really noticeable. I am now becoming noticeably more efficient. However, to see a rating difference, I would have to subtract two uncertain numbers to get a smaller number with even greater uncertainty. The road to tactical monstrosity is a long and rocky one, with many false prophets and few reliable milestones along the way!
