The Empirical Rabbit is one year old this month! It is time to summarise the highlights of the year, and indicate any changes to my perspective.
Year of the Rabbit
Lessons from Cognitive Psychology Nov 2010
This article summarises findings of cognitive psychology that are relevant to repeatedly solving chess problems. When the repetitions are closely spaced, performance improves rapidly, but long-term memory retention is poor. When the repetitions are widely spaced, performance improves slowly, but long-term memory retention is much better. Practical learning systems (e.g. those for learning foreign languages) space the initial repetitions closely, and gradually increase the intervals between successive repetitions.
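As a purely illustrative sketch (no particular training system is implied, and the function name and parameters are invented for this example), an expanding-interval schedule can be as simple as multiplying the previous gap by a fixed factor:

```python
def expanding_intervals(first_gap_days=1.0, factor=2.0, reviews=6):
    """Gaps (in days) between successive reviews, each `factor`
    times the previous one. With factor=2 the gaps are 1, 2, 4, ..."""
    gaps = []
    gap = first_gap_days
    for _ in range(reviews):
        gaps.append(round(gap))
        gap *= factor
    return gaps

print(expanding_intervals())  # [1, 2, 4, 8, 16, 32]
```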
Once Through vs. Repetition Nov 2010
This article remains a fair summary of the relative merits of solving an endless stream of new problems versus solving a smaller set of problems repeatedly. In practice, a compromise has to be made between practising on as many problems as possible and learning (and retaining) all the lessons that those problems have to teach. Another practical consideration is the limited supply of good problem sets - an infinite stream of problems is not available in the real world - so the real question is when to repeat!
7 Circles Dec 2010
This article remains a good summary of the limitations of Michael de la Maza’s 7 Circles method. MDLM solved 1,000 tactical chess problems seven times using the CT-ART 3.0 tactical trainer, halving the time interval between each repetition: 64 days, 32 days, 16 days, 8 days, 4 days, and 2 days. My experience was that by the time I had worked my way through Fred Reinfeld’s 1,001 Winning Chess Sacrifices and Combinations, and returned to the beginning, I did not remember much. My accuracy had improved a little, but I was not obviously faster.
The Reinfeld Experiment Jan 2011
I decided to use Reinfeld’s 1,001 to experiment with expanding repetition intervals. After much experimentation, I decided to solve each batch of problems repeatedly at intervals of 1 day, 2 days, 4 days, 8 days, 16 days... My progress is summarised by the table:
Repetition:        1     2     3     4     5     6     7     8     9
Percent Score:     85%   93%   95%   95%   97%   97%   95%   92%   90%
Minutes/Problem:   3.5   2.7   2.0   1.5   1.3   1.3   1.2   1.0   1.2
Day Number:        1     2     4     8     16    32    64    128   256
This method was clearly highly effective at improving my performance at solving the problems that I was practising, and helped me to refine my training methods.
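The day numbers in the table follow a simple rule - double the gap after every pass - which can be sketched as:

```python
def pass_days(passes=9):
    """Day number of each pass when the gap doubles after every pass:
    day 1, then gaps of 1, 2, 4, ... days."""
    days, gap = [1], 1
    for _ in range(passes - 1):
        days.append(days[-1] + gap)
        gap *= 2
    return days

print(pass_days())  # [1, 2, 4, 8, 16, 32, 64, 128, 256]
```

Equivalently, pass n lands on day 2**(n-1).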
The Bain Experiment Mar 2011
Inspired by Dan Heisman’s Novice Nook articles, I decided to apply my method to timed tests of the 388 simple tactics problems from John Bain’s Chess Tactics for Students. Again, I improved at the problems that I was practising - but what about problems that I had never seen before? Here is a histogram of the solution times for my first passes through three equally difficult problem sets:
I was astonished! The time limit method (see my October article Rating Points Revisited) estimates my improvement as 346 Elo points with a standard deviation of 134 Elo points, for a time limit of 5 seconds. Unfortunately, this very large apparent improvement appears to have been partly due to a high proportion of problems that were near duplicates.
Tactics Performance Measurement Apr 2011
This article discusses some important issues concerning tactics performance measurement. It has become clear that the only reliable method of measuring tactics performance is to compile statistics for a large number of reliably rated players. Methods based on giving problems ratings and treating them as opponents (e.g. those used by the online tactics servers and my time limit method) can give only a rough indication.
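For readers who want the arithmetic behind rating estimates of this kind, the standard logistic Elo model converts a score proportion p into a rating difference of 400·log10(p/(1-p)). The sketch below uses that textbook formula and a simple binomial error estimate - it is illustrative only, not necessarily the exact procedure behind the time limit method:

```python
import math

def elo_diff(p):
    """Rating difference implied by an expected score p
    under the standard logistic Elo model."""
    return 400 * math.log10(p / (1 - p))

def elo_diff_sd(p, n):
    """Approximate standard deviation of elo_diff when p is
    estimated from n independent problems (delta method)."""
    sd_p = math.sqrt(p * (1 - p) / n)
    return 400 / math.log(10) * sd_p / (p * (1 - p))
```

For example, solving 75% of a problem set within the limit corresponds to roughly +191 Elo relative to the problems' rating, and with only 100 problems the standard deviation of that estimate is about 40 points - which is why small samples give only rough indications.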
The Woolum Experiment May 2011
For my next experiment, I used 792 problems from the next book on Dan Heisman’s list, which was Al Woolum’s Chess Tactics Workbook, repeating the problems on days 1, 3, 5, 7, 14, 26, 50 and 96. Here are the results for my first passes through six equally difficult problem batches:
The time limit method estimates my improvement as 136 Elo points with a standard deviation of 39 Elo points, for a time limit of 5 seconds. The proportion of near duplicates in Woolum is much more realistic than for Bain, so this result is more convincing.
The CHP Experiment Jul 2011
My next experiment used the next three books on Dan Heisman’s list: Jeff Coakley’s Winning Chess Strategy for Kids, Dan Heisman’s Back to Basics Tactics, and Bruce Pandolfini’s The Winning Way. I used the same repetition schedule as for the Woolum Experiment. Here are the results for my first passes through six equally difficult problem batches:
The time limit method estimates my improvement on the 583 problems in Heisman + Pandolfini as 71 Elo points with a standard deviation of 44 Elo points, for a time limit of 5 seconds. This result was not as convincing as for Woolum, but it was still encouraging.
Time to Move Up a Gear Sep 2011
This article discusses the adjustments that I made to speed up my tactics training. Most importantly, I decided to finish Woolum at Pass 8, and omit Pass 3 from my schedule for my next two experiments. (I later decided to end CHP at Pass 8, and to end the next two experiments at what was now Pass 7.)
The Susan Polgar Experiment Sep 2011
My next experiment was based on Susan Polgar’s Chess Tactics for Champions. I divided the 570 problems into just four batches to save time, but at the cost of limiting my chances of accurately measuring my progress. Nonetheless, the time limit method estimates my improvement as 64 Elo points with a standard deviation of 33 Elo points, for a time limit of 5 seconds. To avoid schedule overload, I had to extend the interval between Pass 6 and Pass 7 from six weeks to eight weeks. Here are my results for Passes 2 to 7 of the first batch of problems:
There is some evidence of a drop-off in the number of these problems that I was able to solve in under 5 seconds, but the number that I failed to solve within 40 seconds fell steadily. (It may be significant that, between Passes 6 and 7 of this experiment, I was tackling harder problem sets in later experiments.)
The Ivashchenko 1b Experiment Oct 2011
This experiment was based on 539 problems from Sergey Ivashchenko’s Chess School 1b. I divided the problems into just four batches, which again limited my chances of obtaining a clear result. The time limit method estimates my improvement as about 70 Elo points at the longer time limits, but the standard deviation is at least as large - so we cannot draw any firm conclusions here.