**The Empirical Rabbit is one year old this month!** It is time to summarise the highlights of the year, and indicate any changes to my perspective.

[Image: 2011 Chinese Year of the Rabbit]

**Lessons from Cognitive Psychology Nov 2010**

This article summarises findings of cognitive psychology that are relevant to repeatedly solving chess problems. When the repetitions are closely spaced, performance increases rapidly, but long term memory retention is poor. When the repetitions are widely spaced, performance increases slowly, but long term memory retention is much better. Practical learning systems (e.g. those for learning foreign languages) space the initial repetitions closely, and gradually increase the intervals between successive repetitions.

**Once Through vs. Repetition Nov 2010**

This article remains a fair summary of the relative merits of solving an infinite stream of new problems versus solving a smaller set of problems repeatedly. In practice, a compromise has to be made between practicing on as many problems as possible and learning (and retaining) all the lessons that these problems have to teach. Another practical consideration is the limited supply of good problem sets - an infinite stream of problems is not available in the real world - so the real question is when to repeat!

**7 Circles Dec 2010**

This article remains a good summary of the limitations of Michael de la Maza’s 7 Circles method. MDLM solved 1,000 tactical chess problems seven times using the CT-ART 3.0 tactical trainer, halving the time interval between each repetition: 64 days, 32 days, 16 days, 8 days, 4 days, and 2 days. My experience was that by the time I had worked my way through Fred Reinfeld’s 1,001 Winning Chess Sacrifices and Combinations and returned to the beginning, I did not remember much. My accuracy had improved a little, but I was not obviously faster.

**The Reinfeld Experiment Jan 2011**

I decided to use Reinfeld’s 1,001 to experiment with expanding repetition intervals. After much experimentation, I decided to solve each batch of problems repeatedly at intervals of 1 day, 2 days, 4 days, 8 days, 16 days... My progress is summarised by the table:

| Repetition | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| Percent Score | 85% | 93% | 95% | 95% | 97% | 97% | 95% | 92% | 90% |
| Minutes/Problem | 3.5 | 2.7 | 2.0 | 1.5 | 1.3 | 1.3 | 1.2 | 1.0 | 1.2 |
| Day Number | 1 | 2 | 4 | 8 | 16 | 32 | 64 | 128 | 256 |
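The doubling rule behind this schedule can be sketched in a few lines of Python. This is my own illustration (the function name is hypothetical); the day numbers follow the table above:

```python
# A minimal sketch of the expanding-interval schedule described above:
# each pass doubles the gap, so pass n falls on day 2**(n - 1).
def schedule(passes):
    """Return the day number of each pass: 1, 2, 4, 8, ..."""
    return [2 ** (n - 1) for n in range(1, passes + 1)]

print(schedule(9))  # → [1, 2, 4, 8, 16, 32, 64, 128, 256]
```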

This method was clearly highly effective at improving my performance at solving the problems that I was practicing, and helped me to refine my training methods.

**The Bain Experiment Mar 2011**

Inspired by Dan Heisman’s Novice Nook articles, I decided to apply my method to timed tests of the 388 simple tactics problems from John Bain’s Chess Tactics for Students. Again, I improved at the problems that I was practising - but what about problems that I had never seen before? Here is a histogram of the solution times for my **first** passes through three equally difficult problem sets:

[Histogram: first-pass solution times for three problem sets]

I was astonished! The time limit method (see my October article Rating Points Revisited) estimates my improvement as 346 Elo points with a standard deviation of 134 Elo points, for a time limit of 5 seconds. Unfortunately, this very large apparent improvement appears to have been partly due to a high proportion of problems that were near duplicates.

**Tactics Performance Measurement Apr 2011**

This article discusses some important issues concerning tactics performance measurement. It has become clear that the only reliable method of measuring tactics performance is to compile statistics for a large number of reliably rated players. The methods based on giving problems ratings and treating them as opponents (e.g. those used by the online tactics servers and my time limit method) can give only a rough indication.
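For readers unfamiliar with rating-as-opponents schemes, here is a hedged sketch of the general idea (not necessarily the exact time limit method, which is described in Rating Points Revisited): treat each problem as an opponent of known rating, score 1 for a solve within the limit and 0 otherwise, then invert the standard Elo expected-score formula. The function name is my own.

```python
import math

# Illustrative only: convert a solve rate against problems of a given
# average rating into an Elo performance rating via the standard
# logistic formula.  This shows the generic idea, not the exact method.
def performance_rating(problem_rating, score):
    """Elo performance implied by scoring `score` (0 < score < 1)."""
    return problem_rating - 400 * math.log10(1 / score - 1)

# e.g. solving 75% of problems rated 1500 within the time limit:
print(round(performance_rating(1500, 0.75)))  # → 1691
```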

**The Woolum Experiment May 2011**

For my next experiment, I used 792 problems from the next book on Dan Heisman’s list, which was Al Woolum’s Chess Tactics Workbook, repeating the problems on days 1, 3, 5, 7, 14, 26, 50 and 96. Here are the results for my **first** passes through six equally difficult problem batches:

[Histogram: first-pass solution times for six Woolum batches]

The time limit method estimates my improvement as 136 Elo points with a standard deviation of 39 Elo points, for a time limit of 5 seconds. The proportion of near duplicates in Woolum is much more realistic than for Bain, so this result is more convincing.

**The CHP Experiment Jul 2011**

My next experiment used the next three books on Dan Heisman’s list: Jeff Coakley’s Winning Chess Strategy for Kids, Dan Heisman’s Back to Basics Tactics, and Bruce Pandolfini’s The Winning Way. I used the same repetition schedule as for the Woolum Experiment. Here are the results for my **first** passes through six equally difficult problem batches:

[Histogram: first-pass solution times for six CHP batches]

The time limit method estimates my improvement at the 583 problems in Heisman + Pandolfini as 71 Elo points with a standard deviation of 44 Elo points, for a time limit of 5 seconds. This result was not as convincing as for Woolum, but it was still encouraging.

**Time to Move Up a Gear Sep 2011**

This article discusses the adjustments that I made to speed up my tactics training. Most importantly, I decided to finish Woolum at Pass 8, and omit Pass 3 from my schedule for my next two experiments. (I later decided to end CHP at Pass 8, and to end the next two experiments at what was now Pass 7.)

**The Susan Polgar Experiment Sep 2011**

My next experiment was based on Susan Polgar’s Chess Tactics for Champions. I divided the 570 problems into just four batches to save time, but at the cost of limiting my chances of accurately measuring my progress. Nonetheless, the time limit method estimates my improvement as 64 Elo points with a standard deviation of 33 Elo points, for a time limit of 5 seconds. To avoid schedule overload, I had to extend the interval between Pass 6 and Pass 7 from six weeks to eight weeks. Here are my results for Passes 2 to 7 of the first batch of problems:

[Chart: results for Passes 2 to 7 of the first Polgar batch]

There is some evidence of a drop-off in the number of these problems that I was able to solve in under 5 seconds, but the number that I failed to solve within 40 seconds fell steadily. (It may be significant that, between Passes 6 and 7 of this experiment, I was tackling harder problem sets in later experiments.)

**The Ivashchenko 1b Experiment Oct 2011**

This experiment was based on 539 problems from Sergey Ivashchenko’s Chess School 1b. I divided the problems into just four batches, which again limited my chances of obtaining a clear result. The time limit method estimates my improvement as about 70 Elo points at the longer time limits, but the standard deviation is at least as large - so we cannot draw any firm conclusions here.

This is without a doubt the best comparison of tactical training methods I've seen, thanks for compiling the list and making the analytic comparisons. I found the comparison with learning languages to be particularly insightful.

This brings up the question, naturally, of how tactics training translates to improved playing performance (which is the overall goal for most of us). Do you intend to try to measure that at some point, or do you intend to continue measuring increased tactical performance on different problem sets?

Thank you very much for your encouragement. Ivashchenko has endgame problems, and Coakley has both strategy and endgame problems, so I am branching out from just tactics problems, but where I go from here depends on the results that I get!

It might be interesting to check on old training sets not seen for a while. On some of my old sets I am still quick, on some I even got quicker, and a few sets... well, I have to do them again.

I left tactics for a while to do some Chessimo endgames and Chessimo strategy. The repetitions are already part of this program. The tactics at Chessimo are not that interesting; the problems are often used elsewhere.

I expect that I will revisit my speed training problem sets again. I am currently trying the opposite approach, writing down my solutions to more difficult problems, without a fixed time limit. Chessimo endgames and strategy look interesting. How did you find them?

http://www.chessimo.com/trainer/index.php?lang=en&val=en

Chessimo does have lessons about tactics, strategy and endgames. They say:

...

More than 4000 tactic exercises

You will improve your combinatory vision considerably and memorize several characteristic positions that will help you to identify critical moments of a game.

More than 700 commented strategy exercises

They will increase your ability to understand the underlying concepts of certain positions.

...

1440 endgame exercises

You will be able to practice, solidify and increase the knowledge you acquired after studying the commented endgames.

I will just call the strategy section of Chessimo "Chessimo-Strategy".

You may download and try a free demo version (the examples there are very(!) easy).

I like it because there is not that much strategy-learning software, and the repetition is already part of this software.

See a video about Chessimo here:

http://www.chessvideos.tv/forum/viewtopic.php?t=5507

Chessimo (old name: PCT, Personal Chess Trainer) was very popular in the early 2000s as tactics-learning software for the "knights errant". See e.g. http://likesforests.blogspot.com/2007/08/how-to-use-personal-chess-trainer-2007.html

and

http://temposchlucker.blogspot.com/2006/08/restarting-endgames.html

P.S. I like it; I learn new things and speed up with known things. The suggestions of Chessimo are usually very close to the PV1 of the chess engine Houdini. Some people did not like that Chessimo usually only accepts one "given" solution, but I think that's OK. Chess training is not (only) about finding a solution (by yourself) but about memorising games and specific given solutions too.

Thank you very much for that. It certainly looks interesting.
