Chapter   Problems   Motif
1         2-31       Pins
2         33-62      Back Rank
3         64-93      Knight Forks
4         95-124     Double Attack
5         126-155    Discovered Checks
6         157-186    Double Checks
7         188-217    Discovered Attacks
8         219-248    Skewers
9         250-279    Double Threats
10        281-310    Promoting Pawns
11        312-341    Removing the Guard
12        343-372    Perpetual Check
13        374-403    Zugzwang
14        405-434    Identifying Tactics
(The skipped number at the start of each chapter is an illustration, which duplicates the chapter's first problem.) The book is formatted as a workbook. Beneath each diagram, the book says who is to move, and gives a hint for finding the solution. There are no solutions as such, but the hints are so detailed that a solution is hardly necessary. The reverse side of each diagram is set aside for writing the solution, so I was able to cut out the diagrams along with their hints, doing only minor damage to some of the hints on the other side. I folded back the hints so that they were not visible, and marked each diagram with W+, W=, W#, B+, B=, or B#, according to the result required, taking care not to look at the diagrams. (This caused some problems. There are a few cases where the book says e.g. “win a rook”, and it is possible to win material in another way. I gave myself the benefit of the doubt in these cases.)
The book says that the problems within each chapter are in order of difficulty. I constructed six batches A-F of problems from Chapters 1 to 13. Batch A consisted of problems 2, 8, 14, 20, 26..., plus problems 33, 39, 45, 51, 57..., and so on. Batch B consisted of problems 3, 9, 15, 21, 27..., plus problems 34, 40, 46, 52, 58..., and so on. The remaining batches were constructed in the same way. This method ensured that, as nearly as possible, each batch had the same number of problems at the same level of difficulty from each chapter. If the book’s ordering by level of difficulty were perfect, there would be a very slight increase in difficulty from batch to batch. Any remaining variation in difficulty between the batches can reasonably be ascribed to random factors. I set Chapter 14 aside, because sorting the problems by motif would have given me prior knowledge, and I had a suspicion (which seems to be right) that they are duplicates of problems from previous chapters. I tried to shuffle each batch thoroughly - which was almost entirely ineffective - so I scrambled the order by dealing the problems into two piles and then into three. I discarded problem 26, because it worked only if the opposing side blundered; and problem 95, because it was a complete dud.
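In code, the dealing scheme looks something like this (a minimal Java sketch; the class and variable names are mine, and in reality the dealing was done with scissors and paper):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchBuilder {
    // Problem ranges for Chapters 1-13 (Chapter 14 is set aside).
    static final int[][] CHAPTERS = {
        {2, 31}, {33, 62}, {64, 93}, {95, 124}, {126, 155}, {157, 186},
        {188, 217}, {219, 248}, {250, 279}, {281, 310}, {312, 341},
        {343, 372}, {374, 403}
    };

    public static void main(String[] args) {
        List<List<Integer>> batches = new ArrayList<>();
        for (int i = 0; i < 6; i++) batches.add(new ArrayList<>());
        for (int[] chapter : CHAPTERS) {
            for (int n = chapter[0]; n <= chapter[1]; n++) {
                // Deal problems round-robin, so that each batch gets the
                // same spread of difficulty from each chapter.
                batches.get((n - chapter[0]) % 6).add(n);
            }
        }
        // Discard problem 26 (needs a blunder) and problem 95 (a dud).
        for (List<Integer> batch : batches) batch.removeAll(List.of(26, 95));
        char name = 'A';
        for (List<Integer> batch : batches) {
            System.out.println(name++ + ": " + batch);
        }
    }
}
```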
The early part of my schedule was:
Day 1: A+B, A+B
Day 2: A+B, C+D
Day 4: A+B, C+D
Day 6: C+D
Day 8: A+B, C+D
Day 9: E+F
The first four repetition intervals for batches A+B were ½ day, 1 day, 2 days and 4 days, and the first three repetition intervals for batches C+D were 2 days, 2 days and 2 days. I measured the time taken to solve each problem with a stopwatch, recording the times to the nearest tenth of a second. In the charts below, I counted any incorrect solution as taking more than 30 seconds, whatever the actual time spent. The cumulative distributions of solution times for the first four passes through batches A+B were:
[Chart: cumulative distributions of solution times, first four passes through A+B]
Both the ½ day, 1 day, 2 days, 4 days and the 2 days, 2 days, 2 days repetition intervals worked well here. The intervals of 1 day, 2 days, 4 days used in the Reinfeld Experiment should give much the same results as 2 days, 2 days, 2 days (see the earlier article on that experiment). It is not clear whether the additional repetition at ½ day would still have a significant benefit after many more repetitions at progressively doubling intervals.
Not surprisingly, I improved at the problems that I was practicing - but what about problems that I had never seen before? Here is a histogram of solution times for my first passes through A+B, C+D and E+F:
[Histogram: solution times for the first passes through A+B, C+D and E+F]
The simplest hypothesis is that the solution times were all reduced by a common factor. I found that dividing all the solution times for my first pass through A+B by 1.3 gave the closest match to the solution times for my first pass through C+D. Similarly, dividing all the solution times for my first pass through A+B by 2.6 gave the closest match to those on my first pass through E+F. (I used the method of least squares here.) The histogram below compares the counts in each “bucket” for my real passes through C+D and E+F with those simulated by dividing the solution times of A+B:
[Histogram: real vs. simulated bucket counts for the first passes through C+D and E+F]
The fit is about as good as it could be, given the statistical variability. It is remarkable that my first pass of E+F was 2.6 times faster than my first pass of A+B.
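For anyone who wants to reproduce the fit, here is a rough Java sketch of the procedure (the one-second bucket width and the search range are my choices). Recording a failed solution as infinity keeps it in the 30+ bucket whatever the times are divided by:

```java
public class ScaleFit {
    // Bucket solution times into one-second buckets from 0 to 30+.
    // A failed solution is recorded as Double.POSITIVE_INFINITY, so it
    // always lands in the top bucket, before and after scaling.
    static int[] histogram(double[] times) {
        int[] buckets = new int[31];
        for (double t : times) buckets[(int) Math.min(t, 30.0)]++;
        return buckets;
    }

    // Find the divisor k that makes histogram(baseline / k) closest to
    // histogram(target), in the least-squares sense over bucket counts.
    static double bestDivisor(double[] baseline, double[] target) {
        int[] want = histogram(target);
        double bestK = 1.0;
        double bestErr = Double.MAX_VALUE;
        for (double k = 1.0; k <= 4.0; k += 0.01) {
            double[] scaled = new double[baseline.length];
            for (int i = 0; i < baseline.length; i++) scaled[i] = baseline[i] / k;
            int[] got = histogram(scaled);
            double err = 0;
            for (int i = 0; i < want.length; i++) {
                double d = got[i] - want[i];
                err += d * d;
            }
            if (err < bestErr) { bestErr = err; bestK = k; }
        }
        return bestK;  // e.g. about 1.3 for C+D, about 2.6 for E+F
    }
}
```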
What about the pattern matching theory? Imagine that x% of the problems in A+B are duplicated in C+D, and that any internal duplication within C+D is at the same level as that in A+B. On the first pass of C+D, I will have solved x% of the problems three times already, and the remainder will be new to me. I can approximate my performance on the x% by using the histogram for the third pass of A+B. My performance on the remaining problems within C+D can be approximated by the histogram for my first pass through A+B. I used the method of least squares to find the value of x% which made this approximation as close as possible to the histogram for my first pass through C+D. The best fit was with x% = 25%. I also approximated my first pass through E+F using the histograms for my first and fifth passes through A+B - the best fit was obtained with x% = 48% - which is almost exactly twice the value for C+D, as it should be if duplicates are equally distributed throughout the batches. The histogram below compares my real passes through C+D and E+F with the approximated ones:
[Histogram: real vs. approximated first passes through C+D and E+F]
Again, the fit is good. Bain has many problems that are the same as another problem in the book, but with one move fewer at the beginning. If I tackle the harder problem first, the easier one should show up as a duplicate; but if I tackle the easier one first, the harder one may not show up as a duplicate.
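The duplicate fraction can be fitted in closed form, because the predicted histogram is a mixture of two known histograms. A sketch, assuming the histograms are taken over equally sized batches:

```java
public class MixtureFit {
    // Model: observed ≈ x * repeatedPass + (1 - x) * firstPass, where each
    // argument holds histogram bucket counts. Minimising the squared error
    // over the single parameter x gives a closed-form solution.
    static double duplicateFraction(int[] firstPass, int[] repeatedPass,
                                    int[] observed) {
        double num = 0, den = 0;
        for (int i = 0; i < observed.length; i++) {
            double d = repeatedPass[i] - firstPass[i];
            num += (observed[i] - firstPass[i]) * d;
            den += d * d;
        }
        return num / den;  // e.g. about 0.25 for C+D and 0.48 for E+F
    }
}
```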
What conclusions can we draw? The data is consistent with my having become 2.6 times faster at matching patterns that I already knew. Since Bain’s problems were either very simple - or examples from Reinfeld that I already knew - it is possible that I already knew all the patterns, and had just become faster at finding them. However, I am sure that I am not 2.6 times faster at solving all tactics problems at this level of complexity. I believe that most of my improvement was pattern specific. This interpretation is supported by the fact that my improvement in going from my first pass of A+B to my first pass of E+F was almost exactly twice my improvement in going from my first pass of A+B to my first pass of C+D. (N.B. If the experiment had been carried out on a set of problems that had the same statistical profile as simple tactics in real games, my improvement would be real, whether or not it resulted from pattern duplication.) It could be objected that, despite my best efforts, E+F might be easier than C+D, which might in turn be easier than A+B. It is not possible to completely eliminate possibilities like this from a single-player experiment. Please feel free to repeat the experiment with the batches in reverse order!
For an update, see my later article: Basic Tactics Revision.
From your description, it looks like you're doing this on paper - how are you finding that? For the timing, are you using a stopwatch?
A stopwatch is not too bad, but I have now written a little Java program to do the timing and record the result, which is much better.
Would you mind sharing the Java program? Posting it somewhere on your site, perhaps? You might be able to collect more data that way too, as people use it.
I will publish the source code next month.
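In outline, though, it is nothing more than a console stopwatch. A bare-bones sketch (not the actual program) might look like this:

```java
import java.util.Scanner;

public class ProblemTimer {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("Press Enter to start the clock...");
        in.nextLine();
        long start = System.nanoTime();
        System.out.print("Press Enter when solved...");
        in.nextLine();
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.print("Correct? (y/n): ");
        boolean correct = in.nextLine().trim().startsWith("y");
        // In the analysis, an incorrect solution counts as > 30 seconds.
        System.out.printf("%.1f s, correct: %b%n", seconds, correct);
    }
}
```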
My questions now are:
- How specific are the learned patterns? Maybe they are all very closely related, so the benefit OTB might be too small.
- How many (tactical) patterns are there? Heisman talks about 2,000, but this depends on how sharply you look at the differences.
- How quickly are these patterns forgotten? If there are 2,000 (or 20,000?) patterns to learn, and it takes 6 months (or 6 years) to learn them, but you forget them in 3 months (or 3 years)...
It would be interesting to know whether you are now better at CTS, ATS, TT, CT or OTB.
There are four questions here:
(1). How specific are the patterns that need to be learned?
(2). How many patterns do we need to learn?
(3). How quickly are these patterns forgotten?
(4). Have I improved at practical chess?
The more patterns we sample for our learning set, the more patterns we will find that are alike, so the answer to question (1) depends on the answer to question (2).
The number of patterns to be learned depends on where we draw the line on how simple and common the patterns have to be. The more complicated examples are likely to be less common, and can usually be broken down into simpler patterns. It is worthwhile to learn to solve the simple and common patterns almost on sight, but not the complicated and uncommon ones. The 2,000 number is just a guess.
You will forget most of the patterns that you learn within a few weeks unless you practice using them. You will remember nearly all of the patterns for the rest of your life if you practice using them at intervals which roughly double with each repetition.
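As a toy illustration of how quickly doubling intervals spread out, ten repetitions starting from a one-day gap already span about three years:

```java
import java.time.LocalDate;

public class DoublingSchedule {
    public static void main(String[] args) {
        LocalDate review = LocalDate.now();
        int intervalDays = 1;
        for (int i = 1; i <= 10; i++) {
            review = review.plusDays(intervalDays);
            System.out.println("Repetition " + i + " on " + review);
            intervalDays *= 2;  // double the gap after each repetition
        }
    }
}
```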
I have not played chess for 15 years, but it is very difficult to compare OTB performances unless you play a very large number of games against reliably rated opponents. If you are doing that, your improvement might be a result of playing chess rather than the training.
I felt I got better after doing this. I don't have proof, but I felt like I was recognizing more tactics in my games.