Following on from my basic tactics revision (see my earlier article: Basic Tactics Revision Results), I decided to revise two slightly more difficult problem sets:
Susan Polgar’s Chess Tactics for Champions
Sergey Ivashchenko’s Chess School 1b
(See my earlier articles: Susan Polgar Experiment and Ivashchenko 1b Experiment.) I decided to split both problem sets into six rather than four problem batches for my revision. Here is my old chart from the Susan Polgar Experiment:

Overall, I scored 84% at an average of 15 seconds per problem. Here is the corresponding chart for my recent revision pass:

Overall, I scored 94% at an average of 13 seconds per problem. There is some indication that the number of problems I solved in under 5 seconds increased from one batch to the next, as it did on the previous chart. My performance (9 months after my last revision) was better than on my first attempt, but not as good as it was shortly after my previous 7 relatively closely spaced repetitions. I learned these problems over a period of 3 months, so revision was due after about 3 months and was therefore well overdue. I appear to have become more careful since my earlier experiment, even when doing a speed test, which is encouraging. It is, however, also possible that I remember the harder problems better than the easier ones as a result of spending more time puzzling over them.
I decided not to time myself on the Ivashchenko problem set. This saved me some study time and also allowed me to concentrate on accuracy rather than speed. I did, however, make a list of the problems that I failed to solve (or got wrong) and of the problems that I did not solve efficiently. I am considering doing some extra work on the problems that caused me difficulty, both for this problem set and for some of the others.
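As an illustration only, a minimal sketch of such a revision log might look like the following. The problem numbers and tags are hypothetical, not taken from my actual results:

```python
# Illustrative sketch: a minimal log of problems flagged for extra work.
# Problem numbers and tags below are hypothetical examples.
from collections import defaultdict

revision_log = defaultdict(list)  # problem set name -> list of (problem, tag)

def flag(problem_set, problem_number, tag):
    """Record a problem as e.g. 'failed' or 'inefficient' for later re-study."""
    revision_log[problem_set].append((problem_number, tag))

flag("Ivashchenko 1b", 112, "failed")        # hypothetical entry
flag("Ivashchenko 1b", 245, "inefficient")   # hypothetical entry

# Problems to revisit on the next pass through this set
to_revisit = sorted(p for p, _ in revision_log["Ivashchenko 1b"])
print(to_revisit)
```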
Chapeau!
Your performance on these "known" problems is still good. The training did show an effect. I wonder what improved: memory quantity/quality, memory access speed, logical reasoning, thought process...? I guess it is the "chess memory" (which is of high importance); I guess you should be able to remember tactics puzzles (and not only them) better and quicker now. Now it would be interesting to see how your performance on "problems never seen before" has developed.
At the moment I am having lots of fun doing Chessity problems. I will check the effect of that training at chesstempo, looking at my performance on first-time problems in Blitz mode there. I think it is the best available "standard" for tactical skill.
With these simple problems, I believe that my memory has improved. I believe that I am identifying more piece configurations that are likely to be relevant per unit time. I also believe that my thought process has improved. I am more likely to see checks from the other side of the board, for example. I am also less likely to get stuck in a groove, chewing over the same moves. My thought process for these simple problems seems to be to look around the board for clues or promising moves to try; if something does not work, I move on. I got the subjective impression that I was sharper after doing this revision.
I try to look at the position properly before trying moves, but this is not easy in these tests. If I were always to count the pieces, for example, that would mask a small improvement in actually solving the problems. Ideally, I should be told beforehand when I am a piece down or in check, because I would know that in a game.
The fact that I improved on the later problem batches tells us that I improved on problems that I had almost forgotten, which strongly suggests that I also improved on similar problems that I have never seen before. However, much of this improvement might have been the result of restricting my search to the simple problem-book combinations of the type specified (e.g. "pin").
The Susan Polgar problems are still quite simple: up to mate in 4, with a similar level of complexity for the others. (She has done a very good job of presenting typical positions, and rarely repeats precisely the same trick twice.) I think our average 1700-1800 player would be kicking himself if he missed one of these in a game, but accidents do happen at all levels. Ivashchenko 1b is rather harder, and the blue Coakley rather harder still. A more organised approach is important for the harder problems. Demonstrating an improvement on these harder problems has proved more difficult.
A statistically sound tool for measuring tactical performance would be very useful. I have given lots of ideas on how to go about that, but it is not easy.
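One ingredient of such a tool would be to put a confidence interval around each solve rate before comparing one pass with another. The sketch below is only an illustration of that idea: the problem counts are placeholders, not the actual sizes of these problem sets.

```python
# Illustration only: Wilson score intervals for solve rates.
# The counts below (x out of 500) are placeholders, not my real set sizes.
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Hypothetical: 84% of 500 problems on the first pass, 94% of 500 on revision.
print(wilson_interval(420, 500))  # roughly (0.81, 0.87)
print(wilson_interval(470, 500))  # roughly (0.92, 0.96)
```

If the two intervals do not overlap, as in this made-up example, the improvement is unlikely to be noise; timing data, problem difficulty and memory effects would still need separate treatment, which is why a genuinely sound tool is not easy to build.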