In the next few days, I'm going to back up and discuss some of the issues that led me to create the Chess Vortex Blog and corresponding Chess Vortex Project. I will then discuss in general terms how the Chess Vortex Project might approach addressing these issues in a scientific way. And finally, I will discuss the motivation behind making Chess Vortex a community project, what it means for Chess Vortex to be a community project, and how I will attempt to act as an agent of the community to create the "products" unique to Chess Vortex.
My hope is that these Chess Vortex "products", as I am calling them, will be three-fold in nature. First, I imagine a new set of representational tools to help evaluate graphically, or via other media, human interaction with tactical problems. Second, I imagine a set of mathematical tools to analyze behavior and the cognitive process of solving tactical problems. And third, I imagine that Chess Vortex will yield--as its most valuable product--new knowledge that leads to a deeper understanding of these cognitive processes.
So first, let's begin with the fundamental issues. Today the issue will be...
Accuracy Versus Rating
Probably the most controversial topic on the CTS Message Board is the nature of tactical strength (aka problem-solving strength). This controversy arises from the observation that one's accuracy (the number of problems solved correctly divided by the number attempted) can be sacrificed for Glicko rating and vice versa. As a result, the Glicko rating assigned by the CTS has a duplicitous nature and, like all things duplicitous, can only be trusted if taken in proper context. The context for the Glicko rating, therefore, is not merely one's Rating Deviation (RD), which is explicitly included as a parameter in the system, but also one's accuracy. The inherent difficulty with accuracy, however, is that it is not formally part of the Glicko rating system, and thus its weighting in one's rating is not easily determined. One goal of the Chess Vortex Project is to determine this weighting through robust mathematical analysis. The idea would be to create a scale that incorporates Glicko rating, RD, and accuracy to dependably compare the tactical strength of any two tacticians. Moreover, and decidedly more importantly, the hope is to be able to compare the improvement of tacticians who train at differing accuracy rates (time-averaged accuracy) to determine the optimal training parameters for developing tactical strength.
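To make the proposal concrete, here is a minimal sketch of what such a scale might look like. I want to be clear that everything in it is a placeholder: the linear form, the 85% accuracy baseline, and the accuracy_weight factor are invented purely for illustration, and determining the real functional form and weighting empirically is precisely the analysis the project proposes.

```python
# A minimal sketch of a composite tactical-strength scale combining
# Glicko rating, Rating Deviation (RD), and accuracy. The functional
# form and every constant here are placeholder assumptions, not the
# result of the analysis proposed above.

def composite_strength(rating, rd, accuracy, accuracy_weight=400.0):
    """Hypothetical composite score for comparing two tacticians.

    rating          -- CTS Glicko rating
    rd              -- Glicko Rating Deviation (uncertainty in the rating)
    accuracy        -- fraction of attempted problems solved (0.0 to 1.0)
    accuracy_weight -- assumed trade-off factor; to be fitted, not assumed
    """
    # Discount uncertain ratings by their deviation, then shift by how
    # far accuracy sits from a nominal 85% baseline.
    return (rating - rd) + accuracy_weight * (accuracy - 0.85)

# Example: a fast, loose solver versus a slower, careful one.
print(f"{composite_strength(rating=1500, rd=50, accuracy=0.80):.0f}")  # 1430
print(f"{composite_strength(rating=1420, rd=50, accuracy=0.97):.0f}")  # 1418
```

Even this toy version makes the duplicitous nature of the raw rating visible: the higher-rated solver is not automatically the stronger one once accuracy enters the scale.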
I am not finished with this issue by any stretch of the imagination, but I promised myself some sleep tonight (for a change), and so now I present my CTS performance today--somewhat for purposes of vanity, but mostly to show off the Chess Vortex community's first (albeit a work in progress) product--the Session Graph:
And now, before I forget, I happily present the tactical puzzle of the day:
Here's the solution and why I like it (start selecting text following the colon): 1...Nh3+. Now if 2.gxh3, then it's mate in three: 2...Qh4+ 3.Ke2 Rg2+ 4.Rf2 Qxf2++.
5 comments:
read:
that is to say: complex adaptive systems,
agency-based computing and change,
whole systems, integrative studies,
diminished returns versus increasing returns,
feedback between observer and field,
heuristic comparison....
learning, insight, and real meaningful change.
some thoughts out loud :)
dk
Interesting ideas! I also prefer to focus on accuracy more than time, although philosophically my goal is more aligned with chess improvement rather than solving puzzles for the sake of solving them. Of course, that doesn't change that I need to solve some in order to improve. :)
Regarding run-length encoding and its graphical display... I hate to be a pain, but what's the goal?
I track my CTS progress like this:
"94% @ 1400"
I use this to measure how quickly I am improving. The rating alone is not sufficient because you can up your rating by dropping your accuracy... so you need to measure at least both dimensions to get a real feel for your progress.
"(97% @ 1428±92, 1403 final)"
I can also see this providing more useful information, because I am often too fixated on the "final" score when the average is also important.
But why does it matter whether my session was 9p-1f-9p-1f or 18p-2f? Does this help me to track my progress better or improve in any way that tracking the score and %accuracy alone wouldn't allow?
Curious,
likesforests
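A quick note on the run-length question above: the notation is just a compressed view of a session's pass/fail stream, and it is cheap to compute. Here is a minimal sketch (a hypothetical helper written for illustration, not the actual Session Graph code):

```python
# Compress a session's pass/fail sequence into runs like "9p-1f-9p-1f".
# A hypothetical illustration of the run-length encoding discussed
# above -- not the actual Session Graph implementation.

from itertools import groupby

def encode_session(results):
    """results is an iterable of 'p' (pass) or 'f' (fail), in order."""
    return "-".join(f"{len(list(run))}{mark}" for mark, run in groupby(results))

# The two sessions in the question have the same score (18/20)
# but different run structure:
print(encode_session("p" * 9 + "f" + "p" * 9 + "f"))  # 9p-1f-9p-1f
print(encode_session("p" * 18 + "f" * 2))             # 18p-2f
```

Whether that run structure carries information beyond score and accuracy is exactly the open question; the observation in the next comment that failures tend to come in groups is the kind of pattern the runs preserve and the totals throw away.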
I don't know that the controversy arises so much from a need to properly compare tacticians, but rather from the fact that people use CTS in different ways and for their own purposes. I personally use CTS as a source of tactical training puzzles to train my analysis ability. My goal is to improve the accuracy, and, over time, the speed, of my analysis. Accuracy at this point is by far the #1 concern for me. I don't think you can come up with a meaningful comparison unless you know ahead of time that the tacticians in question are using the site the same way.
On the subject of CTS psychology: I was about 25 problems away from reaching my next 0.1% accuracy level when I made a bonehead mistake. It was the same "fog" I spoke of earlier that happens in my OTB games. This "fog" is definitely related to solving problems or choosing moves in an OTB game over time, as I very rarely get my first problem of a session wrong, or drop pieces early in an OTB chess game.
Particular to CTS is the fact that failures tend to come in groups. I really should just stop after the first failure, but a desire to "get back" at the system and to hurry toward my next accuracy percentage goal leads to more mistakes, which is what happened today.