Here is the $$$ quote: "In a study of nearly 1,400 eighth-graders in the Boston public school system, the researchers found that some schools have successfully raised their students’ scores on the Massachusetts Comprehensive Assessment System (MCAS). However, those schools had almost no effect on students’ performance on tests of fluid intelligence skills, such as working memory capacity, speed of information processing, and ability to solve abstract problems."
Ravitch concludes that the fluid intelligence skills being referred to in this study are akin to higher order thinking skills (HOTS). And this makes sense. As a matter of fact, this is one of those DUH pieces of research, something that confirms what we expect it to confirm. How could HOTS ever be a by-product of multiple choice assessments that result in a score? This has long been one of my huge beefs with using AR quiz scores as part of a grade for reading and/or English class (something that my former residents of the back bedroom were subject to for years in their schools, even in honors classes). How can kids' answers on multiple choice items even come close to "measuring" their reading of the book? The answer? It CANNOT.
In a recent article, Shanahan points out that some questions are worth asking while others are not (and I will post more about that article in a separate blog entry). The unimportant questions are lower level, recall questions, as opposed to questions that involve the WHY and HOW instead of the WHAT. Duh, again! No matter how much is spent on new assessments, multiple choice questions are still largely recall types of questions. And AR is full of those question types, reducing a book to 10-20 questions that ask kids to recall details and facts. What is important? Feelings, new thinking, empathy: things that are difficult if not impossible to measure on a test.
Let's hope this new research is reported as widely as the faux research done by NCTQ and the still-mysterious, ephemeral research behind CCSS.