As I was clicking on links in my Twitter feed this morning, I landed on a blog post about CCSS. There was just SO much in this blog that made me want to scream. But it was pre-dawn, my BH was still asleep, and Scout was sitting at my feet awaiting his next treat. So, I swallowed the scream, got ready for the office, and headed out the door. As I was driving up the road, the scene from JERRY MAGUIRE popped into my head. You see, I had been asking myself as I read this blog post (and countless others), "Where is the research behind all this?" Where does it say that background knowledge impedes comprehension (one of the assertions from the post I was reading this morning)? Where does it say that ANY sort of scientific formula accurately measures the LEVEL of a book? Where does it say that forcing students to read at a frustrational level increases performance (this was something from a recent article by one of the supporters of CCSS)? How did the architects of CCSS come up with the percentages of fiction and nonfiction selections that they are proposing?

What I see instead is a great deal of tortured logic and citations from research that does not really apply to the standard or strategy being proposed. Where is the research that proves that multiple-choice questions make kids college and career ready? (I do not take tests in my career; I do not GIVE tests in my classes. I do not think I am an outlier.) Much of the research about AR, for instance, does NOT demonstrate that taking the test over a book drives increased comprehension. You see, if we eliminate time to read, choice of reading, school climate, and the other factors that are part of "reading renaissance," the only unique element is the test. And there is NO research proving that a 10-item multiple-choice test does anything more than make kids read for insignificant detail, a skill I am not sure is college/career oriented.
But I digress. I am kind of dangerous when it comes to research. I actually enjoyed my stat classes when I was working on my doctorate. I could explain why I elected to use MANOVA over chi-square or multiple regression or another approach. I know that correlation is not equivalent to causation (90% of kids passing the state tests have brown eyes, for instance, but brown eyes do not cause passing scores). I know that sometimes practical significance is more important than statistical significance. And I know these things:
1. The research on the benefits of reading aloud has been replicated over and over again. Despite the fact that the NCLB folks omitted it from their compilation years ago, reading aloud is, and should always be, one of our best practices.
2. There is ample research showing that schools with certified librarians and good collections score higher on state assessments (and, conversely, that when schools eliminate the school librarian or library, test scores decline).
3. Extensive reading is as effective or more effective than intensive reading.
Simply put, there is research that supports the chief elements of reading/writing workshop. So, "SHOW ME THE RESEARCH!" Demonstrate to me that this unfunded mandate is any better than the previous unfunded mandate (do you not love how CCSS is critical of NCLB? Sort of the pot calling the kettle black). Show me the research behind the decisions being made, the tests being constructed. What I want to show you are the classrooms where readers are being supported, where choice is key, where community is essential. Follow Ed Spicer, Donalyn Miller, Paul Hankins, Katherine Sokolowski, and others on Facebook and see what they are doing each and every day to help nurture lifelong learners. Examine their reflective practice. Follow their lead. What they are doing is driven by research, too. It is not "anecdotal" (this seems to be a convenient way to dismiss the voices of classroom teachers these days); it is "action" research. These are the folks IN the trenches instead of those standing well above the fray (like me, I admit). We ought to be listening to these voices; these are the ones who speak for the children.