Despite the fact that taking stats courses scared the living daylights out of us, my doctoral study group (which included Kylene Beers) spent hours coming to understand research. We looked at sample sizes (the "n") and the types of studies. We discussed practical versus statistical significance. We talked about correlation versus causation ("correlation does not imply causation" was one of our mantras then, and it still is mine now).
All too often, the headlines obscure the research. They often conflate it as well. One of my prime examples of this occurs each year when Renaissance Learning releases its list of "bestselling" books. In reality, these are lists of the quizzes taken most often each year in schools using the Accelerated Reader program. The lists themselves are misleading, and newspapers routinely publish them, accompanied by hand-wringing about the books kids today are reading. My biggest quarrel with AR and its parent company is that there is absolutely zero proof that the multiple-choice test kids take after reading is in any way responsible for gains in reading scores, comprehension, and the like. I do not doubt at all that if kids read more and more, they will see growth in reading, in vocabulary, and in attitudes toward reading (if CHOICE is allowed and encouraged, along with TIME and ACCESS). We have known this for a long time. What the companies that sell these programs cannot demonstrate is that the QUIZ is the reason for the gains.
The NCTQ "report" suffers from some of the same problems. Basically, "researchers" read through syllabi looking for specific texts, objectives, and activities. Programs whose syllabi included those items were deemed good programs. If the items were not evident, the program was deemed insufficient. And programs at schools that did not feel inclined to send syllabi and other materials to NCTQ were likewise found wanting.
I am not suggesting everyone take classes in stats, but I do point you to those folks who examine the research, who discuss its limitations, and so on. All too often, educators are sold a bill of goods, a program, or a set of standards simply because someone invokes the words "research-based." Look for the research. Is it research in classrooms like yours? How many kids? Who did the research? How was it funded? Has it been replicated? Is the significance statistical, practical, both, neither? Ask questions, lots and lots of questions.
About six months ago, after reading a "study" indicating that the majority of teachers surveyed approved of CCSS, I set up my own survey, which garnered the exact opposite results. My survey is no more valid or reliable than the one that got all the press, but it would make a nice headline: Majority of Teachers Despise CCSS. Numbers may not lie, but they can be skewed, they can be misleading, they can be data and not evidence.