16 September 2013 @ 07:32 pm
Survey Says  
Even 20+ years after completing all my stat courses, I can still look at how some "research" is reported and conclude that the research itself was not well planned or executed. So this week, when I saw a headline asserting that a recent poll found most teachers favor CCSS, I wanted to know more. Have I been wearing blinders? Most of what I had been seeing was not positive support from teachers for CCSS. Maybe I needed to broaden my understanding. Here is the link to the piece: http://neatoday.org/2013/09/12/nea-poll-majority-of-educators-support-the-common-core-state-standards/?fb_action_ids=10151637636018341&fb_action_types=og.likes&fb_ref=addtoany&fb_source=other_multiline&action_object_map=%5B505072406242671%5D&action_type_map=%5B%22og.likes%22%5D&action_ref_map=%5B%22addtoany%22%5D.

There are a couple of things wrong here. First, the survey is of NEA members only. That is not a random sample of teachers; it is limited to a single organization's membership. But, despite that limitation, take a look at the first paragraph and see if you can spot the trouble: "Roughly two-thirds of educators are either wholeheartedly in favor of the standards (26 percent) or support them with “some reservations” (50 percent). Only 11 percent of those surveyed expressed opposition. Thirteen percent didn’t know enough about the CCSS to form an opinion... The survey questioned 1200 NEA members and was conducted in July by Greenberg Quinlan Rosner Research."

See? First, only 26% were wholeheartedly in favor of the standards. The rest of that "majority" supported the standards with some reservations (50%). Then add the 11% who were opposed and the 13% who did not know enough to form an opinion. Finally, there is no information about how the 1200 surveyed were selected, how representative they might be of the NEA membership at large, or how the NEA membership compares to the teaching profession as a whole.
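For the arithmetic, here is a minimal sketch in Python (the category labels are paraphrased from the quoted paragraph; this is just a tally of the figures above, not anything taken from the NEA report itself) showing how the same four numbers can be framed as "majority support" or as "only about a quarter support without reservations":

```python
# Figures quoted from the NEA poll above (1200 NEA members surveyed, values in percent).
responses = {
    "wholeheartedly in favor": 26,
    "support with some reservations": 50,
    "opposed": 11,
    "don't know enough to form an opinion": 13,
}

headline_support = (responses["wholeheartedly in favor"]
                    + responses["support with some reservations"])  # 76
unqualified_support = responses["wholeheartedly in favor"]          # 26
everyone_else = sum(responses.values()) - unqualified_support       # 74

print(f"Headline 'support': {headline_support}%")        # the article's framing
print(f"Unqualified support: {unqualified_support}%")    # the framing above
print(f"Reserved, opposed, or undecided: {everyone_else}%")
```

Same four numbers, two very different headlines.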

This brings to mind the research Reading Renaissance publishes each year about the most popular books. What these "research" reports never indicate is the source of the statistics: they come only from AR, the product Reading Renaissance promotes. The fact that the stats from AR do not reflect the stats from PW or another industry organization is telling. And, of course, not long after the annual release, newspapers picked up the "story" and reported it as fact. Ditto the very flawed research from NCTQ on teacher education programs. Ditto so much of what purports to be research these days.

First Books (http://firstbook.tumblr.com/post/59614519675?utm_content=buffer11f8b&utm_source=buffer&utm_medium=twitter&utm_campaign=Buffer) posted to Tumblr the results of a survey of about 275 members of its organization. Looking at the lovely pie chart accompanying the brief report, I wondered what exactly was on the survey. The chart mixes genres (graphic novels, or GNs), strategies (reading aloud), and other things that just do not mesh. I do not know what questions were asked on the survey, nor do I know the nature of the instrument (rank-order items? a Likert scale for each?). So I approach the results with some hesitation. This does not mean, though, that I dismiss the posting. Instead, it makes me want to create a survey for students to see if my results are similar or not. A couple of years ago, I asked approximately 1200 high school kids about GNs. Most of them did not show great enthusiasm for graphic novels. Puzzled, I followed up by talking to their teachers and discovered that most of the classes had little or no experience with graphic novels at all. They were not readily available in the classroom or the school library. OK, that made me look at the results in a new light.
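To show why the format of the instrument matters, here is a purely hypothetical sketch (the ratings below are invented for illustration only, not taken from First Books or from my own survey) of how a single Likert-style item about graphic novels can be reported two different ways:

```python
from statistics import mean

# Hypothetical 1-5 ratings for an item like "How much do you enjoy reading graphic novels?"
ratings = [1, 2, 2, 3, 3, 3, 4, 5]

average = mean(ratings)                                          # 2.875
percent_favorable = sum(r >= 4 for r in ratings) / len(ratings)  # 0.25

print(f"Mean rating: {average:.2f}")             # one way to report the item
print(f"Rated 4 or 5: {percent_favorable:.0%}")  # another way to report the same item
```

A middling mean and a low "percent favorable" come from the very same eight hypothetical responses, which is why I want to see the questions and the scale before leaning on anyone's results.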

So a caveat: before citing stats, know how reliable and valid the instrument and the survey are. Let folks know your sample size and how it was selected. Show them the questions you asked. Give me the information I need to use these statistics wisely. For those of you old enough to understand the reference in the title of this post, you watched (or at least were aware of) The Family Feud. They would survey 100 audience members and then have contestants try to match the most popular answers. Audiences differ, and so do their answers. Just because information comes from a survey does not make it sacrosanct or factual. Be careful of things masquerading as facts. Some of them are not quite "just the facts," as they used to say on Dragnet.
 
 
Current Location: home
Current Mood: contemplative