Of course, in a piece this short, it is impossible to explore the question in any depth. However, here is the short answer to the question posed: NO. A computerized test, or even a multiple-choice paper test, cannot possibly measure the CCSS. How does one measure items such as Standard 10 under ELA? Is there a way to assess this via multiple choice, or through a computerized test that must itself be limited to multiple choice (or, failing that, be machine scored in any event)?
Here is Standard 10: Read and comprehend complex literary and informational texts independently and proficiently.
Tell me how this can be measured on a computer test. Sounds to me as if the assessment would be the same old same old we have now: give kids passages and then ask multiple-choice questions following the reading. Is that "understanding"? Nope. It might check for details, for some basic comprehension. But isn't the entire thrust of CCSS that these are more rigorous standards? That these are not low-level thinking skills? If that is the case, then multiple choice will not cut it.
Of course, you could ask the teacher. I can tell you whether or not kids are reading and understanding complex texts. How? I talk to them about what they are reading. I ask them to write and reflect on their reading. A machine cannot do this.
How about this standard: Determine central ideas or themes of a text and analyze their development; summarize the key supporting details and ideas. Again, a machine cannot assess this adequately. Take, for instance, WHERE THE WILD THINGS ARE, a classic picture book. I continue to use this book with graduate students and used it with middle school kids and my undergrads as well. We would talk about the themes (and there are multiple themes in this rich text) and point to the "evidence" from the text to support the answers kids would give. How can a machine replicate that class discussion? It cannot.
How about this standard: Delineate and evaluate the argument and specific claims in a text, including the validity of the reasoning as well as the relevance and sufficiency of the evidence. Can this be assessed via machine? No, it cannot.
So, while I appreciate the question being raised in the media at all, I cannot help but wonder why any reporter who actually does her or his homework is unable to provide an answer. I have a suggestion: ask the architects, ask the PD gurus popping up, check with the companies offering the tests, ask PEARSON for heaven's sake. Ask them for specifics. Ask them to demonstrate clearly that these standards can be measured or assessed. Ask them how they can be machine scored. I am no longer interested in the questions. I want answers.