Testing Troubles

Morgan Polikoff
Monday, August 15, 2016
Common Core State Standards

It was one of those headlines that whizzed through my Twitter feed earlier this week: “Missouri test scores, district report cards will be delayed again.” The story is a simple and increasingly familiar one. Missouri’s state test score results are being delayed a few weeks, as are the state’s report card ratings. The reason given is that the state test changed from the previous year and it was simply taking more time to calculate the scores. The article also noted that standards and assessments were slated to change again in the coming years.

Stories like these have become a cottage industry for the education media. Whether the frequency of testing snafus is actually increasing or whether the media is just more on top of these stories, they are everywhere.

The less serious version of these issues is when score reports are simply delayed, as in Missouri above. In another instance, PARCC results in Ohio were delayed so long that state report cards arrived a full year after the testing window began.

There are more serious issues, however, usually involving technical glitches. For instance, technical issues with Smarter Balanced administration in Montana resulted in the state making the test optional in 2015. Tennessee dumped its testing vendor after technical glitches and shipping delays forced the state to make the test optional in 2016.

Of course, these glitches and technical challenges are unfortunate in their own right. They waste teachers’ and students’ time. They undoubtedly frustrate students and increase their anxiety (having recently conducted a study using a computer-based test that suffered a mid-testing glitch, I can assure you that some kids do not handle it well). They may affect the validity of test scores. The delaying of score reports is less serious, but it certainly undermines even the potential that these scores might be instructionally useful.

The bigger issue, however, is that these kinds of repeated unforced errors substantially undermine the entire assessment enterprise. Recent conversations with educators and policymakers indicate several pernicious effects of these kinds of issues:

  • Teachers think that state summative assessments should be instructionally useful—that teachers should be able to use the results at the start of the year to plan instruction based on students’ incoming abilities. While I am very skeptical that this should be a goal of state tests (they will never provide fine-grained enough detail to inform these kinds of decisions), it’s nonetheless the case that delaying results until well into the school year reinforces the belief that these tests are worthless for teachers. This, in turn, increases educators’ opposition to these tests.
  • Parents can use test results to inform decisions about which school to send their children to. If the results are substantially delayed, they will certainly not be very useful for that purpose. This will undermine parent support for assessment as a policy tool.
  • Poor performance by testing vendors leads jittery state policymakers to cast blame and seek out new vendors. But it’s hard to find new vendors year after year, and constantly changing state tests is highly damaging to assessment as a policy mechanism. It’s politically (and perhaps technically) harder to use assessment results for school or teacher accountability systems that include student growth, for instance, if last year’s test is different from this year’s.

These and other issues likely compound dissatisfaction with the amount and quality of testing in our schools among all stakeholder groups, undermining support for testing. Indeed, a recent report found that opting out was largely driven by a general opposition to standardized testing and its use in evaluating teachers and schools, which is undoubtedly exacerbated by these testing troubles. With little support for testing, parents and policymakers are loath to spend large sums of money on new tests, instead opting for cheaper options. But these options will have more problems, and the cycle of errors and diminished support will continue.


The solution to these problems must start with state education leaders making an affirmative case for the importance of state tests. This should not be that hard to do—there is good evidence that these assessments have been a tool for improving educational outcomes. Then, there needs to be a renewed sense of political courage to select an appropriate assessment to measure student mastery of state standards and stick with it for at least several years. Constantly changing tests (or using tests as a convenient scapegoat when political blowback against Common Core gets too strong) is a policy response that undermines the goals of standards-based reform in all the ways I have identified above. Going with a consortium test may allow states to spread out the risk and obtain higher-quality assessments without having to go it alone. Of course, if there are testing errors and issues, those need to be dealt with. But establishing a long-term, sustainable vision for the role of assessment in education should be a priority if this is to remain a viable policy approach.