May 30, 2008

Wading through Unified's test data

Racine Unified underwent its annual exercise in self-flagellation on Thursday by releasing its scores on the state standardized tests called the WKCEs. It's always bad news for Unified, usually greeted with headlines like "Unified continues to struggle on state tests."

It's pretty clear our schools score lower on standardized tests than other districts throughout the state. Poverty rates largely explain the low scores, though staff cuts, aging schools, an inflexible teachers' union, and a surly malaise when it comes to public education in our community are also culprits. And, no doubt, families and students are to blame.

So, now that we've blamed everyone for our so-labeled failing schools (that list comes out next month), what's really going on? I have no idea - and I suspect all of the combined PhDs in Unified have no idea, either.

Here's why: No one can speak with any sort of clarity or even understanding about the results. Click the link above and read the JT's story on the results. It makes no sense, and I don't think it's bad writing or reporting. Here are the materials Unified gave out explaining the results.

This is the most revealing line (emphasis added):
"All buildings at a variety of grade levels, subgroup populations, and other indicators continue to have performance issues related to WKCE that appear to be somewhat random."
While I suppose the candor is commendable, attributing test scores to "random" factors does little to instill confidence in the district. District officials have had access to the scores since March. In all that time they couldn't come up with something better than "somewhat random"?

But enough of that. Unified's problems run deeper than how to spin test results (though they could have done a lot better at the spinning). The problem I have with the WKCEs is that they're hard (if not impossible) to read. They shift standards every couple of years, so past years aren't comparable. They throw around jargon that experts understand but that means nothing to the public. And they divorce the results from the actual schools, teachers and students who are taking the tests and teaching the lessons.

Here's one small example. Looking at the numbers this morning, I realized that the scores are reported across grades for each school. For example, Julian Thomas' scores are reported in a chart that lists how students in 3rd-5th grade did on reading tests in 2007.

That doesn't make sense. You can't compare the results from third grade to the results from fourth grade. It's different students, different teachers and different subjects.

What I wanted to know was how students are doing as they progress through the district's reading programs. How did third-graders in 2005 do when they moved to fourth grade in 2006 and fifth grade in 2007? If they did better, that's a good sign that teachers are engaging students and catching more students up to their reading level. If the scores go down, then perhaps that isolates where there may be problems with a teacher or program. For example, if 85 percent of third-graders at a particular school pass the test in 2005, but only 60 percent pass when they move to fourth grade, then maybe there's something wrong in fourth grade.

I think that's what Unified was getting at with their "cohort" analysis, but I didn't have the time or brain power to figure it out. So I came up with my own analysis. I re-jiggered the numbers this morning and came up with a spreadsheet that tracks the reading test scores of third-graders in 2005 as they progressed to fifth grade in 2007.

Read the results of the study here.
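For anyone who wants to try the same re-jiggering, here's a rough sketch of that cohort tracking in Python. It's my own illustration, not Unified's cohort method: it assumes a hypothetical file called wkce_reading.csv with one row per school, year and grade, plus a pct_proficient column. The file name, column names and the Julian Thomas example are mine, not the district's actual data layout.

import csv

def load_scores(path):
    """Index percent-proficient by (school, year, grade) from a simple CSV."""
    scores = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["school"], int(row["year"]), int(row["grade"]))
            scores[key] = float(row["pct_proficient"])
    return scores

def follow_cohort(scores, school, start_year, start_grade, n_years):
    """Follow one class as it moves up a grade each year, e.g. 3rd in 2005,
    4th in 2006, 5th in 2007. Missing years come back as None."""
    trail = []
    for i in range(n_years):
        key = (school, start_year + i, start_grade + i)
        trail.append((start_year + i, start_grade + i, scores.get(key)))
    return trail

if __name__ == "__main__":
    scores = load_scores("wkce_reading.csv")
    # Third-graders in 2005 -> fourth grade in 2006 -> fifth grade in 2007
    for year, grade, pct in follow_cohort(scores, "Julian Thomas", 2005, 3, 3):
        shown = "missing" if pct is None else f"{pct}%"
        print(f"{year}, grade {grade}: {shown}")

If the percentage drops as the same class moves up a grade, that points at the same kind of fourth-grade question raised above, which is exactly what reading the scores grade-by-grade within a single year can't tell you.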




