Saturday, October 28, 2017

Testing Isn't Everything; It's The Only Thing

My students’ test scores went up last year. Quite a bit, actually. And as much as I am skeptical of the validity of standardized test scores (in the abstract, at least), it is definitely cool to see some evidence that my class has made some sort of a difference. The 2016-2017 school year was the first year that students in grades 6-8 had a class devoted to writing, and - since that was the only major change from the previous year, since it was only students in grades 6-8 whose scores rose significantly, and since it was only their ELA scores (not Math) - it seems reasonable to infer that this class caused the rise in test scores. (Unless you’re David Hume or something and you don’t believe in the notion of causation at all, in which case - yeah, I get it, but also, like, come on.)

Let me brag here for just a second. When my most challenging cohort of kids, last year’s seventh graders, took the ELA test at the end of the 2015-2016 school year, 33% of them were judged to be "proficient." On the 2016-2017 test, that number rose to 71%. That's compared to a statewide average of 63% proficiency. So, in other words, that class went from being way below the state average to slightly above it. And they were the lowest of the three grades I teach. 88% of the sixth graders and 86% of the eighth graders were judged proficient on the ELA test, numbers which put our moderately-high-poverty school in league with schools in affluent towns like Bedford and Amherst. (It’s something of an open secret, I guess, that high income levels are generally associated with high test scores.)

And yeah, I’m definitely that guy who would say “you know, test scores aren’t actually a good measure of learning” if my students’ scores had gone down or stayed the same - but who is perfectly willing to automatically, unthinkingly accept them as valid when they make me look good. (Maybe it counts for something that I’m aware of that? Or maybe it actually makes it worse because I’m not doing anything to fight it?) But that’s not really my main point here.

Proof that rich people are just objectively better, I guess.

My point has to do with the fact that, at the end of this year, I will not be able to see whether my students’ progress on these tests was just a fluke or an actual pattern, because they will be taking a completely different test from the one they took last year. Its format will be very similar, but it is designed by a different company - AIR instead of SBAC - and it will separate out reading and writing instead of grouping them both together under "English Language Arts." And that means there is no way to make comparisons between the 2016-2017 school year and the 2017-2018 one. We will have to wait until 2019 to have any sense of whether our students' scores are improving, declining, or remaining static.

(Two caveats: that doesn’t mean we won’t have any clue if they’re improving at the skills that we teach. Of course we will. And it also doesn’t mean that people won’t try to say that the results of the new test mean something in relation to the old test. Let's say, for instance, only 60% of students are judged proficient this year - I'm sure someone out there will take this to mean their scores have gone down. I discussed that latter issue in more depth the last time I wrote about testing.)

There is nothing inherently wrong with scrapping an old test and finding a new one, of course. If the old test was bad or invalid, then that’s the wisest thing to do. But it sure does seem to be happening an awful lot. New Hampshire has only been using the SBAC test for three years, a short enough time that I, a twenty-six-year-old “baby teacher,” remember when it was first introduced. I was working in a school at the time. Before that, they had the NECAP, which was around for a bit longer - I remember taking that one in middle school. But when I was an elementary student, there were different tests - one in fourth grade which I want to say was called the CAT (but I refuse to look it up) and a different one in third grade. And those are only the state-mandated tests. That says nothing of the local or school-level tests, which in some places may be changed even more frequently. Or the NAEP test that some schools (including mine, last year) are selected to participate in. It’s hard for kids to keep it all straight and keep track of what they are expected to do on each test. (Not to mention all the acronyms. Sometimes I think our culture has AOD - Acronym Obsessive Disorder. [That sounds like a joke Alfie Kohn would make. Apologies to Mr. Kohn if it is.])

When the test du jour was the NECAP, students were often told to “fill the box” with writing. It was a paper and pencil test and many educators knew that longer responses tended to get higher scores - hopefully because those students were more likely to have elaborated on their answers, but also possibly because some primitive part of the human brain actually believes that more equals better. But when it was replaced by SBAC, “filling the box” became objectively impossible; SBAC was a computerized test, where the box would expand to the length of the Bee Movie script if you typed that in there. (If I remember correctly, you couldn’t copy-and-paste into the box, in an attempt to prevent plagiarism. But I am sure there are some dedicated slackers out there who would be willing to type the whole thing out just “for the lulz.” In fact, the name of a particular seventh-grader just popped into my head. He'd definitely do it if he were in the right mood.) So we had to change what we advised the students to do.

And I think, somewhere in my three years of administering this test, I had gotten pretty good at preparing my students for the writing portion of it. I knew what they would be expected to do and what sort of practice they needed to be able to do it well. And honestly, I thought the writing component of SBAC was pretty darn good. I didn’t love the time limit, but other than that there wasn’t all that much in there that I could take issue with. I felt pretty confident that my regular approach to teaching writing was going to help them succeed on the test as well as grow as writers in the long-term.
"Ya like jazz?"


So maybe I’m partially just annoyed that a standardized test I am familiar with and kind of actually liked (or at least didn’t mind having to deal with) is being replaced. And I definitely am distrustful of the fact that the writing component of our new test is going to be scored by a computer. (I’ve written about that before, too, and so have a bunch of people smarter than me.) And if my students’ scores suck this time around, I’ll almost definitely blame that.

But it also seems like the people involved in education are constantly trying to reinvent the wheel, like the perennial impulse is to dump out the whole bag of apples and start over from scratch. Veteran teachers have been telling me for years that education is cyclical - things that fall out of favor always come back again. And already I feel like I am starting to see that. But it’s also true that there are certain assumptions and features that are never (seriously) questioned - things like annual standardized testing, or the underfunding and subsequent understaffing of many schools. People talk about these things a lot, of course, but no one ever actually does much to address them.

And then there are the things that are really never questioned, which are the things that I can’t even mention here because I am not capable of seeing them, because they are part of the water that I am dissolved in. And my hunch is that if there is any sort of answer to all of the problems that we face, that would probably be the most promising place to start looking.
