Monday, September 25, 2017

The Backfire Effect in Persuasive Writing




I’ve got to assume pretty much everyone knows about confirmation bias by this point. It’s like that stupid quote from that stupid-but-loveable book The Perks of Being a Wallflower, “we accept the love we think we deserve” (which I have never understood why so many people seem to think is so deep.) Except in this case it’s more like: “we accept the facts that go along with what we already believe.” And find a way to dismiss all the other ones. Most of the time, at least.

And that would be bad enough. But recently, I’ve run into quite a few articles that suggest that being exposed to facts that contradict what we believe can actually strengthen those beliefs. Most of them seem to be based around a 2010 study by two dudes (okay, professors) called Nyhan and Reifler where subjects were exposed to “corrections” (or “fact checks”) of things they were predisposed to believe. After reading these “corrections,” many of them doubled down on what they already believed rather than modifying their beliefs to fit the evidence.
Literally any other quote from this book, please.

For example, let’s say Joe, who has a positive perspective on the Iraq War, reads an article that includes some information that supports his beliefs: that there were weapons of mass destruction found in Iraq after the invasion. The existence of WMDs in Iraq was, of course, the primary justification for the war. But then, Joe is told that, in fact, what he just read was wrong and there were no WMDs found in Iraq. Although naively we might think that this would affect his beliefs and make him less likely to support the war, Nyhan and Reifler found the opposite. Now Joe supports the war even more than he did before.

They call this the “backfire effect.” It is like an extreme version of confirmation bias - not only does new information that contradicts someone’s worldview not change what he or she already believes, it actually can work to cement those established beliefs. This does kind of make sense when you think about what people are like, though. We can be pretty oppositional creatures. Think about when someone tells you to do something that you were already going to do - it kind of makes you not want to do it, doesn’t it? We live most of our lives based on emotion and instinct rather than rationality, so really it isn’t all that surprising that this applies to our political beliefs as well.

It is worth noting that there have been attempts to replicate the original study that were not successful. In some cases, people did change their minds when presented with new evidence, or at least said they did. But follow-up studies do not erase the existence of the original one. Even if the “backfire effect” isn’t universal - a universal one would be awfully depressing (as one article says, “if the backfire effect is real, nihilism might be the most appropriate response”) - it is still worth examining because the mere fact that it does sometimes happen challenges some of our most basic assumptions.

What’s important for us to recognize is that this whole business is complicated. Sometimes, people change their minds when exposed to new evidence. Other times, they ignore that evidence completely or find a way to rationalize or compartmentalize it, which means it has no effect on their beliefs. Confirmation bias. In some cases, the new evidence may make their existing beliefs stronger. The backfire effect. And then, there is also the possibility, mentioned by Nyhan in that 2016 article (written just days before the election), that people may acknowledge the validity of new information without allowing it to affect their support for a particular policy or candidate.

That is, Joe could accept that there were no WMDs in Iraq but then go on to say that the war was still justified because of Saddam Hussein’s brutality or some other reason. This is different from confirmation bias because, in this case, Joe’s belief is not affecting the way he perceives or responds to the facts. It does suggest, though, that our beliefs are often not really based on the facts that we might offer as evidence to support them. Which is another way that this whole business can be complicated.

And its being complicated is kinda the whole point here. The naive view that facts and evidence are what persuade people to change their minds is clearly insufficient and misleading. And yet - there are still quite a few things that we take pretty seriously that are built off of that model.

*

For instance, the entire way we expect and teach students to do persuasive writing - or, as the Common Core insists on calling it, argument (more on the distinction between the two in a bit) - hinges on the notion that people change their minds when presented with evidence. But clearly that is not always true in the real world. In the real world, there are a myriad of ways that people can respond to being presented with evidence; changing their minds is only one possibility. In the real world, persuasion is a complicated business. So if we don’t really know how persuasion even works - how can we teach or expect our students to do it?

Of course, we do know some things about how persuasion works in the real world. These things have been studied and analyzed. We know that people respond to appeals to emotion, to guilt, to a desire to be liked. They respond when they are made to feel like their lives are incomplete and then offered something that will ostensibly fill that void (a sort of “negging,” really.) These are the techniques that advertisers and marketers use, and they are probably the best real-world examples of persuasion I can think of. Or think about what the founders of multi-level marketing companies say to convince people to join them - or cult leaders - or religious proselytizers. (Or did I just say the same thing three times?)

The problem is that these strategies are often unscrupulous. When persuasion is the only goal, that means it is acceptable to lie, to mislead, to manipulate people by appealing to their base instincts. Encouraging these tendencies in students seems, at best, careless and at worst, downright immoral. Do we really want a generation of people who are extremely effective at manipulation, who - like the fictional Jennifer Barkley from Parks and Recreation or the fictional Jeff Winger from Community or the all-too-real Kellyanne Conway - care very little about the content of what they are saying and only want to “win?” Argumentative mercenaries.

Such an approach does have its roots in the ancient Greeks (though that does not necessarily mean it’s good.) The Sophists, certainly, believed in persuasion above all else; that is exactly what Plato criticized them for. But even Aristotle was never quite willing to give up the notion that the primary purpose of rhetoric was to persuade an audience. It’s not that much of a stretch to imagine a modern-day Aristotle, seeing all that we have learned about confirmation bias and the backfire effect, throwing up his arms, saying “Fine, whatever,” and casting his lot in with the Sophists. (That's why Aristotle was the chillest of the ancient philosophers.)
What a great character. P&R has the best minor characters.

The other option available to us is to follow the guidelines of the Common Core and move away from mere persuasion towards “argument.” The distinction is supposed to be that argument uses evidence, facts, and logic to make its case rather than appeals to emotion or authority. It is all logos, not pathos or ethos. Which all sounds great. The problem is that we just learned that evidence doesn’t reliably convince anybody of anything. And sometimes it even has the opposite effect. (And there is no reason to believe that people are immune to this just because they are educated or intelligent [or liberal], though that would be nice.)

So are we willing to accept the conclusion that we may be having students write pieces that could very well do the opposite of what they are intended to do?

Imagine telling a passionate seventh-grader - inspired to change the world for the better, armed with the slogan that “everyone can make a difference,” author of an argument essay that laid out all the facts about climate change and made the case that the school should “go green” - imagine telling her that reading her essay actually made you and everyone else want to pollute more. Why wouldn’t she say, “well, what’s the point, then?” And if you haven’t spent any time hanging out with nihilistic seventh-graders, it is not something I would recommend.

Also, if the end goal of actually persuading people is removed, then there is no legitimate rhetorical purpose for writing an argument. The purpose of the piece of writing becomes “to demonstrate that you know how to write an argument.” And that is likely going to lead to some insipid writing.

Furthermore, when it comes to assessing this sort of writing, I have a hard time imagining how I could evaluate an argument without relying on the principle of persuasion. What counts as strong evidence? Evidence that makes me more likely to believe the argument. How do we distinguish solid reasoning from crappy reasoning? Well, if the reasoning seems likely to make a hypothetical person believe in the argument, then it must be good. But there is no evidence that I am exempt from these biases, that I can ever be, no matter how hard I try. (Nor are, therefore, the hypothetical people in my head. My imagination is limited by my own abilities.) And anecdotal evidence from my experience backs that up.

Not that long ago, I said, half-jokingly, to a friend that the most convincing argument I have ever heard for the existence of God is the fact that the sun and the moon appear to be pretty much the same size from Earth. And in one sense, I believe that. It does seem like an awfully big coincidence. (Remember how Seinfeld had that whole bit about whether there is such a thing as a “big” coincidence or whether the definition of “coincidence” inherently includes all sizes? Absolutely fascinating.) 
But certainly that would not count as a “good” argument in the traditional sense. Most people would say Descartes’s ontological argument for the existence of God is better, and I consider that to be absolute horse shit. And I think we need to leave open that possibility, that some “good” arguments could be absolute horse shit. And also recognize that some “bad” arguments could be really effective. Aren’t there things that you just know without having evidence to support them - things that you feel more certain about than anything else? There’s something going on there, and we shouldn't just dismiss it as "emotion." When a belief becomes part of your identity, you can’t just eradicate that belief - no matter how many lectures on evidence and logic you have sat through.
Yes, that conversation was right around the time of the eclipse.

But when persuasion is removed, what do we have to fall back on when we evaluate arguments? Structure? But it is entirely possible to write a well-structured argument about something completely meaningless.

(Interestingly, in his book The Testing Trap, George Hillocks, Jr. contends that many of our writing assessments already encourage students to do just that, since they don’t provide students any content to work with. Now, this has sort of changed in some of the new Common Core-aligned tests - but certainly that doesn’t mean every teacher has changed his or her practice to keep up. There are undoubtedly still many out there who evaluate argument essays primarily by their structure.)

Generally, though, we already recognize that structure alone doesn’t mean all that much. We talk about arguments that are valid but not sound - where the structure is fine, but the premises have no truth to them. If Socrates is a pig, and all pigs eat corn, then Socrates eats corn. That’s a perfectly valid argument. But the premises are nonsense. And structure does not help us determine the truth of the premises; that is, structure does not help us distinguish strong evidence from weak evidence, strong reasons from weak reasons. Only the concept of persuasion can do that.

I guess we’re kind of screwed no matter what. We can either value evidence or we can value persuasion, but we can’t pretend that the two always go hand-in-hand, that what people really want to hear, what really gets them going are "the facts." Because that's just plain not true.

*

[Afterthought: the studies that demonstrate the existence of the “backfire effect” and confirmation bias seem to mostly pay attention to subjects’ immediate reaction. But I wonder if the effect is long-lasting. Maybe we initially react to new information by doubling down on what we already believe, but over time and with repeated exposure we end up changing our minds to accommodate that new information - as in Piaget’s theory of development. Or our minds change, without our being in charge of it. That is: maybe being persuaded by evidence is a long-term, gradual process rather than something instantaneous. Certainly that idea helps avoid some of the nihilistic implications of the above.]

Saturday, September 23, 2017

What Reality TV Can Teach Us About Democracy

Reality TV was a huge part of my childhood. Starting with the first season of Survivor in the summer of 2000 (which is apparently on Hulu and I am looking forward to re-watching sometime soon, if only to hear, in proper context, that amazing bit in the finale where Sue tells Kelly that, if she saw her dying of thirst in the desert, she wouldn’t give her a drink of water) - from then on, I was near obsessed with that genre of reality-competition show. You know, how normal kids like baseball or whatever.

The peak of my obsession was probably season two of The Mole, a show that no one ever seems to talk about or remember, but which I found absolutely fascinating. (It was hosted, incidentally, by Anderson Cooper. Which means that, sadly, the most attention the show gets these days probably comes from people who, for whatever reasons of their own, are reading through the Anderson Cooper Wikipedia page at two in the morning.) Basically, the premise of The Mole was that one player was actually there to sabotage the group’s performance in competitions (he or she was “the mole”) and everyone else was trying to figure out who that was. There were also supposedly hidden clues in every episode for viewers who were also trying to deduce who was “the mole.” So all throughout season two (which initially premiered on my tenth birthday but then got bumped to the summer of 2002 because no one was watching it) - I recorded every episode on VHS so I could re-watch them, looking for clues. I was convinced that a player named Al was the mole. (He wasn’t.) But I must have watched some episodes as many as ten times. (The summer of 2002 was part of the long drought between the fourth and fifth Harry Potter books, my other great childhood obsession.)

It gets worse. Not only did I watch every episode of shows like Survivor, Big Brother, and The Mole - I was also a very active member of an online community called Reality TV Talk from 2002-2005 or so. RTVT (as it was known) was a message board - pretty much the precursor to social media platforms like Reddit. 
Now, this is a part of my life that I don’t really think about too often these days, and when I do, it’s with a weird sense of, “Oh yeah, that did happen, didn’t it?” But it was a huge deal at the time. It was at least three-quarters of my social life for a while.
Jeff Probst, reality TV show host G.O.A.T.


I became a moderator. I wrote episode “recaps.” (A few years back, I went to see if I could find any of them, and I did. It’s kind of amazing that no one could tell they were written by an eleven-year-old. Everyone there was under the impression that I was eighteen, which might be the most innocent, pure instance of lying-about-your-age-on-the-internet ever.) I got way too involved in the drama, such as the time when one of the admins of RTVT decided to go create his own message board and the two became bitter rivals. Or when (on the first anniversary of 9/11) it was announced that RTVT was shutting down and everybody scrambled to find or create replacement message boards. Ultimately, it did not shut down, but some of the “replacement” communities lingered and changed the whole fabric of the social structure. Friendships were destroyed.

And in addition to all of this, watching these reality TV shows was a bonding experience for me and my mom. No matter what else may have been going on - at eight o’clock on Thursday night, we always found our way to the living room for Survivor. Sometimes my dad would join us for a little bit as he ate his second or third dessert of the night; when my sister got older, she was occasionally part of the group. (Although she was more of an American Idol type of person, which I watched as well but didn’t love. I remember her crying almost hysterically when her favorite contestant was voted off, though - but right now I can’t remember who that was.) But most of the time, it was me and my mom. We talked only during the commercials - that was a rule - analyzing people’s behavior, developing theories, making predictions, discussing the structure of the game.

So the point of this excessively long overture is that what I am about to discuss is part of a long and complex history of me thinking about this sort of stuff. And I think reality TV is incredibly important. If you were an anthropologist trying to understand American culture in the early twenty-first century, reality TV shows may be one of the best places for you to look. (Don’t forget: ten years ago, he-who-shall-not-be-named-in-this-post was best known as the host of The Apprentice. Coincidentally, that is perhaps the only major reality show that I never really got into, and I am retroactively proud of my past-self for that.)

And despite everyone in the world having made the trite observation that “reality TV isn’t really real,” there are a lot of things about it that are relevant to the real world. Things that reveal or suggest important truths about human behavior and experience, about politics, even about philosophy. Or, at least, things that can be taken as jumping-off points for more substantive discussions. And this is an attempt at doing just that.

*


I haven’t watched a whole season of a reality show in quite a while, but I ended up watching Big Brother this past summer. The nineteenth season. Probably because I was at my mom’s house a decent amount right when the season started - and because, since I wasn’t working at all, I had a bunch of free time. I didn’t see every episode - it’s on three times a week and that’s a big time commitment even for someone who’s de facto unemployed in July and August. Certainly, I missed quite a bit when I was road-tripping with my sister from New Hampshire to New Mexico (although I did happen to catch Jessica’s eviction while in a motel room in Memphis, too tired after a day of walking and driving to do anything but order pizza and watch TV.) But Big Brother is a show that assumes its audience is stupid and reminds you of past events almost constantly, so it’s pretty easy to get caught up.

Here’s a quick summary of the entire season: Paul did everything.

This is Paul. He's in a band that I'd have listened to in 2006.
Usually with shows like this, there are power struggles and shifting alliances and complex social structures. This season of Big Brother, though, was pretty much entirely run by one player, a guy named Paul (who, it blows my mind, is actually younger than me, which makes me feel super old). He had pretty much everyone in the house doing his bidding, and even (not inaccurately) referred to himself as a "puppet master." He got everyone to trust him; he got them to throw competitions for him; he went the entire season without being “on the block” once, without anyone really even considering putting him up. It seemed like the other players found it literally unthinkable to go against him. He manipulated and played everyone. It was the Paul show, pretty much beginning to end.

But then he lost.

If you don't know how Big Brother works, essentially it's the players who got voted out who choose the winner from the two finalists. They are called the "jury." On this season's finale, which aired live on Wednesday, Paul lost Big Brother by one vote (for the second time in a row, actually.) The person who actually won was Josh, an immature but entertaining kid (he’s younger than me, too, so I can call him that) who calls everybody he doesn't like a “meatball” and cries almost daily. But nobody in the jury actually voted for Josh. They either voted for Paul or against Paul. And obviously, more of them voted against.

Instantly after the results were revealed, the internet lit up with different interpretations of the results. (I am pretty sure RTVT is defunct by this point, but its spirit lives on.) There were basically two schools of thought on Paul's loss. One way of thinking blames the jury, saying that they were bitter (or “salty” - isn't it interesting how we now have a different taste to associate with the emotion of resentment?) because Paul manipulated and betrayed many of them. They ought to have looked beyond their emotion and recognized that, objectively, Paul played a better game than Josh and rewarded him accordingly.

The other perspective contends that “jury management” is inherently part of the game. Paul, in this view, neglected this important aspect of Big Brother - making sure that the jury members had a positive opinion of him - and lost because of it. He could not have played a good game, according to this way of thinking, because the purpose of the game is to win. His strategy was very effective at getting him to the final two, but that's it.

It is hard to wrap my head around this debate. Both perspectives seem valid in some ways. I can easily imagine my eleven-year-old self making the case for either of them. 
Yes, Paul lost because the jury did not vote for him - that is indisputably true. But it is a question of who we blame for that. Whose responsibility is it? That's a judgment, really. In any voting situation, in any democracy - is it the responsibility of the candidate to persuade voters by any means necessary, or is it the responsibility of the voters to use the right criteria?

This makes me think of two things.

First, the 2016 election - a topic which apparently I can’t stop myself from writing about, no matter how hard I try. The debate about Paul and "jury management" shows up here as well. 
Do we blame Hillary Clinton for being a bad candidate, for not going out of her way to appeal to the “white working class” in Pennsylvania, Michigan, and Wisconsin, for not being “exciting” enough to get young people and Bernie-bro-left-liberals to show up to the polls? Or do we blame the voters for being unable to put their own personal feelings aside and vote for the person who would make a better President, or for letting things like racism, sexism, xenophobia, and business-fetishization impact their votes? Does democracy mean that candidates should appeal to the base instincts of the voters? Or does it mean that voters have a moral obligation to educate themselves and vote responsibly?
You know, the working class.
And second, student council elections. This year, I've taken over running the student council at my school and I decided to have representatives chosen by elections. (You know, like every other school in the world.) This decision has made me pretty unpopular for a number of reasons, most of which just have to do with small town politics. But one of the objections, which I have heard from both kids and adults, is that these elections are merely “popularity contests.” Always a pejorative term. But I don’t think they have to be. Isn’t it possible that the voters - the students - could be responsible and reasonable enough to vote for the person who would be the best member of Student Council? Shouldn't we trust their judgment that much? And honestly, in practice, I think most of them did just that.

Hell, I got elected to Student Council when I was in seventh grade - and, as you now know if you didn't already, I was the type of kid who spent hours a day on a message board devoted to Reality TV. Hardly a recipe for popularity. But I also had a reputation for being “smart," which is almost definitely why I got elected. (Whether that reputation was deserved is another question.) And it’s the same reason why a particular eighth-grader at my school received the votes of her classmates. She is smart and they recognized that.

I'm assuming my personal feelings on this question are starting to shine through here. I think we can and should expect voters in any democratic situation to act responsibly. Middle-school kids are capable of doing it. They are capable of voting for the smart kid. And when they don't - when they choose Zack over Cody, for instance (don't pretend you don't know exactly what episode I'm talking about) - that is their fault. It is not the fault of the candidate for not being "likable" enough. The voter is not always right. Living in a democracy should not mean that we have to lower all of our standards. It should not mean constant appeals to the lowest common denominator.

Paul should have won Big Brother this year. Even though I didn't like him all that much by the end, and there were points where I even found myself rooting for Josh, he should have won.

Cody - the supposed "alpha male" of the house, ex-Marine, stoic and emotionless to the point of appearing to hate the show that he willingly auditioned for and gave up his summer to be on - should have recognized that Paul was the real alpha and given his vote to him, despite personal feelings. (But props to the show's producers for recognizing that showing Cody's vote last would make for the most nail-biting conclusion. It sets it up like it's the "deciding vote," even though in a 5-4 decision, every vote in the majority could be framed as the deciding vote. The main thing that keeps me from believing it was all staged is that Paul winning by one vote the year after he lost by one vote - with that final vote coming as a reluctant, strong-recognizes-strong concession from his biggest rival in the house - would have been a much more satisfying story. Any competent producer, trying to rig the outcome, would have done it like that.)

Although I tell my students all the time that I expect more from them than I expect from most people - I hold them to a higher standard for spelling, grammar, coherence, and moral behavior than we hold our leaders to, apparently - that shouldn't be true. We should absolutely expect people to do better, to vote better, to learn about the issues and be responsible and mature in the decisions they make. That is what makes a democracy function. Otherwise, it is all just a "popularity contest."

Monday, September 18, 2017

How Much Is Learning Worth?

Imagine that you’ve just been hired for your first teaching job. Grade level and subject don’t matter. Because during your orientation, the principal tells you that she doesn’t want you to do any teaching. None whatsoever. Or at least - she doesn’t care if you do or not, and neither does the superintendent, or the school board, or any of the parents. No one in this community cares about learning. Maybe they used to; maybe they never did.

All you’re expected to do is to keep the students in your class under control for the entirety of the school day. Keep them quiet; keep them sitting down. The principal tells you straight up: as long as she can walk down the hallway and not hear or see anything “out of the norm,” you are doing your job perfectly. How you achieve that result is totally up to you. But it must be done.

This would not be an easy job.

Keeping a group of ten, twenty, thirty people quiet and seated for a prolonged period of time is no joke. I think this is true no matter the age of the people in the group. Sure, we do it all the time voluntarily - think waiting rooms, public transportation, airports - but only when we have an actual reason to do so. We sit in a waiting room because we want to speak to the doctor and that is the purgatory one must endure before being allowed to do so. There is some meaning to the experience. It is not just sitting there.

And even in these situations, it is often hard to bear the tedium of “just sitting there.” People who are lucky enough to be with a friend or family member will talk. (In fact, my current leading theory for why people get married is so they have someone to talk to in waiting rooms.) Others will get up and pace, or browse through a magazine or the endless stream of content on their phones; some daring souls may even start a conversation with a stranger. (My greatest fear is accidentally sitting next to one of those people.) The point is: people have this inherent instinct to always be doing something.
Hell is other people in a waiting room who want to chat.


And remember: we’re not talking about adults. We’re talking about kids, somewhere between the ages of six and eighteen. And I think it’s fair to characterize them as a pretty restless bunch. Not a criticism, just an observation. And your job is to keep them sitting down and quiet for the same amount of time that a regular teacher would be trying to teach them. In elementary school, that would be pretty much a full seven-hour school day, with breaks for lunch, recess, and special (when someone else would have the job of making them sit down and be quiet); in middle and high school, it would be for about fifty minutes, but then as soon as they left, a whole new group would come in.

Question #1: How would you do it? (I’m actually curious.)

But my point here is this: keeping students under control is actually an end in itself. It is not something that just happens. It is one of the many ends that teachers try to achieve on a day-to-day basis. Now, I’m not trying to do one of those poor-teachers-our-jobs-are-so-hard, please-pat-us-on-the-back sort of things here - I hate that shit as much as anyone. 


(an incomplete list of my responses to that sort of shit:
A. suck it up, you chose this job 
B. every job is hard 
C. you get paid pretty well 
D. you have ten automatic weeks of vacation every year 
E. you’re exaggerating how many hours you really work
[I’m pretty sure I could go through the whole alphabet on this if I wanted to.])

What interests me is what happens when this end of keeping students under control comes into conflict with another end that we are trying to pursue. Because in the real world, it is not enough to keep your students quiet and sitting down. You are also supposed to get them to learn something - some content, a skill, whatever it is that you were actually hired to teach. And that is not to even address the myriad other ends that a particular teacher could be trying to pursue with his or her students - to respect each student’s individuality, to nurture their creativity, to expose them to the “real world,” etc.

There is no guarantee that all these different ends are going to work together. Sometimes there are conflicts. There are learning experiences that inherently involve a certain amount of disorder, noise, or even what may look to an outside observer like chaos. Likewise, it is possible to have a quiet, orderly room where absolutely no learning is going on. (I mean, if you want to keep a roomful of kids quiet for a while, put on a movie. It doesn’t have to be a good movie. They’ll shut up anyway. Of course, this only works in the short term - it would not work day in and day out the way it does for Cameron Diaz in Bad Teacher. So don’t try to use that as your answer for Question #1.) So when there is a conflict, which end do we consider to be paramount?

A couple of years ago, when I was still in college or even during my internship, I probably would have said: you should always go with the learning objective. That is what school is for, after all. And so what if it leads to a little bit of chaos? So what if it makes your job a little harder in the short run? As long as the students are learning, that’s what really matters. And - I would have continued, idealistic as I was, three whole years ago - those experiences are true and authentic learning, not just memorizing and regurgitating things out of a textbook. (Notice how this implies that those two extremes are the only two possibilities.)

But you’ve also got to consider the full scope of the impact here. A learning experience that disrupts the established sense of order and quiet is not necessarily going to go exactly the way you expect or want it to. There may be students who take the absence of sitting-down-quietly as an invitation to behave in ways that you were not anticipating. To play, to talk, to roughhouse, to fight. Especially if they don’t have much experience with anything but sitting-down-quietly. Or maybe it would spill over into their next class (in a middle or high school environment.) Or it might even carry over into the next day. When you violate the established and expected structure, it is always a little bit more difficult to get it back than it would have been if you had just kept it the same. And so sometimes this can lead to a negative impact upon future learning activities.

This, essentially, is what I learned from my first year teaching.

Take the rapping teacher from that commercial that I think we’ve all seen ten thousand times. (Pre-emptive rebuttal to the person who’s going to say they don’t watch TV: just shut up.) In the commercial, the kids just sit there and listen to him rap about why it’s called the remainder. In reality, I bet they’d be so distracted by the idea that he was rapping that they didn’t even hear any of the math content. At least the first time. They’d all want to show off their own rapping skills; they’d brag to everyone else that they got the rapping teacher; they’d ask him if he was going to rap every time he taught a lesson. Which would be enough for many people to be all like, “You know what? If this keeps up, we’re not even going to do this anymore!” And so that’s the end of rapping in the classroom.

Of course, if you are willing to put up with all this disruption in the short-term - and your colleagues are willing to do the same - great things can happen. But it’s a question of priorities. You must decide, every time, whether a given learning experience is worth losing the sense of order, structure, and regularity. Sometimes it will be; sometimes it won’t.
Not the rapping teacher. But definitely the same archetype.


Of course, there are definitely teachers out there who make the decision in favor of order, of sitting-there-quietly, far too often. (You know what, let’s call it STQ.) These are the teachers who do put on movies with surprising frequency, who rely on worksheets, who give out what we all know as “busy work.” (By the way, that’s my answer to Question #1: pass out busy work. Something easy enough that they don’t really have to think very much in order to do it, but also pretty time-consuming. Word searches are great for that. Or copying definitions from the dictionary or a textbook. That will get you much further than anything else I can think of.) And that is certainly a problem. But not one that is solved by doing the exact opposite. The key here is judgment.

There are also those who tend to assume that sitting-there-quietly and learning are inextricably linked, and that is what I consider to be the most dangerous. Maybe it’s just because this is how I would classify many of the people I work with. “If you can’t behave, you can’t learn” - a sentence that I have heard uttered in many meetings over the past year-and-three-weeks. And the extension of this is: if a classroom does not require students to sit-there-quietly all the time, then there cannot be any learning going on in there. They recognize that there is such a thing as STQ without learning, certainly, but not that there can be learning without STQ. Under this way of thinking, there are no judgment calls to be made; there is nothing complex or nuanced about the decision-making process. It’s as if the scenario from the beginning did happen, but then the principal added on her way out, “Oh, and you’ve also got to teach them math.”

And that, I think, is a very limited and limiting way of thinking about things. But it’s the environment I’ve got to work in.

Sunday, September 17, 2017

The Surprising, Doomed Alliance of Liberals and Facts

I judge people by their bumper stickers. Sometimes I even try to construct a whole identity for the person driving, based solely on the way they have chosen to advertise themselves to the world. This is especially likely to happen if I am stuck behind them in traffic (which, admittedly, doesn’t happen too often where I live, which is one of the very few perks of living here, unless you consider knowing and being known by every single waitress at every single restaurant to be a perk, which I don’t think I do.)

Not too long ago, I saw a bumper sticker that read: “Facts Matter.” In this case, it only took me a couple of seconds to imagine the identity of the driver - young and liberal, the sort of person who posts on social media about how awful Donald Trump is pretty much every time he opens his mouth, someone who watches John Oliver and/or Samantha Bee clips. Someone not all that unlike me, that is. At the very least, someone who is on my “team,” politically speaking. (Politics has definitely become a team sport. And I think it will be a while before it stops being one. And we certainly can’t stop it from being one just by saying it shouldn’t be.) And putting that bumper sticker on her car (I instinctively imagine the driver as a woman, in her twenties or thirties) is her way of making a statement against Donald Trump and all he stands for.

But ten years ago - or, hell, even two or three - if I had seen that exact same bumper sticker, it would have conjured up a very different mental image. This driver would be a middle-aged man, conservative. Possibly an avid fan of Bill O’Reilly or Sean Hannity. Definitely someone who rails against “political correctness” and believes that young people these days, particularly on college campuses, are too sensitive and entitled. The sort of “facts” that he would see himself as defending are things like “women are physically weaker than men” or “there are only two biological sexes.” Or maybe things like “black Americans commit more crimes on average than white Americans” - who knows. I’ll try to avoid the temptation to stereotype this guy too harshly. The point is: the statement this driver would be making by slapping that bumper sticker on his car is a statement against the wishy-washy, moral relativist views of liberals.

Remember when that’s what they used to accuse us of being?

There’s been a shift over the past few years. Now it is largely the Trumpians who talk about truth being relative - remember “alternative facts”? - and the anti-Trump crowd who are the staunch defenders of facts and objectivity.

I’m interested in two main things here. First of all, how did this happen? I can’t pretend to have a definitive answer, but I do have a couple of theories that I’d like to explore. Second, and more importantly, who is right? And for this one, I do have my answer, but it’s kind of an uncomfortable one for me. Because in this case, I’m pretty sure it’s Donald Trump.


*

I consider that there are basically two ways of understanding the world. This is an oversimplification, of course (just as it always is when someone says there are only two of anything) - but I find it to be a useful framework. There are lots of different ways to refer to them, but I am going to use the terms modern and postmodern. The modern paradigm basically posits that we pretty much perceive the world the way it really is, that truth and meaning are objective and can be arrived at through rationality, that tools like science and logic allow us to reach certainty about things. Enlightenment thinking, basically. The postmodern paradigm, in contrast, sees truth and meaning as relative because pretty much everything is a matter of interpretation. Nietzsche and Heidegger and Sartre - and pretty much every annoying hippie stoner you knew in college. Full disclosure: my sympathies lie with the postmodern worldview.

Taken from one of those pamphlets your Christian neighbors will be giving out on Halloween

These two worldviews are not inherently tied to politics. I think they are more fundamental than politics. Certainly, there are historical examples of movements that were politically liberal but modern in their worldviews - the progressives of the early twentieth century, for instance, who believed they could use scientific principles to create efficient systems like assembly lines. However, in recent years, I think we have become accustomed to associating modernist thinking with conservative politics and postmodernist thinking with liberal politics. I mean, consider the line, “If you aren’t liberal when you’re twenty, you have no heart; if you’re not conservative when you’re forty, you have no brain.” Conservatives are supposed to be the thinkers; liberals the bleeding-heart feelers. And what is assumed to go along with your bleeding heart? A mind so open that your brain falls out. Tolerance and diversity taken to ridiculous, contradictory extremes.

Or flip it around - what was the usual way that liberals would criticize conservatives in the pre-Trump era? Cold, unfeeling, heartless. Acting in rational self-interest, perhaps, but unable or maybe unwilling to care about the marginalized and the downtrodden. Ebenezer Scrooge or, his green counterpart, the Grinch.

There are absolutely still elements of this stuff around, of course. The old stereotypes still linger in our culture like ghosts. Any criticism of Paul Ryan, for instance, is pretty much indistinguishable from how someone would have criticized conservatives ten years ago. And a lot of the vilification and mockery of “social justice warriors” accuses them of valuing “feels over reals.”

But there has also been an important shift in the past couple of years corresponding with the rise of Trump. Trumpism has become its own thing, a sort of postmodern conservatism. (And yeah, you could say that Trump isn’t a real conservative. But first of all, that’s pretty much just a “no true Scotsman” argument, where the goal posts for who counts as a conservative are constantly moved to wherever the speaker wants them to be. But more importantly, the plans of Trump and those around him include such things as: lowering taxes, especially on the wealthy; reducing the size and scope of government agencies; cutting funding to social programs; being tough on immigration and drugs; reducing access to abortion and birth control; and rolling back LGBT rights. If that’s not conservative bread-and-butter [conservative biscuits-and-gravy], then what is? In fact, I think what a lot of people are really picking up on when they say “Trump’s not a conservative” is the same postmodern tendency that I am trying to discuss here.)

The liberal version would be "kale and quinoa"

In what way is Trumpism postmodern? Well, he and his surrogates certainly seem to believe that truth and meaning are relative rather than absolute. Kellyanne Conway's "alternative facts" line is the pièce de résistance of this whole thing, but the administration’s whole approach is based on the notion that truth doesn’t really matter - or exist. They say whatever they want to in order to advance their agenda. And it is working. While they are not abiding by any of the established “rules”, they are definitely succeeding in the task of persuading people. The election itself is the best proof of that. But even after that - nearly half of those who voted for him still believe he won the popular vote, and an even larger group believes that voter fraud is widespread.


And being fact-checked does not bother Trump or those who speak for him. They believe that bias - whether it belongs to an individual or an organization - negatively impacts the ability to be objective about anything. Therefore, they can discredit anything that comes from the “liberal media.” Trump has also recognized that one’s personal background can be an influence upon one’s judgment - as when he claimed that a Mexican-American judge could not be impartial in a case involving him. (Of course, a true postmodernist would also recognize that a white judge could not be objective, either, because he too would have past experiences that colored his judgment. But more on the half-hearted nature of Trumpian postmodernism later.)

And they have plainly recognized that words do not have fixed meanings: they took “fake news” and turned it from meaning a deliberately deceptive news story to meaning, basically, anything they don’t like. (As demonstrated in this Samantha Bee clip which, incidentally, is what got me thinking about all this in the first place, although I think it suggests the wrong conclusion.) And again - they’ve been successful at this. Now no one ever even says “fake news” unless they’re trying to imitate Trump, whether sincerely or mockingly. The meaning of the term has been transformed.

The examples could go on and on. Nietzsche, perhaps the most important postmodern thinker of all time (in my book, he definitely is, and I think you could make the case that the past one hundred and fifty years of Western thought has pretty much been just “footnotes to Nietzsche” the way it used to be to Plato) - he made a distinction between master morality and slave morality. Trumpists divide the world into “alphas” - who take what they want - and “cucks,” who are content to let other men have it. Nietzsche also said that human existence was about “will to power.” I mean, let’s just pause for a second. Who would be more likely to use a phrase like that: Donald Trump or motherfucking Tim Kaine? Right, exactly.


*

So now for the first important question: how did we get here? Let’s not give Donald Trump too much credit - he did not create any of this himself, as an individual. Nor was it whatever shadowy figure we are currently assuming is “really” pulling the strings on the puppet-king (it used to be Steve Bannon; I’ve got to wonder who we’re going to blame now.) No - as many smart people (and me) have said before, it was the cultural atmosphere that created Trump, not the other way around.

My first theory is that the alliance of postmodernism and conservatism began as a sort of sarcasm which gradually transformed into sincerity. I mean, the other side has always kind of done this weird thing where they pretend to believe in what we believe in - in order to make us realize how absurd it is, I guess. Or just to demonstrate to each other that they understand how absurd it is. They will say things like “my gender identity is refrigerator!!!!” or “now that marriage is all about love, I’m going to marry BACON!!!!” The point being that the supposedly open-minded postmodernist liberal has to accept even these ridiculous examples because postmodern thinking allows for them.
Who would EVER choose rocky road over MCC????


Reductio ad absurdum is a legitimate rhetorical technique, of course. But I have always found it strange how gung-ho they are about it. It’s almost like they are obsessed with postmodernism - in the same way that social conservatives like Rick Santorum sometimes come across as obsessed with homosexuality and seem to spend more time thinking about the gory details of gay sex than even gay people do. Postmodern ideas clearly interest them, even if they don’t subscribe to them.

I suppose it’s kind of like this. You spend your childhood and adolescence in a community that is wholly devoted to Christianity - you believe in God, you read the Bible, you pray every night. You know that you can’t do anything that goes against God’s commandments because it means that you will be sent to Hell. You sort of implicitly assume that everyone is either a good Christian like you or else a godless, murdering, raping piece of shit. But then, one day, you meet an atheist. And this atheist seems to be a fairly average person - not a fantastic one, necessarily, he’s not friggen Mr. Rogers or anything, but certainly someone who doesn’t kill or rape or intentionally deceive people.

Maybe that’s when you start to obsess about the idea of atheism. You’ve just been exposed to it for the first time, so now it interests you. It doesn’t matter whether this interest is positive or negative, just that it exists. You start to say things like, “How can an atheist be a moral person? If you don’t believe in God, where’s your morality come from?” Or maybe it’s, “I don’t trust atheists. Even when they seem nice, you still have got to remember there’s nothing stopping them from snapping and killing you.” Of course, this is a rather superficial and misguided understanding of atheism, but that’s not surprising since this person has just been exposed to it for the first time. So too for postmodernism. Those edgy “I identify as an attack helicopter” statements are the same sort of thing - an attempt to show the absurdity of a worldview that you have only just been introduced to.

But then suppose this hypothetical person does somehow come around, after a while, and realize there is no God. (By the way, this isn’t just an analogy. I do see postmodernism and atheism as fundamentally linked. I’m not sure how you can really be one without the other. Which is one of the reasons why I suspect that Donald Trump’s postmodernism might be pretend. Or else his Christianity is bullshit. Or maybe both. Probably both.) Then, you realize the truth of Dostoevsky’s famous line: “If God does not exist, everything is permitted.” You can do anything! You can rape or murder or even be gay!

When this is a new revelation for you, you’re bound to dwell on it for a little while. It’s just like before, except that this time it is personally relevant. Maybe you pull a Raskolnikov and actually murder a little old lady whose only crime was being a shrewd businesswoman - just because you can. Or you don’t do anything but you sure do think about it a lot.


Yeah, he's a little smug, but he is right on this.
For those of us who have been lifelong atheists, though, this is the oldest news in the world. We were small children when we first realized that everything is permitted, and we have gotten the fuck over it since then. Penn Jillette puts it pretty well when he says: “I do rape all I want. And the amount I want is zero. And I do murder all I want. And the amount I want is zero.” What seems like it would be a big dilemma to the lifelong Christian or the recent convert to atheism is no big deal at all to the practiced atheist.

And the same is true of all tenets of postmodernism. (The big difference is that I don’t think there’s any such thing as a lifelong postmodernist. We are all exposed to both modern and postmodern ways of thinking in our culture and I think we all adopt both stances to some degree at one point or another.) When you’re first exposed to them, they may seem shocking and revolutionary. There’s no such thing as objective truth! All meaning is subjective! Everyone interprets the world in a unique way based on their background and experiences! Holy shit. And so that’s the first theory - that Trump and anyone who supports him are people to whom postmodern ideas are new and so they are still sort of playing around with them. Partially in jest, partially in earnest. Somewhere between the guy who’s just met an atheist and the convert. Baby’s first postmodernism.


*

The second theory is that, because we live in a culture that contains a mix of both modern and postmodern ideas, Trumpists have been familiar with postmodernism for a long time - perhaps their entire lives - but only in a very superficial way. (In a way, I sort of think this is true of all of us to some degree. We tend to operate on a day-to-day basis as though words do have fixed, standard meanings - we rely on tools like dictionaries to tell us what they mean; we believe in the authority of science; we listen to people like Neil deGrasse Tyson, arguably America’s foremost modernist thinker; we tend to trust our own senses most of the time. But we are also undeniably influenced by postmodernism as well, especially when it comes to ethics and morality. So we are all a weird mélange of the two, which is especially interesting because they’re pretty much incompatible ideologies.)

I mean, let’s think about it for a second. What are the chances that Donald Trump has ever spent a long, quiet night in reading Derrida or Foucault - or hell, even taken five minutes to skim their Wikipedia pages? Reading comprehension abilities aside (and I would pay a lot of money to watch the sitting President of the United States take one of the reading tests that the eighth graders at my school are required to take and see the results) - he’s not really known as the most thoughtful person in the world. He’s not even the most thoughtful person in his family. (Still holding out hopes for Tiffany to prove she’s decent, to be the Ron Reagan, Jr. of her family.) So his understanding of these things is probably not very deep or nuanced. Nor does there seem to be anyone around him who does understand and appreciate the significance of a postmodern approach to truth, meaning, and morality, and is influencing him.

So a Trumpist who has heard “the meaning of words is subjective” may misunderstand this to mean something more like “the meaning of every word is completely arbitrary.” Which is not the point at all. Just as no one is actually claiming that “refrigerator” counts as a legitimate gender identity - no one worth taking seriously, anyway - no one is really saying that the word “tree” means the same thing as the word “cow” just because one asshole uses one instead of the other. It’s a more complicated point than that. Sure, the sound “tree” could, over time, come to signify “four-legged farm animal with spots that produces milk” - but only if there were an actual speech community that used the word like that, and that would probably only happen over a long period of time.

(Remember the book Frindle from elementary school? About the kid who wanted to prove a point to his teacher so he got everyone to use the word “frindle” instead of “pen” and she fought against him every step of the way, but then finally conceded that he had won when the word “frindle” appeared in a dictionary. Man, talk about something that blends the modern and the postmodern. Frindle acknowledges that language evolves through usage over time, but still weirdly holds up the dictionary as the arbiter of “real” meaning. Or at least that uptight, stick-in-the-mud fifth grade teacher does.)

Holy shit.

More importantly, there is nothing about any postmodern principle that suggests an obligation to act in a particular way. They are descriptions of the way the world seems to work, not prescriptions. An “is” that does not entail any “ought.” But the superficial, misguided Trumpian interpretation is something like: meaning is subjective, therefore we should change the meaning of words whenever we feel like it. Or: objective facts do not exist, therefore let’s lie as much as we possibly can. Indeed, it is really not so much an interpretation of postmodernism as it is an application of postmodernism. Which means it doesn’t matter whether they actually believe in this shit or not, and makes it seem likely again that they probably don’t.

So maybe that's really a third theory - that they aren't postmodernists at all, just opportunists who happen to be using postmodernism to gain and retain power.

It’s just like when people took Darwin’s theory of natural selection and distorted it into Social Darwinism. The mere fact that weaker members of a species tend to die before they can procreate does not compel you to kill off the members of your species that you perceive as “lesser.” Nor does that fact forbid you from trying to save humans or animals who might die without assistance. Descriptions of the world are neutral; they don’t tell you anything about what you should or shouldn’t do. And - in both cases - I think there’s a pretty good chance that they would have done it anyway and found a different rationale. So if we don’t blame Darwin for eugenics, let’s not blame postmodernism for Trump, either.

*

What we need to be careful about, though, is the way that we respond to Trump. As I mentioned at the beginning, I think there is a movement among liberals to pivot back to modernism because postmodernism smells too Trumpy. It’s present in that “facts matter” bumper sticker. It’s also present in that Samantha Bee clip that talks about the way Trump and Conway use words. And that's what I am afraid of here. We’ve got to be better than that. Because falling back on modern understandings of truth and meaning is ultimately going to be untenable, unless we’re going to go all the way back to pre-Kantian times. Unless we're going to declare our belief in something that, as far as I am concerned, is tantamount to God.

The first step has got to be recognizing that none of this is anything new.

“Post-truth” was named the 2016 Word of the Year by Oxford Dictionaries. And there have been news stories written all throughout the past two years echoing the same point. “We are now living in a post-truth era.” But that’s bullshit. Not because there is still objective truth out there. But because there never was.

Let’s go back to that judge - American-born, of Mexican descent - who Trump claimed was biased because of his ethnicity. Well, he was technically right about that. But he was wrong in his implication that a white judge would have been unbiased. Any human being you put on that bench would have had previous experiences, thoughts, and emotions that would impact the way that he or she ruled on that case - that’s just part of being a human being. And the challenge of being a human being is recognizing that we have those biases and will never completely transcend them, that we will never be objective, but trying to be as fair as we can anyway.


We have lived in a post-truth world ever since Nietzsche first wrote, “God is dead.” Or, more accurately, Nietzsche’s writing those words was his way of observing that he was already living in a post-truth world. And even that’s not quite right. To Nietzsche, God had always been dead. Objective truth had always been an illusion, but it was an illusion that people still insisted on believing in. So let’s not give Donald Trump too much credit. He didn’t invent any of this shit. At best, he's someone who just learned about it and hasn't fully grasped it yet; at worst, he's just an asshole appropriating it for his own selfish, narcissistic ends. So don't let him poison it.


TL;DR (Since these things keep getting longer and longer):

Postmodernism: “There is always going to be uncertainty in meaning.”

Postmodern Conservatism / Trumpism: “There is always going to be uncertainty in meaning. Therefore, let’s make our meanings as unclear as possible.”

Modernist liberalism: “Certainty in meaning is possible.”

Postmodern anti-Trump liberalism: “There is always going to be uncertainty in meaning. Therefore, let us use our understanding of that to try to improve our communication.”

Saturday, September 9, 2017

No, Not Every Student Can Succeed

In any discussion about education, you can be pretty darn sure that somebody’s going to say some version of the following: “We need to make sure that every student can succeed.” Even our most recent federal education law, passed in 2015, is called the Every Student Succeeds Act. And the one that it replaced was the infamous No Child Left Behind, which is pretty much the same platitude expressed in different words. “All children can learn.” Or “every kid can be successful.” So on and so forth. They’re cliché, yeah, but these phrases are still a pretty good, succinct summary of the whole point of compulsory public education (which is, after all, a fairly recent invention, historically speaking - and one that not everyone is completely sold on yet.)

But these inspiring stock phrases tend to obscure an important question: what if it’s not possible for every student to succeed?

I’m not talking about natural ability or work ethic or the far-reaching effects of poverty and trauma. (Those are important questions, too, but beyond the scope of this post.) What I want to talk about are the ways that we measure success. Because we often measure and define success in relative terms: you succeed by doing better than other people. And in such a system, it is literally impossible for everyone to succeed. Trying to get everyone to succeed when success is understood as relative makes about as much sense as - to paraphrase Alfie Kohn, who said he was paraphrasing Deborah Meier - telling your whole class that they should all be in the front half of the line.

I think this conception of success as relative is ubiquitous in education. Sometimes it is explicit, sometimes implicit. Probably the most obvious example that we can discuss is a norm-referenced test. In a norm-referenced test, each participant receives a score that really only tells us how he/she performed compared to others who took the test, not anything absolute. IQ tests are famously norm-referenced. The average is always set at 100. So if we imagine a national movement to “raise IQ scores” - it wouldn’t matter what methods were used, it wouldn’t even matter if the whole country got smarter in the process, the movement would be doomed to fail from the get-go. The average would always remain 100 - because that’s what 100 means in the context of an IQ test. 
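
If it helps to see the arithmetic, here’s a quick sketch in Python with completely made-up numbers. (The norm_reference function is just my toy illustration of the general idea, not how any actual IQ test is constructed.)

```python
import statistics

def norm_reference(raw_scores, mean=100, sd=15):
    """Rescale raw scores so the group average lands exactly at `mean` (IQ-style)."""
    raw_mean = statistics.mean(raw_scores)
    raw_sd = statistics.stdev(raw_scores)
    return [mean + sd * (x - raw_mean) / raw_sd for x in raw_scores]

# Hypothetical raw scores before and after a national "get smarter" campaign.
before = [40, 50, 55, 60, 70]
after = [x + 20 for x in before]   # everyone genuinely improved by 20 points

print(round(statistics.mean(norm_reference(before)), 2))  # 100.0
print(round(statistics.mean(norm_reference(after)), 2))   # still 100.0 - the average literally cannot move
```

The improvement is real; the scale just swallows it.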


Though it does have a complicated and interesting history, the SAT is also basically a norm-referenced test. Which made perfect sense when it was primarily used for individual college admissions. One student’s relative ability to read, write, and do mathematics is absolutely relevant in college admissions decisions, since after all, the whole point of admissions is to compare that student to other students. But now some states (such as New Hampshire, my own state) have chosen to use the SAT as the mandatory standardized test for high school students. If they are going to try to show growth or progress in SAT scores, it’s going to be rather challenging. Not impossible, because there are still students in forty-nine other states that NH high-schoolers would be up against. But it’s a zero-sum game. In order for the scores of New Hampshire students to improve, it has to be at the expense of students in other states. And then imagine if every state adopted the SAT as its high-school standardized test. Then there would be no possibility for growth or improvement, and we’d get to see a million articles shared on Facebook all about how scores were stagnating and students weren’t getting any better at reading or writing or doing math (when they very well could be).
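
Same idea, sketched with percentile ranks. (Toy numbers again, and a deliberately tiny “nation” with no tied scores, just to show the zero-sum arithmetic.)

```python
import statistics

def percentile(score, pool):
    """Percent of the whole pool this score beats - a purely relative measure."""
    return 100 * sum(s < score for s in pool) / len(pool)

# Made-up scores: three NH students, four students from everywhere else.
nh, rest = [500, 520, 540], [510, 530, 550, 570]
pool = nh + rest
print(round(statistics.mean(percentile(s, pool) for s in nh), 1))    # 28.6

# Every NH student improves by 60 points.
nh = [s + 60 for s in nh]
pool = nh + rest
print(round(statistics.mean(percentile(s, pool) for s in nh), 1))    # 66.7 - NH climbs...
print(round(statistics.mean(percentile(s, pool) for s in rest), 1))  # 25.0 - ...but only because everyone else fell
```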

Hopefully, though, we never really reach the point of such absurdity. I don’t think we are in danger of it - not when it comes to tests that are explicitly, openly norm-referenced. But I also believe there is a much more subtle and invisible form of normativity that pervades many measures of success, which therefore makes them subject to the same potential problems. And that is my real point here: that we often understand success in relative terms without even realizing that we are doing so.

For instance, let’s think about classroom grades. (Full disclosure: I did my entire Master’s research colloquium on the detrimental impact grades have upon learning, so that’s where my bias lies. (It's over a hundred pages long with three appendices and I’d be thrilled if anyone besides my advisor ever wanted to read it.) And yeah, I still give grades in my class. Because I am required to and it will be a long time before that ever could change. Just like how your local Marxist bought his untouched-beyond-page-ten copy of Capital at a Barnes and Noble.) Now, I don’t think anyone who has spent more than an hour in a public school would ever seriously claim that grades are completely objective. We do, however, have a sense that grades tell us something about what a particular student can and can’t do, that they communicate something absolute. But my suspicion is that grades are largely relative.

I think most people who work in schools - as well as most people who live in our culture - have really taken to heart and internalized the notion of the bell curve. It is incredibly intuitive. I’d bet even people who have never heard the terms “bell curve” or “normal distribution” believe in the general concept. We all sort of expect that, in any group of students, there will be a few who excel and a few who do poorly, while the majority will fall somewhere in the middle. And because that is our expectation, we sort of semi-unconsciously tweak things so they do happen that way. I mean, imagine for a second that you’re a math teacher and you see that every one of your students has just failed your latest test. What are you going to do? Probably conclude that the test was too hard and make the next one easier. Likewise, what if every student gets a 100? Test must have been too easy. Then take a look at your grades at the end of the semester. If everyone’s got an A, your class is too easy; if everyone’s failing, your class is too hard. If a few have high A’s, a few are failing, and the majority fall in the middle, you’ll probably take that as a sign that the class is, as Goldilocks might say, “just right.”
"It's, like, all about how the workers need to
control the means of production and stuff, bro."


(There are, of course, a few teachers out there who take pride in the fact that a lot of students fail their class. And there are others who hand out good grades far too easily to placate parents. But I think the majority of teachers - and the majority of what we would consider good teachers - assign grades in a way that conforms more or less to the bell curve. And this may be less because students “naturally” fall into that pattern than because we unconsciously impose that pattern upon populations of students.)

Certainly, many districts that have moved to standards-based grading - which everyone is supposed to be doing in the near future, or actually was supposed to have already done, like, three years ago - have attempted to do so in a way that makes grading less about comparing a student to his or her peers and more about comparing him or her to an objective, stated standard. Unfortunately, I can’t really say much about how that works in practice. I’ve worked in three school districts since 2014 and, in all three, traditional (A-F; 0-100) grades were still being used, with a dim, atmospheric sense that we were “moving in the direction” of standards-based grading.

But it is also worth noting how vague some of the standards are. For instance, one of the Common Core State Standards for Writing - one of the standards that I am expected to “teach to” - expects an eighth grader to “write arguments to support claims with clear reasons and relevant evidence.” Well, there’s a lot of room for judgment in there. What counts as a clear reason? What counts as relevant evidence? And how much evidence has to be included in order for the argument to be sufficient? (All the writing standards are like this. Which is not necessarily a bad thing. Assessing writing is always going to be subjective, which is fine as long as we embrace that instead of pretending that we are being objective. Which comes back to one of the many reasons why I would support using narrative reports for assessment instead of grades but oh well.)

Furthermore, every school whose standards-based grading system I have actually seen has included categories for “exceeds the standard” and “partially meets the standard” as well as the expected groups of “meets” and “does not meet.” So that’s four categories that students can fall into. Seems awfully easy to apply the bell-curve model to that framework - a few students per class or grade will exceed the standard, a few will not meet it at all, and most will fall somewhere in the middle. So there is certainly a possibility that grading in a standards-based system is still, essentially, a process of comparing one student to other students - that a student’s grade is really a determination of relative ability.

(By the way, the fact that each teacher is engaging in this process only with regard to his or her classes, in his or her district, means that you really can’t compare one student’s grades at one school to those at another school - even if they’re ostensibly learning the same content based on the same standards. I have seen this firsthand: a paper that I would probably give a B in the district where I work right now would have likely gotten an A at my previous school - the overall performance of the students, for a variety of reasons, was just generally lower there.)

Which, I suppose, is why standardized tests exist. Most standardized tests are not norm-referenced, and they are graded by people (or computers, but we’ll get there in a second) who have no connection to the students, no reason to give them anything other than an objective score. Or so it would seem. I’d like to look first at standardized assessments of writing - for a few reasons. The first is that my job is to teach writing, so these are the tests that reflect on me; the second is that I am currently reading a book about writing assessments, so it’s all fresh in my mind; the third is that writing tests are unique in that they can be graded by either computers or human beings, and that this substantially affects their scoring. (You can grade multiple-choice tests either by computer or by hand, but they are scored the same either way.)

First let’s tackle computers. (Just not literally.) I think it’s pretty easy to imagine the problems that can arise when computer programs are in charge of grading writing. For instance, research has shown that they can be fooled by big words, convoluted sentences, and lengthy but ultimately substance-less responses. (Some of this research is summarized here.) Because they can’t pay attention to the things that actually matter in writing - coherence, audience awareness, voice - they presumably focus on the superficial features of the writing instead. But we really don’t know for sure, and that’s one of the main problems here. The testing companies who use computers to score the writing sections of their standardized tests tend to be pretty protective of them, so that we don’t really know how they work. That means we don’t have any real way of knowing whether they are valid and reliable. All we can do is trust that the corporations who make millions off of these tests really have our students’ best interests at heart.

Now for humans. In theory, having humans score writing should be a million times better than using computers. And it is - in some ways. Humans are capable of making decisions about whether a piece of writing uses “relevant evidence” or contains “clear claims,” at least. But the way that the people who do the grading are expected and trained to do so leads to other sorts of problems. A lot of it is explained pretty well in these articles (and other similar ones.) In the community of test-scorers, there is a strong emphasis on consensus. Which is putting it pretty lightly. Anyone who does not fall in line and score writing pretty much the same as others do is let go from the position. There is no room for difference of opinion; there are no conversations about why you gave a particular score. It’s like a jury which demands a unanimous verdict but, instead of permitting long sessions of deliberation, simply dismisses anyone who doesn’t vote with the majority.

So if you’re a writing-scorer and you have a vested interest in continuing to be one, in keeping your job - you are going to want to make sure that you give every piece the score that other people would give it. It’s an exercise in conformity. It’s more like playing Family Feud than playing Jeopardy - the goal is not to find the best answer, but just the most common answer. (Which is one of several reasons why Family Feud is the dumbest game show.) And if everybody is doing that, what are they all going to default to? The lowest common denominator. The bell curve. A few really high scores, a few really low scores, and a lot that fall somewhere in the middle. That is the safest route. And besides, I really think it is so deeply embedded in our minds that we almost can’t help but think in terms of it.

"If we evolved from monkeys, why we still got monkeys?"
So when we talk about human beings grading writing for standardized tests, we’re not really talking about human beings as human beings. That would mean that they had the freedom to be subjective, to offer and defend their own opinions (even if they were unpopular). What we’re really talking about here are human beings who are trying their best (who are expected to try their best) to be computers. Hell, one person I’ve met who worked as a test-scorer for one of the big testing companies told me that they refer to the process of making sure you agree with the majority of other test-scorers as being “calibrated.” So if it turns out that humans are slightly worse at being computers than the computers are - well, who’s really surprised by that?

As for the tests that make sure each piece of writing is scored by a computer and a human being - I guess the question is: where does the buck stop? When there’s a discrepancy, who do they go with: the computer or the human? But then I suppose it doesn’t really matter that much. If you go with the computer, you go with a score that is objective, but could very well be completely unreliable and invalid; if you side with the human, you are getting a score that is based on conformity and probably relative.

So my point is that some aspects of standardized testing are subject to the same problems as an explicitly norm-referenced test like an IQ test. A movement to improve the writing scores of all students nationwide would likely be similarly doomed. (A probably obvious point: this is not the same as a movement to improve the writing of all students nationwide.) If the people grading these tests are falling back on the idea of a bell curve, of a normal distribution of scores - then those scores are never going to improve. Can you imagine being a test-scorer and trying to say that every single essay that fell into your pile belonged in the highest category? That’d take some serious guts, that’s for sure.

Of course, it is true that the people who grade the writing of standardized tests have more at their disposal than just the idea of a bell curve. They have samples of writing that correspond to each of the score categories as well. But I don’t think this actually invalidates my point - it simply expands the scope of it. Though again we can’t actually know for sure, I think it’s a reasonable assumption that they don’t use the same samples year after year. If they don’t change them every year, I bet they change them every couple of years or so. And what would happen if there was a massive improvement in the quality of writing that was coming in? I imagine the writing samples would change to reflect that, so that what was once the top category would now be the second-from-the-top, and so on. (Yes, this is all speculation, but I think it makes sense.) It would be a gradual process, of course, but eventually the bar for success would be raised.
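
To make that speculation concrete, here’s one way the math could work if - and this is purely my assumption, not anything a testing company has confirmed - the anchor samples effectively put the cut points between categories at fixed percentiles of each year’s essays. The function name, the percentile shares, and the essay scores are all invented for the example.

```python
def category_cuts(scores, shares=(0.10, 0.50, 0.90)):
    """Hypothetical re-anchoring: cut points sit at fixed percentiles of this year's essays."""
    ranked = sorted(scores)
    return [ranked[int(p * (len(ranked) - 1))] for p in shares]

last_year = [65, 70, 74, 78, 80, 83, 88, 94]   # made-up essay quality scores
this_year = [q + 5 for q in last_year]         # everyone genuinely wrote better

print(category_cuts(last_year))  # [65, 78, 88]
print(category_cuts(this_year))  # [70, 83, 93] - the bar rises right along with the cohort
```

Under that assumption, the share of essays landing in the top category never grows, no matter how good the writing gets.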

And that is how this same principle affects not just assessments of writing - which, admittedly, is inherently more subjective than other subjects - but all standardized tests. It’s just like what classroom teachers do. When everyone is doing well, we take that as a sign that we need to make the tests a bit harder.

For instance, during the 2013-2014 school year, 77% of sixth grade students in New Hampshire scored proficient or above on the NECAP Reading test. 70% scored proficient or above in math. That sounds pretty darn good.

But then the following school year, we switched to using the Smarter Balanced test. That brought the numbers down to 57% proficient or above in reading and 46% in math (or, in SBAC parlance, students who “meet or exceed the achievement level.”) Now, these are completely different tests. But I don’t think it’s a coincidence that the proficiency rates are lower. It’s hard to even imagine anyone implementing a test that led to higher proficiency rates; they’d be accused of “dumbing down education” and (without fail) “giving everyone a trophy.” But the way it did happen, we got to be subjected to a slew of articles about how abysmal students were these days at reading and math, and how they used to be so much better before [insert whatever innocuous thing you feel like demonizing here. Hip-hop or iPhones or Gogurts that are too easy to open or whatever].

And yet the simplest conclusion - Occam’s razor and all that - is: it was a harder test.

And that’s going to keep happening. We are going to keep adjusting all of our assessments - standardized or otherwise - so that they correspond to where the students actually are and what they can actually do. We are going to keep trying to make it so that they fall into a normal distribution because we implicitly believe that that’s the way the world naturally is. And maybe there’s nothing wrong with that. But we’ve got to recognize that we are perpetually constructing a system in which it is actually impossible for every student to succeed.

*

I think that this way of thinking about success is basically an instance of the fallacy of composition. It’s possible for any individual member of a group to do something, so we tend to assume that it is possible for all of them to do it at once. To go back to that intentionally absurd “front half of the line” example: yes, of course it is possible for any individual student who is in the back half of the line to move up. But it’s not possible for them all to move up. If they all tried to move up, it would change the structure of the line, which is the very system within which they were trying to move. And the same principle is true in many real-world cases.
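
The front-half-of-the-line thing is almost embarrassingly easy to demonstrate (student names obviously made up):

```python
def back_half(line):
    """Whoever happens to be standing in the back half of the line right now."""
    return line[len(line) // 2:]

line = ["Ava", "Ben", "Cal", "Dee", "Eli", "Fay"]
print(back_half(line))       # ['Dee', 'Eli', 'Fay']

# Any one student can absolutely move up...
line.remove("Fay")
line.insert(0, "Fay")
print(back_half(line))       # ['Cal', 'Dee', 'Eli'] - good for Fay, tough luck for Cal

# ...but the back half never shrinks, no matter how everyone shuffles around.
print(len(back_half(line)))  # still 3
```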

For instance, at my school, students are divided into an A group and a B group (except in my class, but that’s a whole separate issue). And it is certainly possible for an individual student from the “B group” to improve so much, or to show so much determination and grit, that he or she is moved up to the “A group.” It has happened before. But you can’t move all of them up - no matter how much they all improve - or else you no longer have an A group and a B group at all. So if we insist upon measuring success in relative terms, then we can’t also insist upon every student being successful. It’s just plain intellectually dishonest. We’ve got to pick one or the other.

This wouldn’t be an issue if it was confined to such benign things as middle-school A groups and B groups - but the same fallacious logic seems to be almost ubiquitous in our conversations about education. There is all the assessment-related stuff that I have already discussed, of course. But sometimes it goes even further than that. For example, in the book The Teacher Wars, Dana Goldstein mentions a 2006 paper that estimated that “firing the bottom 25 percent of first-year teachers annually . . . could create $200 billion to $500 billion in economic growth for the country, by enabling poor children to earn higher test scores and go on to obtain better jobs.”

Really? Every poor student will be able to obtain a better job? How would that even be possible? It’s not as if low-skill, low-wage jobs would just disappear. Nor would we need more doctors and lawyers just because more people were educated enough to pursue those careers. What we’d end up with would probably be a bunch of highly educated people working in the food service industry. (Shout out to my 2015 self, who had a Master’s degree and worked at Subway.) But it’s the fallacy of composition again. Yes, any individual can receive a better education and therefore obtain a “better job.” The principle even works if you think about a small enough group - the students at one school, for instance, or even at the schools of one city. But when you try to apply it to all students across the country, which is what those authors did, it falls apart.

The same goes for the plans of people like Bernie Sanders (and many others on the left) who want, through various methods, to make it so that more young people can go to college. I don’t disagree with the idea, necessarily, but I think we need to not kid ourselves: the value of a college degree is mostly relative. The reason I can get a higher-paying job than many others my age is because I have a degree, not because of what I actually learned or did in college. (Which, by the way, is not the same as saying that what I did in college has no value. It has a heck of a lot of value to me. Just not to anyone who would sign my paychecks.) So to the extent that the value is relative, the same principle applies. We’d end up with degree inflation.

Like the bad guy from The Incredibles says: “If everyone’s super, no one is.” (Because he’s defining super as a relative term, of course.)

Perhaps the best solution to this is to try to stop defining success as relative. Instead of setting up competitive, zero-sum frameworks (like A groups and B groups, like norm-referenced tests, or writing scored with the assumption of a bell curve) - we could try to set up systems where it is possible for everyone to succeed. Maybe we are moving in that direction. Maybe that is where standards-based grading is headed (though I don’t think it is quite there yet). But until then, we’ve got to at least change our platitudes to match the reality.

So I’m looking forward to the upcoming “A Predetermined Percentage of Students Can Succeed Act.”