I’ve got to assume pretty much everyone knows about confirmation bias by this point. It’s like that stupid quote from that stupid-but-loveable book The Perks of Being a Wallflower, “we accept the love we think we deserve” (why so many people seem to think that one is so deep, I will never understand). Except in this case it’s more like: “we accept the facts that go along with what we already believe.” And find a way to dismiss all the other ones. Most of the time, at least.
And that would be bad enough. But recently, I’ve run into quite a few articles suggesting that being exposed to facts that contradict what we believe can actually strengthen those beliefs. Most of them seem to be based on a 2010 study by two dudes (okay, professors) named Nyhan and Reifler, in which subjects were exposed to “corrections” (or “fact checks”) of things they were predisposed to believe. After reading these “corrections,” many of them doubled down on what they already believed rather than modifying their beliefs to fit the evidence.
[Image: Literally any other quote from this book, please.]
For example, let’s say Joe, who has a positive perspective on the Iraq War, reads an article that includes some information supporting his beliefs: that weapons of mass destruction were found in Iraq after the invasion. The existence of WMDs in Iraq was, of course, the primary justification for the war. But then Joe is told that, in fact, what he just read was wrong and no WMDs were found in Iraq. Naively, we might think that this would affect his beliefs and make him less likely to support the war, but Nyhan and Reifler found the opposite. Now Joe supports the war even more than he did before.
They call this the “backfire effect.” It is like an extreme version of confirmation bias - not only does new information that contradicts someone’s worldview not change what he or she already believes, it actually can work to cement those established beliefs. This does kind of make sense when you think about what people are like, though. We can be pretty oppositional creatures. Think about when someone tells you to do something that you were already going to do - it kind of makes you not want to do it, doesn’t it? We live most of our lives based on emotion and instinct rather than rationality, so really it isn’t all that surprising that this applies to our political beliefs as well.
It is worth noting that there have been attempts to replicate the original study that were not successful. In some cases, people did change their minds when presented with new evidence, or at least said they did. But follow-up studies do not erase the existence of the original one. Even if the “backfire effect” isn’t universal - and a universal one would be awfully depressing (as one article put it, “if the backfire effect is real, nihilism might be the most appropriate response”) - it is still worth examining, because the mere fact that it does sometimes happen challenges some of our most basic assumptions.
What’s important for us to recognize is that this whole business is complicated. Sometimes, people change their minds when exposed to new evidence. Other times, they ignore that evidence completely or find a way to rationalize or compartmentalize it, which means it has no effect on their beliefs. Confirmation bias. In some cases, the new evidence may make their existing beliefs stronger. The backfire effect. And then, there is also the possibility, mentioned by Nyhan in that 2016 article (written just days before the election), that people may acknowledge the validity of new information without allowing it to affect their support for a particular policy or candidate.
That is, Joe could accept that there were no WMDs in Iraq but then go on to say that the war was still justified because of Saddam Hussein’s brutality or some other reason. This is different from confirmation bias because, in this case, Joe’s belief is not affecting the way he perceives or responds to the facts. It does suggest, though, that our beliefs are often not really based on the facts that we might offer as evidence to support them. Which is another way that this whole business can be complicated.
And its being complicated is kinda the whole point here. The naive view that facts and evidence are what persuade people to change their minds is clearly insufficient and misleading. And yet - there are still quite a few things that we take pretty seriously that are built off of that model.
*
For instance, the entire way we expect and teach students to do persuasive writing - or, as the Common Core insists on calling it, argument (more on the distinction between the two in a bit) - hinges on the notion that people change their minds when presented with evidence. But clearly that is not always true in the real world. In the real world, there are a myriad of ways that people can respond to being presented with evidence; changing their minds is only one possibility. In the real world, persuasion is a complicated business. So if we don’t really know how persuasion even works - how can we teach or expect our students to do it?
Of course, we do know some things about how persuasion works in the real world. These things have been studied and analyzed. We know that people respond to appeals to emotion, to guilt, to a desire to be liked. They respond when they are made to feel like their lives are incomplete and then offered something that will ostensibly fill that void (a sort of “negging,” really.) These are the techniques that advertisers and marketers use, and they are probably the best real-world examples of persuasion I can think of. Or think about what the founders of multi-level marketing companies say to convince people to join them - or cult leaders - or religious proselytizers. (Or did I just say the same thing three times?)
The problem is that these strategies are often unscrupulous. When persuasion is the only goal, that means it is acceptable to lie, to mislead, to manipulate people by appealing to their base instincts. Encouraging these tendencies in students seems, at best, careless and at worst, downright immoral. Do we really want a generation of people who are extremely effective at manipulation, who - like the fictional Jennifer Barkley from Parks and Recreation or the fictional Jeff Winger from Community or the all-too-real Kellyanne Conway - care very little about the content of what they are saying and only want to “win?” Argumentative mercenaries.
Such an approach does have its roots in the ancient Greeks (though that does not necessarily mean it’s good.) The Sophists, certainly, believed in persuasion above all else; that is exactly what Plato criticized them for. But even Aristotle was never quite willing to give up the notion that the primary purpose of rhetoric was to persuade an audience. It’s not that much of a stretch to imagine a modern-day Aristotle, seeing all that we have learned about confirmation bias and the backfire effect, throwing up his arms, saying “Fine, whatever,” and casting his lot in with the Sophists. (That’s why Aristotle was the chillest of the ancient philosophers.)
[Image: What a great character. P&R has the best minor characters.]
The other option available to us is to follow the guidelines of the Common Core and move away from mere persuasion towards “argument.” The distinction is supposed to be that argument uses evidence, facts, and logic to make its case rather than appeals to emotion or authority. It is all logos, not pathos or ethos. Which all sounds great. The problem is that we just learned that evidence doesn’t reliably convince anybody of anything. And sometimes it even has the opposite effect. (And there is no reason to believe that people are immune to this just because they are educated or intelligent [or liberal], though that would be nice.)
So are we willing to accept the conclusion that we may be having students write pieces that could very well do the opposite of what they are intended to do?
Imagine telling a passionate seventh-grader - inspired to change the world for the better, armed with the slogan that “everyone can make a difference” - that her argument essay, which laid out all the facts about climate change to make the case that the school should “go green,” actually made you and everyone else want to pollute more. Why wouldn’t she say, “well, what’s the point, then?” And if you haven’t spent any time hanging out with nihilistic seventh-graders, it is not something I would recommend.
Also, if the end goal of actually persuading people is removed, then there is no legitimate rhetorical purpose for writing an argument. The purpose of the piece of writing becomes “to demonstrate that you know how to write an argument.” And that is likely going to lead to some insipid writing.
Furthermore, when it comes to assessing this sort of writing, I have a hard time imagining how I could evaluate an argument without relying on the principle of persuasion. What counts as strong evidence? Evidence that makes me more likely to believe the argument. How do we distinguish solid reasoning from crappy reasoning? Well, if the reasoning seems likely to make a hypothetical person believe in the argument, then it must be good. But there is no evidence that I am exempt from these biases, or that I can ever be, no matter how hard I try. (Neither, therefore, are the hypothetical people in my head; my imagination is limited by my own abilities.) And anecdotal evidence from my experience backs that up.
Not that long ago, I said, half-jokingly, to a friend that the most convincing argument I have ever heard for the existence of God is the fact that the sun and the moon appear to be pretty much the same size from Earth. And in one sense, I believe that. It does seem like an awfully big coincidence. (Remember how Seinfeld had that whole bit about whether there is such a thing as a “big” coincidence or whether the definition of “coincidence” inherently includes all sizes? Absolutely fascinating.) But certainly that would not count as a “good” argument in the traditional sense. Most people would say Descartes’s ontological argument for the existence of God is better, and I consider that to be absolute horse shit. And I think we need to leave open that possibility, that some “good” arguments could be absolute horse shit. And also recognize that some “bad” arguments could be really effective. Aren’t there things that you just know without having evidence to support them - things that you feel more certain about than anything else? There’s something going on there, and we shouldn't just dismiss it as "emotion." When a belief becomes part of your identity, you can’t just eradicate that belief - no matter how many lectures on evidence and logic you have sat through.
[Image: Yes, that conversation was right around the time of the eclipse.]
But when persuasion is removed, what do we have to fall back on when we evaluate arguments? Structure? But it is entirely possible to write a well-structured argument about something completely meaningless.
(Interestingly, in his book The Testing Trap, George Hillocks, Jr. contends that many of our writing assessments already encourage students to do just that, since they don’t provide students any content to work with. Now, this has sort of changed in some of the new Common Core-aligned tests - but certainly that doesn’t mean every teacher has changed his or her practice to keep up. There are undoubtedly still many out there who evaluate argument essays primarily by their structure.)
Generally, though, we already recognize that structure alone doesn’t mean all that much. We talk about arguments that are valid but not sound - where the structure is fine, but the premises have no truth to them. If Socrates is a pig, and all pigs eat corn, then Socrates eats corn. That’s a perfectly valid argument. But the premises are nonsense. And structure does not help us determine the truth of the premises; that is, structure does not help us distinguish strong evidence from weak evidence, strong reasons from weak reasons. Only the concept of persuasion can do that.
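(For anyone who wants to see just how mechanical validity is, here is a minimal sketch in Lean 4 - with made-up predicates, purely for illustration - of the Socrates-the-pig syllogism. The proof checks out on structure alone; nothing in it certifies that the premises are true.)

```lean
-- A minimal sketch, assuming hypothetical predicates Pig and EatsCorn.
-- Validity is purely structural: this type-checks regardless of whether
-- the premises are true in the real world.
theorem socrates_eats_corn
    (Person : Type) (socrates : Person)
    (Pig EatsCorn : Person → Prop)
    (h1 : Pig socrates)                -- premise: Socrates is a pig (false!)
    (h2 : ∀ x, Pig x → EatsCorn x)     -- premise: all pigs eat corn
    : EatsCorn socrates :=
  h2 socrates h1                       -- the conclusion follows anyway
```

Soundness - whether the premises are actually true - lives entirely outside the proof. Which is exactly the problem.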
I guess we’re kind of screwed no matter what. We can either value evidence or we can value persuasion, but we can’t pretend that the two always go hand-in-hand, that what people really want to hear, what really gets them going are "the facts." Because that's just plain not true.
*
[Afterthought: the studies that demonstrate the existence of the “backfire effect” and confirmation bias seem mostly to pay attention to subjects’ immediate reactions. But I wonder whether the effect is long-lasting. Maybe we initially react to new information by doubling down on what we already believe, but over time and with repeated exposure we end up changing our minds to accommodate that new information - as in Piaget’s theory of development. Or our minds change without our being in charge of it. That is: maybe being persuaded by evidence is a long-term, gradual process rather than something instantaneous. Certainly that idea helps avoid some of the nihilistic implications of the above.]