I don’t like it when people misrepresent others. Argue passionately. Yell if you must. But don’t misrepresent. That’s a sin.
We know it’s an issue. We even have an expression for it: straw man. However, few of us would admit to doing it ourselves, and that interests me.
I’m curious about exactly how and when the distortion occurs. Describing somebody else’s ideas (either explicitly or implicitly by responding to a version of them) is a two-step process: you build a representation of them in your own head, and then in turn describe that representation. In other words, there are two layers of representation.
Misrepresentation, then, is when at least one of these processes is corrupt. Which one it is makes a huge difference. In one case we unintentionally have the wrong impression of somebody, but communicate that impression accurately. This is an honest mistake. In the other case we have the correct impression, but intentionally communicate something else. This is a lie.
I don’t find either of those convincing. This is a problem, because there seem to be no other options. What to do? Do we just split the difference? It’s a little bit of a mistake and a little bit of lying, case closed and let’s call it a day? Clearly it’s something like that, but I don’t feel we’ve learned anything. How do they combine, exactly?
Various kinds of misrepresentation are ubiquitous in argumentation and rhetoric. In addition to putting up straw men, we deploy plenty of other rhetorical tools and tricks in an unprincipled and inconsistent fashion that results in misrepresenting an opposing case. The question is the same there: do we do this intentionally or unintentionally?
They’re not honest mistakes. They just can’t be, because we are nowhere near stupid enough for that. For example, nobody really believes that one counterexample disproves a trend or pattern (this is a misrepresentation of a claim about overall patterns as a claim about categorical rules), and we can spot the error in a fraction of a second if it is used against us. We’re also perfectly able to see what’s wrong with representing a whole group, movement or ideology by pointing to one or a few nutcases, except when it’s, you know, them.
So it’s all lies? We’re manipulative cynics saying whatever will help us? Public discourse tries to look like rational deliberation with all its makeup caked on but it’s calculating sociopathy all the way down?
Some seem to think so but I don’t. Just like it’s obvious that our selective stupidity is just that — selective — it’s equally obvious to me that we don’t do this with full-on intentionality. We aren’t all sociopaths. We don’t typically get a full range of possible interpretations listed in our mind, think them over, and then decide to go with a dishonest pick.
Rather, something screens our options before they even get to us.
It’s bleeding obvious when somebody misrepresents us or those we have sympathy for by arguing against something other than what’s intended. It feels absolutely plain to anybody not blinkered by ideology or cognitive biases, and nobody could be a big enough idiot not to see the error. But when we or our friends misrepresent our enemies’ claims, it’s not like similar objections are acknowledged and painstakingly dealt with in our heads beforehand. They’re just absent.
Some part of us makes those choices about what to become aware of. But they don’t feel like choices; they feel like mere recognition of something correct. I can’t prove it, but if my own introspection is any guide at all, and unless my interpretations of the many millions of words of online discussion I’ve been reading for my whole adult life (to the detriment of further real-world accomplishment) are way, way off, treating this as fully aware and intentional behavior does not capture what’s going on.
My feeling is not all I have in support. Consider this TEDx talk I recently came across. It’s by Cassie Jaye, director of the documentary The Red Pill about the men’s rights movement. I haven’t seen her movie, and while the subject matter itself is interesting from an erisology perspective I won’t go into any of it here. What’s relevant is her description of the change she went through when interviewing activists for the movie. According to her, she — a card-carrying feminist — came to the project with the intention of exposing “the dark underbelly of the men’s rights movement”, but as time went by her interpretation of what she heard started to change. She kept a video diary during the whole process and reviewing it made the change apparent:
[L]ooking back on the 37 diaries I recorded that year, there was a common theme. I would often hear an innocent and valid point that a men’s rights activist would make, but in my head, I would add on to their statements, a sexist or anti-woman spin, assuming that’s what they wanted to say but didn’t.
She goes on to give some examples. I won’t quote them in full but the talk is worth watching if only as an account of the experience of having your mind slowly turned inside-out.
I see no reason to believe she’s lying. She interpreted what she heard one way in the beginning, and then in another way later. It wasn’t as simple as her feelings changing, nor was she convinced by force of evidence that some factual claim she thought was false was true, or vice versa. No, her interpretation of what she heard changed. And she was surprised by that. There was no cynical ploy at work here; her assessment in the beginning was uncharitable but still honest, in one sense of the word. It was a choice in that it could have been done otherwise, but not a deliberate choice by her in the way we usually mean it.
The missing piece
I’ve been mulling over this process of half-conscious hostile interpretation for a long time. Here’s an early stab at it, from late 2016:
An encounter with an ambiguous yet controversial-sounding claim starts with an instinctive emotional reaction. We infer the intentions or agenda behind the claim, interpret it in the way most compatible with our own attitude, and then immediately forget the second step ever happened and confuse the intended meaning with our own interpretation. This is a complicated way of saying that if you feel a statement is part of a rival political narrative you’ll unconsciously interpret it to mean something false or unreasonable, and then think you disagree with people politically because they say false and unreasonable things.
I still think this is close to correct, but at that time I didn’t have a good framework in place for understanding it. I had a pretty simple idea of free will and agency: you have crude, mostly evolved emotional impulses and a rational mind on top of them with nothing in between. Any sophisticated cognition had to be deliberate and conscious. Sure, unconscious processes are important but they’re pretty dumb, certainly not socially and philosophically savvy.
Reading The Elephant in the Brain last winter made it impossible for me to keep believing that any longer.
For those who haven’t read it, the book (my full review here) argues that many of our behaviors are driven by hidden motives. Instead of our professed selfless desires we really do things to benefit ourselves (or at least not hurt ourselves), typically by trying to increase our own power, popularity and prestige. These motives aren’t just hidden from others, they’re hidden from ourselves too because it’s vital that they appear genuine and we aren’t perfect liars. As the authors say:
[J]ust like camouflage is useful when facing an opponent with vision, self-deception is useful when facing an opponent with mind-reading powers.
Our conscious self is less the brain’s CEO than its PR officer, tasked not so much with making the decisions as with maintaining good relations with others and putting a positive spin on things. As such we’re fed information on a need-to-know basis, and sometimes we’re even lied to.
The idea that there are other parts of the mind that are just as clever as our conscious selves (or cleverer) but do their work outside awareness is an important one, and I think a necessary one if we’re going to make sense of misrepresentation. Looking at how people argue, even small children, shows that we’re clearly capable of complex rhetorical maneuvers and subtle philosophical points without exceptional intelligence or much, if any, training. As mentioned, we’re also capable of equally astounding obtuseness when that helps.
We can make up songs without a degree in music theory and we can catch a ball without knowing how to solve quadratic equations. Interpreting natural language is an even more complex process, yet we do it effortlessly. Virtually all statements are open to interpretation but most of the time they don’t come across that way to us. The meaning just pops into our head. Given that we have motives we’re unaware of and are fed what is often misinformation by an internal Machiavellian cabal, we have good reason to doubt that those pop-ins are always fair and accurate when we’re dealing with somebody we consider an enemy.
We tend to consider our selves singular, coherent, indivisible entities. That works a lot of the time, but it isn’t perfect. When we zoom in closer on our mental processes that model starts failing. I think it, and the clear division between intentional and unintentional that comes with it, hurts more than it helps when trying to understand disagreement in general and misrepresentation in particular.
The Prince and the Figurehead
In two recent posts (here and here) I’ve countered some criticisms of my belief that talking more explicitly about how disagreement works is going to improve discourse. People aren’t communicating in good faith, they say; they aren’t actually trying, and failing, to understand each other. I’m aware of all that (believe me), but it isn’t that simple, and I think I needed to write this post to explain exactly why I think I’m right when I say that we make mistakes and act dishonestly at the same time.
It’s a hard nut to crack. There are many, many things that can be described as “a bit of A and a bit of B, where A and B are seemingly contradictory”. I’ve been thinking long and hard about a general way of restructuring such descriptions to work “from the middle and out”, but I haven’t had much luck yet. In this particular case, though, I think there’s a way, using Elephant’s description of the mind. From my review:
Elephant suggests we need new solutions for some important questions. If my brain and my self are not the same and my mind is manipulating my self into doing something for reasons I’m not aware of, does this count as a motive? Is it an intentional action? Can I or can I not wash my hands of it? […]
Our traditional view of will and motivation doesn’t hold up very well, hence the concepts dependent on it are on shaky ground and we’ve got a lot of refactoring to do. We should get on it, not just because it’s necessary but because it could potentially help us address a lot of problems.
Instead of having a unified mind we have one with two parts. One of them is what Simler and Hanson call the PR officer — basically our conscious self, honestly convinced of its own goodness and correctness. It gets fed partial and often spun information from the other part: a shadowy figure outside consciousness that looks after our interests and makes sure the self doesn’t have to get its hands dirty.
Given how Machiavellian the hidden part is I think we should call it The Prince. “PR officer” is a little pessimistic for the conscious self (although good for making the point of Elephant), since it can — if it really tries and knows what to look for — question and critically examine what the Prince is feeding it. It can also assert power if so inclined. I prefer to call it The Figurehead, because while relegated to partially ceremonial duties we are the rightful monarchs of our own states and we can make the effort to actually rule, with wisdom and integrity. Or we can keep standing on the balcony, waving at the crowd, unwittingly providing cover for the dirty operations of state.
I don’t mean to say that this Prince and Figurehead model is neurologically correct or anything. I see it more like Freud’s division of the mind into Id, Ego and Superego — “wrong”, but useful enough to be worth keeping in mind and taking out now and then. It would be a good thing if the Prince and the Figurehead were as well known as that trio.
With this model we could stop confusing ourselves by thinking that actions are either intentional or unintentional, because either one is going to be wrong and will mislead us if we use it as a starting point for our thinking. While the model is applicable to much of our behavior, I think it’s especially helpful for understanding rhetorical misrepresentation. It (and the strategic use of principles and objections) is the result of two parts of the mind working together with neither having full agency. The Prince doesn’t have self-awareness or a conscience, and the Figurehead doesn’t have all the facts. Together they produce neither intentional nor unintentional actions, but semitentional ones. Semitentional actions shouldn’t be considered just half of each, but something qualitatively different.
We’re not as evil nor as stupid as we would have to be for the state of discourse to make sense with us as a singular, fully aware and responsible agent. However, the Prince and the Figurehead can, through the miracle of division of labor, do stupid-evil things while being nice-smart at the same time. It’s all very clever.
Being explicit about what the Prince does arms the Figurehead — our selves, our conscience, and our better angels — and improves its ability to question whatever it’s being told by the Prince. It’s true that this requires that we aren’t completely corrupt, but any anti-corruption effort does.
• • •
Strawmanning means to specifically argue against a weaker position than the one your opponent actually holds, but I think misrepresentation is a broader phenomenon and a bigger issue than that. Misrepresenting a position can range from outright lies, via partial narratives pretending to be the whole truth to plain description in unflattering terms (called “cryptonormativity” by the philosopher Joseph Heath). If it’s in fiction it can mean having something expressed by unappealing characters.
An even subtler practice is describing ideas fairly neutrally but in an “exoticizing” way, suggesting they’re foreign, a curiosity that the reader is not expected to hold — thereby establishing something as “figure” rather than “ground”. Imagine an article describing, say, basic Christianity as something unknown and peculiar, or democratic values, or a belief in objective reality, etc. It’s not hostile, specifically, but it certainly makes you feel excluded if you hold such ideas — and you easily suspect it’s intentional.
If somebody does believe that, I suspect it says more about them than it does about everybody else. Yes, that’s right, everybody: people who disagree with me are evil sociopaths.
I sympathize so much with this because similar things have happened to me. For example, I used to be very hostile to ideas you might, for lack of a better word, call “postmodern”. That is, ideas problematizing objectivity, the authority of scientific knowledge, the stable and objective meaning of words, the straightforward truth of facts, etc. When I saw claims that gave off signs of being part of this whole cluster of ideas, I remember reading all kinds of things into them — I’m not saying all of those readings were completely unwarranted; bad motives exist there as well, just as they certainly do among men’s rights activists — even though they weren’t necessarily there. It took hearing similar ideas, phrased differently, from people I didn’t consider hostile to me and my values before I could see what they were really saying.
Now I recognize the slipperiness of meaning and the partial relativity of fact and truth, and I’ve made it a cornerstone of my thinking. I see the point and value of a concept like “social construction”, and I’m acutely aware of how science is not purely the rationalistic pursuit of knowledge in a vacuum but very much a social process. That doesn’t mean I “switched sides” any more than Jaye did. At the end of her talk she says she now has a more balanced view, and that goes for me too.
Part of the reason I believed this was that I wanted to keep my materialist, biologically based account of the mind without having to adopt a vulgar determinism that denies agency — and I didn’t agree that one implied the other. I still don’t, but I recognize that I ignored and implicitly ruled out sophisticated unconscious thought partly because it made it easier to keep agency.
Although one should remember that this is more true of the social sciences than the natural sciences and way, way more true of the humanities.