The Prince and the Figurehead

I don’t like it when people misrepresent others. Argue passionately. Yell if you must. But don’t misrepresent. That’s a sin.

We know it’s an issue. We even have an expression for it: straw man[1]. However, few of us would admit to doing it ourselves, and that interests me.

I’m curious about exactly how and when the distortion occurs. Describing somebody else’s ideas (either explicitly or implicitly by responding to a version of them) is a two-step process: you build a representation of them in your own head, and then in turn describe that representation. In other words, there are two layers of representation.

Misrepresentation, then, is when at least one of these processes is corrupt. Which one it is makes a huge difference. In one case we unintentionally have the wrong impression of somebody, but communicate that impression accurately. This is an honest mistake. In the other case we have the correct impression, but intentionally communicate something else. This is a lie.

I don’t find either of those convincing. This is a problem, because there seem to be no other options. What to do? Do we just split the difference? It’s a little bit of a mistake and a little bit of lying, case closed, let’s call it a day? Clearly it’s something like that, but I don’t feel we’ve learned anything. How do they combine, exactly?

Various kinds of misrepresentation are ubiquitous in argumentation and rhetoric. In addition to putting up straw men, we deploy plenty of other rhetorical tools and tricks in an unprincipled and inconsistent fashion that results in misrepresenting an opposing case. The question is the same there: do we do this intentionally or unintentionally?

They’re not honest mistakes. They just can’t be, because we are nowhere near stupid enough for that. For example, nobody really believes that one counterexample disproves a trend or pattern (this is a misrepresentation of a claim about overall patterns as a claim about categorical rules), and we can spot the error in a fraction of a second if it is used against us. We’re also perfectly able to see what’s wrong with representing a whole group, movement or ideology by pointing to one or a few nutcases, except when it’s, you know, them.

So it’s all lies? We’re manipulative cynics saying whatever will help us? Public discourse tries to look like rational deliberation with all its makeup caked on but it’s calculating sociopathy all the way down?

Some seem to think so but I don’t[2]. Just like it’s obvious that our selective stupidity is just that — selective — it’s equally obvious to me that we don’t do this with full-on intentionality. We aren’t all sociopaths. We don’t typically get a full range of possible interpretations listed in our minds, think them over and then decide to go with a dishonest pick.

Rather, something screens our options before they even get to us.

It’s bleeding obvious when somebody misrepresents us or those we have sympathy for by arguing against something other than what’s intended. It feels absolutely plain to anybody not blinkered by ideology or cognitive biases, and nobody could be a big enough idiot not to see the error. But when we or our friends misrepresent our enemies’ claims it’s not like similar objections are acknowledged and painstakingly dealt with in our heads beforehand. They’re just absent.

Some part of us makes those choices about what to become aware of. But they don’t feel like choices, they feel like mere recognition of something correct. I can’t prove it, but if my own introspection is any guide at all, and unless my interpretations of the many millions of words of online discussion I’ve been reading for my whole adult life to the detriment of further real world accomplishment are way, way off, treating this as fully aware and intentional behavior does not capture what’s going on.

My feeling is not all I have in support. Consider this TEDx talk I recently came across. It’s by Cassie Jaye, director of the documentary The Red Pill about the men’s rights movement. I haven’t seen her movie, and while the subject matter itself is interesting from an erisology perspective I won’t go into any of it here. What’s relevant is her description of the change she went through when interviewing activists for the movie. According to her, she — a card-carrying feminist — came to the project with the intention of exposing “the dark underbelly of the men’s rights movement”, but as time went by her interpretation of what she heard started to change. She kept a video diary during the whole process and reviewing it made the change apparent:

[L]ooking back on the 37 diaries I recorded that year, there was a common theme. I would often hear an innocent and valid point that a men’s rights activist would make, but in my head, I would add on to their statements, a sexist or anti-woman spin, assuming that’s what they wanted to say but didn’t.

She goes on to give some examples. I won’t quote them in full but the talk is worth watching if only as an account of the experience of having your mind slowly turned inside-out.

I see no reason to believe she’s lying. She interpreted what she heard one way in the beginning, and then in another way later. It wasn’t as simple as her feelings changing, nor was she convinced by force of evidence that some factual claim she thought was false was true, or vice versa. No, her interpretation of what she heard changed. And she was surprised by that. There was no cynical ploy at work here; her assessment in the beginning was uncharitable but still honest, in one sense of the word. It was a choice in that it could have been done otherwise, but not a deliberate choice by her in the way we usually mean it[3].

The missing piece

I’ve been mulling over this process of half-conscious hostile interpretation for a long time. Here’s an early stab at it, from late 2016:

An encounter with an ambiguous yet controversial-sounding claim starts with an instinctive emotional reaction. We infer the intentions or agenda behind the claim, interpret it in the way most compatible with our own attitude, and then immediately forget the second step ever happened and confuse the intended meaning with our own interpretation. This is a complicated way of saying that if you feel a statement is part of a rival political narrative you’ll unconsciously interpret it to mean something false or unreasonable, and then think you disagree with people politically because they say false and unreasonable things.

I still think this is close to correct, but at that time I didn’t have a good framework in place for understanding it. I had a pretty simple idea of free will and agency: you have crude, mostly evolved emotional impulses and a rational mind on top of them with nothing in between. Any sophisticated cognition had to be deliberate and conscious. Sure, unconscious processes are important but they’re pretty dumb, certainly not socially and philosophically savvy[4].

Reading The Elephant in the Brain last winter made it impossible for me to keep believing that any longer.


For those who haven’t read it, the book (my full review here) argues that many of our behaviors are driven by hidden motives. Instead of acting on our professed selfless desires, we really do things to benefit ourselves (or at least not hurt ourselves), typically by trying to increase our own power, popularity and prestige. These motives aren’t just hidden from others; they’re hidden from ourselves too, because it’s vital that they appear genuine and we aren’t perfect liars. As the authors say:

[J]ust like camouflage is useful when facing an opponent with vision, self-deception is useful when facing an opponent with mind-reading powers.

Our conscious self is less the brain’s CEO than its PR officer, tasked not so much with making the decisions as with maintaining good relations with others and putting a positive spin on things. As such we’re fed information on a need-to-know basis, and sometimes we’re even lied to.

The idea that there are other parts of the mind that are just as clever as (or cleverer than) our conscious selves but do their work outside awareness is an important and, I think, necessary one if we’re going to make sense of misrepresentation. Looking at how people argue, even small children, shows that we’re clearly capable of complex rhetorical maneuvers and subtle philosophical points without exceptional intelligence or much (if any) training. As mentioned, we’re also capable of equally astounding obtuseness when that helps.

We can make up songs without a degree in music theory and we can catch a ball without knowing how to solve quadratic equations. Interpreting natural language is an even more complex process, yet we do it effortlessly. Virtually all statements are open to interpretation but most of the time they don’t come across that way to us. The meaning just pops into our head. Given that we have motives we’re unaware of and are fed what is often misinformation by an internal Machiavellian cabal, we have good reason to doubt that those pop-ins are always fair and accurate when we’re dealing with somebody we consider an enemy.

We tend to consider our selves singular, coherent, indivisible entities. That works a lot of the time, but it isn’t perfect. When we zoom in closer on our mental processes that model starts failing. I think it, and the clear division between intentional and unintentional that comes with it, hurts more than it helps when trying to understand disagreement in general and misrepresentation in particular.

The Prince and the Figurehead

In two recent posts (here and here) I’ve countered some criticisms of my belief that talking more explicitly about how disagreement works is going to improve discourse. People aren’t communicating in good faith, they say; it’s not that they’re honestly trying to understand each other and merely failing. I’m aware of all that (believe me), but it isn’t that simple, and I needed to write this post to explain exactly why I think I’m right when I say that we make mistakes and act dishonestly at the same time.

It’s a hard nut to crack. There are many, many things that can be described as “a bit of A and a bit of B, where A and B are seemingly contradictory”. I’ve been thinking long and hard about a general way of restructuring such descriptions to work “from the middle and out” but I haven’t had much luck yet. In this particular case though, I think there’s a way, using Elephant‘s description of the mind. From my review:

Elephant suggests we need new solutions for some important questions. If my brain and my self are not the same and my mind is manipulating my self into doing something for reasons I’m not aware of, does this count as a motive? Is it an intentional action? Can I or can I not wash my hands of it? /—/

Our traditional view of will and motivation doesn’t hold up very well, hence the concepts dependent on it are on shaky ground and we’ve got a lot of refactoring to do. We should get on it, not just because it’s necessary but because it could potentially help us address a lot of problems.

Instead of having a unified mind we have one with two parts. One of them is what Simler and Hanson call the PR officer — basically our conscious self, honestly convinced of its own goodness and correctness. It gets fed partial and often spun information from the other part: a shadowy figure outside consciousness that looks after our interests and makes sure the self doesn’t have to get its hands dirty.

Given how Machiavellian the hidden part is I think we should call it The Prince. “PR officer” is a little pessimistic for the conscious self (although good for making the point of Elephant), since it can — if it really tries and knows what to look for — question and critically examine what the Prince is feeding it. It can also assert power if so inclined. I prefer to call it The Figurehead, because while relegated to partially ceremonial duties we are the rightful monarchs of our own states and we can make the effort to actually rule, with wisdom and integrity. Or we can keep standing on the balcony, waving at the crowd, unwittingly providing cover for the dirty operations of state.

I don’t mean to say that this Prince and Figurehead model is neurologically correct or anything. I see it more like Freud’s division of the mind into Id, Ego and Superego — “wrong”, but useful enough to be worth keeping in mind and taking out now and then. It would be a good thing if the Prince and the Figurehead were as well known as that trio.

With this model we could stop confusing ourselves by thinking that actions are either intentional or unintentional, because either one is going to be wrong and will mislead us if we use it as a starting point for our thinking. While the model is applicable to much of our behavior I think it’s especially helpful for understanding rhetorical misrepresentation. It (and the strategic use of principles and objections) is the result of two parts of the mind working together with neither having full agency. The Prince doesn’t have self-awareness or a conscience, and the Figurehead doesn’t have all the facts. Together they produce neither intentional nor unintentional actions, but semitentional ones. Semitentional actions shouldn’t be considered just half of each, but something qualitatively different.

We’re neither as evil nor as stupid as we would have to be for the state of discourse to make sense with each of us as a singular, fully aware and responsible agent. However, the Prince and the Figurehead can, through the miracle of division of labor, do stupid-evil things while being nice-smart at the same time. It’s all very clever.

Being explicit about what the Prince does arms the Figurehead — our selves, our conscience, and our better angels — and improves its ability to question whatever it’s being told by the Prince. It’s true that this requires that we aren’t completely corrupt, but so does any anti-corruption effort.

 

• • •

 

Notes

[1]
Strawmanning means to specifically argue against a weaker position than the one your opponent actually holds, but I think misrepresentation is a broader phenomenon and a bigger issue than that. Misrepresenting a position can range from outright lies, via partial narratives pretending to be the whole truth, to plain description in unflattering terms (called “cryptonormativity” by the philosopher Joseph Heath). In fiction it can mean having a position expressed by unappealing characters.

An even subtler practice is describing ideas fairly neutrally but in an “exoticizing” way, suggesting they’re foreign, a curiosity that the reader is not expected to hold — thereby establishing something as “figure” rather than “ground”. Imagine an article describing, say, basic Christianity as something unknown and peculiar, or democratic values, or a belief in objective reality, etc. It’s not hostile, specifically, but it certainly makes you feel excluded if you hold such ideas — and you easily suspect it’s intentional.

[2]
If somebody does believe that, I suspect it says more about them than it does about everybody else. Yes, that’s right, everybody: people who disagree with me are evil sociopaths.

[3]
I sympathize so much with this because similar things have happened to me. For example, I used to be very hostile to ideas you might, for lack of a better word, call “postmodern”. That is, ideas problematizing objectivity, the authority of scientific knowledge, the stable and objective meaning of words, the straightforward truth of facts etc. When I saw claims that gave off signs of being part of this whole cluster of ideas I remember reading all kinds of things into them — I’m not saying all of those readings were completely unwarranted; there are bad motives there as well, just as there certainly are among men’s rights activists — even though they weren’t necessarily there. It took hearing similar ideas but phrased differently from people I didn’t consider hostile to me and my values before I could see what they were really saying.

Now I recognize the slipperiness of meaning and the partial relativity of fact and truth, and I’ve made it a cornerstone of my thinking. I see the point and value of a concept like “social construction”, and I’m acutely aware of how science is not purely the rationalistic pursuit of knowledge in a vacuum but very much a social process[5]. That doesn’t mean I “switched sides” any more than Jaye did. At the end of her talk she says she now has a more balanced view, and that goes for me too.

[4]
Part of the reason I believed this was that I wanted to keep my materialist, biologically based account of the mind but not have to adopt a vulgar determinism that denies agency — and I didn’t agree that one implied the other. I still don’t, but I recognize that I ignored and implicitly ruled out sophisticated unconscious thought partly because it made it easier to keep agency.

[5]
Although one should remember that this is more true of the social sciences than the natural sciences and way, way more true of the humanities.

 

Did you enjoy this article? Consider supporting Everything Studies on Patreon.

16 thoughts on “The Prince and the Figurehead”

  1. This is great, but you’re missing a trick, surely? Misrepresentation, under the “brain-elephant” view, serves a useful purpose: it is much more important, from the perspective of predicting future actions, to understand the enemy’s Prince than to accurately depict their Figurehead.


    1. You make a very good point. I’ve held the “real” meaning of a statement constant so I can focus on the interpretation mechanics, but as you say, the same dynamic occurs on the other side at the same time. That greatly complicates things.

      You could phrase it that we also misrepresent *ourselves*, but positively. That implies we should do the same thing there and be skeptical of what our Prince tells us about our own opinions and trust our intuitive judgments of right and wrong less. I touched upon that in my Elephant-review, and I have tried to cultivate the habit of “interrogating” my own beliefs and opinions for some time. Sadly, the resulting humility is costly and not enjoyable so it’s hard to maintain.

      I still think there’s value in keeping it the default that we judge people on what they think they say, not what our Princes think they say. It’s a matter of building social capital: I prefer a world where empowered figureheads speak to each other with a conscience rather than Princes trying to outmaneuver each other Game of Thrones-style.


      1. I prefer a world where empowered figureheads speak to each other with a conscience rather than Princes trying to outmaneuver each other Game of Thrones-style.

        Mm, I’d strongly agree. Telling people what you think they subconsciously want is just going to make them mad.


  2. 1. Men’s Rights Advocates tend to believe sexist things
    2. A Men’s Rights Advocate is making this claim
    3. This claim can be a result of believing sexist things
    4. Therefore, it is likely this claim is, in fact, a result of the person believing sexist things.

    Of course, a claim that isn’t sexist is evidence that maybe they’re not as sexist as you think they are, but if your evidence for premise 1 is strong enough (e.g. many people you trust say MRAs are sexist and have given good arguments for why their claims are sexist, and you’ve seen some clearly-sexist statements from MRAs), then one thing that isn’t clearly sexist but could be interpreted as sexist isn’t going to be strong enough evidence to change your mind.

    …and I would consider that “convinced by force of evidence that some factual claim she thought was false was true”—namely, the factual claim about what MRAs actually believe and what they’re saying, as opposed to the object-level factual claims about men’s rights. (In other words: the claim “person X believes Y” is, itself, a factual claim, that one can have evidence for or against.)

    More generally, if you have reason to believe that someone is likely to be lying or misrepresenting what they actually believe, it’s rational from a truth-seeking perspective to be more skeptical of their claim and consider that they might actually believe something else. If, as you’re suggesting, people do tend to have hidden motives, then we should be “misrepresenting” people’s arguments more because the “misrepresentation” is going to be closer to their actual intent.
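    (To make the “strong enough evidence” point above concrete, here is a minimal numerical sketch of that kind of update; the probabilities are purely illustrative assumptions, not anything claimed in this thread or in the book:)

    ```python
    # Illustrative only: a strong prior barely moves after one piece of weak contrary evidence.

    def posterior(prior, p_obs_if_h, p_obs_if_not_h):
        """Bayes' rule: P(H | observation)."""
        p_obs = p_obs_if_h * prior + p_obs_if_not_h * (1 - prior)
        return p_obs_if_h * prior / p_obs

    prior = 0.90               # assumed prior: "this speaker probably holds sexist views"
    p_innocuous_if_yes = 0.5   # assumed: such a speaker still makes innocuous points half the time
    p_innocuous_if_no = 0.9    # assumed: a speaker without those views makes them even more often

    print(round(posterior(prior, p_innocuous_if_yes, p_innocuous_if_no), 2))
    # 0.83 -- one innocuous, ambiguous statement hardly dents the prior
    ```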

    ———

    Even if there was never any deception going on at any level (self or otherwise) and everyone knew this, we should still expect people to misrepresent the other side more than their own.

    For one, our own evidence for our side is generally going to be better than what we convey; if, for instance, I know many examples of a particular thing and I only give one of them, I know that I don’t think a trend is wrong because of a single counterexample, and someone else on my side who also likely knows of other examples (that’s why they’re on my side) knows that I don’t think a trend is wrong because of a single counterexample, but to someone on the other side that might look like I just have that one counterexample.

    For another, we may have different assumptions that aren’t explicitly stated and misinterpret each other because of that; for instance, if I’m consequentialist but haven’t really thought about moral philosophy, I might not be aware that other moral systems exist, which would lead to misinterpreting what non-consequentialists are saying if they’re likewise not aware that other moral systems exist and therefore aren’t explicit about not being consequentialist; and it may be that people on my side tend to be consequentialists.

    (In other words: communication is hard, so we don’t have to be stupid to make mistakes; and the things that cause people to have different beliefs might also make them more likely to misinterpret each other.)

    ———

    When reading the elephant post, I had doubts about whether people’s brains actually hide ally-seeking behavior (as in, has the book actually ruled out the possibility that people do tend to be ally-seeking but are consciously aware of it and don’t try to hide it?). (I don’t think most people even claim to be purely altruistic, and if people try to hide ally-seeking behavior, then why are words like “friend” or “popular” in our everyday vocabulary?) If some people aren’t aware of ally-seeking behavior, one possible explanation is that those people actually are unusually non-ally-seeking, but I just thought of another possible explanation while reading this…

    We can make up songs without being consciously aware of music theory concepts, but when people do learn music theory, it’s often by being explicitly taught those concepts, and we might end up believing what we’re taught even if it’s different from the intuition we’d otherwise use. When people do learn to consciously reason about grammar, it’s often from being taught it… and then when people are taught that saying “I did good” is wrong, rather than “I did well”, or “Me and my friend did something” is wrong, rather than “My friend and I did something”, or whatever, they might think that they’re trying to speak a language where the former is wrong and making a mistake when they say it, when they’re actually speaking a language where the former is perfectly okay and were taught a made-up rule. And if people are taught that debate is about truth-seeking and that various things are just mistakes in reasoning, they might end up believing that, even if their instincts about debating actually involve ally-seeking.

    (In other words: it could be that the “lie” comes from what one is taught by other people, rather than from some PR department in the brain.)

    ———

    I’m not sure “exoticizing” in footnote 1 is necessarily bad? It could get the person to look at their view in a new light (isn’t that pretty much what you were arguing for in the article about talking like a robot?), and it might be accurately portraying how their opponent’s position actually feels to them (perhaps Christianity really does seem unknown and peculiar to some people… not quite to me—it’s not unknown, but it’s a bit peculiar and definitely not the default for me).


    1. That’s a long comment and I’m not going to be able to address all of it (thanks for making the effort though), but a few comments:

      About your first section, I was referring specifically to the process of interpreting claims. Any change that means interpreting things differently could be construed as a change in background belief, but that’s different from changing belief in the particular things being said. I wasn’t entirely clear on that.

      See my reply to dirdle on the strategicness of self-presentation. It’s a factor, but one I simplified away here for practical reasons and, I think, because I myself consider it sort of a moral right to be taken at your word. (I guess I also see a difference between what you believe, which is intentional, and why you believe it, which is not so much).

      I don’t have a lot to add to your middle section, other than to reiterate that I find it important that while communication is hard, we don’t appreciate how hard it is and we feel we understand people better than we do, part of which is strategic.

      Your next to last part I don’t quite understand. Are you saying we’d be more explicitly aware of how rhetoric works if we weren’t taught rational norms of argumentation? Maybe, but I doubt it’s important; we’re still good at it without knowing how it works, and I don’t think we’ve learned the philosophy of concept building and -wielding that would be required for explicit understanding.

      About the last… I do see your point and it must come down to intention, conscious or not, and relation to reality. I.e. what do I want people to think, and is the assumption of exoticness accurate?


      1. > See my reply to dirdle on the strategicness of self-presentation. It’s a factor, but one I simplified away here for practical reasons and, I think, because I myself consider it sort of a moral right to be taken at your word.

        If someone thinks the other person is being strategic and doesn’t share your moral view, they might consider the person’s hidden motive to be the “real” meaning, and therefore consider their assessment of the person’s actual motive as more accurate than a face-value interpretation of what the person’s saying. (And, to clarify, my intent in that section wasn’t to say that I believe that line of reasoning, but that it’s something that one could come up with given certain beliefs about what others think, through ordinary rational reasoning, without requiring some sort of hidden strategic process. Especially since I’m pretty sure I’ve heard people explicitly state that they think people who make certain claims actually believe something else—they’re not trying to hide that they’re doing this.)

        > I don’t have a lot to add to your middle section, other than to reiterate that I find it important that while communication is hard, we don’t appreciate how hard it is and we feel we understand people better than we do, part of which is strategic.

        My point in that section, though, is that it doesn’t have to be strategic. Even without all the strategic stuff, arguments from one’s own side will be easier to understand (and therefore harder to accidentally misrepresent) because you’re more likely to already know and understand parts of it (because a similar argument might be what caused you to be on their side to begin with, and because people are more likely to learn about something they think is true), including parts of the argument that haven’t been explicitly stated.

        (This might also mean that you’re likely to misrepresent an argument on your side as a better argument, because you know how it’s supposed to go, but no one notices that because doing so doesn’t generally cause as many issues.)

        > Are you saying we’d be more explicitly aware of how rhetoric works of we weren’t taught rational norms of argumentation?

        …maybe, or maybe they’d have no explicit belief rather than a wrong explicit belief. This is pretty much just one possibility I thought of, not something I’m at all confident about, and if it’s true, it probably depends on the specifics of how it’s taught and probably other stuff as well. (I didn’t take any class in school that focused on how to argue, and I don’t assume arguments are always purely about truth-seeking, but n=1.)

        (Going back to the main article… to me, the most obvious way to reconcile people misrepresenting things strategically/lying vs. honest mistakes is that sometimes people misrepresent things strategically and other times they make honest mistakes. But sometimes people could make the mistake of believing someone’s lie…)


        1. …thinking about my third point more, I think what I was really trying to get at was that there’s a distinction between not knowing what one’s brain is doing because it’s specifically adaptive not to know (what the elephant thing seems to be getting at), and not knowing what one’s brain is doing because not knowing is the default (which is what’s likely the case in catching a ball, composing music, and understanding language). The former would imply that one’s belief about what their brain is doing is something like lying to them; the latter implies that some of one’s information about what their brain is doing comes from the same sorts of sources as one’s information about anything else, which means one can be wrong about what one’s brain is doing for the same sorts of reasons that they can be wrong about anything else (e.g., being taught bad information, mistakes in reasoning, not enough or non-representative data, etc.).

          Being taught (perhaps implicitly) that debate is about truth-seeking is one way that one could get wrong information about what one is actually trying to do in debates, and was the first possibility to pop into my head, but it’s not the only possibility.


    2. Referring to the first portion.

      I feel like this applies best with avowed adherents to specific ideologies, such as the capital-lettered Men’s Rights Advocate in your example. In my interactions online and with friends and family, I rarely observe people “misrepresenting” others with such a clear-cut allegiance to a particular worldview (…at least since I gave up on Twitter.)

      In a circumstance that’s more like:
      1. Men’s Rights Advocates tend to believe sexist things
      2. The person making this claim *says things that remind me of things Men’s Rights Advocates say.*
      3. This claim can be a result of believing sexist things

      It seems somewhat less valid to jump to:
      4. Therefore, it is likely this claim is, in fact, a result of the person believing sexist things.

      (Also, even in the case of MRAs, what constitutes sexism? See also: https://everythingstudies.com/2018/11/16/anatomy-of-racism/ )


      1. In that case, there would probably be an additional belief “People who say ____ are generally MRAs”, or they could just start with a belief like “People who say ___ are generally sexist” and not bring specific groups into it at all. (Perhaps a more general belief, like “People who say things explicitly in favor of privileged groups are usually actually complaining about things getting more fair for non-privileged groups”, which might follow from certain assumptions about how privilege works.)

        I’m not necessarily agreeing with those beliefs or this line of reasoning; but I think some people do hold beliefs of that sort (I’m pretty sure at some point I’ve heard people explicitly express such beliefs), and this sort of reasoning can follow from such beliefs.

        (And since I’m talking generally about how someone else might reason… which I probably should have made clearer in the first comment… it doesn’t matter that much for my point what they consider sexism, as long as whatever definition they’re using for sexism is something they consider bad.)


  3. From the Yud:

    “Gilovich’s distinction between motivated skepticism and motivated credulity highlights how conclusions a person does not want to believe are held to a higher standard than conclusions a person wants to believe. A motivated skeptic asks if the evidence compels them to accept the conclusion; a motivated credulist asks if the evidence allows them to accept the conclusion.”


  4. Nice, I knew where to come when I had a topic like this on my mind! Thanks.

    Does the book go into, like, scientific experiments/observations of people using rhetoric?

    I often see studies about “changing minds” that focus on people passively reading info, not looking at people who are arguing their own views. So I haven’t seen much scientific talk of (for example) the rhetorical moves of people who are wrong.

    I also haven’t seen another important kind of study: testing back-and-forth conversation to see if they (scientists, with presumably correct views) can successfully change people’s minds that way. This is different from having people read pre-made articles, because it would (for example) give a chance to respond to the specific objections that that particular person/subject/skeptic has.

    So…do you know of the kinds of scientific studies I’m looking for? 😀


    1. Not really, unfortunately. The Elephant in the Brain doesn’t go into that much either. It’s probably a thing learnt better by practical experience than by controlled experiments. I’m reading “How to have impossible conversations” right now and it’s related. Maybe check that out?

