The Romeo And Juliet Fallacy

I’m jumping the gun on my New Year’s resolution to write more short, off-the-cuff pieces instead of fiddling with long treatises for weeks, so forgive me if this is unpolished. The reason is that Scott Alexander just published one of his erisology-in-all-but-name posts and I have some thoughts on it.

He writes about the phenomenon where trying to make public opinion more moderate is considered extreme because, it’s implied, if you don’t 100% support one idea/solution/explanation, you must 100% support the alternative. And that’s of course extreme. He calls it the “reversed moderation fallacy”:

Popular consensus believes 100% X, and absolutely 0% Y.

A few iconoclasts say that X is definitely right and important, but maybe we should also think about Y sometimes.

The popular consensus reacts “How can you think that it’s 100% Y, and that X is completely irrelevant? That’s so extremist!”

Some common forms of this:

Reversed moderation of planning, like in the geoengineering example. One group wants to solve the problem 100% through political solutions, another group wants 90% political and 10% technological, and the first group thinks the second only cares about technological solutions.

Reversed moderation of importance. For example, a lot of psychologists talk as if all human behavior is learned. Then when geneticists point to experiments showing behavior is about 50% genetic, they get accused of saying that “only genes matter” and lectured on how the world is more complex and subtle than that.

Reversed moderation of interest. For example, if a vegetarian shows any concern about animal rights, they might get told they’re “obsessed with animals” or they “care about animals more than humans”.

Reversed moderation of certainty. See for example my previous article Two Kinds Of Caution. Some researcher points out a possibility that superintelligent AI might be dangerous, and suggests looking into this possibility. Then people say it doesn’t matter, and we don’t have to worry about it, and criticize the researcher for believing he can “predict the future” or thinking “we can see decades ahead”. But “here is a possibility we need to investigate” is a much less certain claim than “no, that possibility definitely will not happen”.

It reads like a Slate Star Codex post from 2014, which isn’t a bad thing because 2014 SSC was great. But the observation seems a little shallow now, considering how far his erisological writing has come. Of course, it’s a short post so I guess it’s just an offhand comment and not a stab at something more general.

But I think there’s something more general to discuss. What he’s talking about is similar to what I’ve called partial narratives. As described in The Signal and the Corrective:

From the inside, when you subscribe to a narrative, when you believe in it, it feels like you’ve stripped away all irrelevant noise and only the essence, The Underlying Principle, is left — the signal, in the language of information theory. However, that noise you just dismissed as irrelevant has other signals in it and sometimes people will consider them stronger, truer and more important.

Our intuition has problems with the idea that the same set of facts can have different “signals” behind them and none is The Single Underlying Truth. In particular it’s hard to grasp that this allows multiple narratives to coexist, even if they appear to contradict each other. Why? Well, narratives contradicting each other means that they simplify and generalize in different ways and assign goodness and badness to things in opposite directions. While that might look like contradiction it isn’t, because generalizations and value judgments aren’t strictly facts about the world. As a consequence, the more abstracted and value-laden narratives get the more they can contradict each other without any of them being “wrong”.

My original thought in Partial Derivatives and Partial Narratives was that just like any mathematical function can have as many derivatives as it has variables, concepts and entities in the world can have any number of different narratives making sense of them.
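
To make the analogy concrete (a worked example of my own, not from the original post): a function of two variables has one partial derivative per variable, and each is a true but partial description of how the function changes.

```latex
f(x, y) = x^{2} y
\qquad
\frac{\partial f}{\partial x} = 2xy
\qquad
\frac{\partial f}{\partial y} = x^{2}
```

Each partial derivative holds the other variable fixed. Neither is “the” derivative of f, and neither contradicts the other, just as two partial narratives about the same thing can both be right.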

How is it similar? Well, things like solutions, explanations and priorities are clothed in narratives, which suggests they might work like narratives do. And partial narratives tend to be substitutes; for some psychological reason, different narratives covering the same issue have great trouble coexisting. Adopting one means others become invisible, like in the famous “duck or rabbit” illusion.

[Image: the duck-rabbit illusion]

But the examples Scott’s article brings up aren’t either/or like that! No, but for some reason we think they are, or we feel they are, and I’m interested in why.

In a comment, David Friedman gives an example of how this applies to the kind of narrative that explains something as resulting from a cause:

This reminds me of a related fallacy I have seen. An economist argues that one result of increasing welfare payments available to unmarried mothers will be more children born to unmarried mothers. The response is “do you really believe women have children to get the money — isn’t it obvious that a child costs more than even generous levels of welfare?”

The response is attributing to the economist a unicausal model in which the only reason to have a child is money. His actual model is multicausal. The decisions that lead to becoming an unmarried mother involve balancing a large number of costs and benefits. Increase one benefit or decrease one cost and the balance shifts for women who are on the margin.

We easily fall into the trap of thinking that something has one cause, called “the” cause. “What caused [thing]?”, we ask, implicitly expecting one answer. If there are several, they’re expected to compete, in a “this town ain’t big enough for the both of us” kind of way. Sure, we’re capable of thinking multicausally but it requires effort. It’s not the way our minds work and, notably, it’s not the way our languages work. “X caused Y” is easy to express and explain. I’m still looking for phrasing that captures multicausal models in an intuitive way.
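
To make the contrast between unicausal and multicausal models concrete, here’s a minimal sketch in Python (my own illustration with made-up numbers, not Friedman’s actual model): the decision is a balance of many factors, and nudging a single one only flips the outcome for someone who was already near the margin.

```python
# A toy multicausal decision model (hypothetical numbers, not from the article).
# Many costs and benefits are balanced; changing one factor only flips the
# outcome for someone who was already near the margin.

def net_balance(other_factors: float, welfare_benefit: float) -> float:
    """Sum of all costs (negative) and benefits (positive) of the decision."""
    return other_factors + welfare_benefit

# Three hypothetical people whose other costs and benefits net out differently.
people = {
    "far below the margin": -50.0,
    "near the margin": -5.0,
    "already above the margin": 20.0,
}

for name, other in people.items():
    before = net_balance(other, welfare_benefit=0.0) > 0
    after = net_balance(other, welfare_benefit=10.0) > 0  # increase one benefit
    print(f"{name}: {before} -> {after}")

# Only the person near the margin changes their decision, even though the added
# benefit is far smaller than the total cost involved.
```

No single factor is “the” cause here; the outcome lives in the sum.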

Why is it like this? Why are some things such that there’s only one “free slot” so there can be only one thing to worry about, one sort of solution to global warming to favor or one reason why somebody does something? If not only one in total then at least only one that fits in our mind at any given time.

I think it’s related to the fact that our beliefs aren’t bags of disconnected ideas. Most of us may be able to decouple ideas from each other when we want to, but it’s not really intuitive behavior and the ideas, beliefs and values we hold are arranged in a complex web of associative and implicative connections[1]. Believing (in the sense of adopting or endorsing, which means you can “believe” in values as well as propositions) in an idea has consequences — it implies other ideas about the nature of the world, what will work and what won’t, what another person is like and wants, who is right and wrong, good and bad, responsible or not responsible for what etc.

In other words, activating a node in a network of beliefs will also activate surrounding nodes. Around an idea or narrative there’s an implied expansion that looks a certain way, and it may look very different from the implied expansion around another idea.
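
As a loose illustration of that picture, here is a toy sketch in Python (my own, borrowing the spreading-activation idea from cognitive science rather than anything in the post): endorsing one belief lights up its neighbours with diminishing strength.

```python
# A toy "spreading activation" sketch (my own illustration, not from the post):
# activating one node in a web of beliefs also activates its neighbours,
# with the effect weakening as it spreads outward.

from collections import deque

# A hypothetical belief graph: each idea points to the ideas it evokes or implies.
belief_web = {
    "technological fixes can help": ["engineers are trustworthy", "markets deliver"],
    "engineers are trustworthy": ["expertise matters"],
    "markets deliver": ["regulation is overrated"],
    "expertise matters": [],
    "regulation is overrated": [],
}

def activate(graph, start, decay=0.5, threshold=0.2):
    """Spread activation outward from one idea, weakening by `decay` per hop."""
    activation = {start: 1.0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, []):
            strength = activation[node] * decay
            if strength >= threshold and strength > activation.get(neighbour, 0.0):
                activation[neighbour] = strength
                queue.append(neighbour)
    return activation

print(activate(belief_web, "technological fixes can help"))
# Endorsing one idea quietly lights up a whole surrounding "implied expansion".
```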

I hesitate to call this implied structure an “aesthetic”, but I do think it has a lot in common with that concept. This allows me to connect the idea of partial narratives to one of the more fertile ideas I’ve come across over the last few years: Sarah Perry’s conception of “mess”. What is a mess?

Flat uniform surfaces and printed text imply, promise, or encode a particular kind of order. In mess, this promise is not kept. The implied order is subverted. Often, as in my mess of text and logos above, the implied order is subverted by other, competing orders.

The information theory equivalent of a mess might be a chunk of data, pieces of which have been encoded using different symbolic systems, according to no particular order. If we discover the correct encoding for a part of the message, this seems to promise that it will work for the whole thing; but this promise is not kept.

Mess is only perceptible because it produces in our minds an imaginary order that is missing. It is as if objects and artifacts send out invisible tendrils into space, saying, “the matter around me should be ordered in some particular way.” The stronger the claim, and the more the claims of component pieces conflict, the more there is mess.

Objects project an implied order and create a feeling of mess when combined with other objects that imply a different, incompatible order. Ideas also imply an order — an order where the role of aesthetic harmony is played by moral and intellectual clarity — an ideology, if you will, which is the intellectual and moral equivalent of a style or aesthetic.

A mess is an ugly, uncomfortable state full of internal contradiction and dissonance. In system theoretical terms it has high tension, high strain, high energy, and is liable to collapse into a more stable (but less accurate) state. Keeping multicausal models in mind is like balancing on the top of a hill.

A picture and a little story always help. So here we go:

[Image: transpara1, a few dots sitting next to each other]

These dots can sit next to each other without causing any problems. But if they invite all their friends we have a problem:

[Image: transpara2, the dots joined by their many clashing friends]

Their friends don’t get along. They step on each other’s toes and fights start to break out. The party turns into… a mess.

I briefly touched upon something like this last year in Science, the Constructionists and Reality and called it “transparadigmatic dissonance”. Now, I’m thinking there are two opposite ways such transparadigmatic dissonance can manifest.

One is the “outside-in” case, where two or more different interpretive paradigms are applied to a single idea or fact (which is obviously compatible with itself) and return different, incompatible interpretations of that idea. Examples are scientific ideas about causation in the brain being interpreted differently through a traditional, quasi-magical account of how the will works (discussed at length in Erisology of Self and Will, mostly in part 4), and an innocuous standard assumption in genetics turning into dynamite when transplanted into politics (discussed in A Deep Dive into the Harris-Klein Controversy).

The other is the “inside-out” case we’re talking about here, where two or more compatible ideas feel incompatible because they evoke incompatible paradigms.

Using people as metaphors again:

The outside-in case is when a person seems different depending on what people they’re with. Bob’s mother is going to have a different impression of him than his friends do. So will his workmates or associates at the local sex dungeon.

The inside-out case is Romeo and Juliet. Alone they’re compatible, but as part of their families they can’t be together.

Is there a way to deal with this? One answer is individualism: disconnect the person from their social environment. Or rather, not disconnect completely, but make the connections optional; make it possible to activate or deactivate them at will. All your friends and family don’t have to come to everything.

That means context-dependent decoupling or compartmentalization when applied to ideas. We need to understand that ideas’ implications don’t always have to be active, because they are, in many cases, artifacts of the models our minds use. They’re in the map, not in the territory, and they should be available, not obligatory.

Another answer is simply to become more comfortable with dissonance and seek coherence on a different level.

• • •

Note

[1]
Of course these webs play a central role in coalition formation, so our powerful “with-us-or-against-us” social instincts are recruited to do important galvanizing work.

Did you enjoy this article? Consider supporting Everything Studies on Patreon.

5 thoughts on “The Romeo And Juliet Fallacy”

  1. The idea that saying P is taken to imply ~Q because of the omission of saying Q is actually a feature of statutory canons of interpretation (“expressio unius est exclusio alterius”). There seems to be a long history of this sort of thinking, arising out of non-logical conventions related to language and the distribution of emphasis (we take it to be true that spending 75% of your time talking about P and only 25% of your time talking about Q indicates greater commitment to P over Q, which is of course ridiculous, since you might have other reasons to talk more about P, like if P is undervalued, in which case there is a kind of intellectual arbitrage opportunity).


  2. ““X caused Y” is easy to express and explain. I’m still looking for phrasing that captures multicausal models in an intuitive way.”

    What about something dead simple, like “X contributes to Y”?

    A few that I have been playing with recently in different contexts:

    1) X contributes to Y
    2) X correlates with Y
    3) X is predictive of Y

    Only #1 really deals with the causal relationship, of course.


    1. You can definitely do that but it’s not quite what I have in mind. In your example you still talk about the different factors separately; you just make sure to say each is only part of the story. I’m thinking of a way of describing it “from the middle”, not as a combination of separate things.

