Beliefs as Endorsements

This is a little chaser to last week’s Wordy Weapons of Is-Ought Alloy. That got too long and I decided to make this section its own post. Like Wordy Weapons it discusses something that’s quite obvious when you think about it, but is still valuable to have spelled out.

Last time I argued that the division between facts and values is alien to how the mind works. Claims and statements refer to models of reality, not reality itself, and instead of expressing facts or values they invite the application of labels.

This is fact-like in that models are modeling the real world, but also value-like in two ways: first, it endorses the use of certain labels and categories, and particular definitions of them; second, labels and categories are often value-laden themselves. Not all equally so: "banana" isn't at all, "capitalist" somewhat, and "monstrous" very much.

What does this mean for beliefs? A belief is a claim you hold as true. But if claims aren’t just factual and therefore not simply true or false, then what does it mean to hold them as true?

In other words: What does collapsing the distinction between facts and values do to the nature of beliefs?

I used to take it for granted that to believe something means to consider it likely to be true. “Technically” true. Not good, not desirable, not useful, interesting, rewarding or beneficial. Just true. In correspondence with reality. A fact.

Connotations (and implications) simply didn’t count towards beliefs at all, because you could only “believe” the factual aspects of something. That’s all the word applied to and the rest was irrelevant.

Believing something meant implicitly translating it into a concrete, factual statement in your mind, removing all connotation and implication in the process: "compiling it into machine code," as I put it in The Big List of Existing Things. In my mind, to "believe" a moral proposition, or a "cosmically correct" class in which to place an example, was a category mistake.

Given that, these examples I’ve collected of people using “believe” when arguing online make no sense:

I believe that taxation is a literal instance of theft.

What if you just damned believed that other people were worth less than you?

I, personally believe now that vegetarianism is a personal thing.

Do you believe it is the duty and obligation of other people, through the government, to support the poor through welfare payments?

Because libertarians believe any compulsion is slavery.

If you don’t believe that the lower classes are exploited, then I don’t know what to tell you.

The reason I’m so curious is because I genuinely believe that all human beings, irrespective of talent, wealth, etc., are equal.

Lena Dunham believes eating sushi is cultural appropriation.

Religions promote a lot of mundane beliefs as well (about how to treat others, how to live virtuously, etc.).

Jordan Peterson sincerely believes that journalism is a noble duty to seek truth and uphold freedom of speech.

If one person believes abortion is murder, then they are likely correct to vote for the one who will limit its scope.

I believe I have zero duty to follow the law

If you believe a sub is a sandwich, you must believe a hot dog is a sandwich.

But Jesus did believe you had to pay taxes.

A quiz administered, with questions such as “Do you believe that men and women should be equal?”

Huge proportion of ethicists believe that eating animals is wrong.

Do you really believe a man doesn’t have the right to know if he is the father of a child?

I believe that most, if not all, such regulations should be abolished.

For all the use of "believe," none of these are beliefs in the factual sense. They don't have truth values; instead, they're calls to apply particular labels to particular things. That should be familiar by now.

Of course I always knew this use was common, but I misunderstood why. I thought of them as simple mistakes. Bugs, basically. Slips of the mind, like thinking you’re more likely to roll a six because it’s been so long since a six last came up, or doing the wrong thing in the Monty Hall problem.

It runs much deeper. These uses aren’t pathological edge cases or mistakes but quite standard. No subconscious “compilation” into concrete propositions that we then factually believe or disbelieve actually happens[1]. We just think using the labels themselves without much thought to how they relate to physical reality. Ill-written pseudocode, not binary.

Well, kind of. Because I was also somewhat right: beliefs aren’t necessarily factual, but they’re trying to look like they are.

Try substituting “think” for “believe” in each of the examples above. What happens? To me they sound a lot better. “I think X” is an honest phrase, because it doesn’t pretend to distinguish between matters of fact and matters of judgment. “I think Lisbon is the capital of Portugal” and “I think chocolate is the best ice cream flavor” both make equal sense[2]. One isn’t imitating the other.

What I appeared to say above is that “believe” works the same way, but that’s not true. It has a more complex, unstable meaning, a result of it being used semi-metaphorically for rhetorical purposes: “I believe chocolate is the best ice cream flavor” doesn’t make sense the way the “think” version does, because it’s not a matter of fact and you can’t “believe” it. Not literally.

We use "believe" to cheat (consciously or subconsciously; the rhetorical games I discussed in Wordy Weapons are not entirely conscious) and dress up an opinion as fact: this question has an objectively correct answer and this is it, I arrived at it not by choice or subjective emotion but by simply apprehending the facts, so you should believe it too. "Believe" in this metaphrhetorical (ahem) use means "this has a right answer, dammit, just like factual issues do (even though it isn't one, strictly speaking)."

We do this even when it's totally transparent, to signal commitment. It's the equivalent of motivating someone by saying "I believe in you!" That's an expression of personal support and loyalty, not the literal belief in someone's capability it barely, and entirely unconvincingly, pretends to be.

Indeed, to “believe” a claim often means to personally support a label-thing fit, to show loyalty to it the way you show loyalty to a person (or a god, an enterprise, an event, an entity of any kind).

The Is-Ought Snare Tightens

One of the major consequences of using "believe" this way, merging believing something to be true with personal support, is that it becomes difficult to express what you believe to be true when it has connotations and implications you don't necessarily support. This creates a vicious circle: only people who support the implications of controversial beliefs express them (those who don't avoid such issues so as not to be misunderstood), making it even more difficult to separate the two aspects of belief. Repeat.

There’s another concept that suffers from this same problem, namely expectation. To expect means to anticipate something both in a factual way (I expect it to snow tomorrow) and a moral way (I expect you to do the dishes when I cook).

What do we do when we expect something factually but not morally[3]?

I tend to believe that stereotypes are broadly accurate, in other words, that they capture real and significant clusters of traits and behaviors, for the most part. Factually, then, I expect people to conform to stereotypes more than predicted by chance. But morally I don't. I don't think people are somehow obligated to conform to stereotypes, nor do I resent them when they don't (in fact I quite enjoy it; it replenishes my faith in self-determination and individual agency).

Now, if someone asks me whether I believe some stereotypical claim (or asserts with unjustified confidence that doing so must be wrong), what the hell am I supposed to say? Yes, I do expect it to be true, on average. No, I wouldn't consider someone at fault for not conforming to that expectation, nor would I resist, even slightly, changing it in the face of contrary evidence.

I know of no way to say this that’s short and simple enough to be quickly understood and resistant to hostile interpretation.

It’s obvious that the two kinds of expectation are different if we think about it carefully, but manifestly we don’t think about it carefully. Much of the damage done by stereotypes comes from the is/ought-conflating thought patterns exemplified by “expect” (and “believe”). People are often hostile towards those who don’t conform to factual expectations, which suggests we actually don’t separate them from moral expectations: “doesn’t act the way I expect -> unlikely to fulfill expected obligations -> can’t be trusted”, apparently[4].

This is awful. This whole thing is a disease. Neither “believe” nor “expect” (or really any such word) can be trusted to have only factual meaning and not spill over into unwanted implications.

Isn’t that strange when you think about it? That we can’t easily and cleanly talk about something as basic as factual beliefs?

Isn’t it quite astonishing?

Well, it does have a function. It’s just a function you don’t like if you have a thing for technical correctness.

• • •


[1] I didn't quite get the extent of this until my late 20s. I defend myself by noting that I, like most people, assumed that others were like me. For me the difference between is and ought (and map and territory) has always felt obvious; it just made intuitive sense, and I can't remember having been explicitly taught it. Once I did realize this, a lot of things began to make sense, like why materialism is unintuitive to most people.

[2] The difference between saying "I think X" and just "X" is that the first admits some uncertainty. Interestingly, it can be several kinds of uncertainty, and it doesn't matter which. If there is epistemic uncertainty about facts, "I think X" means "X is probably true but I don't feel entirely sure," and if there is ontological uncertainty about value judgments, it means "X should be affirmed but I recognize that there's an element of subjectivity to it." It's telling that we treat these completely different kinds of deviation from perfect knowledge the same way.

[3] We can get confused the other way too, but it's less destructive. What if I expect something morally but not factually? I expect people to be rational and charitable in debate. I expect media to be fair and unbiased. I expect people not to defect in Prisoner's Dilemma-type situations. If you confuse such moral expectations with factual expectations, you'll think I'm naive. I'm not; I have ideals. It's vitally important that we can have ideals, that we can have moral expectations that are clearly separate from factual expectations. It's like sin, interestingly: God morally expects us not to sin, but factually expects us to fail. And that's okay. It's just not okay not to try.

[4] Another possible explanation is that not conforming to other people's (factual) expectations imposes extra cognitive work on them, since they can't accurately represent you in their mind with a standard template. This extra load causes irritation and, by association, negative feelings towards non-conformers. If this is a significant factor, we can predict that greater cognitive resources, like higher intelligence, more education and larger working memory capacity, would correlate negatively with stereotyping and hostility to the non-conforming. I can't prove that, but it seems plausible.

Did you enjoy this article? Consider supporting Everything Studies on Patreon.
