[Note: Quick and unpolished. Might be messy and confused.]
I read Kevin Simler’s The Leaning Tower of Morality the other day. It’s a good article, but something about it bothered me. It bothered me more than it would have if someone else had written it, because Simler is usually so good and insightful.
In Leaning Tower, he talks about the evolutionary origins of morality, and defends the idea that our precarious “tower of unselfish behavior” is built from what is ultimately individual self-interest, rather than the often-invoked but scientifically dubious mechanism of group selection.
[…] whatever explanation we come up with will necessarily entail two unpalatable conclusions: two ‘bitter pills’ we would have to swallow if we want to ascribe our moral instincts to individual selection:
Bitter pill #1: Accepting that there’s no instinct for true altruism. Individual selection simply can’t evolve a creature that doesn’t optimize for its own bottom line; self-interest is non-negotiable. In the language of our tower metaphor, this means accepting that the penthouse doesn’t exist.
Bitter pill #2: Accepting that even the highest floors, just below the missing penthouse, are manifestations of self-interest. In other words, every instinct for empathy, compassion, charity, and virtue, to the extent that it’s inborn, evolved because it benefitted our ancestors who expressed it.
Thus, if we want to explain morality as a product of individual selection, we have to accept that the entire tower is based on self-interest.
Alright, what’s bothering me?
The conflation of “self-interest” in an evolutionary sense with self-interest in an ordinary sense.
Let me just start by pointing out that the dichotomy between “group selection” and “individual selection” is partly a red herring. Neither individuals nor groups are selected per se, as Richard Dawkins spends his entire seminal 1976 book The Selfish Gene explaining: it’s genes that are selected (which is why some truly individually unselfish behaviors exist, namely towards kin), and this is fundamentally different. Basically, evolutionary pressure hasn’t made humans that maximize their own personal success, but humans that feel and act in ways that maximize the prevalence of their own genes in the future gene pool.
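The standard formalization of this gene’s-eye logic (not mentioned in the post, but it’s the textbook result behind kin altruism) is Hamilton’s rule: a gene for self-sacrifice can spread when rB > C, i.e. when relatedness times the benefit to the recipient exceeds the cost to the altruist. A minimal sketch, with illustrative numbers of my own choosing:

```python
# Hamilton's rule: a gene for altruism is favored when r * B > C, where
#   r = genetic relatedness between altruist and beneficiary,
#   B = reproductive benefit to the beneficiary,
#   C = reproductive cost to the altruist.
# The numbers below are illustrative, not taken from the post.

def altruism_favored(r: float, benefit: float, cost: float) -> bool:
    """True if gene-level selection favors the altruistic act."""
    return r * benefit > cost

# Sacrificing 1 unit of fitness to give a full sibling (r = 0.5) 3 units:
print(altruism_favored(0.5, 3.0, 1.0))  # favored: 0.5 * 3 > 1
# The same sacrifice for an unrelated stranger (r = 0.0):
print(altruism_favored(0.0, 3.0, 1.0))  # not favored: 0 * 3 < 1
```

Note that the “selfishness” in the inequality belongs entirely to the gene’s bookkeeping; the organism running it can be sincerely, subjectively devoted to its sibling.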
While this is an important point, it cashes out to little difference in practice, because what we’re concerned about isn’t that self-sacrifice for kin is “fake” but that the rest of it is.
In truth, insights into the evolutionary origins of altruistic behaviors tend to bother people because they give the impression that apparently unselfish behaviors are “really” selfish, deep down. But they aren’t, and thinking that they are confuses our motives with the mindless optimization process that created those motives. This is very common and very bad.
When we call somebody “selfish” we have certain things in mind: screwing over others, not cooperating, being explicitly calculating in order to only maximize one’s own gain in interactions with others, hurting people for personal benefit, being untrustworthy, ignoring costs to others etc.
We use the word to classify actions and those who perform them, and specifically to indicate disapproval.
That’s what we care about when we fear and dislike selfishness, and “genetic self-interest” is not this at all. It’s a completely separate thing, with an altogether different subject (genes, not people). We can and do have altruistic instincts and knowing how evolution created those won’t make them any less altruistic in any way we care about.
But it’s the same word!
So what? It’s just a word. There is no “essence of selfishness” they have in common that makes one “really” the other, no ultimate/Platonic/cosmic meaning of “selfishness” that applies across contexts (just like there is no such meaning of “exist”). That’s just something our great proficiency with abstraction and analogy makes us believe.
When biologists use “self-interest” in an evolutionary sense they are being metaphorical.
Our needs and wants are justified by their own existence, not by the hypothetical goals of the meaningless process that created them. I don’t care for my children because I’m on a mission to spread my genes. I love them. That love springs from how my brain is set up, and yes, I understand why it’s set up that way, but that doesn’t change how I feel.
I’ve been thinking this for years, and Simler’s article is by no means unusually bad; it’s not even “usually” bad (it’s not entirely clear if he’s actually confused about this or just expresses things in an unfortunate manner). So why do I write this now? I probably wouldn’t have if I hadn’t been pushed over the edge that same day by another article doing the exact same thing. Paul Bloom writes in The New Yorker:
Some evolutionary psychologists and economists explain assault, rape, and murder as rational actions, benefitting the perpetrator or the perpetrator’s genes. No doubt some violence—and a reputation for being willing and able to engage in violence—can serve a useful purpose, particularly in more brutal environments. On the other hand, much violent behavior can be seen as evidence of a loss of control. It’s Criminology 101 that many crimes are committed under the influence of drugs and alcohol, and that people who assault, rape, and murder show less impulse control in other aspects of their lives as well.
Bloom acts as if the second is a counterargument to the first: on the one hand aggressive behavior is sometimes “rational” from your genes’ perspective, but at the same time it’s often the result of a failure of the rational mind to control the impulses it shares a head with. That’s a contradiction only if you believe that metaphorically rational genetic interests are the same thing as literally rational human thinking, and they’re not.
I wonder if this happens again and again because people think that what’s genetically “rational” must also be implemented in the rational part of the mind? “Rational” goes with rational, right? Same word = same thing? It wouldn’t be the first time. It’s the sort of foolishness that results from expecting too much from ordinary words. God, I hate words sometimes.
Can we please stop acting as if things are the same just because we use the same word to describe them?
Simler echoes Bloom’s confusion when he says:
If you say, “I do the right thing because it’s the right thing, period, end of story”, you leave no crack in your facade through which a pesky interlocutor might question your motives. However, if you say, “I’ll do the right thing because it’s usually the right thing for me”, you’re practically inviting unwanted speculation
This is what we fear: that others are just pretending to care about us out of a cold, calculated conviction that it’ll benefit them later on. Some people do think that way, they’re called psychopaths and they’re distinctly different from normal people.
Because normal people don’t work like that. You do do the right thing because it’s the right thing. The evolutionary explanation is for why it’s the right thing, how it became the right thing in our minds: what process built us in such a way that we honestly feel compelled to self-sacrifice? This impulse comes from somewhere and it, like everything else we have a habit of taking for granted, requires an explanation.
Explaining what we take for granted is what evolutionary psychology is about, and it doesn’t imply we consciously do the right thing just because it helps us any more than it implies that we have sex because we want to make babies. We desire sex. Full stop.
Yes, full stop. The chain of instrumental motivations stops and doesn’t continue out the back of our heads — what is “instrumental” to our genes isn’t instrumental to us but fundamentally valuable. There is no unbroken series of rationales that moves smoothly from our rational reasoning via hidden motivations to “self-interested” evolutionary purposes. Instead there is a complete break between the inside of our psychologies and the outside process that created them. The buck does stop. It stops at emotions and social instincts.
A neurotic desire for approval
Like “north” and “south” stop meaning anything when we leave the surface of the Earth, the world outside human minds is without intentions, feelings, or morals, and words like “selfish” don’t mean anything there. We use such metaphors only to help us understand a purposeless, algorithmic process very unlike what we’re good at understanding intuitively.
We can’t find moral truth by stepping out of our minds and looking for ultimate sources, because out there is only the void. If we think otherwise, we’re projecting. We’re so desperate to find something — anything — outside ourselves to anchor our morality that we latch on to evolution (which is spectacularly badly suited to the task) and then get disappointed when it can’t deliver what we want.
Simler seems to want (or I should say, channels others who want) non-kin altruism to get “evolutionary validation” as an adaptation that evolved specifically, not merely because it helped the carriers themselves survive and reproduce. Examples of truly altruistic behaviors must not be evolutionary “mistakes” or “misfirings”.
That’s his first “bitter pill” that rejection of group selection requires us to swallow: there is no true altruism.
Of course there are instances of perfect self-sacrificial altruism: people do occasionally fall on their grenades, undertake suicide missions, risk their own lives to save a stranger’s, etc. But there’s an important sense in which we should analyze these as mistakes or accidents, rather than deliberate (strategic) behavior — at least from the perspective of the genes that fashioned our brains.
Sure, from the genes’ perspective. But why is this such a bitter pill? Who cares? What are we expecting? Why are we so desperate for evolution’s approval? It’s not God. In some ways it’s the closest thing we’ve got, but that doesn’t grant it moral authority.
“True altruism”. What exactly would that be?
Let’s say there’s a module in my brain that makes me feel that helping people is good and makes me want to do so (among the other things I want). Assuming it looks just like the module that actually exists actually does, what difference does it make how it got there?
Maybe sometimes group selection exerted selective pressure, maybe it didn’t. Maybe the module was installed by an omnipotent deity, maybe it wasn’t. If it still looks the same, how it got there doesn’t matter. How could I be “truly selfish” in one case or “truly altruistic” in another if there’s no difference between the two versions of me?
I think part of the resistance to ideas like this (objections like “that’s not real altruism”, “brain chemicals aren’t real love”, and “compatibilist free will isn’t real free will”, et cetera et cetera ad nauseam) stems from a fundamental discomfort with the idea that there’s a frame outside our mental universe at all. We realize that, e.g., the rules of Monopoly are fundamental, immovable givens only inside the game, and that there is a whole world outside it where the rules are made up by some process (in this case, a game designer catering to a market). We do not realize, apparently, that there is also a whole universe outside our minds, where the fundamental givens inside them are ordinary objects to be explained by physical processes.
Or if we do, we think it means we have to become nihilists. Apparently, neither perspective can live while the other survives.
It doesn’t have to be this way.
Oh, but wait, there’s more. This thing just won’t leave me alone. The next day I saw a book in a storefront on my way to work. A crappy translation of its title is “The Solidaric Gene” (“solidaric” isn’t a word used much in English, but I think the meaning is pretty clear), and it’s by Göran Greider, a left-wing pundit well known where I live.
The title is obviously a play on Dawkins’s book, and that makes my heart sink. The famous title reads “the selfish gene”, not “the selfishness gene”, and as I’ve argued at length: the difference matters. A lot. But so many (many who haven’t read the book, and apparently many who have) don’t seem to get the difference. Genes are necessarily “selfish” because of how the evolutionary algorithm works (what spreads, spreads, and that is all) but still produce organisms that are decidedly not selfish. The Selfish Gene isn’t a book about how we are all selfish but about why we are not.
So there’s no such thing as a “solidaric” gene. There are, however, “solidarity genes” that form our prosocial instincts. Does Greider understand that and simply get sloppy with the title? I have little faith. I haven’t read his book (life is short), but I’ll wager it’s full of examples of humans behaving in unselfish and communitarian ways, and probably something about how un-individualistic hunter-gatherers are. In that case it makes the same argument as the book it supposedly rebuts.
But I thought I shouldn’t yell at a book I haven’t read, so I decided to at least skim a few reviews. Now I wish I hadn’t. Most of them were way worse than I suspected the actual book to be, and they confirmed my suspicions. Here are some gems from one:
Evolutionary biologist Richard Dawkins published The Selfish Gene in 1976. The book marked a return of biology and sociobiology in political thought. The idea that humanity is fundamentally selfish resonated greatly in the era of Reagan and Thatcher.
Groan. Apparently that idea comes straight from the book.
“The Solidaric Gene” is a well written and well researched attack on the notion of a fundamentally selfish human nature. Instead of rejecting biology as playing no part in understanding humans, the author discusses modern biological and sociobiological research. He finds an area where even Dawkins and his compatriots and predecessors have moved away from the notion of the evolutionary selfish human.
Greider presents a plethora of research showing that humanity’s social and egotistical instincts are equally old, and that the former has been the basis of our evolution at least as much as everybody’s war against everybody.
If only there had been some other book that said exactly that, like 40 years ago.
I think I need a drink.
I don’t know what to do. Why do so many obviously intelligent people stumble on this? I read The Selfish Gene at 17 and didn’t find its message hard to grasp at all. What’s going on? What’s so particularly difficult about this?
Is there a simple, effective explanation somewhere one can point to? Even if there is, it probably wouldn’t help. That need to anchor morality outside ourselves is strong, and it will contort minds in whatever ways it needs.
• • •
It’s not clear if Simler actually believes this; there’s a sense in the article that he’s making an argument because it needs to be made, not so much because he believes in the problem himself. One substantive claim that isn’t true (I guess I hadn’t even noticed it, because I never believed it in the first place and only now begin to realize people might) is that there’s some untapped reservoir of primordial unselfishness that capitalism has beaten out of us and that could in theory be leveraged into a communistic utopia. If you believe that, then yes, this isn’t the abstract, philosophical matter I treat it as in the rest of this post.
Religion is actually somewhat better suited to the task, since it morphs into whatever we need. Lag is a problem, though.
I’m hoping the second season of Westworld will touch upon this. I got the feeling it might when (Spoiler) Maeve decided to go back to her daughter instead of escaping, even though she’s aware she loves her because she’s programmed to. Well, we’re all like that.
It’s surprising how often group selection is treated like some savior that would enable “true” moral virtue. It just trades selfish individuals in competition for selfish groups in competition (no group selection model I’m aware of is based on indiscriminate, universal altruism for insiders and outsiders alike). This is not a moral improvement. Considering what group vs. group competition historically looks like, the ugliness of the ideologies that embrace it, and the fanaticizing and outgroup-dehumanizing effect it has on people’s minds, give me individual competition any day.
I just finished Robert Kegan’s In Over Our Heads: The Mental Demands of Modern Life the other day and it seems relevant. Kegan’s basic idea is that our minds develop in stages where the fundamental building blocks that constitute the mind at earlier stages become, in later stages, objects belonging to it. I think I’m saying something similar here: we must understand that our motivations are fundamental parts of our minds but not fundamental parts of the universe outside them, and identify with them by choice instead of because we don’t know any different. That’ll neutralize the threat of nihilism.
I feel sorry for Dawkins when it comes to his title choice. It’s a good title. It’s catchy and evokes the central metaphor well. But in terrible, terrible public discourse, people do latch on to the word “selfish” and assume they know what it all means. The whole process is barely conscious. I think he underestimated how stupid public discourse is and how extraordinarily bad it is at transmitting any ideas with more than a few bits of complexity.