A Lament on Simler Et Al

[Note: Quick and unpolished. Might be messy and confused.]

I read Kevin Simler’s The Leaning Tower of Morality the other day. It’s a good article, but something about it bothered me. It bothered me more than it would have if someone else had written it, because Simler is usually so good and insightful.

In Leaning Tower, he talks about the evolutionary origins of morality, and defends the idea that our precarious “tower of unselfish behavior” is built from what is ultimately individual self-interest, rather than from the often-invoked but scientifically dubious mechanism of group selection.

[…] whatever explanation we come up with will necessarily entail two unpalatable conclusions: two ‘bitter pills’ we would have to swallow if we want to ascribe our moral instincts to individual selection:

Bitter pill #1: Accepting that there’s no instinct for true altruism. Individual selection simply can’t evolve a creature that doesn’t optimize for its own bottom line; self-interest is non-negotiable. In the language of our tower metaphor, this means accepting that the penthouse doesn’t exist.

Bitter pill #2: Accepting that even the highest floors, just below the missing penthouse, are manifestations of self-interest. In other words, every instinct for empathy, compassion, charity, and virtue, to the extent that it’s inborn, evolved because it benefitted our ancestors who expressed it.

Thus, if we want to explain morality as a product of individual selection, we have to accept that the entire tower is based on self-interest.

Alright, what’s bothering me?

The conflation of “self-interest” in an evolutionary sense with self-interest in an ordinary sense[1].

Let me just start by pointing out that the dichotomy between “group selection” and “individual selection” is partly a red herring. Neither individuals nor groups are selected per se, as Richard Dawkins spends his entire seminal 1976 book The Selfish Gene arguing: it’s genes that are selected (which is why some truly individually unselfish behaviors — those directed at kin — exist), and this is fundamentally different. Basically, evolutionary pressure hasn’t produced humans who maximize their own personal success, but humans who feel and act in ways that maximize the prevalence of their own genes in the future gene pool.

While this is an important point, it cashes out to little difference in practice, because what we’re concerned about isn’t that self-sacrifice for kin is “fake” but that the rest of it is.

In truth, insights into the evolutionary origins of altruistic behaviors tend to bother people because they give the impression that apparently unselfish behaviors are “really” selfish, deep down. But they aren’t, and thinking that they are confuses our motives with the mindless optimization process that created those motives. This is very common and very bad.

When we call somebody “selfish” we have certain things in mind: screwing over others, not cooperating, being explicitly calculating in order to maximize only one’s own gain in interactions with others, hurting people for personal benefit, being untrustworthy, ignoring costs to others, etc.

We use the word to classify actions and those who perform them, and specifically to indicate disapproval.

That’s what we care about when we fear and dislike selfishness, and “genetic self-interest” is not this at all. It’s a completely separate thing, with an altogether different subject (genes, not people). We can and do have altruistic instincts and knowing how evolution created those won’t make them any less altruistic in any way we care about.

But it’s the same word!

So what? It’s just a word. There is no “essence of selfishness” they have in common that makes one “really” the other, no ultimate/Platonic/cosmic meaning of “selfishness” that applies across contexts (just like there is no such meaning of “exist”). That’s just something our great proficiency with abstraction and analogy makes us believe.

When biologists use “self-interest” in an evolutionary sense they are being metaphorical.

Our needs and wants are justified by their own existence, not by the hypothetical goals of the meaningless process that created them. I don’t care for my children because I’m on a mission to spread my genes. I love them. That love springs from how my brain is set up, and yes, I understand why it’s set up that way, but that doesn’t change how I feel[3].

I’ve been thinking this for years, and Simler’s article is by no means unusually bad; it’s not even “usually” bad (it’s not entirely clear whether he’s actually confused about this or just expresses things in an unfortunate manner). So why do I write this now? I probably wouldn’t have if I hadn’t been pushed over the edge, that very day, by reading another article doing the exact same thing. Paul Bloom writes in The New Yorker:

Some evolutionary psychologists and economists explain assault, rape, and murder as rational actions, benefitting the perpetrator or the perpetrator’s genes. No doubt some violence—and a reputation for being willing and able to engage in violence—can serve a useful purpose, particularly in more brutal environments. On the other hand, much violent behavior can be seen as evidence of a loss of control. It’s Criminology 101 that many crimes are committed under the influence of drugs and alcohol, and that people who assault, rape, and murder show less impulse control in other aspects of their lives as well.

Bloom acts as if the second is a counterargument to the first: on the one hand aggressive behavior is sometimes “rational” from your genes’ perspective, but at the same time it’s often the result of a failure of the rational mind to control the impulses it shares a head with. That’s a contradiction only if you believe that metaphorically “rational” genetic interests are the same thing as literally rational human thinking, and they’re not.

I wonder if this happens again and again because people think that what’s genetically “rational” must also be implemented in the rational part of the mind? “Rational” goes with rational, right? Same word = same thing? It wouldn’t be the first time. It’s the sort of foolishness that results from expecting too much from ordinary words. God, I hate words sometimes.

Can we please stop acting as if things are the same just because we use the same word to describe them?

Simler echoes Bloom’s confusion when he says:

If you say, “I do the right thing because it’s the right thing, period, end of story”, you leave no crack in your facade through which a pesky interlocutor might question your motives. However, if you say, “I’ll do the right thing because it’s usually the right thing for me”, you’re practically inviting unwanted speculation.

This is what we fear: that others are just pretending to care about us out of a cold, calculated conviction that it’ll benefit them later on. Some people do think that way; they’re called psychopaths, and they’re distinctly different from normal people.

Because normal people don’t work like that. You do do the right thing because it’s the right thing. The evolutionary explanation is for why it’s the right thing, how it became the right thing in our minds: what process built us in such a way that we honestly feel compelled to self-sacrifice? This impulse comes from somewhere and it, like everything else we have a habit of taking for granted, requires an explanation.

Explaining what we take for granted is what evolutionary psychology is about, and it doesn’t imply we consciously do the right thing just because it helps us, any more than it implies that we have sex because we want to make babies. We desire sex. Full stop.

Yes, full stop. The chain of instrumental motivations stops and doesn’t continue out the back of our heads — what is “instrumental” to our genes isn’t instrumental to us but fundamentally valuable. There is no unbroken series of rationales that moves smoothly from our rational reasoning via hidden motivations to “self-interested” evolutionary purposes. Instead there is a complete break between the inside of our psychologies and the outside process that created them. The buck does stop. It stops at emotions and social instincts.

A neurotic desire for approval

Just as “north” and “south” stop meaning anything when we leave the surface of the Earth, the world outside human minds is without intentions, feelings or morals — and words like “selfish” don’t mean anything there. We use such metaphors only to help us understand a purposeless, algorithmic process very unlike what we’re good at understanding intuitively.

We can’t find moral truth by stepping out of our minds and looking for ultimate sources, because out there is only the void. If we think otherwise, we’re projecting. We’re so desperate to find something — anything — outside ourselves to anchor our morality that we latch on to evolution (which is spectacularly badly suited to the task) and then get disappointed when it can’t deliver what we want[2].

Simler seems to want (or I should say, channels others who want) non-kin altruism to get “evolutionary validation” as an adaptation that evolved specifically not because it helped its carriers survive and reproduce. Examples of truly altruistic behaviors must not be evolutionary “mistakes” or “misfirings”.

That’s his first “bitter pill” that the rejection of group selection[4] requires us to swallow: there is no true altruism.

Of course there are instances of perfect self-sacrificial altruism: people do occasionally fall on their grenades, undertake suicide missions, risk their own lives to save a stranger’s, etc. But there’s an important sense in which we should analyze these as mistakes or accidents, rather than deliberate (strategic) behavior — at least from the perspective of the genes that fashioned our brains.

Sure, from the genes’ perspective. But why is this such a bitter pill? Who cares? What are we expecting? Why are we so desperate for evolution’s approval? It’s not God. In some ways it’s the closest thing we’ve got, but that doesn’t grant it moral authority.

“True altruism”. What exactly would that be?

Let’s say there’s a module in my brain that makes me feel that helping people is good, and that I want to do so (among other things I want). Assuming it works exactly like the module that actually exists does, what difference does it make how it got there?

Maybe sometimes group selection exerted selective pressure, maybe it didn’t. Maybe the module was installed by an omnipotent deity, maybe it wasn’t. If it still looks the same, how it got there doesn’t matter. How could I be “truly selfish” in one case or “truly altruistic” in another if there’s no difference between the two versions of me?

I think part of the resistance to ideas like this — objections like “that’s not real altruism”, “brain chemicals aren’t real love” and “compatibilist free will isn’t real free will”, et cetera ad nauseam — stems from a fundamental discomfort with the idea that there’s a frame outside our mental universe at all. We realize that, e.g., the rules of Monopoly are fundamental, immovable givens only inside the game, and that there is a whole world outside it where the rules are made up by some process (in this case, a game designer catering to a market). We do not realize, apparently, that there is also a whole universe outside our minds, where what are fundamental givens inside them are ordinary objects to be explained by physical processes[5].

Or if we do, we think it means we have to become nihilists. Apparently, neither perspective can live while the other survives.

It doesn’t have to be this way.

It’s everywhere

Oh, but wait, there’s more. This thing just won’t leave me alone. The next day I saw a book in a storefront on my way to work. A crappy translation of its title is “The Solidaric Gene” — “solidaric” isn’t a word used much in English, but I think the meaning is pretty clear — and it’s by Göran Greider, a left-wing pundit well known where I live.

The title is obviously a play on Dawkins’s book, and that makes my heart sink[6]. The famous title reads “the selfish gene”, not “the selfishness gene”, and as I’ve argued at length: the difference matters. A lot. But so many (many who haven’t read the book, and apparently many who have) don’t seem to get the difference. Genes are necessarily “selfish” because of how the evolutionary algorithm works — what spreads, spreads, and that is all — but they still produce organisms that are decidedly not selfish. The Selfish Gene isn’t a book about how we are all selfish but about why we are not.

So there’s no such thing as a “solidaric” gene. There are, however, “solidarity genes” that form our prosocial instincts. Does Greider understand that, and did he simply get sloppy with the title? I have little faith. I haven’t read his book (life is short) but I’ll wager it’s full of examples of humans behaving in unselfish and communitarian ways, and probably something about how un-individualistic hunter-gatherers are. In that case it makes the same argument as the book it supposedly rebuts.

But I thought I shouldn’t yell at a book I haven’t read, so I decided to at least skim a few reviews. Now I wish I hadn’t. Most of them were way worse than I suspected the actual book to be, and they confirmed my suspicions. Here are some gems from one:

Evolutionary biologist Richard Dawkins published The Selfish Gene in 1976. The book marked a return of biology and sociobiology in political thought. The idea that humanity is fundamentally selfish resonated greatly in the era of Reagan and Thatcher.

Groan. Apparently this thought is right from the book.

Further:

“The Solidaric Gene” is a well written and well researched attack on the notion of a fundamentally selfish human nature. Instead of rejecting biology as playing no part in understanding humans, the author discusses modern biological and sociobiological research. He finds an area where even Dawkins and his compatriots and predecessors have moved away from the notion of the evolutionary selfish human.

Sigh.

Greider presents a plethora of research showing that humanity’s social and egotistical instincts are equally old, and that the former have been the basis of our evolution at least as much as everybody’s war against everybody.

If only there had been some other book that said exactly that, like 40 years ago.

I think I need a drink.

I don’t know what to do. Why do so many obviously intelligent people stumble on this? I read The Selfish Gene at 17 and I didn’t find its message hard to grasp at all. What’s going on? What’s so particularly difficult about this?

Is there a simple, effective explanation somewhere one can point to? Even if there is, it probably wouldn’t help. That need to anchor morality outside ourselves is strong, and it will contort minds in whatever ways it needs.

• • •

Notes

[1]
It’s not clear if Simler actually believes this — there’s a sense in the article that he’s making an argument because it needs to be made, not so much because he believes in the problem himself. One substantive belief that isn’t true — which I guess I hadn’t even noticed because I never held it in the first place, and only now begin to realize people might — is that there’s some untapped reservoir of primordial unselfishness that capitalism has beaten out of us, which could in theory be leveraged into a communistic utopia. If you believe that, then yes, this isn’t the abstract, philosophical matter I treat it as in the rest of this post.

[2]
Religion is actually somewhat better since it morphs into whatever we need. Lag is a problem though.

[3]
I’m hoping the second season of Westworld will touch upon this. I got the feeling it might when (Spoiler) Maeve decided to go back to her daughter instead of escaping, even though she’s aware she loves her because she’s programmed to. Well, we’re all like that.

[4]
It’s surprising how often group selection is treated like some savior that would enable “true” moral virtue. It just trades selfish individuals in competition for selfish groups in competition (no group selection model I’m aware of is based on indiscriminate, universal altruism for insiders and outsiders alike). This is not a moral improvement. Considering what group vs. group competition historically looks like, the ugliness of the ideologies that embrace it, and the fanaticizing and outgroup-dehumanizing effect it has on people’s minds, give me individual competition any day.

[5]
I just finished Robert Kegan’s In Over Our Heads: The Mental Demands of Modern Life the other day and it seems relevant. Kegan’s basic idea is that our minds develop in stages where the fundamental building blocks that constitute the mind at earlier stages become, in later stages, objects belonging to it. I think I’m saying something similar here: we must understand that our motivations are fundamental parts of our minds but not fundamental parts of the universe outside them, and identify with them by choice instead of because we don’t know any different. That’ll neutralize the threat of nihilism.

[6]
I feel sorry for Dawkins when it comes to his title choice. It’s a good title. It’s catchy and evokes the central metaphor well. But in terrible, terrible public discourse, people do latch on to the word “selfish” and assume they know what it all means. The whole process is barely conscious. I think he underestimated how stupid public discourse is and how extraordinarily bad it is at transmitting any ideas with more than a few bits of complexity.

7 thoughts on “A Lament on Simler Et Al”

  1. Thanks for this very clear and thoughtful discussion! The “Leaning Tower” article got a pretty mixed reception, and I’m not sure what to make of it. Certainly I could have done a better job communicating with the reader, and I’m open to revising some of my thinking here.

    I think I understand the heart of our disagreement, but I’ll start by listing some important points of agreement with things you bring up in your post:

    1. At the end of the day, genes are all that matter, evolutionarily speaking; as you say, “What spreads, spreads.” In light of that, “individual selection” and “group selection” (and “kin selection”…) are fuzzy concepts.

    2. The most important distinction between morally good people and morally bad people is whether they’re playing the sociopath strategy (exploitative, negative sum) or the prosocial strategy (cooperative, positive sum). Of course it’s probably more of a spectrum than a dichotomy, but that’s the main axis. Both strategies are genetically selfish, but that’s irrelevant: we still judge the sociopath as evil and the helpful neighbor as good, and we’re right to do so.

    3. Our genes’ motivations aren’t our conscious motivations. Similarly, what is “rational” from a gene’s perspective isn’t necessarily the kind of thing we call “rational” in the human realm, and vice versa.

    Here’s what I consider to be the crux of where our perspectives diverge (although I could be wrong about this): I don’t think #3 is the full buck-stopper it’s often deployed as.

    You write:

    > There is no unbroken series of rationales that moves smoothly from our rational reasoning via hidden motivations to “self-interested” evolutionary purposes.

    I think I disagree. I would say:

    > There is (often) a (loosely-connected) series of rationales that moves (haphazardly) from our rational reasoning via hidden motivations to “self-interested” evolutionary purposes.

    Let me try to illustrate with a scenario. Suppose Mark is at work, and he’s tasked with making a decision that will affect his entire team. It’s not black and white, but one choice has first-order consequences that are better for his team, and the other choice has first-order consequences that are better for Mark personally. Suppose he thinks about this and makes the “right” decision — the one that’s good for his team. Consciously, I think he’s doing it because it’s the right thing. But I also think that less-conscious parts of his mind are processing the higher-order consequences, which in this case are good for Mark. In other words, he’s in an environment that (statistically) rewards good behavior. Counterfactually, if he was in a different environment where “doing the right thing” was an entirely self-defeating strategy (even after tallying up all the higher-order consequences, reputational effects, etc.), I think he might choose differently.

    I find these scenarios hard to talk about sometimes, largely because there are so many exceptions, caveats, and shades of gray. Of course there will be some Marks who would “do the right thing” in pretty much any environment, and for any given Mark, it’s hard to imagine which features of the scenario would tip them from good behavior to bad behavior. My point is that a scenario like the above is possible and maybe even common, and the appeal to conscious motives (and their divergence from genetic motives) doesn’t always stop the buck.

    Does that make sense? Are we connecting here, or talking past each other?

    The other place where I sense some disagreement is where you write,

    > Let’s say there’s a module in my brain that makes me feel that helping people is good, and that I want to do so (among other things I want). Assuming it works exactly like the module that actually exists does, what difference does it make how it got there?

    Unfortunately this was the main point I was trying to make in my post, so I guess I failed to communicate. It *definitely* matters how a module got installed in our brains — because if the module got installed because of environmental conditions that are no longer in effect, then we’re in trouble (sociopaths will start to win, the tower will start to collapse, etc.). Does that not ring true to you?


    1. Hi and thanks for stopping by 🙂

      I’ll skip your points of agreement (because, well, we’re in agreement) and jump directly to the crux:

      > There is no unbroken series of rationales that moves smoothly from our rational reasoning via hidden motivations to “self-interested” evolutionary purposes.

      Funny you should go for that sentence, as I was *this* close to cutting it before I hit “publish”. I felt it went perhaps just a tad too far. Truth is, my main reason for writing this is pent-up frustration with the kind of thing I mentioned at the end (and your article pattern-matched just enough to trigger a reaction), and as a result I’m not focusing as much on the subtleties as I should. I basically assume here a model where the mind only has two parts, rational thinking and basic emotions, and argue that while we might think the evolutionary calculus happens in the first, it has already happened outside us and resulted in the second.

      It’d have been more honest, I guess, to acknowledge that there is a muddy middle between them full of half-conscious thoughts and murky motivations that contains impulses more self-interested than we might want to admit. I’ll defend myself by saying that this piece is meant as a corrective to the strawmannish “we’re all selfish deep down” narrative and is to be taken against that background and in synthesis with it.

      My most important point still stands, I think, and that’s a rejection of that scary thought that moral feelings and moral behaviors are somehow fake. The warm fuzzies we get when helping an old lady cross the street aren’t a lie or some deception. I don’t mean that all our moral impulses are buck-stoppers, they certainly aren’t, but that some such feelings exist.

      How much of the “cold calculus” goes on in our head (without us necessarily knowing) and how much of it is “precompiled” into buckstopping terminal values is an interesting question, and I don’t know the answer. Whether the answer has any moral significance is another interesting question. Ultimately I think not so much, but my moral philosophy is pretty pragmatist.

      >Unfortunately this was the main point I was trying to make in my post, so I guess I failed to communicate. It *definitely* matters how a module got installed in our brains — because if the module got installed because of environmental conditions that are no longer in effect, then we’re in trouble (sociopaths will start to win, the tower will start to collapse, etc.). Does that not ring true to you?

      Sure! I hadn’t come across that point before, and I thought it was very interesting (and true). This is probably just a misunderstanding due to bad phrasing on my part: I agree it matters where the “moral module” came from, to the extent that the answer has bearing on how it actually works (and how it will work in an altered environment). But my hypothetical assumed that it’s exactly the same despite different histories, and in that case I don’t think it matters, unless we think it’s important that our morals come from some “pure” source we can use as ultimate justification.


      1. Cool, I think we’re in pretty strong agreement about the role of genes, conscious motives, and everything in between.

        This paragraph probably explains why we’re coming at it from such different perspectives:

        > It’d have been more honest, I guess, to acknowledge that there is a muddy middle between them full of half-conscious thoughts and murky motivations that contains impulses more self-interested than we might want to admit. I’ll defend myself by saying that this piece is meant as a corrective to the strawmannish “we’re all selfish deep down” narrative and is to be taken against that background and in synthesis with it.

        If your problem is (over)reacting to a strawmannish misunderstanding of evolution (that you encounter all too regularly), my problem is that I’m far too steeped in the role of half-conscious thoughts and murky motivations, having just finished writing a whole book on the subject. So I don’t think we disagree on any material facts here; we’re just giving things radically different emphasis.

        I do wonder if we’re connecting on this other point, though, about the provenance of “moral” modules.

        I’m with you as a moral pragmatist; I mostly don’t care what’s going on inside, so long as we get good outcomes. But I’m worried about which set of outcomes we’re likely to get!

        Suppose we have an instinct that makes 95% of humans go out of their way to help elderly people cross the street, via some of those inexplicable warm fuzzies. There’s no nuance to it: some brains just get off on helping old ladies, while a small percentage don’t. I think we agree that the outcome of acting on these warm fuzzies is definitely good (in a utilitarian sense): more elderly people get to cross more streets safely, without falling or getting smashed. So what does it matter how those warm fuzzies came to reside in our brains?

        The problem (I will argue) lies with the 5% of people who don’t get the warm fuzzies. If the warm fuzzies evolved in an environment similar to the one we’re in today, then the 95% are in evolutionary/game-theoretic balance with the 5%. BUT if the warm fuzzies evolved because of conditions that are no longer present, then the 95% of people who act on them today won’t get rewarded (by their environment), and will slowly bleed EV to the 5% as they “waste” their time helping people cross the street for no benefit whatsoever. Of course, it’s not a waste from a global (utilitarian) perspective, but still it would be better if those 95% of people were getting sustainably rewarded for their good deeds.
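The EV-bleed dynamic described above can be illustrated with a toy replicator-dynamics simulation. Everything in it is hypothetical: the payoff numbers, the `step`/`run` helpers and the exact 95/5 split are illustrative assumptions, not a model anyone in this thread actually proposed; it only shows the direction of the effect.

```python
# Toy replicator-dynamics sketch of the "EV bleed" worry above.
# All numbers (payoffs, growth rate, the 95% starting share) are hypothetical.

def step(p_helper, reward, cost=1.0, base=10.0, rate=0.1):
    """Advance the helper fraction by one generation of replicator dynamics."""
    fit_helper = base - cost + reward   # helpers pay the cost, maybe recoup it
    fit_other = base                    # the non-helpers never pay the cost
    avg = p_helper * fit_helper + (1 - p_helper) * fit_other
    # Types grow in proportion to their fitness relative to the average.
    return p_helper * (1 + rate * (fit_helper - avg) / avg)

def run(reward, p0=0.95, generations=500):
    """Run the dynamic for a while and return the final helper fraction."""
    p = p0
    for _ in range(generations):
        p = step(p, reward)
    return p

# If the environment still pays helping back (reward > cost), the 95% hold
# their ground; if the conditions that once rewarded helping are gone
# (reward = 0), they slowly bleed expected value to the 5%.
print(run(reward=1.5))  # stays above the initial 0.95
print(run(reward=0.0))  # drifts below the initial 0.95
```

The point of the sketch is only that identical warm fuzzies lead to opposite population-level trajectories depending on whether the environment that installed them still exists.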

        So the point I’m trying to make is that what we learn about our evolutionary history gives us a weird triangulation on facts about our current environment and which strategies are likely to succeed over the coming years. All else being equal, I’d rather live in a world in which good behavior is likely to succeed.

        Does that make sense?


        1. >If your problem is (over)reacting to a strawmannish misunderstanding of evolution (that you encounter all too regularly), my problem is that I’m far too steeped in the role of half-conscious thoughts and murky motivations, having just finished writing a whole book on the subject. So I don’t think we disagree on any material facts here; we’re just giving things radically different emphasis.

          Hah, yes, obvious now that you say it. We’re telling stories that put different aspects of the whole front and center, and consider one to be more fundamental than the other. Funny, because I’m writing a post on just that right now.

          The rest does make sense, but I still think you’re talking about something else than I am. Your worry seems to be that, depending on the history of the fuzziness module, it will or won’t work in large-scale societies? To me that’s still a question of how it actually works, not of its history, if that history is irrelevant to its configuration, right? Whatever, that’s an abstract, definitional issue. I do agree, strongly, that the exact nature of our moral and social instincts is of great importance for the viability of modern society in the future.

          I’d go as far as saying that what you’re describing is already happening. Large societies, where we live isolated lives outside a single strong social structure, and interact economically mostly with strangers whose reputations we don’t know nearly as well as those of people we’ve lived our whole lives with, enable dysfunctional and antisocial behaviors that wouldn’t be possible for long in a traditional environment where both social support and punishment are ever-present, local and personal – and thus effective.

          You can see human history since the invention of agriculture (but especially since the industrial revolution) as a continuing series of efforts to construct artificial institutions to deal with problems cropping up as a result of us living in an unnatural (large, settled, full of strangers) environment our moral instincts aren’t well equipped to deal with on their own. These unnatural societies tend to have flaws (oppressive, massively unequal, uncaring, prone to destructive wars etc.) and require a lot of effort to build and maintain. On the other hand, the moral and political philosophies developed in response (human rights, democracy, acceptance of non-conformity etc.) point towards a possibility of achieving something even better than the “natural state”.

          Maybe. It’s a crazy project for some naked apes, really.


  2. I think that a lot of the confusion is created by the intuition that our selves are somehow identical to our individual bodies. But they are not; our minds are regulators that just happen to run on the brains those bodies carry around. While this creates evolutionary pressure for the mind to contribute to the success of that body, the relationship between mind and carrier organism is a bit more complicated than that.

    In the same way that the software running on your phone does not care about the phone, the software running on an organism does not automatically care about that organism. There is no a priori reason to care about anything; if you want software to regulate a particular system, something must make it so. Due to evolutionary pressure, our nervous system has found ways to co-opt our mind sufficiently that it identifies with our model of our body, and cares about its well-being. However, the evolutionary success of our genotype depends not just on the maintenance of the individual organism, but also on the success of our offspring and the performance of our social group, and the resulting evolutionary pressure means that our minds are also regulators of the well-being of our children, our social environment, and the systems of meaning that give rise to the normative operating systems of our societies. This means that the software of our minds is usually implemented in such a way that it also identifies with things outside of our body. Our children and families, our circle of friends, our social group and the systems of meaning that drive the group’s egregore are all part of our self-concept, i.e. we identify with them as a part of us, and thus we try to regulate for their success, on average in proportion to how much they have contributed to the evolutionary fitness of our ancestors.

    Thus, true altruism does exist, it is the result of the pressures of multi-level selection, and there is a difference between people capable of altruism and sociopaths. Altruists are identified with an extrapersonal telos, while sociopaths are not.
    The existence of an extrapersonal telos enables non-transactional cooperation, in which you don’t help another individual because you expect to get something back in the long run, but because you perceive them as being part of a larger system of meaning that you are yourself in service of. The recognition that you are both serving the same extrapersonal telos is the core of love.

    Since there is no physical entity that would correspond to the extrapersonal telos, it must be entirely a projection by the cognitive system of the individual. The boundaries of the shared teloi are the boundaries between competing food chains. We love exactly those entities that are part of our telos, and are generally willing to devour those that are not, and the art of manipulating, shaping and stabilizing this projection is at the core of religion, ideological movements and charismatic leadership.


    1. That’s a very interesting take; I hadn’t quite thought of it in those terms before (the closest is perhaps the self that extends outside the body, from here), but I think you’re right. It would explain how people can get so attached to ideologies or cultural products that aren’t even alive. Caring for things like nature, works of art you don’t own, or non-kin others may be different facets of the same type of instinct, even though we think of them as different.

