Facing the Elephant

I should’ve come out with a review of Kevin Simler and Robin Hanson’s book The Elephant in the Brain last winter when I finished reading it. But then I had almost 10,000 words of quotes and random thoughts to organize, so I put it off.

And stuff kept getting in the way. I took out one train of thought and made it into its own article, and then April was approaching so I had to get my April Fools joke ready. Around that time the treasure that was the Harris-Klein kerfuffle fell into my lap, consuming the next month. Oh, and then the Eurovision Song Contest was up, so I had to write about that to maintain my personal tradition. The Harris-Klein article blew up (and made the blog about an order of magnitude bigger), its “decoupling” concept in particular, so I made the effort to develop that further while it was still relatively fresh.

I turned my attention back to my Elephant in the Brain mess for a while before one of my Twitter threads became a little bit of a hit, which made me expand it into Postmodernism vs. The Pomoid Cluster. To keep things active over summer I made sure to finish an old draft called 30 Fundamentals and publish it while working on this. Now, finally, months later, I’ve slain my Elephant-shaped white whale.

Why go on about why it’s taken me so long to get this article done? Well, I found these last few months’ experience interesting. It shows how hard it is to plan your writing and how it takes on a life of its own. Before I started this blog I had all these plans for articles and almost nothing worked out the way I thought it would. It’s a funny thing.

Also, this is my blog and I can go on and on about what I damn well please. People seem to like it. Now, for the main course:


The book is probably familiar to anyone reading this, so I won’t spend too many words describing it. The title is obviously a reference to the expression “elephant in the room”, referring to something everyone knows but avoids talking about. It’s a good title; it’s memorable and illustrates the message in two different ways: it’s something people sort of know but avoid talking about, and it’s something we, individually, avoid looking at in our own mind.

That something is the hidden selfish motives behind everything we do. It’s not the kind of hidden motives where we intentionally lie to other people about why we do what we do. No, it’s motives our mind hides from its own consciousness and self — its “PR department”, as Simler and Hanson put it. The existence and ubiquity of such motives have some major implications, as we shall see.

Their thesis can be stated in two short paragraphs:

In summary, our minds are built to sabotage information in order to come out ahead in social games. When big parts of our minds are unaware of how we try to violate social norms, it’s more difficult for others to detect and prosecute those violations. This also makes it harder for us to calculate optimal behaviors, but overall, the trade-off is worth it.

and

Our main goal is to demonstrate that hidden motives are common and important — that they’re more than a minor correction to the alternate theory that people mostly do things for the reasons that they give.

I won’t spend any significant time discussing the validity of their claims. Others have done that. I’m going to accept that what they say is to a significant extent true and instead focus on exploring the implications.

I wondered how much new information I could expect. A review by The Zvi, which I read first, recommended it especially to people unfamiliar with Robin Hanson’s blog Overcoming Bias and his signature “[thing] is really about [something else]” explanations. I’m not new to that at all, so I had some doubts about whether it was worth reading for me, but everyone in my Twitter feed was reading it so I felt I should too.

The book is intellectually ambitious in that it doesn’t just want to give us information or argue for a single point. It wants to install in its reader a new lens through which to view the world. Such books can be truly formative when you read them in your teens or early 20s, that pivotal time when your mind is a wild frontier instead of the settled society, used to gradual reform, that it turns into by your 30s. Given my accumulated mental inertia, the effect Elephant still managed to have is impressive. I wrote in Six Kinds of Reading:

I noticed a strange effect when reading it. I’ve read plenty of both Hanson’s and Simler’s other work and almost nothing in Elephant was new to me, but I still felt I was learning something. Ideas from the corners of my mind’s eye were pulled together and put front and center, and this had a particular effect on the structure of my mind.

What happened was this: while it was all familiar I’d only encountered it in bits and pieces before, and having the parts brought together in a single volume improved the structural integrity of my worldview. It made the whole edifice sturdier. The partial overlap of ideas across many topics fixed the system in place, like how papier-mâché becomes strong and rigid by layering paper strips partly on top of each other.

I knew of all the examples of unconscious motives, even complex ones. But they were still just a cloud of isolated noise points I couldn’t think of as a meaningful whole. I therefore, in a very real sense, didn’t think of it at all.

It makes you look a little closer at your own thoughts and behaviors. Specifically, it makes you view your emotional impulses with greater suspicion. I’ve always been curious about them and where they come from — thoughts and feelings, needs and judgments that just show up, uninvited and unannounced, to tell you in no uncertain terms and with little to no explanation what’s good and bad or what must and mustn’t be done.

Elephant makes the case for the most cynical explanation: what seems inscrutable and opaque if you look into it — and is just taken for granted if you don’t — is in fact the way it is for a reason. There are things going on in our minds that we are not told about because it’s easier to hide things from others if they’re also hidden from us[1].

Knowledge suppression is useful only when two conditions are met: (1) when others have partial visibility into your mind; and (2) when they’re judging you, and meting out rewards or punishments, based on what they “see” in your mind. These two conditions may hold for nonhuman primates in some situations. In the moments leading up to a fight, for example, both animals are struggling frantically to decipher the other’s intentions. And thus there can be an incentive for each party to deceive the other, which may be facilitated by a bit of self-deception. Just as camouflage is useful when facing an adversary with eyes, self-deception can be useful when facing an adversary with mind-reading powers.

So, many of our behaviors are the result of feelings, impulses and motivations whose origins and rationales are hidden from us. Their most common function is to show us off as valuable social allies[2].

After finishing this book I’ve been racking my brain trying to figure out why I didn’t think in these terms before. All the pieces were there. Elephant didn’t bring me much new information but it hammered its point home over 400 pages. It made me cram it all into my working memory at once so it could be sewn together, and presto — a mental schema was born. It promptly installed itself as a productive member of my staff.

It didn’t keep its head down, though. It in fact picked a few fights with other employees, which eventually led to some serious internal policy changes.

Lament revisited, or What’s a motive, really?

I’ve got a long history of reading about evolution and thinking about its implications for the self. I read The Selfish Gene at 17 and followed up with, among others, The Blind Watchmaker, Climbing Mount Improbable and Dan Dennett’s pan-evolutionary treatise Darwin’s Dangerous Idea. The simple yet complex beauty and staggering power of the evolutionary process were fascinating, and so was its capacity to explain part of why we are the way we are.

Naturally I’ve always wanted to defend the role of biology in explaining human behavior, and that has made me sensitive to people pushing vulgar interpretations of evolutionary psychology (and its predecessor sociobiology) that paint a needlessly bleak, cynical picture. It’s a mistake, it doesn’t follow, and it falsely convinces people to reject biology as a factor because of the perceived unpleasant (and false!) implications. The point of The Selfish Gene is not that we’re selfish, which many seem to think (including, I’m sure, a few that have actually read the book). It’s that we aren’t — and biologically speaking that needs an explanation. That explanation is that when we are not selfish, our genes are (metaphorically speaking). Our genes are, however, not us. Unselfish organisms exist because sometimes unselfish behaviors make it more likely that your genes will spread in the population.

So naturally I’m wary of arguments suggesting that something is “selfish” in an ordinary sense just because it’s “selfish” in a genetic sense. It’s a very important distinction to make.

My alarm gets tripped a few times when Simler and Hanson talk. For example, they discuss the altruistic behaviors of “babblers” (a kind of bird) and argue that they are in fact not altruistic at all:

Thus babblers compete to help others in a way that ultimately increases their own chances of survival and reproduction. What looks like altruism is actually, at a deeper level, competitive self-interest.

Depending on the exact meaning of “deeper level”, this is true, but it risks being highly misleading to people not sufficiently familiar with the jargon. This was my point in A Lament on Simler Et Al last December: we definitely don’t walk around thinking about our secret selfish motives, lying to everyone else. Our prosocial feelings (benevolence, gratitude, empathy etc.) are real, as in we’re not pretending we have them. Arguing that we’re actually selfish all the time is a misrepresentation, conflating selfishness in an ordinary sense with acting in ways that are beneficial to our genes (and sometimes us) for psychologically non-selfish reasons.

Now, if I also accept the message of Elephant it seems like I’m contradicting myself, doesn’t it?

Yes, and I’ve changed my mind. The thesis of Lament depended on dividing motivations into two distinct categories: explicit instrumental reasoning vs. terminal values implemented in emotional impulses. The latter are absolutely real and fundamental, and it’s beside the point that it’s an evolutionary, game-theoretic logic that has given us these feelings and motivations. It all happened outside the mind, far in the past, and we’re not responsible for it. I don’t care for my children or help my friends because an intentional calculus suggests I’ll be able to spread my genes or reap rewards later on. I really do love them.

I stand by that point. There’s a selfish-of-sorts calculus behind our motivations, but whether it was performed by evolution before we were born or if it’s done intentionally right there in our conscious minds makes a big, big difference.

[Diagram: the “selfish” calculus performed by evolution, outside both mind and self]

In this picture, we can wash our hands of the “sin” (i.e. the “selfish” calculus) because it’s outside our minds and selves.

Unfortunately for past me, Simler and Hanson find the flaw in this reasoning, and it’s a big one: it rests on the assumption that the mind and the conscious self are one and the same. When you relax that assumption it opens up the possibility of a self-interested calculus going on inside our heads, but still outside the conscious self.

[Diagram: the “selfish” calculus performed inside the mind, but outside the conscious self]

According to this model that Elephant seems to endorse as at least partially correct, the mind is larger than I thought — and larger than it feels from the inside. And it performs some of the functions I thought of as “outsourced” to evolution. Out of sight is not out of mind.

I really should have seen it. The evidence was everywhere, and if I had looked at things more carefully I would have understood that our prosocial instincts could not have been quite as blunt and across-the-board as my first model suggested. They’d be open to exploitation. Instead, they are remarkably shrewd. I don’t keep track of favors between me and my friends, but there’s clearly something within me that does because if it gets too unbalanced I, like other people, instinctively react with either resentment or guilt depending on how it leans. And that’s just one example. Further down I discuss the chapter on charity, and there too it becomes obvious that the “strategicness” of charitable behaviors is too sophisticated to be entirely “precomputed” by evolution and compiled into generic fuzzy feelings.

I confess I made it easy for myself in Lament by avoiding this. I did the same in Erisology of Self and Will where I say that biology acts through the self, not past it, and thus doesn’t undermine agency. I made that point too strongly — sometimes mental causation does work past the conscious self. I simply didn’t think, I realize now, that this kind of complex, high-level calculation could be done by any part of the mind other than the rational self. Frankly I’ve always dismissed such ideas as Freudian nonsense and considered the undeniable exceptions to be ignorable outliers and not an important phenomenon in its own right, with serious implications for how to think about the mind.

Elephant suggests we need new solutions for some important questions. If my brain and my self are not the same and my mind is manipulating my self into doing something for reasons I’m not aware of, does this count as a motive? Is it an intentional action? Can I or can I not wash my hands of it? What is really the reason I’m doing it?[3] Our folk psychology doesn’t have a good answer, and if philosophy of mind does it is one I have yet to come across.

Our everyday notions of motive, action, culpability, virtue, selfishness, responsibility, causes, reasons etc. aren’t suited to dealing with a will that functions this way. Free will supposes that our actions are authored by a conscious self that can be reasonably viewed as a coherent, indivisible entity. I’ve argued before that this view is resilient and can withstand the standard threats from biology and physical determinism. But this is worse. This complicates things, and from an unexpected angle. It assaults not the self’s freedom as such but its coherence and its capability to make informed choices.

Our traditional view of will and motivation doesn’t hold up very well, hence the concepts dependent on it are on shaky ground and we’ve got a lot of refactoring to do. We should get on it, not just because it’s necessary but because it could potentially help us address a lot of problems.

The field of tension

I remember having a sort of epiphany many years ago when I heard a reporter ask some teenagers what they wanted from politicians and one of them said that she liked when they got upset and emotional because it showed they care.

In an instant, it turned my head inside out. I had never thought of it like that. I guess I had just assumed that other people shared my distaste and lack of respect for politicians who got flustered, emotional or too passionate in debates or speeches. It demonstrated to me that they couldn’t be trusted to keep their emotions and biases in check and look at things soberly. They couldn’t be trusted to work on dispassionately keeping their model of the world accurate, which is absolutely necessary for effective management and improvement.

I had never realized that what many people were looking for in politicians was conviction, not expertise or rationality. Will, not skill. I, on the other hand, had never particularly doubted politicians’ conviction.

This came to mind when I was mulling over the chapters on charity and medicine. How come? Well, they’re also examples of how our social intuitions aren’t conducive to solving complex problems.

Let’s back up. The standard view of charity is that if we’re good people we want to do good. And we do: those impulses are to an extent genuine in the way described in the last section. It’s just that they’re subtly but clearly misdirected in a way that suggests that their ultimate function is something other than to simply do good. Simler and Hanson suggest that if doing good were our main motive we would, for example, be very concerned with how well what we’re doing is working. And according to the research, we aren’t. Not at all. Instead these impulses exist because — you guessed it — they advertise our value as an ally.

[T]he incentives to show empathy and spontaneous compassion are overwhelming. Think about it: Which kind of people are likely to make better friends, coworkers, and spouses — “calculators” who manage their generosity with a spreadsheet, or “emoters” who simply can’t help being moved to help people right in front of them? Sensing that emoters, rather than calculators, are generally preferred as allies, our brains are keen to advertise that we are emoters.

If doing the most good in total were the true purpose, we wouldn’t want people to be uncontrollably moved by someone in pain right in front of them. Instead we’d consider people virtuous if they were to dispassionately weigh their options and do whatever brought the most good. But if we’re shopping for friends instead of abstractly judging virtue, we do want people to privilege what’s close — because it’s likely to be us[4].

Charity exists in that field of tension between what feels right and a more considered view of how to best achieve our stated goals. The authors’ argument (and I agree) is that listening too much to what feels right and too little to what our rational mind is telling us is a problem when “what feels right” is contaminated by an invisible self-interest module in our brains. Explicit, rational thinking is transparent in a way feelings aren’t, and therefore we’ve got a better shot at rooting out its flaws.

The chapter on medicine comes to a similar conclusion: medicine is not so much about improving health outcomes as it is about showing how much we care. This is supported by things like people preferring elaborate procedures with conspicuous sacrifice:

Patients and their families are often dismissive of simple cheap remedies, like “relax, eat better, and get more sleep and exercise.”[5] Instead they prefer expensive, technically complicated medical care—gadgets, rare substances, and complex procedures, ideally provided by “the best doctor in town.” Patients feel better when given what they think is a medical pill, even when it is just a placebo that does nothing. And patients feel even better if they think the pill is more expensive.

Another piece of evidence is that people appear to care much more about doctors’ bedside manner than their hard skills. Hanson and Simler discuss experiments that show people are shockingly uninterested in information that could help them make better decisions about which hospitals and doctors to go to. This is highly strange if we assume that we have a rational interest in getting better. In the charity case it makes sense not to be concerned about outcomes: we don’t notice much difference because the outcomes aren’t ours. But in this case, if we’re totally selfish, we should care a lot about effectiveness! Why are the patients themselves playing along in this signaling game?

Maybe this is something deeply primal? We evolved in an environment where there was no such thing as effective medical treatment beyond the extreme basics. In such a situation your chances of recovery depend entirely on whether you will be protected and fed while you rest. In other words, it’s crucial that others are willing to make sacrifices to take care of you. So that’s what we want on an emotional level when we’re sick: to be cared for, and to know that we won’t be left behind when we can’t pull our weight. Simler and Hanson have a few examples (like cooking instead of bringing takeout) that suggest visible sacrifice explains much of our behavior when caring for sick loved ones.

Modern medicine, which actually works but in difficult-to-understand ways that require specialized competence and not just caring, doesn’t jibe with this intuition. As a result we put too much emphasis on our doctor making us feel cared for and too little on them having a good track record on outcomes — and as societies we care too much that treatments demonstrate that no expense is being spared and too little about their measurable benefits.

Medicine exists in that same field of tension between what feels right and a more considered view of how to achieve our stated goals. In both cases, what feels right is really the result of unconscious mechanisms that serve the purpose of detecting and ensuring loyalty and care. They do not detect the competence required to solve the problem. If we assume that what feels right and what actually achieves our goals are always the same[6], we’ll blindly follow our intuitions and end up with less-than-optimal outcomes. In some cases we understand this: lying on the couch eating pizza feels right to me, but I know it’s not good for me in the long run. It gets a lot thornier when all the benefits and costs aren’t concentrated in a single person.

I started this section by reminiscing about politics, and it has a lot in common with both charity and medicine: it also exists in the field of tension. It’s about trying to do good, but it’s complex and difficult to do it well. Our intuitions are not calibrated to handle such situations, because this sort of problem is evolutionarily novel, an artifact of large, complex societies. We fall back on judgments adapted to a more straightforward, small-scale clan-life kind of problem solving, where care and loyalty are what’s in short supply rather than the ability to understand and solve the problem.

That raises the question: what does Elephant say about politics?

Hooooo-boy.

Politics, the antimatter of engineering

The book closes with a chapter on politics that I read with a heavy heart. They demonstrate how political opinions are about signaling loyalty to your side, and how expressing opinions isn’t about offering information as a contribution to collective deliberation but about indirectly offering information about yourself: who you are with, whose side you would take in a conflict, who can trust you.

As someone of a fundamentally nerdy, individualist persuasion this bothers me a lot. After all, a team-based popularity contest with analytical correctness reduced to irrelevant noise sounds like a decent description of nerd hell.

It’s no coincidence I feel like this and have an engineering degree. In many ways, the mindsets behind engineering and politics-as-coalitional-power-struggle are opposites. Politics is about securing a big piece of the pie for you and your side, while engineering — in the broad sense of constructing things in interaction with inanimate reality — is about making pie. I like making pie and I hate zero-sum fighting (and the quasi-fighting that is coalition building). I wrote in 30 Fundamentals:

I massively prefer creation and synthesis over other processes, I think in literally all areas of life. When I played computer strategy games as a kid I liked to place the enemies as far away from me as possible so I could build my civilization in peace. When fighting became inevitable I lost interest and started a new game. I think I actually spent more time modding such games than playing them. This has a big influence on my attitudes to everything, especially politics.

Not just that, I remember playing Age of Empires, Civilization 3 and Sid Meier’s Alpha Centauri and actually resenting having to spend resources on military units to defend myself. It felt like such a waste. I wanted to spend it all on building prosperous, beautiful civilizations.

That’s what’s so frustrating about politics-as-a-field-of-human-activity. It has elements of both engineering-as-positive-sum-creation and politics-as-coalition-building-and-zero-sum-fighting, and the second gets in the way of the first. That has to be resisted. More than resisted: it has to be fought. That sounds paradoxical, but in this case “fought” means “smothered by successful building”. There’s a veneer of rationality to politics just like there’s a veneer of rationality to all argumentation, and it needs to be maintained or everything starts to degrade.

That bit of lip service also gives us something to hold on to and work with. If we make enough effort to hold people to rational standards it’s possible to make the engineering-like, positive-sum aspect of politics bigger and the conquest-like, zero-sum aspect smaller.

So when you’re in the public sphere you have a choice: you can help build the trust, understanding and a cooperative spirit that supports positive-sum solutions, or you can set such social capital on fire by stoking hatred, encouraging disrespect, dismissal and condescension, fostering misunderstanding and rewarding loyalty over fairness. One point I missed in 30 Fundamentals might have been that my basic political orientation is metapolitical: for maintaining and building social capital, against burning it for personal or coalitional gain.

The last few years have been depressing.

To get a bit more personal and passionate: the open desire to “make things political” by insisting that the personal is political, that art and research are political, yes, that everything is political, is near the bottom of my list of things to have sympathy and respect for. I feel viscerally disgusted by it, the way I imagine conservative Christians feel about abortion or communists about investment bankers.

I don’t dispute that it can yield valid insights to do this. It’s just so alien to me to want to do it, because political thinking is so dysfunctional as thinking. By encouraging approaching things with a political eye, you invite the corrupting influences of coalitional and loyalty-based concerns on anything that runs on results-oriented, rational and open thinking. Wanting this to happen is to me like just wanting to ruin something, to hijack it for extracting rather than producing value and to destroy carefully built-up trust in the process. It’s defection. Leave your weapons at the door please. No war in here, nor the precursor to it by other means.

I do my best to empathize with politics-pushers (I do understand that they have good reasons sometimes) but it’s hard when the anger comes screaming from the depths of my brain stem.

Oh.

Wait a minute.

No…

This is precisely one of those cases where a moral given just shows up in the mind, guns blazing, and puts me to work spinning justifications for why it’s true and important. As we’ve seen, this is reason to be suspicious. And sure, if we look carefully it’s obvious that my feelings on this point originate from self-interest: my personal strengths lie in creation and analysis and my weaknesses in relationship building. Of course I’ll notice and be Very Concerned about us intuitively overvaluing care and loyalty over analytical skill (to be frank, it’s also why I’m so otherwise inexplicably pissed about the art world being more about getting attention than creating). What I want is to have my strengths valued more and my weaknesses valued less. Naturally. “Local man’s values in line with own interests — more at eleven!” I guess.

At this point I sort of feel like the foundations of my mind are crumbling. Are there elephants all the way down? I certainly hope not, and since I’ve just argued that our nature does have better angels we can feed or starve I also don’t quite believe it. And I still think my story about the awfulness of political thinking has a good deal of validity to it even if it isn’t equal to The Truth. I do find it hard to keep arguing for it when I understand better that it’s mostly not its validity that makes me feel so strongly about it.

What do we do with this?

In their conclusion the authors wrestle with the possible consequences of publicizing these ideas. What if people took this message to heart:

We try to cultivate allies and undermine those who aren’t allied with us; we angle to take credit for successes and avoid blame for failures; we lobby for policies that will benefit us, even when we have little reason to believe those policies will benefit the entire group. We tell people what they want to hear. But of course we don’t do this out in the open. We don’t say to our enemies, “I’m trying to undermine you right now.” Instead we cloak our actions in justifications that appeal to what’s best for everyone.

And we believe it ourselves, because if we do it works better. Maybe making people more aware of this means stripping away the protective white lies that keep society working and people tethered to at least some moral code. If we keep saying that politics isn’t about policy but about coalitions, and charity isn’t about doing good but about signaling empathy, people might take it as license to be as nakedly selfish, collectively or individually, as they please.

Yeah, there’s a risk. But just as we can build social capital by insisting on maintaining at least the appearance of fairness and rationality in a way that spills over into real good behavior, I think there’s good reason to believe that engaging in a performance of kindness and selflessness does make us kinder, more selfless people. The circuitry from feelings to actions isn’t one-way only. What you do, you also become.

It makes me think of the movie The Invention of Lying. It had an interesting idea (before it became about religion and went down the drain): never lying meant no pretension and no polite fictions. By extension, that meant no internal conflict between what we really feel and what we say and do. If we didn’t have to keep track of what we should say and do, there’d be no need for anyone in our mind to hold our selfish and superficial impulses back. By recognizing that we need to adhere to positive fictions we grant more power to the parts of our minds that really believe in them[7].

Myths about spirits and gods whose powers wax and wane with our belief in them and practice in their service may not be literally true but still capture something important about our psychology.

The “conservative” solution is thus: keep what works; it’s not perfect but it’s better than risking it all. But by writing this book Simler and Hanson, however reluctantly, signal (besides their own cleverness, of course) their support for a more “radical” approach. Our hypocrisy might have benefits, but the costs are great too. We’re wasting a lot of resources on misdirected charity, education arms races, conspicuous consumption, unnecessary medicine etc. that would do a lot of good if we could direct them more effectively towards their supposed ends.

For that, everyone (or at least the culture at large, whatever that means in practice) needs to get on board. But how do you convince people of facts that whole freaking brain systems are in place to prevent us from thinking about?[8]

Some say everyone already knows this. Robin Hanson wrote a blog post responding to reviews, which included assertions like “this is obvious to everyone not a hippie” or that it’s just a new “packaging of known ideas”. They do acknowledge this in the book when they talk about education:

Now, none of these “hidden” functions of school are all that hidden. It doesn’t particularly bother us to admit that primary school works well as day care or that college is a great social scene. Nevertheless, these functions get short shrift in public discourse. All else being equal, we prefer to emphasize the most prosocial motive, which is that school is a place for students to learn. It costs us nothing to say that we send kids to school “to improve themselves,” which benefits society overall, and meanwhile we get to enjoy all the other benefits (including the signaling benefits) without having to appear quite so selfish and competitive.

Regardless of exactly how aware people are, I think we can say with virtual certainty that they’re right that this perspective is underacknowledged when we talk in public. In other words, even if many to most people accept it on various levels, the culture as a whole does not. It’s not common knowledge in the sense of being considered “on the table” or “out in the open”. That leaves its rightful place in public discourse empty and thus it isn’t the factor in society’s idea-churning process it should be, which means that we as societies don’t make effective use of this knowledge. That’s a big problem, because game-theoretical hurdles like these require collective, intentional action — which in turn requires common, out-in-the-open knowledge[9].

A common problem plagues people who try to design institutions without accounting for hidden motives. First they identify the key goals that the institution “should” achieve. Then they search for a design that best achieves these goals, given all the constraints that the institution must deal with. This task can be challenging enough, but even when the designers apparently succeed, they’re frequently puzzled and frustrated when others show little interest in adopting their solution. Often this is because they mistook professed motives for real motives, and thus solved the wrong problems.

Savvy institution designers must therefore identify both the surface goals to which people give lip service and the hidden goals that people are also trying to achieve. Designers can then search for arrangements that actually achieve the deeper goals while also serving the surface goals — or at least giving the appearance of doing so. Unsurprisingly, this is a much harder design problem. But if we can learn to do it well, our solutions will less often meet the fate of puzzling disinterest.[10]

Elephant could perhaps accomplish something by exposing these dynamics and thus limiting their corrupting influence, which would enable us to construct more effective institutions. But it’s an uphill battle. We’re up against human nature itself. And against Moloch.

But it’s not the first time. History shows that dramatic change is possible. Going from particularist, tribalist morality and despotism to universalism, rule of law and individual rights was also a shift that could’ve reasonably been dismissed as utopian. I don’t think my medieval equivalent would’ve ever thought it possible. Yet it happened, fuelled by prosperity, trade and technological development but also by philosophical ideals showing us what to aim for, and understanding of our nature showing us what to avoid. Modern institutions emerged to prevent the worst, most obvious problems stemming from human corruptibility. Can we aim even higher when building new ones? Maybe. But the legitimacy of such institutions required agreement on what the foundational problem was: even good people cannot be trusted with unchecked power. To build new institutions capable of directing efforts towards worthy goals effectively we need to agree on the next-level problems. The Elephant in the Brain gestures in their direction.

 

• • •

Notes

[1]

Remember, it’s not a lie if you believe it.

— George Costanza

[2]
This is convincing on the surface but some things don’t add up. If we’re looking for allies, shouldn’t we look for people with complementary skills to our own? Shouldn’t extroverted leader types be impressed by the narrow-focus expert? The scrawny nerd be in awe of a peak physical condition Chad? Maybe I’m wrong but I’m not seeing that at all; most people seem to ignore or resent those with vastly different skill profiles and instead admire the similar but not identical. Why? Is there an expected value calculation going on? Value as an ally times likelihood to become my ally?
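
To spell that last guess out (my own notation, a sketch rather than anything from the book): the appeal of a potential ally $a$ would be something like

$$\mathrm{appeal}(a) \approx V(a) \times P(\text{alliance} \mid a),$$

i.e. the value $a$ would bring as an ally, discounted by the probability that an alliance with them actually forms.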

[3]
This involves getting rid of the idea that there is one such thing as the reason, which reduces to the general problem of getting people to think in terms of non-excluding partial narratives.

[4]
I guess I knew this but saw it as just a bug, the same way I used to think of is-ought conflation. But they aren’t bugs. They’re not random errors or noise; they’re the kind of anomalies that necessitate a paradigm shift. I’ve been shying away from this realization because of its, for me, uncomfortable implications: it’s definitely, and on some level justifiably, socially valued to be unable to control your emotions. This is a tough and bitter pill to swallow for someone who considers rationality, cool-headedness, consistency, integrity and self-control cardinal virtues.

At the end of the chapter they talk about how to improve charity by making it signal those virtues instead:

The other approach is to learn to celebrate the qualities that make someone an effective altruist. As Bloom points out, it’s easy (perhaps too easy) to celebrate empathy; for millions of years, it was one of the first things we looked for in a potential ally, and it’s still extremely important. But as we move into a world that’s increasingly technical and data-driven, where fluency with numbers is ever more important, perhaps we can develop a greater appreciation for those who calculate their way to helping others.

Maybe we can. While local, involuntary acts of charity advertise your value as a personal ally, effective altruism-type charity and its associated virtues advertise your value as a citizen of a large society. We don’t value this enough intuitively because it’s not natural (large societies aren’t natural), so some “artificial” means of non-local social recognition would be necessary. But I’m not holding my breath; it’s tough to go up against human nature.

[5]
I didn’t like this part and I’d like to point out that “eat better and exercise” is not simple. It’s simple as in “not complex” but not simple as in “easy”. It requires a lot of work and sacrifice for an uncertain payoff, much more than an expensive pill does.

[6]
Based on a discussion thread I read the other day I’d call this the “heroic” view. If what feels right is right, then acts that show the greatest personal virtue, courage and sacrifice will also have the greatest beneficial consequences. Heroic stories are set up in such a way as to make this true. In reality, virtue ethics and consequentialism diverge quite considerably in their attributions of greatness.

[7]
That’s why I don’t like cynicism except as a joke (or a coping mechanism), and I think it’s at least as big a sin as naivety. In Rant on Arrival I said that I greatly prefer The West Wing over its mirror image House of Cards. That’s not because I’m hopelessly naive but because I’m unimpressed by gratuitous “look-how-dark-and-edgy-this-is” coquettishness — simplistic, detached nihilism is cowardly, and a cheap substitute for sophistication. It’s also because I think affirming ideals by showcasing straight-up, unironic virtue is an important, if unfashionable, function of art.

[8]
It’s hard to bring a mere partial narrative all the way to common knowledge status because its very nature admits the existence of valid alternatives. An unflattering and unsettling partial narrative is especially likely to get short shrift because we tend to believe accepting it means we have to reject the “opposite”, flattering one.

[9]
There’s a parallel between proactive ideas and common knowledge here: for an idea to take charge in the sort of chaotic, attention-grabby cognition that goes on in systems like minds and cultures, it needs not just existence but the kind of recursive awareness of awareness of awareness that characterizes common knowledge.
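
For the formally inclined, the standard epistemic-logic definition (textbook material, not something from the book) makes that recursion explicit. Write $E(X)$ for “everyone knows $X$”; then $X$ is common knowledge only when every level of the tower holds:

$$C(X) = E(X) \wedge E(E(X)) \wedge E(E(E(X))) \wedge \cdots$$

Mere mutual knowledge stops at the first level.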

[10]
It’s hard to read this paragraph and not think of Hanson’s support for prediction markets and futarchy, and their failure to inspire much enthusiasm despite their obvious benefits.

Did you enjoy this article? Consider supporting Everything Studies on Patreon.

7 thoughts on “Facing the Elephant”

  1. Thanks for the v. thoughtful review John. I feel like you’re grappling with all the same issues as I am, and coming to many of the same conclusions.

    Here’s something I’ve been wanting to say, and this seems like as good a forum as any to say it. In the process of writing Elephant (as you experienced while reading it), I found myself questioning pretty much everything — about my own mind, how people relate to each other, what’s “good” and “bad,” etc. But the one thing that I could never assail, the thing that kinda feels like a North Star, is the fact that positive-sum games are better than zero-sum games.

    “We should be playing more positive-sum games.” For certain definitions of “we” and “should,” this is almost a tautology. So I wonder if it’s possible to use something like it as the anchor for a philosophy of self, ethics, etc., in the face of Elephant-like challenges to the traditional concepts. Any thoughts on this, or related ideas, would be much appreciated.


    1. I’ve been thinking about this for a while and of course I can’t help but agree that positive-sum behavior is morally superior. I don’t exactly know how much you can get out of that, given that taking second and third order effects into account makes knowing the exact results of your actions practically impossible. I’m not so sure it doesn’t reduce to utilitarianism and its “do the ends justify the means” problem. As a heuristic, sure, but it doesn’t really answer those who, say, want to engage in negative-sum behavior in order to achieve some ideal of justice. Is that always wrong? I’m not prepared to say that necessarily.


  2. Comment on footnote #6. Consider that calculating consequences is often untenable. Now reimagine virtue ethics as such: rather than a fixed set of virtues (from your text: courage, sacrifice, what “feels right”) there is a dynamic cultural “character hierarchy” in which new virtues get inserted as needed (for example: alignment with the oppressed, cool-headedness, fairness) as awareness spreads within the culture of the threat caused by their absence. Such a system, if it works, might make heroism look pretty good.


  3. I think it is high time we eschewed the notion of “free will”. I think of it as a religious artifact and doubt it jibes well with trying to reconcile hidden motives with true preferences. This is where I think economists get it: incentives. Once we are able to establish or identify what the incentives are, the motives become clear and free will becomes irrelevant, at least in this domain of knowledge, which is probably generalizable.


  4. Interesting read as always, thanks.

    Regarding footnote 2.
    “Maybe I’m wrong but I’m not seeing that at all, most people seem to ignore or resent those with vastly different skill profiles and instead admire the similar but not identical. Why?”
    Because vastly differing skillsets were not an important, reliable feature of the ancestral environment?[1] Division of labor is such a fundamental aspect of civilization that we easily forget it is an invention of civilization.
    However, the ancestral environment reliably featured enormously dangerous creatures living in the neighborhood whose only difference from your ancestors was their slightly different way of speaking and behaving. Chad is not a baker on whose self-interest you rely for bread; he is a Hatfield to your McCoys.

    [1] Gender differences being the obvious exception, but evolution has indeed equipped us with a specific set of emotions to ensure we form alliances across that divide.


  5. I think your vocabulary still doesn’t agree with that of Hanson and Simler. They would consider your actions “intentional” in both your diagrams. The deciding factor is that they result from a calculation and what that calculation considers, not where it runs. Essentially they started out with a revealed-preference model of intentions and then learned about all these causal factors.

    Now if your intention concept has to double in a non-predictive role, such as with your ethics, then adopting this sort of view can be quite inconvenient, but that isn’t really because of what they describe: the revealed-preference model has these sorts of difficulties in many places (look up Caplan’s old piece on mental illness for example). I would advise that we split the intention concept into revealed-preference-intention used for prediction, and humanistic-intention used for ethical arguing. To keep connotations constant, rename the empirical concept, into elefant maybe.

