I recently had the pleasure to be on the Intellectual Explorers Club Podcast hosted by Peter Limberg. We discussed erisology, culture war, rationality and postrationality, modernism and postmodernism, communication, and cultural fragmentation.
This post includes some observations from the experience and some clarifications/elaborations on things I said there.
Having never been on a podcast before I learned a lot. First of all, being a good podcast guest is harder than it looks (sounds). I came away with a new admiration for people who can sound eloquent in recorded conversation. You really do have to try to steer towards topics you already know how to talk about, because thinking as you speak makes maintaining flow hard. I now understand why many public intellectuals push their “thing” all the time. It’s not just brand management, it’s because that’s what they’ve practiced talking about. I’ve barely spoken about my ideas out loud before this and the transition from writing to speech was more dramatic than I had imagined. I don’t mean to say it went badly. I give myself a passing grade but I would’ve liked to be better.
Of course, one thing is that speaking in a second language is different than writing in it, because the small processing overhead suddenly becomes significant. Listening to our conversation I hear verbal tics I’ve never noticed before, and some slips from fluency I would normally miss. I also, to my chagrin, have a bit more of an accent than I like to think.
My main source of prior nervousness was going silent from having nothing to say, so I overcorrected by speaking too much and too long a couple of times, especially near the end. That’s my fault and Peter deserved better answers to some of his questions. As discussed in my recent meta-meditation, I have problems keeping my thoughts on track and I’m leaning towards it being a bad thing.
Maybe more experienced interviewees feel less stress about keeping the conversation flowing and can be more relaxed and attentive. Speaking face-to-face would also make silence less scary, I suspect.
I hadn’t anticipated the issue of having to balance talking to Peter and talking to the listeners. I know that Peter’s read my blog and already knows a lot about what I think, but of course I can’t assume that the audience does. The result was an attempt to both explain some basic ideas and concepts and explore new ones. It’s a difficult balance to strike.
If I get the chance to do a podcast again there are a few things I ought to remember:
Easy on the “sort of”s.
Practice explaining central ideas beforehand. It’s hard to improvise smoothly even if you sort of(!) know what you want to say.
Be succinct. That is to say: avoid reiterating points, or say stuff again in slightly different words, like when you explain a basic thing but think you should say more — which means, essentially, that I should spend less time trying to develop a point more than it needs to just to come back to the starting point again as a conclusion. So, in short, I guess, try to be succinct. Like, not too long. Not longer than what is needed. Because that gets tiring, and doesn’t add much to the conversation, which reminds me of…
Be more goal-oriented. I clearly have a tendency to forget what point I was going to make and go off on tangents. That ought to stop.
Listen better. I felt pressured to always have something interesting to say, which made it hard for me to properly listen to and think about what Peter was saying. It needs to be more of a two-way conversation.
Now for some further comments on topics discussed:
On rationality and transhumanism
Around the 12 minute mark Peter says he considers me part of the rationalist blogosphere and I say yes, but point out that I feel about as much affinity with the postrationalists.
While I think the rationalist community (inasmuch as it makes sense to ascribe one position to it) is right about the nature of reality, I differ a bit when it comes to interests and attitude. In the podcast I mention utilitarianism and far-future AI as things that are of little interest to me, and decision theory, self-improvement and especially transhumanism can be added to the list. Maybe I’m getting old and maybe having children does things to you, but I don’t much enjoy discussing or thinking about a radically different far future where alien, abstracted morality questions may come into play because we have the power to reshape the nature of mind and reality itself. I find it disturbing, bordering on existential-horror-inducing, and have come to prefer the here and now. This is a marked difference from the younger me, and perhaps growing up has made me more skeptical about the potential impact of radically different conditions on human psychological well-being. I have difficulty seeing — or feeling, more accurately — how transhumanism or superintelligent AI could lead to a future full of happy people. So many things can go wrong, and the more powerful we get the more wrong they can go. And overcoming all adversity is a noble goal, but I have a hard time believing we’ll be happy when we do. It’s a bind and I have no solution. I guess that’s why I don’t (yet) mind that I’ll die some day.
On cognitive decoupling and contextualization
Around 27 minutes in Peter asks me about cognitive decoupling, which is something I should have a better elevator pitch for than what came out. I’ve written a fair amount about it (here and here) but I find it hard to explain briefly what it means without bringing up examples (and I’m also not sure that what I mean by it is the same thing its originator Keith Stanovich means).
At its most general it just means looking at a single issue/question/idea/fact at a time, and nothing is on the table unless it’s explicitly said to be. Related ideas, implications and historical or symbolic associations etc. can only be brought in with the consent of all parties. Contextualizing, on the other hand, means that all associative connections between ideas are valid by default and implications/associations count as relevant if any party thinks they are. It’s a simple idea but it takes a lot of examples to show how important it is for the mechanics of many complex disagreements. I didn’t quite do it justice on the podcast.
On modernism and postmodernism
After the decoupling question Peter asks me whether modernism and postmodernism map onto decoupling and contextualizing, and I don’t quite know what to say. Part of the problem is that I’m wary of the word “postmodernism”. It’s used in many senses and I wasn’t sure whether he meant what I would call postmodernism proper — the idea that social reality is made of words (including knowledge about physical reality, because facts are encoded in language) and since words lack objectively correct and stable meanings so does social reality — or if he meant the quasi-ideology I call the pomoid cluster, or maybe postmodernity (the cultural milieu postmodernist philosophy describes), or finally steelmanned postmodernism (which I more or less believe in myself): a measured critique of modernist excesses while incorporating its strong parts.
I think (as I mentioned) that the dichotomy makes more sense mapped onto the difference between science (or STEM) vs. humanities (and perhaps even more, arts) thinking styles, and how valid it is to connect it to modernism/postmodernism depends precisely on how valid it is to say that modernism = “STEM” and postmodernism = “humanities/arts”. So: yes, a bit, but simplified, and I don’t think the decoupling-contextualizing axis adds much to the modern-postmodern distinction that isn’t already there in the standard accounts.
On politics in insight porn
After about 35 minutes I talk about how I don’t like when narrative-style philosophy or humor is constructed for political reasons. What I mean is that both humor and “insight porn” rely on people suspending their skepticism. Insight porn doesn’t work if you keep asking for empirical evidence or airtight argumentation — it’s just as much an art as it is a medium for truth claims, and since we humans are so susceptible to narrative, consuming art amounts to letting ourselves be manipulated. That requires that we trust the artist not to use the powers of the medium to alter our minds in service of their own political interests.
In other words, I have the intuition (which I intend to explore further in a future post) that ideas with political potency should not be communicated through media that rely on deemphasizing our critical reasoning faculties. It lets politics pass without scrutiny — or, if the politics is detected, causes critical reasoning to be activated when it shouldn’t be (happens to me a lot), which interferes with enjoying the art.
As I later mention and dismiss, you can call almost anything political if you want. I avoided elaborating on this because I find the topic so tiresome. It’s true there’s no clear line between the political and the non-political and avoiding anything with detectable trace amounts of politics limits you too much. One way to get around this is to become less sensitive to narratives and less likely to believe any compelling thing you hear. The best way to do that is to just collect a lot of them.
But that takes time, effort and will, and in ideological monocultures people can create insight-porny systems of ideas that get confused for the truth (like, oh I don’t know, theology, “grievance studies“, or various “red pills”). Still, I find it hard to categorically condemn any and all creation of ideological systems — that would be a huge overreaction. Being on our guard against such systems monopolizing entire spaces is the best we can do.
It brings to mind this (paraphrased) quote I can’t find now:
Where there is one religion, there is oppression. Where there are two, there is war. Where there are fifty, there is harmony.
On the meaning of “liberal”
Peter asks me about I, LPC at around 44 minutes and I don’t have much to add about it except one thing: I’m familiar with how the meaning of words change. It happens. It’s a fact of life. That doesn’t mean that the way it happens can’t be bad. It’s bad if it happens differently in different contexts, because it creates misunderstandings. It’s also bad if something starts to mean its own opposite (I complained about social constructionists’ use of “reality” in this piece for the same reason), which “liberal” often does. Coalitional politics makes it less galling to Americans, I suppose, but where I come from liberals do not, as a rule, consider themselves left wing — and the real left wingers agree. The difference between the liberal and the Marxist (and its descendants) views of the world could scarcely be any greater. Some common enemies do not an identity make.
The problem with corrective argumentation
I babble a bit after Peter mentions my 30 Fundamentals post. It makes my point sound more complicated than it is. It’s just this:
The fact is that whenever somebody speaks there are many things they assume but leave unsaid. It has to be like that or we wouldn’t be able to communicate at all. Unless you have a good idea of what those background assumptions are, you’re not going to interpret what somebody says correctly. That’s especially true if you’re not an unnaturally committed decoupler and tend to read between the lines.
It’s also true that a lot of argumentation is corrective (i.e. it’s meant to nudge others in a direction rather than guide them to a specified point) and therefore will “overshoot” its true target (if you want to lower taxes by 10% you might start with “taxation is theft” etc.). In order for that to work and be understood, the parties involved must agree on what the “defaults” are and what exactly an argument is meant to respond to. And I believe that now we not only have different defaults, we also have different beliefs about what the generally assumed defaults are: wanting a 20 week limit on abortions means making very different arguments if abortion is currently banned compared to if it’s allowed until a week before birth. Now imagine the confusion resulting from people having radically different ideas about what the current law is. This doesn’t happen with abortion in particular, since laws are close to unambiguous facts, but it happens a lot when the “status quo” isn’t as well defined.
Good public debate is a public good
When Peter brings it up around minute 58, I express skepticism about the idea of encouraging better public debate through purposefully designed social media (I specifically want to apologize for going off on a tangent here; I ask Peter a question but then forget that I’ve done so). I give a decent explanation of why, but I’d like to develop it a little more.
The social function of public debate is to evaluate the merits of ideas, but that’s not why we take part in it, any more than corporations take part in the economy in order to satisfy consumers’ wants. They want to make money, and the creation of value is a side effect.
We take part in public debate to convince others we’re right, to shut up our enemies, to rally and organize our allies, to gain the respect of our group, to commiserate and bond, to vent our anger and soothe our personal anxieties, to express ourselves or experiment with identities, to show off our intelligence, compassion and loyalty, or simply in pursuit of money and fame. The evaluation of ideas is a fortunate side effect. If we believe people will do that work without getting the psychological or social rewards that come with it — which is what we believe if we try to design a version of public discourse where we can’t do the things we really come to social media to do — we’re going to be as wrong as the most utopian anarcho-socialists are when they believe we can sustain a hypercomplex global economy without the profit motive.
I’d love it if people would abandon Twitter/Facebook/Tumblr in favor of a platform designed for rational debate. I really would. But it’s not going to happen for the same reason the existence of kale smoothies doesn’t solve the obesity problem — the small minority of people who rush to consume it weren’t the problem in the first place.
The best we can do is to try to improve the “market” as best we can. That’s why I want erisology to be a thing. I want us to be able to navigate and regulate the marketplace of ideas better by understanding the mechanisms that distort it — not distort it away from its natural state, because healthy public debate is not a natural state, but distort it away from the discursive equivalent of the finely calibrated, shockingly efficient, and quite artificial creation that is the modern regulated market economy.
On the prevalence of ideologues
Near the end, at about an hour and 5 minutes I sound as if I think most people are ideologues wedded to only one view of the world. On second thought I should not be so dismissive (and smug). The impression probably results from spending a lot of time online and being particularly interested in disagreement and debate. Committed ideologues tend to be loud and thus command more attention than their numbers would suggest. They also tend to be young and smart and I’m convinced many of them, although not all, develop more complex and less dogmatic views as they grow older. Sadly I’m also convinced they tend to fall silent as they do so.
On the end and the future
Just after that I get the final question from Peter, on what we can do to improve discourse and what the future holds. A connection problem cut me off before I could finish, but that was for the better since I picked the wrong time to go on and on and on.
After reiterating some things I’d already said I brought up Strauss & Howe’s The Fourth Turning and discussed some models I’ve seen based on their theories, incorporating Kondratiev waves, that propose that things will get worse until a crisis around 2020, after which a new (temporarily) stable social order will grow into place as our new institutions mature. It’s too much to squeeze in at the end and I should’ve stuck with just the basics.
Can polarization get worse? I said “not so much” and mentioned the Kavanaugh hearings (which I couldn’t help getting intimately familiar with despite living on a different continent) as peak polarization. However, mere days after our conversation it was topped by “smirkgate”, so I’m not sure what to think. Smirkgate did lead to a lot of soul searching and awareness of how interpretations of limited evidence let us project our ideological (and psychological) preoccupations on the news and the world, so just maybe this is the beginning of an acquired cultural immunity against the outrage virus.
I’m not opening the champagne just yet. Fights are vicious when the stakes are low, because there isn’t enough incentive to establish a truce and police it. If people are going to learn to temper their rage on social media, picking fights has to hurt more. I don’t know how to accomplish that in a non-terrible way. Maybe if damage becomes more evenly distributed among the camps everyone will be more inclined to feel that the fighting causes real harm, but as long as you can antagonize people and then just block/ignore them with no consequences our normal de-escalation routines won’t kick in.
I’m still confident we’re not facing doom. As I mention in the podcast but don’t finish before the link breaks down, we do understand the problem. There’s plenty of writing about disagreement and polarization, and our knowledge is rapidly increasing (now all we need is to bring it all together under the erisology umbrella…). Things can’t keep escalating forever (not the same things, anyway) and we will have a new normal. It might take a few years, but eventually the daily outrage will be reduced to ignorable background noise.
• • •
It’s unclear what postrationality is actually supposed to be. But the short description in this white paper is about as good as you’re going to get.
Many have a favorite approach but almost everyone will try to switch rules depending on what helps them the most.
Helen Pluckrose of grievance studies hoax fame calls this “applied postmodernism”. That’s fair, although maybe “selectively applied” would be more accurate, since it involves parts of postmodernist philosophy weaponized by carefully selective application.
I should get a second Twitter account that just angrily retweets examples of people using “liberal” as a synonym for “left”. Ought to be good for my mental health.
The situation reminds me of this observation from The Elephant in the Brain:
A common problem plagues people who try to design institutions without accounting for hidden motives. First they identify the key goals that the institution “should” achieve. Then they search for a design that best achieves these goals, given all the constraints that the institution must deal with. This task can be challenging enough, but even when the designers apparently succeed, they’re frequently puzzled and frustrated when others show little interest in adopting their solution. Often this is because they mistook professed motives for real motives, and thus solved the wrong problems.
Did you enjoy this article? Consider supporting Everything Studies on Patreon.
4 thoughts on “Postscript to a Podcast”
Congrats on getting on the podcast and thanks for linking to it — it’s always nice to hear the voice of someone whose writing you’ve read.
My favorite moment is when you explain, roughly an hour in, that your goal is not to promote analyses of disagreements where each one is weighed against the “correct” way to do things with lists of fallacies and so forth, but the study of how disagreements work in nature — clearer observation and understanding rather than prescriptions of how to “get it right”. I imagine you had made this differentiation before in writing, but here it really hit home.
I went to the link you provided in the footnote about postrationalism, which leads to a very interesting article in its own right, but it doesn’t give me a much better idea about what postrationalism actually is. In fact, I don’t remember if I’d ever heard the term before hearing you mention it on the podcast. It would be nice if you could link to some examples of what you would consider postrationalist essays or forums, say, because I’m still slightly bewildered here.
As for shortcomings of the rationalist movement, speaking as someone who is generally satisfied with the idea of fully taking part in it, or at least a certain wing of it (although in my case it’s more of a peripheral self-identification and I can’t consider myself at this time to be anywhere near a full-fledged participant), I too am left cold by a lot of the transhumanist stuff. I do think the AI discussions and debates seem quite interesting if only I had more of a background in AI, but as I don’t, I don’t feel any more at home discussing it than you do. It does seem to me that the main thrust of the focus on AI is not so much “Hey let’s brainstorm about a possible utopia created by superintelligent AI!” as “We’re at a critical point now where if we’re not careful, unfriendly AI will rise and could wipe out humanity” — in other words, the cause, much like most environmental causes, is expressed as an urgent need for damage control according to the scientific knowledge available to us. At any rate, the transhumanism and AI discourse seems to comprise a significant but still sufficiently small portion of rationalist activity that I don’t actually see that much of it. Utilitarianism I am on board with, at least as the default template for analyzing ethical questions, but I’ve been surprised at how far it is from being generally accepted within the rationalist-sphere. Yes, the two main leaders Yudkowsky and Alexander both seem to assume consequentialist utilitarianism, but it seems perfectly acceptable within the subculture to dissent from that. Anyway, if there’s an overarching point I wanted to make in this slightly rambly paragraph, it’s that the rationalist-sphere has now grown so huge that it seems possible to engage with a certain wing of it while mostly avoiding the parts that seem less appealing.
Well thanks for listening:)
About postrationality… the linked article has like one paragraph on them and they are indeed difficult to get a firm grip on. The best I can muster is rationalist-like but more into subjective experience, artlike prose, nonsystematic thought and speculative models. The three writers the article mentions (Dave Chapman, Sarah Perry and Venkat Rao) are probably the most central examples you can find.
There’s nothing that drives me away from the rationalist community as such. The examples given are just what keeps me aware that its center of gravity, idea-wise, is not quite identical to my own views. The AI bits, the transhumanism and the utilitarianism are, I think, all facets of one tendency I don’t really share, namely the desire to optimize and systemize things like thought and morality. It doesn’t seem the right approach to me and I, rightly or wrongly, see it as somewhat overemphasized on LessWrong. Less so on SSC.
I enjoyed listening to this, and it was really interesting to read this sort of postmortem of an interview/podcast experience.
Don’t beat yourself up for any fluency or accent issues! I’m a former ESL teacher and it’s not an exaggeration to say that your English is pretty close to the best I’ve heard (and read) from a non-native speaker. I’m curious—do you work mostly with English speakers?
Thanks! There’s a substantial contingent of non-Swedish speakers at my workplace but that’s a recent phenomenon and I can remember feeling very comfortable with English for much longer than that. Idk why, maybe reading almost exclusively in English for more than 15 years makes a difference.