Review: The AI Does Not Hate You

The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

This quote by Eliezer Yudkowsky is how Tom Chivers’s book The AI Does Not Hate You got its name. The book is about the risks of runaway artificial intelligence and the subculture that’s grown up around the topic over the last 5-10 years: the rationalist community.

Before I go on, in the interest of full disclosure: the author, Tom Chivers, reads this blog and has mentioned it favorably on several occasions, so I’m likely to be at least a little bit biased in his favor.

Chivers is a British science journalist who’s known about the community since 2014 and describes his book as being about AI risk, the rationalists, and his efforts to find out whether he agrees with them. Agrees about what? About whether the emergence of artificial intelligence will render human civilization unrecognizable over the next, say, 100 years — for better or for worse.

The very short story is this: human-level artificial general intelligence (a machine that’s at least as good as an ordinary person at everything) might be mere decades away. Furthermore, there’s no particular reason to believe intelligence tops out at our level. If an AI can reprogram itself — just like human programmers can reprogram AIs — it could become more and more capable until it leaves us flesh-based intelligences far behind.

Many things might happen. Some are positive, like humanity birthing a pan-galactic civilization packed with happy minds, and some are very negative: if the AI is written to have goals whose achievement has undesired side effects (like an evil genie, but not on purpose), it could lead to our destruction.

The full argument is a lot longer. That’s partly because considering this a realistic possibility requires comprehending an AI’s motivation as separate from its intelligence (i.e. it won’t necessarily be wise or prudent or have common sense just because it’s intelligent), which requires unlearning some assumptions about intelligence in general being the same as human minds in particular. We, being human and all, have a tendency to be blindly self-centered about that[1].

This major point, illustrated by the titular quote, gets ignored by most conventional “robots rise up” stories. Chivers says:

‘When people talk about the social implications of general AI, they often fall prey to anthropomorphism. They conflate artificial intelligence with artificial consciousness, or assume that if AI systems are “intelligent”, they must be intelligent in the same way a human is intelligent. A lot of journalists express a concern that when AI systems pass a certain capability level, they’ll spontaneously develop “natural” desires like a human hunger for power; or they’ll reflect on their programmed goals, find them foolish, and “rebel”, refusing to obey their programmed instructions.’ But those aren’t the things we ought to be worried about.

Indeed they’re not, and this strikes me as the biggest hurdle to get past if you want to get people to consider superintelligence as potentially dangerous. Chivers admirably spends a lot of time and effort making this point. He does it well and I think effectively, but I can’t trust my own judgment fully because it wasn’t news to me.

Something similar applies to the rest of the case for AI wariness, originating from, among others, Eliezer Yudkowsky and philosopher Nick Bostrom (author of Superintelligence). I’ve been in outer orbit around the rationalist sphere for some years now and I’ve been exposed to this gradually, making the perceived far-fetchedness of it all slowly and gently come down. While that makes me less than perfectly qualified to judge the book’s effectiveness as an explainer, my guess is it’s as good an introduction as you’re likely to get, as Chivers avoids getting overly technical, being gratuitously absurd for shock value, or adopting the occasionally abrasive style that makes reading Yudkowsky difficult for a lot of people.

That’s AI, what about the rationalists?

I didn’t read this book to hear an AI argument I already understand. I read it because I was curious about how the rationalist community was going to be portrayed. Honestly, you might suspect a book like this of being a hit job, and so did a lot of people Chivers spoke to (most notably Yudkowsky, who refused to speak to him except to answer technical questions). It would clearly have been easy to produce one, and Chivers outright says he could have, had he wanted to. He didn’t. He’s sympathetic through and through.

When he talks to a few critics (including one who calls the rationalists a cult) they come off as petty and ignorant. I don’t know if that’s accurate or a result of what they say being filtered through Chivers’s own sympathy. You could argue he’s a bit too generous to be a trustworthy source on the rationalists, but frankly, I’m not that concerned, for two reasons. First, it’s already the default to use our intuitive impressions of absurdity and social abnormality to judge ideas, so for a journalist to decide to barely use them at all for once is such a welcome change that I have a really hard time finding it at all problematic. Second, I’m such a sympathizer myself that it doesn’t grate on me. YMMV, of course.

I suspect his sympathy explains the feeling I get when I read the section called “Dark Sides” where he, in addition to the charge of cultishness, discusses some spats with feminists and the relationship to the neoreactionaries (a small group of hyper-conservatives who want to roll back modernity and liberal democracy). He’s clearly uncomfortable during these parts, as if he wants to get them over with as soon as possible. This is probably a wise impulse, as issues like these have a tendency to grab all the attention if you go into them at all.

Honestly I would’ve preferred that he spend even less time on them. Let me explain. The feminism chapter in particular goes into both the kerfuffle around Scott Alexander’s controversial post Untitled, and the James Damore affair — the Google engineer who was fired and widely shamed for writing an internal memo arguing that the reason Google software engineers are mostly male is at least partly due to a sex difference in interest and personality traits, not just discrimination — and tries to explain the substance of both controversies. I can’t help thinking that this was a mistake. If you’re going to do that properly it really needs to take center stage, and it can’t and shouldn’t in a book focused on AI. At one point he says (referring to the biological/social origin of psychological sex differences):

There’s a lot of research on that, and endless back-and-forth arguments, and it’s an enormous can of worms that I absolutely don’t want to open here,

…and, well… maybe he shouldn’t have opened the can he’s currently poking around in. I understand the impulse to explain, I really do — and the opening sentence of the chapter reveals that he considers himself duty-bound to do so — but it leaves a bitter taste in my mouth that he feels it’s necessary to go into it in as much detail as he does.

I think it’s because I’m bothered by his seeming acquiescence to the assumption that if you have reservations against any part of feminism in some way, you better explain yourself, buddy[2]. Had I written this part myself I’d have tried to explain the conflict in very general terms only (no specifics), and symmetrically, as coming down to a difference in attitudes to knowledge, evidence and propriety in discourse, and in doing so, treated the feminist view as no more natural or valid than the rationalist-ish view that sometimes comes to blows with it. In other words, I wouldn’t have privileged (sic) it the way I feel Chivers does. But this is a lament on the state of the world, not so much a criticism against him. I don’t know his views on the matter and I don’t want to project my own reservations on him, but regardless, the gods of public relations demand their sacrifice and this may very well have been necessary[3].

A sympathizer and outsider

Just like Scott Aaronson said in his review, reading the book made me reflect on my own tenuous relationship to the rationalists. I do typically identify with them, even though I’d sign on to maybe half of their typical beliefs on a good day. Yes, sure, I accept the basic soundness of the AI risk reasoning, but I can’t say I’m particularly interested. It wasn’t what attracted me to the subculture in the first place (on the contrary, it was more of a minus than a plus for me). Nor was it utilitarianism, self-improvement, polyamory[4], decision theory, or effective altruism[5], and very much not any radical transhumanism — as human beings we have plenty of flaws, but I balk at the idea of becoming anything else (maybe this means I’ll be considered a bigoted old codger in 50 years, like many before me).

It was really two things[6] that attracted me: materialism[7], and norms of discourse.

Rationalist discourse tends to assume materialism, to be finicky with the meanings of words, to take ideas seriously and charitably, and to at least try to avoid seeing beliefs as markers of social allegiance, evaluating them instead according to logical coherence and empirical plausibility. The combination of all these is rare and powerful, while also feeling so obviously correct to me. It’s homey, it’s what I identify with, and it’s what I want to protect when I feel solidarity with them whenever they’re attacked or sneered at, even for things I don’t agree with.

Given this, I suspect that had Chivers’s book actually been a hatchet job, I’d have felt closer to the community than I ended up feeling. I did appreciate getting to hear a little about how people I know of and like only as words on a screen are in real life, but I’m also quite confident I’d feel a bit out of place at a meetup, just like Chivers did. He describes being uncomfortable with the lack of small talk and feeling like a “lumbering hooligan” for having a single beer in a group of Diet Coke drinkers at a pub. As a happy, even eager, social drinker I sympathize. Not that drinking is good and important per se, but abstention as the norm signifies, to me, a kind of “straight-laced-ism” that feels foreign.

Less superficially, the focus on AI, wide-eyed futurism and some weird lifestyle aspects also made me feel like a definite outsider. You could make the criticism that Chivers, by identifying the community with its most distinctive ideas rather than its most commonly shared ones, overestimates how much consensus there is. It’s not always clear from his descriptions what’s generally accepted, what the smaller circle of people who habitually socialize within the community mostly agree on, and what Eliezer Yudkowsky in particular believes or has said at some point. My unscientific impression is that preoccupation with AI, transhumanism and polyamory rapidly decreases with distance from the inner circle, and that hardcore utilitarianism is at least controversial all over.

In fact, if you’re anywhere near my corner of Twitter, the topic of “what rationality even is and how it differs from post-rationality and meta-rationality” comes up periodically and most often turns into a confused mess as people throw their pithy and partially contradictory takes into the ring. In truth it’s all a loose network of people and writing, barely held together by complicated, criss-crossing strands of common beliefs, attitudes and references.

I don’t blame Chivers for simplifying things. It’s almost impossible to avoid. He explains, correctly, that there’s a broad spectrum of “membership” from a few people who live their entire personal and professional lives inside the bubble to lots and lots who on occasion read the most popular affiliated blog, Slate Star Codex. He’d count as part of the outer rim himself, I think. At least he does if I do, as our levels of engagement seem similar.

Like looking in a mirror

I opened the book identifying with the rationalists, slightly uneasy on their behalf, but since there was no hit job and a lot of focus on what I’m lukewarm about, I came out of it, interestingly, identifying more with the author himself.

His experiences are what I expect mine would be in the same situation, and we share many similarities besides having been eyeing the rationalists from afar for the last five years. He’s a science writer and I’m a consultant-analyst wishing my reports were more like science writing than they are; he’s a father of two small children just like me, which informs his view of the future as it does mine; and when I read “I’m almost 37 now” a few months after my own 36th birthday, it almost gets a little eerie.

At that point I start thinking that while the AI risk part is important, and the discussion of the rationalist community is interesting, I totally selfishly want to hear more of his own personal journey through all this.

As it is, that aspect is just a framing device. He opens and closes the story with a simple thing that Paul Crowley, an engineer at Google and prominent rationalist figure, told him: “I don’t expect our children to die of old age.” What this means is that by the year 2100, about the time Chivers’s (and my own) children would reach the end of their natural lifespans, they might either be killed by a runaway artificial intelligence or have had their lives completely transformed by the Singularity.

I was quite down with this line of thinking 20 years ago. As a teenager I wrote a school essay about the history and evolution of computers, with an extrapolation into the future that covered AI, uploading minds and having them travel the world through fiber-optic cables, and a concluding sentence that went something like “we better build morality and a conscience into our computers, or we don’t know what will happen”. I should be totally into this aspect of the rationalist community. But I’m not. I’m ambivalent, uninterested and anxious, all at once.

As I refamiliarized myself with it as an adult, I did not like it. This time it felt a lot more real and a lot more disturbing. I don’t know why, but between the ages of 16 and 36 my tolerance for future shock went way down. Future shock? Chivers describes early discussions about the far future between Eliezer Yudkowsky, Nick Bostrom and others on a pre-rationalist mailing list:

Yudkowsky, on the other hand, felt that the problem with the Extropians was a lack of ambition. He set up an alternative, the SL4 mailing list. SL4 stands for (Future) Shock Level 4; it’s a reference to the 1970 Alvin Toffler book Future Shock.

Future shock is the psychological impact of technological change; Toffler describes it as a sensation of ‘too much change in a short period of time’. Yudkowsky took the concept further, dividing it up into ‘levels’ of future shock, or rather into people who are comfortable with different levels of it. Someone of ‘shock level 0’ (SL0) is comfortable with the bog-standard tech they see around them. ‘The use of this measure is that it’s hard to introduce anyone to an idea more than one shock level above,’ he said. ‘If somebody is still worried about virtual reality (low end of SL1), you can safely try explaining medical immortality (low-end SL2), but not nanotechnology (SL3) or uploading (high SL3). They might believe you, but they will be frightened–shocked.’

He acknowledged that transhumanists like the Extropians were SL3, comfortable with the idea of human-level AI and major bodily changes up to and including uploading human brains onto computers. But he wanted to create people of SL4, the highest level. SL4, he says, is being comfortable with the idea that technology, at some point, will render human life unrecognisable: ‘the total evaporation of “life as we know it”’.

Judging by what I know of Yudkowsky and his writing, he’s excited about the possibilities. I can’t justify the feeling that I’m not, and I suspect teenage me would look down on current me for it. It’s just that the sheer enormity of the consequences is too much for me to handle on an emotional level, and I don’t feel at all comfortable with what I perceive to be an expectation of low to no sensitivity to future shock among core rationalists. Those eager to spend a lot of time thinking and talking about the prospect of a Singularity feel alien to me for this reason. For example, Chivers recounts the calculations Nick Bostrom did that show how many sentient beings there could be in a future where minds are pure software:

‘One followed by 58 zeroes’ may sound like a meaningless Big Number to you, and it does to me, but it is extraordinarily vast. ‘If we represent all the happiness experienced during one entire such life with a single teardrop of joy,’ says Bostrom, ‘then the happiness of these souls could fill and refill the Earth’s oceans every second, and keep doing so for a hundred billion billion millennia.’ Does that give a more visceral sense of how enormous it is?

Yes it does. Now make it stop. I’m going to tend my garden.
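
For what it’s worth, the image does roughly check out, arithmetic-wise. Here’s a quick back-of-envelope sketch in Python (the teardrop and ocean volumes are my own assumed figures, not from the book); with a teardrop of roughly 0.05 mL it lands within a couple of orders of magnitude of Bostrom’s “hundred billion billion millennia”, which for numbers this size is close enough:

```python
# Rough sanity check of Bostrom's teardrop image, as quoted above.
# The volumes below are my own assumptions, not figures from the book.

LIVES = 1e58                        # Bostrom's count of possible future minds
TEARDROP_M3 = 5e-8                  # assumed teardrop volume: ~0.05 mL
OCEAN_M3 = 1.3e18                   # Earth's oceans: ~1.3 billion km^3
SECONDS_PER_MILLENNIUM = 1000 * 365.25 * 24 * 3600

# One teardrop of joy per life: how many times do the oceans fill,
# and how long does that take at one fill per second?
ocean_fills = LIVES * TEARDROP_M3 / OCEAN_M3
millennia = ocean_fills / SECONDS_PER_MILLENNIUM

print(f"ocean fills: {ocean_fills:.1e}")                      # ~3.8e32
print(f"millennia at one fill per second: {millennia:.1e}")   # ~1.2e22
```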

That’s why the very end, when Chivers comes back to “I don’t expect our children to die of old age”, resonated with me. He accepts all the reasons to believe in the significant possibility of strong AI before 2100, but still can’t quite take it seriously enough, and comes to the understanding that he can’t because it’s frightening. He even bursts into tears, out of fear for the future of his children.

It was at this point that the conversation, if that’s the right word, took a slightly odd turn. It was still my sceptical side’s turn to speak, and it had this to say: ‘I can picture a world in 50 or 100 years that my children live in, which has different coastlines and higher risk of storms and, if I’m brutally honest about it, famines in parts of the world I don’t go. I could imagine my Western children in their Western world living lives not vastly different to mine, in which most of the suffering of the world is hidden away, and the lives of well-off Westerners continue and my kids have jobs. My daughter is a doctor and my son is a journalist, whatever. Whereas if the AI stuff really does happen, that’s not the future they have. They have a future of being destroyed to make way for paperclip manufacturing, or being uploaded into some transhuman life, or kept as pets. Things that are just not recognisable to me. I can understand Bostrom’s arguments that an intelligence explosion would completely transform the world; it’s pointless speculating what a superintelligence would do, in the same way it would be stupid for a gorilla to wonder how humanity would change the world.’

And I realised on some level that this was what the instinctive ‘yuck’ was when I thought about the arguments for AI risk. ‘I feel that parents should be able to advise their children,’ I said. ‘Anything involving AGI happening in their lifetimes–I can’t advise my children on that future. I can’t tell them how best to live their lives because I don’t know what their lives will look like, or even if they’ll be recognisable as human lives.’ I then paused, as instructed by Anna, and eventually boiled it down. ‘I’m scared for my children.’ And at this point I apologised, because I found that I was crying. ‘I cry at about half the workshops I do,’ said Anna, kindly. ‘Often during the course of these funny exercises.’

I expect I would have done the same if it all came rolling towards me at once. For the moment I’m enjoying the luxury of not being forced to think about it.

I suppose people less prone to future shock have an easier time following this thinking to its conclusion, and that most who feel like I do just stop early, which is certainly easy to do. Hearing somebody else’s account of being an inbetweener, getting dragged all the way there, almost kicking and screaming, and owning up to how much fear and existential anxiety it generates made me feel less alone. I appreciated that.

I find it hard to judge the book based on my own experience, because few people would come at it from the exact angle I did. As far as I can see, Chivers has done a great job overall, and if my biggest criticism is that it’s not more perfectly tailored to a target audience consisting of me and like six other people, that’s probably praising with faint damnation.

 

• • •

Notes

[1]
We even have a hard time understanding that other people think differently from us. What chance do we have of understanding an AI? (See also this, one of my all-time favorite articles + comments.)

[2]
This comes in large part from the extremely nebulous nature of feminism as a concept. A minimal version simply follows from liberal individualism and everyone’s right to live their lives the way they please with no unfair barriers, which is all good and valid and kind of boringly obvious. But there’s also an elaborate ideology represented by the word, with a host of far-reaching moral and factual assumptions it asks you to accept. The common refrain “feminism is the radical notion that women are people” pretends this isn’t the case, which makes it sound dishonest to me, not unlike “communism is the radical notion that the poor are people” or “Christianity is the radical notion that we’re all valuable” would be. These aren’t just that; they’re a lot more than that, and whether you consciously intend to or not, you’re effectively pretending the whole follows from that minimal version, even though it doesn’t.

[3]
I have some objections to the chapter on the neoreactionaries as well, but they’ll have to wait for a “leftover” post after this one.

[4]
When polyamory is discussed, the word that comes to mind is “exhausting”.

[5]
I feel I ought to be engaged with effective altruism but I just can’t get there. My heart’s not in it. I’m probably just about too much of a self-centered asshole for it, just like I’m too lazy for other stuff I feel a better version of me would be into, like exercise and investing.

[6]
I suppose you could add my visceral hatred of bullshit in all its forms as a factor.

[7]
I’m a materialist/physicalist/whatever, and I’m convinced that properly internalizing a materialist view of the world (and not just not believing in explicitly supernatural things) is necessary to build understanding of how things work and how imperfectly language captures reality. Spaces where materialism is more or less assumed[8] are few and far between. Sure, I’ve spent many hours over the years enjoying my Daniel Dennett books while nodding along and going “YES THAT’S EXACTLY IT” in my head, but that’s not quite enough. I really enjoyed finding a vibrant subculture where you didn’t have to jump across this conceptual chasm all the damn time but could build upwards instead.

[8]
This means a lot of philosophy and philosophical arguments and objections will be dismissed as irrelevant, which contributes to some animosity between the rationalist community and conventional philosophy.

 

Did you enjoy this article? Consider supporting Everything Studies on Patreon.

7 thoughts on “Review: The AI Does Not Hate You”

  1. This is a beautiful review, John – thanks!

    I imagine I’d find a lot in common with you, as you did with Chivers. But I feel a little more ‘inside rationality’ tho I’ve only been to two meetups ever.

    As for existential AI risk, I’m both more sanguine than the prominent AI-x-risk people _and_ I’ve been reading David Chapman now for a very long time. One of the most powerful lessons I’ve started to learn from Chapman is that, in a very real sense, we ‘should’ all be experiencing ‘SL4’ – with regard to the _present_. It’s _already_ the case that reality is terrifying and frequently awful! Furthermore, there is great utility (and wonder) in deliberately cultivating this as _a_ perspective. It’s an important part of Truth!

    So, given that we’re already burdened with a (huge) amount of past and present shock-debt that we still have to pay off (or make ‘interest’ payments on at least), the idea of any amount of future shock is both not compelling and not _overwhelming_ to me. I can think about it without panicking or feeling like I need to avoid thinking about it.

    I also once ‘took religion seriously’ (mostly Christianity) and the reality they imagine – Hell – is literally infinitely worse than any plausible x-risk, AI or not. I can totally imagine a super-intelligent AI disassembling humanity (and the rest of the planet) because we’re made of things it could use to serve its own goals, and that would be a horrible and possibly very painful way for our world to end – but it wouldn’t involve an eternity of torture either.

  2. Well, after reading your “leftovers” review, I went back to look at this review and it turns out that I hadn’t read it when it came out. Also enjoyable even for someone who didn’t read the book it’s reviewing (I’d be interested in reading the book but my reading list is pretty backed up right now).

    Re footnote [2]: I couldn’t agree more (you more or less seem to be describing a motte-and-bailey fallacy), but this issue doesn’t seem particular to feminism, as you even seem to point out in the other example definitions you’re likening the “women are people” definition to. As an example, nowadays in US politics, the most prominent figures in the new hip resurgence of self-identified “socialists” are Bernie Sanders and Alexandria Ocasio-Cortez, who both consistently give equally watered-down answers to what the term “socialism” means every time they’re asked (“it means that every citizen deserves access to health care as a human right”). I think the reason that 2010s feminism has been such a bugbear in the rationalist community is its relative dominance as an ideology and a guiding moral force in our culture (at least the Blue Tribe parts of it, which is where most rationalists seem to live and/or come from).

    Re footnote [4]: When the word “exhausting” comes to mind, are you applying it to discourse on polyamory, or on the poly lifestyle itself? 😛
