[Note: Quick and off the cuff. Quality may have suffered.]
Yesterday famous biologist Richard Dawkins said this:
…and all hell broke loose on Twitter for the first time since, like, Wednesday.
I scrolled past the tweet the first time and made hardly any note of it. An obvious, plain point. He complained about how people conflate something being undesirable with it being impossible. And sure, that’s a generally popular argumentative trope people pull out when they want to be extra safe and get a little too bold and greedy: “no, there can’t even be a trade-off, this has to be a delusion in addition to evil”. So, yeah. Nothing noteworthy.
But it blew up, with comments and quote-tweets flying around. It reached the level where people would talk about the whole thing right into the Twitter-aether without quoting anything, just assuming that everyone already knew the topic — which signifies at least a level 6 quake on the Twichter scale. I was surprised at the heat, even though I shouldn’t be at this point. There are so many of these incidents. I guess I do underestimate how strong a rationality-dampening field some terms project. “Eugenics” is clearly up there near the top of the list (along with “incel” and “racism”).
Not that I don’t get it. It makes perfect sense to be wary of eugenics, of course, and everyone should be to some extent. But when we’re discussing, in the abstract (in this case on the meta-level — as in discussing the discussion), something that’s not even close to happening, it’s a little odd that reactions get as extreme as they often do. There’s no clear and present danger to justify being extremely trigger-happy. It all seems like two orders of magnitude out of proportion, as if the Chernobyl disaster made some people freak out at the mere mention of setting foot anywhere on the Eurasian continent. That’s my reaction, anyway. I might be wrong for some specific contexts.
I fell for the controversy for a while, thinking to myself that people who read something evil into this and went hysterical over it were totally fucking insane. But they’re not. It’s me — and I really should know better since this is like two thirds of what I talk about on here — underestimating how flexible interpretations can be.
On the surface this is just another case of people disagreeing on the proper dosage of decoupling. I’m not going over that again this time. Instead I got interested in the secondhand debate in its wake, about what “eugenics” means. Shockingly, people aren’t on the same page.
After reading between some lines while trying to be fair, and filling in some blank spaces in between, I’ve compiled a non-exhaustive list of candidates:
Using government power to prevent people with undesirable traits from procreating.
Having any third party interfere in a couple’s reproductive choices for the purposes of improving heritable traits.
Taking any action at all with the intent of improving the genetic endowment of any person.
Eliminating heritable diseases through gene therapy.
Forcibly sterilizing poor people and/or ethnic minorities.
Voluntarily letting people edit their children-to-be’s genomes.
Encouraging, financially or otherwise, bright and well-behaved people to reproduce.
Significantly, reliably and effectively cultivating a specific desired trait in people with no adverse effects, using genetics as the lever.
Improving society by means of selective breeding of people.
Any policy with the aim of improving the genetic health of the population.
Any policy with the aim of changing mental or behavioral traits on the genetic level in the population.
I think that’s enough to map out most of the moving parts underneath. To me these three seem to be the biggest (but certainly not the only) factors in dispute:
1. Is coercion necessarily involved or do softer negative or even positive incentives also count?
2. Does it have to involve any change in who reproduces and how much, or does direct genetic engineering also count?
3. Is the hypothetical goal narrow, like “cause a change in the frequency of a trait”, or broad, like “solve significant social problems”?
It’s a lot like designing your gaming character. There are a number of sliders to move around to create your own custom meaning of “eugenics”, tailored to whatever rhetorical purpose you have at the moment, and people of varying persuasions certainly make use of them. If we set up an interpretation matrix of the two little words “eugenics works” it’d be multidimensional and complicated as hell, and the moral charge of each cell would vary dramatically. A minefield is an apt analogy.
If we were to pick out one minimally and one maximally controversial version (a motte-and-bailey pair, as it’s sometimes called) as examples, they’d probably be something like this:
It would be technically possible to change the frequency of human behavioral traits in roughly the desired direction by taking some actions that alters the future gene pool.
We could make society better by a policy of selective breeding where we restrict undesirables from reproducing.
The actual content of these two is different enough, but also think about how much of a difference the language makes. Saying “selective breeding” about people as if they were crops or livestock triggers our ick-feelings, the word “undesirables” has all sorts of awful associations, and while “we could…” can mean the same as “it would be technically possible to…”, it opens up a lot of interpretation-space in the direction of it being a seriously intended proposal.
There are many concepts like this, where the meaning is fuzzy and constantly shifting but people tend to act like it’s not. In part they’re trying to enforce their own meaning by pretending, to both others and themselves, that it’s the canonical or “real” one.
It’s really hard to deal with this kind of interpretive flexibility when speaker and audience are on hostile terms. We don’t know what we don’t know: we don’t know which of our own words other people will interpret differently and how, and we don’t know which of other people’s words they mean differently and how. As a speaker you can clarify, but when you do so you’ll use words again, and they’re subject to the same problem. And even if you make an effort to be very clear about what you mean and don’t mean, some will ignore it anyway (I discuss this briefly in my long postmortem of the Harris-Klein debacle). That’s not necessarily irrational, as some in the audience are likely to ignore it as well and come away with a bad or dangerous message, but it puts an almost insurmountable burden on anyone speaking on sensitive issues. Do you have to cater to the least reasonable audience member on every side?
Personally I can’t accept that. On a medium like Twitter in particular where you’re limited to 280 characters and the product idea is “decontextualization as a service”, you can’t clarify with enough detail to protect yourself against hostile interpretation. Language itself wasn’t built for that shit. I can’t easily think up a rephrasing of Dawkins’s tweet that would’ve made the same point but made it impossible to interpret in a bad way given enough will, certainly not given the character restriction.
Still, some who disliked the hysteria maintained that Dawkins should take some of the blame for not being more diplomatic or not spending more effort clarifying things he has every reason to believe would be obvious to every reasonable person. I really can’t agree with that either. In a context like this, for reasons stated above, the onus has to fall mostly on the audience to spend a few cycles thinking about what somebody means and default to giving people the benefit of the doubt.
• • •
Note that I’m virtually certain you could point to examples of Dawkins himself doing the same with religion.
One sign of this is people coming up with really weird and tortured reasons for why things they don’t want wouldn’t be possible anyway. Honestly I’m not innocent myself. I catch myself doing it at work on occasion when I really don’t want to do a thing.
It can be quite enjoyable to think like that. I recommend a few minutes of it as a weekend treat. Certainly not as a steady diet.
An encounter with an ambiguous yet controversial-sounding claim starts with an instinctive emotional reaction. We infer the intentions or agenda behind the claim, interpret it in the way most compatible with our own attitude, and then immediately forget the second step ever happened and confuse the intended meaning with our own interpretation. This is a complicated way of saying that if you feel a statement is part of a rival political narrative you’ll unconsciously interpret it to mean something false or unreasonable, and then think you disagree with people politically because they say false and unreasonable things.
This could reasonably be said to lie so far outside any common use of “eugenics” as to be disingenuous. Kind of. The intention isn’t literally to claim that this should be called eugenics; rather, it’s an attempt to establish that acknowledging that some heritable traits are desirable and acting on that is a normal and reasonable thing on a personal level. This anchor sets up a spectrum between acceptable and unacceptable eugenics-like things, which makes it harder to ignore the complicating gray area of non-coercive but functionally eugenic practices in between. It’s weird and fascinating how much of this sophisticated mechanism works subconsciously.
Note that the difference between “poor health” and “undesirable mental and behavioral traits” isn’t clear cut at all.
These aren’t anywhere near unambiguous either, of course. They each have their own interpretation matrices, but less vague ones than the original.