My last post was a long response to some criticism and snark after an article about erisology in the Atlantic about two months ago. Since then I’ve been thinking I should be done with countering criticism for the time being. I have no particular interest in defending myself against criticism of what I don’t actually believe, nor against plain boring contradiction or the empty idea that I should focus on something else.
That being said, some criticism deserves comment, and at the moment I have three response articles in the making. That doesn’t feel right; it’s stressful and it keeps me from writing about more interesting things by making me repeat myself. Hopefully I’ll get back to regularly scheduled programming soon. This, however, is going to be a response article. Or at least a comment article, because I don’t want to address the author in question directly. I don’t approve of how he’s conducted himself and I don’t want to reward it with attention. But since there are things worth clearing up, and because I might want something to point to in similar situations in the future, I will quote the article in question and discuss it.
Somebody by the name “Eli” wrote a comment on A Defense of Erisology where he stated, fairly politely, that he had some disagreements with me and posted a link to an article. The article itself was less polite, in fact rather full of snark, uncalled-for hostility and several times the daily recommended intake of scare quotes.
The general message of the piece — exemplified by the opening line “Hello, John Nerst. Wanna put your money where your mouth is?” — appears to be that since I argue a lot for understanding and charity in disagreement, I in some sense owe it to him to make an effort to understand him and engage him in good faith, or (I guess) I’m a hypocrite.
Part of me is tempted to just say that sure, I’m a hypocrite, in the sense that I don’t always live up to my own ideals. I think that’s healthy and normal, since the point of an ideal is to be something to strive for. But I won’t take that path. Or, I’ll take it only 10-15%. I don’t think, nor do I think I’ve argued, that I owe him any more charity than he granted me, which doesn’t feel like a lot. I support a tit-for-tat strategy when directly engaging another person: start off cooperating, and if they defect, then defect (as in disengage, don’t call them names or anything). That’s the short version; there are a lot of complicated factors that influence how much and what kind of charity you owe somebody, which I might write about in a future article.
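For the game-theoretically inclined, the strategy sketched above can be written down in a few lines. This is just an illustrative sketch, not anything from the original discussion; the function name and the “cooperate”/“defect” labels are my own, with “defect” standing in for disengaging rather than retaliating in kind.

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first move; afterwards mirror the opponent's
    previous move. Here 'defect' means disengaging, not name-calling."""
    if not opponent_history:      # first interaction: extend charity
        return "cooperate"
    return opponent_history[-1]   # otherwise mirror their last move

# A charitable opener, then mirroring whatever the other party did last.
print(tit_for_tat([]))                         # cooperate
print(tit_for_tat(["cooperate", "defect"]))    # defect
```

The point of the strategy is that charity is the default but not unconditional: it is withdrawn in response to defection and can be restored if the other party cooperates again.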
So should I just ignore him? Probably, but I’ll still comment, for two reasons. First, there are some important substantive issues to discuss, and second, he’s more or less goading me to respond in what is, in my opinion, a pretty obnoxious way and might take silence as a victory, which could work as reinforcement. That’s no good either. Writing a dry, distant comment might be the best course of action.
He starts off by quoting my early description of erisology as: “specifically the study of unsuccessful disagreement. An unsuccessful disagreement is an exchange where people are no closer in understanding at the end than they were at the beginning.” A lot of the substance of his post comes back to this use of “unsuccessful”, which gives the word choice more weight than it deserves. I’ll get to that later.
This is John Nerst, who, like me, is some guy with a blog. Nerst is the subject of Jesse Singal’s article on erisology, which is why this was going around; almost nobody cares about “successful disagreement” but lots of people care about high-profile journalists like Jesse Singal. So what I want to do is go quickly through Singal and then circle back to Nerst. Shall we begin?
He quotes the Atlantic article on decoupling:
The concept of decoupling is erisology at its best. Expanding on the writing of the mathematician and blogger Sarah Constantin, who was herself drawing on the work of the psychologist Keith Stanovich, Nerst describes decoupling as simply the idea of removing extraneous context from a given claim and debating that claim on its own, rather than the fog of associations, ideologies, and potentials swirling around it.
When I first heard of decoupling, I immediately thought about the nervous way in which liberals discuss intelligence research. There is overwhelming evidence that intelligence, as social scientists define and measure it, has a strong hereditary component; according to some estimates, genetic factors account for about half the variation in intelligence among individuals. None of that has anything to do with race, because races do not map neatly onto genetic difference. But because the link between intelligence and genetics is so steeped in oppression and ugly history — that is, because charlatans have so eagerly cited nonsense ‘research’ purporting to demonstrate Europeans’ natural superiority — discussions even of well-founded studies about intelligence often end in acrimony over their potential misuse.
Unfortunately, Singal gives no examples here of what he means by “nervous… liberals,” “overwhelming evidence,” “charlatans,” “nonsense ‘research,'” or “well-founded studies” — or, for that matter, even “intelligence.” And you may well think that that doesn’t matter, because we’re not really supposed to be talking about any of those things but rather about the concept of “decoupling.” As it turns out, though, the details here matter quite a lot, because “decoupling” is defined in such a way as to not eliminate all context but, more specifically, only “extraneous” context. That’s the proper way to do this — obviously, we wouldn’t want to eliminate relevant or important context — but it does beg the question of what counts as extraneous.
This beef is more with Singal than with me so I shouldn’t get too far into it. But one thing I feel like I need to scream from the rooftops by now: I don’t think nor argue that decoupling is automatically correct. It does often play an important role in complex disagreements and we should be aware of it. This is discussed in A Defense of Erisology.
I find it worth commenting on the complaints about no examples or rigorous definitions. This isn’t written for a philosophy journal, it’s a short piece in a popular publication and its use of language is going to be closer to everyday communication than to a philosophical tract. Such everyday communication relies on an implied shared background at least to the extent that you can use nonspecific, nondetailed shorthand that could in theory be expanded by support, definition or justification but isn’t.
If you don’t share the implied background it looks flimsy and unjustified, but often there isn’t a reasonable way to accommodate the radically different backgrounds people sometimes bring to texts. Building and justifying a worldview from scratch for an article isn’t practical and opinions differ on where and when you can stop. Some assumptions and simplifications pass by unnoticed if you agree with them and stick out like thorns if you don’t. There’s no real way around that.
Singal assumes the reader will know and accept what he’s gesturing at in this paragraph and I think that’s ok. Arguing that it’s insufficient because it doesn’t convince a smart, motivated skeptic is setting a very, very high bar that few arguments short of mathematical proofs actually clear. But yes, it is frustrating to read articles relying on background conceptualizations you don’t share. I have plenty of experience with that myself.
Decoupling and dangerous knowledge
He goes on with this:
For example: is it “extraneous” to worry about “the potential misuse” of intelligence research by far-right racists? I guess that’s one of those “swirling” “potentials” that Singal has in mind, but I’m not sure that it must therefore be “extraneous.” Let’s go back again to the goal here: we want to do what we can to ensure “successful disagreement,” which, according to Nerst, is disagreement in which the principals advance in understanding. Surely, then, we could at least consider the possibility that certain areas of scientific research are, at the moment, more likely to hinder understanding than to promote it. Surely that’s at least superficially compatible with erisology. And that’s before we get into the question of what intelligence is, how it’s measured, and so on, all of which seem like they would presumably be important elements in any “successful” dialogue about the subject. In short, then, it’s very far from obvious that Singal is using “decoupling” the right way here. If anything, he himself seems to be making a highly coupled(?) argument, in that his approach seems to cast every other person’s strategy (both to the left and to the right of Singal himself) as being ill-motivated and thus inherently illegitimate.
This is a long paragraph I don’t really take issue with, but the first sentence deserves comment because it points to the crux of the particular issue at hand — the issue in question when I first used the decoupling concept.
Is it extraneous to worry about the potential misuse of intelligence research by racists? To worry? In general, not at all. Of course it’s a major concern. But does it matter for whether such claims are true or not? Not in the slightest. But there’s a “however”: when we don’t know whether an idea is correct, because of uncertainty and/or ambiguity, we still need to decide whether to adopt it, and thereby allow it to have influence.
The full-on decoupler’s position here is that since few would openly argue that we should reject a claim we know to be true, we’ve also accepted the more general principle that nothing but the truth value of a claim has any legitimate bearing on whether we should adopt it, even when ambiguity/uncertainty makes that truth value unknown. This makes perfect sense in a scientific context where “likelihood of being true” and “desirability of adoption” are considered identical by default.
The contextualizer’s (or coupler’s) position is different, to say the least. Rather than challenge the idea that we should believe what is true, it supposes that as a claim becomes more uncertain it also becomes more important to take other considerations into account. If you say something possibly dangerous you’d better have incontrovertible proof, otherwise you’re being reckless — and possibly suspect, because if something isn’t certain you don’t have to believe it, so if you do it’s likely because you want to. Speculation and vagueness are acceptable exactly to the extent they’re irrelevant (or beneficial) vis-à-vis moral or political concerns.
There’s no across-the-board right and wrong here. We’re talking about disputed territory where two norm systems make strong claims from either side. It’s unclear who is the aggressor: is politics intruding into science or is science intruding into politics? Who should have the home field advantage? If implications don’t affect the truth of something, they also shouldn’t affect the truth estimate, right? But it’s also irresponsible to disseminate potentially dangerous beliefs unless you’re really sure?
I don’t have a solution. It feels like we should be able to have conversations of both kinds but it seems we can’t, because the contextualizer’s/coupler’s position, as I understand it, is undermined by shining a spotlight directly at it. The explicit norms are closer to the decoupler (“of course we should believe what is most likely true!”) but the implicit, de facto norms lie closer to the contextualizer. As a result the very act of making things explicit stacks the deck in favor of the decoupler.
Claims come with implications, if and only if we assume a background
Eli goes on to quote Singal:
Likewise, when scientists bring forth solid evidence that sexual orientation is innate, or close to it, conservatives have lashed out against findings that would ‘normalize’ homosexuality. But the dispute over which sexual acts, if any, society should discourage is totally separate from the question of whether sexual orientation is, in fact, inborn.
Alas, this is just wrong. It might not matter where people’s sexual preferences come from, but, then again, it might. The moral status of any given sexual act obviously doesn’t affect the causal history of the desires relating to that act, but those two questions are very much not “totally separate.” To give a much more plausible example of how the two might be connected, we might very well decide that we want to deprogram pedophilia in anyone who’s unlucky enough to experience it – y’know, sorta like how religious conservatives want to deprogram same-sex attraction. In that case, it would be very helpful to know whether pedophilia was inborn, whether it developed due to some particular circumstances, or whatever. This may seem less relevant for homosexuality, but the principle is precisely the same. So, as before, Singal is struggling here to correctly identify which parts of the context are “extraneous” and in what sense they might be so.
As I read it Singal is just describing the familiar is-ought gap, in other words the logical separateness between what is true and what is good, how things are and how they should be. Sure, knowing why pedophilia exists and how it comes about is highly relevant if we’re trying to get rid of it, but it doesn’t in itself imply whether we should get rid of it or not. Now, obviously, we do want to get rid of pedophilia — I have a hard time imagining anything there is greater consensus about. That judgment is, however, totally separate from the causal history of the desires in question. That’s nothing more than the is-ought gap and Singal is only asserting that whether homosexuality is inborn is logically disconnected from whether it’s morally acceptable.
Granted, the is-ought gap is theoretical and can be successfully problematized when applied to real life where there are plenty of values we assume so automatically that we forget they’re not logically self-evident. A factual claim can indeed have moral and political implications, when seen against a background of taken-for-granted values and beliefs. If Eli is making that point I don’t think he’s making it clearly enough.
I suspect he most of all wants to push back against the phrase “totally separate”, not the specific meaning Singal intends with it. The meaning of a phrase is indefinite, and more so when it leaves its original context. People are justifiably wary of accepting a vaguely phrased claim that supposedly has a very specific interpretation, because they know that since their acceptance happens on the phrase level, there’s a very real possibility that it can be interpreted differently and used against them later.
Disagreeing about the purpose of disagreement
Next, Eli brings up the pushback from Prof. Emily Thorson, including one more off-putting swipe at Singal (“Singal’s somewhat naive embrace of erisology comes back to bite him”). He quotes the Atlantic article:
When I ran the concept of erisology by a couple of political scientists who study disagreement, I got some unexpected pushback. Though Nerst has claimed that ‘no one needs to be convinced’ of the needlessly adversarial quality of online discourse, the Syracuse University political scientist Emily Thorson isn’t buying it.
And a swipe at me:
I wonder what fancy erisology term we would use to describe that situation. But okay – what’s the story here?
I’d stick with what I said in the last post: it’s a complex disagreement, no doubt. Not too fancy but it serves.
He quotes Thorson’s answer:
Thorson argued that disagreements on Twitter or comment threads do not usually entail people ‘trying to understand each other but failing due to “pitfalls.” Rather, their goal is to affirm their identity, and often that involves aggressively demeaning someone who has a different identity from them. And so these conversations aren’t “dysfunctional”; they’re functioning exactly how the participants intend them to — as defenses of their identity, not as deliberative forums.’
And goes on:
Now, I think, we’re really getting somewhere, because now we can properly begin to challenge (or, at least, probe) Nerst’s idea of what it means for context to be “extraneous.” (And we can do so without Singal’s fairly obviously inept hand being involved.)
The swipes are getting tiresome and it’s a shame. Without them this criticism wouldn’t start smelling until the end.
Note that I myself haven’t used the word “extraneous” about context (and if I did, I’d mean it as relative to a certain viewpoint, not as a brute fact). Nor do I consider decoupling always right, which I explained in the very article Eli commented on. Given that, some medium-snark criticisms that follow from this assumption can be dismissed:
As Thorson suggests, extraneity requires an object. That is, if you want to say that such-and-such is extraneous, you have to be able to say that such-and-such is extraneous with respect to some particular process, goal, or what-have-you. What, then, is Nerst’s goal? Well, recall that he wants disputants to be “closer in understanding at the end [of a disagreement] than they were at the beginning.” Already, then, Nerst is artificially (and somewhat arbitrarily) limiting the range of goals that disagreement can be used to achieve. (Talk about someone whose views are seemingly grounded in extraneous context!) While some disagreements are indeed meant to increase understanding, others are meant to reinforce identities, as Thorson indicates. Still others, as we know, are meant to halt the truth-seeking process and just get people to look somewhere else. And I would readily bet that there are many more purposes than just those three. Nerst, then, really is narrowing his vision quite a lot, which makes it more than a little inappropriate for him to simply declare that his version of dialogue is the only one that counts as a “success.”
This does make me suspect Eli didn’t read through A Defense of Erisology all that carefully. Not that it would make him agree with me, but I would expect him to acknowledge what I said there.
I am of course aware that people have all kinds of reasons for engaging in argumentation and rhetoric. I’ve written about that before. I chose the word “unsuccessful” at a time when I was writing more or less to an empty room and didn’t expect philosophers to pick at my ramblings (and Eli seems to be some kind of philosopher, considering the title of his blog is “Rust Belt Philosophy”). It doesn’t have that much thought behind it. Honestly it was kind of tongue-in-cheek. I thought it was funny, in a sardonic sort of way, to call something unsuccessful when people aren’t quite as mature as I think they should be. I could pick another word. It doesn’t matter.
If I were to defend the choice I’d say this: when, for example, one person says something and another person answers with an argument that clearly addresses something different from what the first person meant, then I think it’s reasonable to construe it as “something going wrong”. But people aren’t actually trying to communicate in good faith! No, sometimes not (most often it’s a combination; see my lengthy discussion of Thorson’s criticism in the last post for a treatment of her objections), but their personal motives aren’t what I’m talking about.
The quoted paragraph seems to assume that I think the purpose of communication in a state of disagreement is reducible to the purposes of the parties engaging in it. And if I then think people are always doing their very best to understand each other but failing, then I’m very wrong. Well, that isn’t what I believe (not after 20 years of forum lurking with a functioning brain). Instead I posit social uses of disagreement that go over and above individuals’ private purposes.
This is similar to the private purpose of firms in a market economy being profit, while the social purpose is the creation of surplus value. The private purpose of a professional football player is to further their own career, while the social purpose is for the team to win. This works on the next level as well: the private purpose of the team is to win, and the social purpose is to produce riveting entertainment. And so forth.
The social purpose of “successful disagreement” is building trust and understanding, reducing extremism, improving individual knowledge, and evaluating ideas. Evaluating ideas well requires that those ideas are communicated with high enough fidelity, which is why rhetorical strategies that depend on preventing or disrupting that, even when successful, would count as a failure on a social level the same way cheating or anticompetitive practices in sports and business are still bad even though they benefit those who engage in them. Now consider that making everyone more aware of how those practices work might make them less effective.
That’s a value judgment about how people ought to behave, yes sireee. It can be criticized as not objectively true, and somebody else might value other things. Sure. Knock yourself out.
Understanding vs. true or false
The last section of the article contains the strongest clues to where Eli is coming from.
Moreover, what is Nerst even suggesting that we try to understand? After all, “understanding” requires an object just like “extraneous” does; you don’t just “understand” in a vacuum, you understand some particular thing. But what is Nerst saying that we should aim to understand? If you’re like me, you’ll guess that he wants us to understand the truth, i.e., the fact of the matter about whether intelligence is heritable, where homosexuality comes from, or whatever. In fact, though, that’s not it at all, as his website explains:
‘Disagreement’ can mean many things, but this is what I have in mind: A lot of online discourse is hostile and often needlessly adversarial (I trust no one needs to be convinced of this). Specifically, a lot of this disagreement is dysfunctional, by which I mean that it results from (or is exacerbated by) one or both of the parties, intentionally or unintentionally, misunderstanding the other party’s position or the nature of their differences.
Got that? For Nerst, “success” in dialogue is not about finding the truth of the matter under discussion. Rather, it’s about “understanding the other party.” Hopefully you can see for yourself why this might be a problematic (or, at the very very least, suboptimal) lens through which to view disagreement. If not, well, I guess I’d be happy to yell at you about it in the comments, thereby proving my point both verbally and demonstratively. (Here’s a taste: if the goal is to understand the other person, then there’s no such thing as “extraneous” context, so that whole line of thought is an oopsie.)
I really don’t like claiming that somebody doesn’t know what they’re talking about. It’s usually a shitty (and uninteresting) move, but now I really have to ask myself whether Eli has read much of my writing at all. He seems to only talk about what’s in the Atlantic article and What is Erisology?. The last (quite smug) sentence in parentheses seems intended to be a scathing rebuttal of my beliefs, which it doesn’t manage to be because it suggests I believe the opposite of what I do believe.
Considering what he wrote earlier, it seems his impression is that I think decoupling is something you always do as a step in the process of understanding a disagreement, and that thinking about context somehow stands in the way of understanding. I hope it’s quite obvious from my writing as a whole that I don’t believe anything of the sort.
Shouldn’t we understand the truth of the matter? Sure, but we can’t easily map complex disagreements to true-false values. A theme for almost everything I write about them is that they typically aren’t about the truth or falsity of some singular, well-defined claim but about the value of different clusters of heuristics, conceptualizations, low-resolution beliefs, attitudes, models, narratives, and complex, semi-coherent and semi-implicit interpretive and evaluative frameworks.
Jumping ahead to “is it true?” assumes a model of how beliefs work that I just don’t think is very good. You obviously can’t treat an abstract, low-resolution phrase pregnant with implications like “is intelligence heritable?” like a straightforward empirical question isolated from all kinds of other — often quite subtle and philosophical — beliefs, if you want to understand how the whole public, non-scientist conversation on the topic actually works. Eli doesn’t seem to think so either so I’m a little surprised he apparently claims we can do this without spending a lot of effort understanding each other first. In my view, such understanding isn’t an alternative to finding out the truth of the matter, it’s a prerequisite. Furthermore, if we want a chance to change somebody’s mind we also need to understand why and how they believe what they believe.
Even when it boils down to a yes/no answer, the translation/conversion required to get from a belief in somebody’s head to a concrete claim about the world is more complicated and more ambiguous than we typically think, and will almost certainly distort it. I could pick a belief from my head and translate it into a strictly empirical claim, but then it’s no longer necessarily in the same “format” as it was in my head.
Correctives are not signals (ref)
I do tend to push something that looks a lot like relativism. That’s not because I’m a relativist. It’s because I think it’s an ingredient public discourse needs more of (and I don’t mean half-hearted, selectively applied, self-serving relativism). Me putting more salt and a bit of mustard into the casserole doesn’t make “hey look, this guy wants to eat salt and mustard, eww” a good criticism. People already push a lot of Science™, Facts™ and Logic™ and nobody needs me joining the choir. Those are all good things but they need to be tempered (not replaced) by an understanding of science as a social process, facts as constructed, logic as not straightforwardly applicable to physical reality, and beliefs as networked and distributed. We should not assume people’s complete beliefs just based on what they choose to mention, but based on what they say together with the background against which they’re saying it.
Getting all confrontational, why and why not
If I wasn’t already a bit sour on Eli, his last paragraph would certainly have done it:
Anyway, none of this is very new to me. I’ve been writing about reasoning ethics and meta-reasoning for sevenish years, and Nerst is hardly breaking new ground (except, I guess, in terms of his vocabulary). But we might as well try to have some fun while we’re here, right? So after I post this, I’m gonna link it over on Nerst’s site and invite/challenge him to come over and explain himself (and hopefully educate himself by reading some of my own thoughts on the subject). If this works and he shows up, then we might be able to entertain ourselves a while, which would at least be something. And if he stays away, well, then we’ll have established a practical limit on his pursuit of “understanding,” which would be something, too.
Without this I wouldn’t have been as hesitant to comment, because I don’t think this arrogance should be rewarded with attention. I regret kind of “taking the bait”, as it were, but I didn’t want to give the impression that the criticism was difficult to address.
There’s plenty of discussion of erisology’s questionable newness, and of why I don’t find any proposed this-has-all-already-been-done all that convincing or interesting, in the last post, so I won’t go through that again. Nor do I think I’ll read his blog. I already have too much to read and this sample didn’t impress me.
The last quoted sentence can be read in several ways, I guess, but to return to what I said in the beginning, I take it that he suggests that I, because I go on about understanding and charity, owe him an effort to understand his point of view and possibly to counter his barbs with some cheek-turning. But I don’t owe him anything more than to not misrepresent him (at least not without acknowledging that I might be doing that). Yes, you should be charitable to people you engage with, but you don’t need to engage with anyone in particular, especially not if they aren’t charitable towards you.
Does that mean there is a practical limit on my pursuit of understanding? Well, yes. Of course. Everything has practical limits. Is that supposed to be a “gotcha”? I don’t spend all my time engaging with people, and when I do I certainly don’t prioritize those who seem to relish conflict. His style suggests he does, and so does his blog’s tagline: “GO AHEAD, CHANGE MY VIEW — I DARE YOU”. Alright. I don’t care to spar. Not with him or with anyone else. I like dancing better, and you can’t do that with somebody who insists on kickboxing.
A combative style, complete with swipes and condescension, doesn’t come naturally to me. The closest I’ve gotten to a tear-down is Rant on Arrival, and that was quite tame and prefaced by a disclaimer. So his and my disagreement on the matter of disagreement (besides, of course, that I don’t think he has a particularly firm grasp of what I believe) likely comes down to personality. Some enjoy confrontation, and the smart ones sometimes get their kicks by engaging in ritualized intellectual combat. Swipes and condescension are flourishes, little joyful, elegant pirouettes that add spice to a performance. They’re not indications of real hostility but a normal part of debate, and objecting to them like I do is just indicative of wimpy oversensitivity, or so the thinking goes. Well, I can enjoy that too, when directed at something I don’t like. But it’s a guilty pleasure. Such elements are unwelcome when you want something serious; they detract from the seriousness by making you distrust the writer.
Of course it’s galling as a specialist to have non-specialists like me act as if they know anything about what you think is your specialty. And if you’re educated in philosophy, a lot of ordinary thinking and arguing bothers you because it isn’t as detailed and rigorous as you’re accustomed to. You might very well consider it a moral duty to seek out and debunk bad (and harmful) thinking out there, with tools forged and sharpened in the nearest ivory tower. It can be useful. Even heroic. If you’re good at it you get to be a good guy and enjoy personal formidability. (Note that I’m not saying this necessarily applies to Eli; it’s general speculation, not specifically about him.)
The behavior is perfectly understandable, given certain priorities and proclivities. I even agree somewhat, although not as much as I used to. I was big into the whole hard-nosed Skeptic with a capital S thing in my teens and twenties but I’ve found the whole mindset much less appealing as I’ve grown older. Now I’m more interested in how things do make sense than how they don’t. Similarly, I no longer think the fact that you can poke a few holes in virtually any consumer-grade belief system with an undergraduate-philosophy-shaped stick is all that much more telling than the fact that you can blow up any building with a big enough bomb.
So, I don’t much care for most confrontational debate. It doesn’t do anything for me. Good criticism is valuable, but crappy criticism that doesn’t represent its object correctly — i.e. as intended, including the assumed background against which it is said and the standards to which it’s meant to conform — is not.
• • •
I don’t want to be a dick here but when one writes about philosophy one really shouldn’t be using “beg the question” to mean “raise the question”.
I’m not entirely sure what this means. It might be that some research — like intelligence research — could be divisive and therefore not conducive to understanding in the sense of non-conflict. Hence I should oppose it? If that’s the implication, it’s an example of a kind of formalist rhetorical trickery that leaves me thoroughly unimpressed. Just because I favor understanding as a goal of communication doesn’t mean I favor anything that can be construed as increasing some sense of understanding. Two things aren’t the same because you can refer to them with the same word. That seems very basic to me, but counterintuitively you find the opposite notion far too often in philosophy: a word has only one meaning and all uses have to be consistent with one interpretation. I guess that’s partly historical habit, and partly because it makes it easier to do philosophy. But ordinary language doesn’t work like that, and that doesn’t make it deficient.
Which you can’t until you operationalize it.
The reverse process happens when we get results from scientific experiments and try to interpret them in relation to our (low-resolution) beliefs.
This doesn’t mean anything radical, conspiratorial or woo-ish.
This gets complicated of course, for reasons I’ve discussed earlier: people disagree on the background, and what other backgrounds other people assume you have and they have, and which they assume you assume they have etc.
The scare quotes don’t seem to be doing much other than sneer.
I suspect this attitude is somewhat common among philosophers because it lends itself well to the kind of vigorous criticism expected in the field. But that ethos of vigorous criticism, meant to leave only the strongest of ideas standing, is also responsible for aspects of philosophy I find seriously unattractive. There’s a tendency to export one’s picky habits to normal discourse and thereby misunderstand it; to view minor flaws and single counterexamples as powerful refutations, to expect too much specificity and rigor from ordinary words (see note 2), or to consider unconcern, assumptions, simplifications and shortcuts to be mistakes and errors. Some of my philosophy frustrations are here.
This makes me think of Ullica Segerstråle’s distinction between “planter” and “weeder” attitudes in science, from her book Defenders of the Truth: The Sociobiology Debate that I’m currently rereading.