Erisology of Self and Will, Part 6: The Need and the Reasons

This is the 6th part of a series adapted from my 2009 Bachelor’s Thesis in philosophy.

Part 1 introduced the series and its premise: there are two ways to look at the self — a scientific way and a traditional way — and transferring statements from one to the other has weird effects.

Part 2 described the traditional view, using the philosopher C.A. Campbell as a representative.

Part 3 offered a sketch of an alternative view, assembled from background assumptions in the physical sciences.

Part 4 discussed some scientific disciplines with bearing on the self, and how their results are interpreted differently by the traditional paradigm vs. the scientific.

Part 5 gave some examples of people expressing “Campbellian” views online.

Here in part 6 I discuss the reasons why the traditional view persists when prescientific thinking on other topics often doesn’t.

So where does this Campbellianism come from? Even people with no personal commitment to supernaturalism get uneasy when pondering the implications of causation in human behavior. I think it’s the result of a sleight of hand that many philosophers have performed, unbeknownst to themselves, throughout western philosophical history[1].

Much has been written on the problem of free will. The prospect of it being “unfree”, as in determined, has often been considered unbearable. Why is that?

The idea of determinism in a broad sense can conflict with personal and political convictions about humans being free to do and be what they want, without any limitations or restrictions. Evidence of this is Segerstråle’s account of some of the reasons behind the vigorous protests against E.O. Wilson in the 1970s after the publication of Sociobiology, which contained observations and theories about the social behavior of humans as well as a number of other species. The following explains how some of the protesters felt:

However, later statements by group members indicated that the group was in fact not supportive of a blank-slate idea [here interpreted as the idea that culture creates the mind – my comment, JN]. The critics of sociobiology were against any kind of determinism, environmental as well as biological. Humans should be seen as free agents, having choices (see, for example, Larry Miller, 1978, speaking for the group).

Here the idea of the self being non-causal feels not only metaphysically but also practically and politically significant. The critics calling themselves The Sociobiology Study Group saw their mission as largely political: to counteract the perceived threat against the idea of political progress posed by discussion of humans being limited creatures.

Political freedom and perfectibility are big issues, but they are not the only reasons for fear of determinism. What one sees most notably in Campbell is something else. He aims to establish a conception of free will, and not just any conception but “the kind of free will required for moral responsibility”. We can see in the last comment by “Inquiry” in the previous part that this thinking is alive and well in the 21st century. Perhaps it’s even the leading reason why causation seems so worrying in the area of human choices. Further up, among those commenting on the finding of an “infidelity gene”, we find examples of this fear: that acknowledging causes for behaviors is tantamount to excusing and accepting those behaviors.

Now, this fear probably lies behind much of the need to posit a metaphysically free will. But I think the assumption rests upon an unfortunate and unnecessary conjunction of two separate ideas. One is the reasonable notion that for you to be responsible for something, the decision must be made by you. Campbell phrases this as “the agent must be the sole author” of their action.

The other is that human thought occurs separately from physical processes, as per Descartes. Combining the two creates a problem, since it entails treating any intrusion of normal causality into the decision-making process as just that, an intrusion. And as character traits are increasingly brought under the umbrella of biology, neurology and psychology, there is less and less for this incorporeal faculty to do. The result is reasoning like the following:

A: Moral responsibility requires that you yourself make the decisions.

B: Your thoughts operate in a non-physical realm, outside natural causation.

B → C: Natural causation in decision-making means you aren’t making those decisions.

A + C → D: Natural causation in decision-making is incompatible with moral responsibility.

The absurdity occurs in cases where one is unwilling to explicitly defend B but unable to see that this frees one from having to defend D, since C no longer holds. It then seems somehow desirable to introduce an element, any element, that operates outside of causation, and to believe that this accomplishes anything. When the libertarian philosopher reaches this point, which he has mistakenly been pursuing, one might ask exactly how a non-causal, singular event emerging from nowhere is supposed to have any bearing on our moral responsibility.
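To make the dependence on B explicit, here is a minimal propositional sketch of the argument above (the letters match the list; the symbols N, M and R are just my illustrative shorthand for “natural causation operates in the decision”, “you yourself make the decision” and “you are morally responsible”):

\begin{align*}
\text{A:}\quad & R \rightarrow M \\
\text{B:}\quad & \text{thought operates outside natural causation} \\
\text{C:}\quad & N \rightarrow \neg M \qquad \text{(plausible only given B)} \\
\text{D:}\quad & N \rightarrow \neg R \qquad \text{(from A and C: } N \rightarrow \neg M \text{ and } \neg M \rightarrow \neg R\text{)}
\end{align*}

D does follow from A together with C, but C has no independent support once B is dropped; giving up B therefore dissolves D while leaving the reasonable premise A untouched.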

A modern philosopher who does exactly this is Robert Kane, who is in the unusual position of trying to save a metaphysical free will (or, as I think it should be called, a non-causal element in human decision-making) from a materialist perspective. Kane’s convoluted metaphysics, which invokes quantum indeterminacy to get uncaused actions, is necessary to him because, despite being a materialist, he retains the idea that non-causality is required for responsibility. This, if anything, shows that C, and thus D, has taken on a life of its own and remains firmly lodged even in those who reject B.

Dennett speaks about Kane’s efforts:

The best attempt so far is by Robert Kane, in his 1996 book, The Significance of Free Will. Only a libertarian account, Kane claims, can provide the feature we—some of us, at least—yearn for, which he calls Ultimate Responsibility. /—/ A human mind has to be where the buck stops, Kane says, and only libertarianism can provide this kind of free will, the kind that can give us Ultimate Responsibility.

It is interesting and telling that Kane chooses to call what he’s after “Ultimate Responsibility”. This shows how he, and I suspect many others, see responsibility. To them it is an objectively real property that people may or may not possess, and we are in the process of examining whether we have it or not. Responsibility is in this case a metaphysical entity that either exists or doesn’t. And if this concept is to be real, then metaphysical free will (or a non-causal element in human decision-making) needs to be real too; no washed-out naturalist conception will do.

Galen Strawson uses the term Ultimate Responsibility as well (in The Impossibility of Moral Responsibility), but he uses it to illustrate something he says doesn’t exist. He applies the qualifier “Ultimate” deliberately, to separate it from the more ordinary, mundane responsibility we do have. But when it comes to the kind of responsibility that could motivate sending someone to eternal torment in hell, he says it simply does not and cannot exist.

What “Is moral responsibility real?” really means is “Are our anger at and punishment of moral transgressions justified?” And how do we determine that? Not by examining the structure of the universe. Which of our feelings are justified is a moral question whose answer must be decided, not an ontological question whose answer must be discovered. Right, wrong, good, evil, responsible, not responsible, justified, unjustified, etc. are concepts we use to regulate the social world, not things to find in the natural world.

This goes against an influential thread in western philosophy, where ethics is seen as an exercise in abstract thought through which we can find “the Right and the Good”, as if they were real things. This is partly Plato’s fault. A dissenting voice here, whose philosophy has stood up to scrutiny remarkably well over several centuries, is David Hume — who fairly early in the history of moral philosophy emphasized the role of emotional reactions.

This separateness between body and mind ties into a broader idea about concepts. Historically, the world has been thought to consist of an eclectic collection of diverse things whose existence was independent. Before Newton, the universe was thought to be divided into the superlunar and the sublunar worlds, which were fundamentally different[2]; before Darwin, a similar divide was thought to exist between humans and other animals. Nowadays the earth and the heavens are understood to be different emergent entities with a common ontological base, and so are humans and other animals (even though this has failed to sink in as much). But the analogous divide between the mental and the physical is still with us.

The opposite of this view of the world as consisting of separate and parallel strands is monism, the view that everything in the world is the multifaceted expression of one set of fundamental rules. The only real remaining version of this is materialism, where different things are different because of their structure and arrangement of matter, not because they are fundamentally separate kinds of things. A human could be turned into a dog by rearranging its matter; there would be no need to extract its “humanness” and inject it with “dogness”.

Reductionism is the intellectual application of materialism, with the aim of understanding the world by uncovering exactly how phenomena arise from the interaction of their constituent parts[3]. Sometimes this meets resistance: when we saw Dennett apply a reductionist perspective to love earlier, he was negating the antinaturalist and antireductionist notion that love has an existence parallel to the physical world rather than being a phenomenon arising within it.

Popular views of the self and the will are only one manifestation of this broader tendency. Thinking this way seems to be a natural human disposition, one we are very slowly shedding through centuries of science and rigorous intellectual discipline. Confronting ontologically independent concepts with monism and naturalism forces us to recognize that we conceptualize the world from a human perspective, and that our conception is therefore a function of our interaction with the rest of the world rather than of some inherent nature it has as seen from outside reality. Concepts are therefore not “real” in a deep, ontological sense. Where this view differs from radical constructionism is that I think our concepts are far from arbitrary, and that different views map better or worse onto the structure and regularities of the world.

This may well leave many people uncomfortable. It may sound like it leaves everything meaningless. That thought of course reflects a lingering paradigm where things like meaning need to be real in an independent metaphysical way to be real at all. Again we see the feeling that something doesn’t exist if it’s different from what we thought it was.

If one does not find the redefinition of things that matter to us satisfactory, and despite Flanagan’s elucidation still laments the loss of the Earth, then one is asking for more than a respectable philosophy can offer. Glib and grim maybe, but I think we’re in no worse a predicament than the Ptolemaic one Flanagan writes about. Our views of the nature of ourselves, of meaning, love and everything like them are, like our view of the nature of the earth, a matter of habit. I and many others with me are living proof that humans can live with a view of everything, including ourselves, as enworldened without falling into existential despair and nihilism[4]. We’re all too busy tending our gardens.

• • •

[1]
The “free will debate” seems to be a peculiarity of the western intellectual tradition. It doesn’t have the same status in the “eastern world”. This is well illustrated by the interest some theorists of the skeptical persuasion take in Buddhist teachings. Both Owen Flanagan (mentioned before) and Susan Blackmore (mentioned later in part 7) consider themselves Buddhists; Galen Strawson also has certain Buddhist influences in his writings. Buddhism teaches (roughly) that the goal of life is to extinguish the illusion of self and the desires on that self’s behalf. Contrast this with the theology of the west, where individual salvation is central, and one might get an inkling of why this has been such an important issue here. The take-home message is that obsession with the self and will in the western manner is not as necessary as we might think.

[2]
Newton put an end to this by claiming that the forces that make things fall here on earth are actually the same forces that make the planets circle about in the heavens.

[3]
“Reductionism” is often used as a term of abuse, most often by theorists in the humanities. It’s aimed at those who emphasize lower levels of causation (such as biology or neurology rather than, say, culture). However, naturalism requires exactly that for an explanation to be complete. Sure, you can explain something as emanating from a higher-level abstracted entity (like “culture”), but then that entity needs to be explained in turn (How did culture come about? What does it consist of?). To be complete, an explanation needs to, at some point, reach all the way down to the ground/physical level (in the way a building cannot hang in the air; it can hang from something and not touch the ground, but then whatever it hangs from has to be planted on the ground, and so forth). This is simply required by naturalism (materialism), which leads me to suggest that those who use reductionism as a pejorative term are actually uncomfortable with naturalism and materialism but don’t want to admit it.

[4]
2017 comment: Considering I’ve been subject to much more existential anxiety in the last few years than I ever was when I wrote that, I’m much less glib on that point today.

Did you enjoy this article? Consider supporting Everything Studies on Patreon.

6 thoughts on “Erisology of Self and Will, Part 6: The Need and the Reasons”

  1. The school of Pragmatism suggests that the value of a concept is found in its operational utility. And if we want to understand our concepts we need to observe them in actual use. Ideas are subject to natural selection, and we are most likely to pass on to subsequent generations those ideas that have proved themselves most useful.

    Responsibility is assigned by society to the person whose actions ought to be emulated or ought to be corrected. A person does something heroic and society praises them, thus encouraging children to emulate the desired behavior. A person criminally harms another person, or violates their rights, and society blames the actor, and subjects them as necessary to punishment and hopefully correction.

    The meaning of a word is found in specific operations. Understand the operations and you understand the concepts.

    Neither “responsibility” nor “free will” can be held responsible for an unjust penalty. To understand the penalty, we must understand what justice is about. What is it trying to do? How does it operate to achieve that objective? You may find this helpful:

    https://marvinedwards.me/2015/05/26/what-is-justice/

    Reduction of the whole to its parts is helpful to explain how the whole works. But we cannot be misled into thinking that in explaining something we have somehow “explained it away”.

    For example, neuroscience may progress to the point where the brain activity, say of a person sitting in a restaurant deciding what to order from the menu, is totally mapped from neuron to neuron. However, a second person in the restaurant may order the same item from the menu using different neuron pathways. From this we discover that there is a common process, “ordering lunch from a menu”, which is being carried out on different hardware platforms (different brains).

    So, what is the nature of “ordering lunch from a menu”? Can we say that it is controlled by the neurology, or is it controlled by programmable reasoning?

    If I had a drone helicopter, and programmed it to detect and maintain a stationary height, then is the reasoning in the hardware or in the software? Or is it actually in the running process? Turn it on and the drone ascends to the selected altitude and bobs up and down there. Turn it off and the drone falls to the ground.

    So, is it in the hardware, the software, or in the active processing of the software by the hardware?

    1. I pretty much agree with what you said (I’ll check out your link when I have the time). As for the last question, in the drone example I think the “reasoning” should be said to be in the running process but other interpretations can make sense depending on context. For brains there is obviously not the same division between hardware and software.

      1. Yes, it’s kind of weird for me to suggest the software is independent of the brain. But a clearer example might be a book of recipes. Someone writes a book of recipes and later dies. Someone else, a hundred years later, picks up the book at the library and begins trying out the recipes. The recipes are clearly programs, and they also exist separately from the two brains.

        It’s similar to a computer program written in a popular language that is implemented on a variety of different hardware platforms via different operating systems (Windows vs OS2).

        If the program were not independent of the brain, then the only way it could be transferred from one mind to another would be a physical exchange of neurons (or perhaps a Vulcan “mind-meld”).

        Education, or more specifically skills training, is a self-programming task, where new behaviors are passed on through demonstration or description, and committed to verbal, sensory, and muscle memory through practical exercises.

        The program would exist separately from the brain, initially, until it is deliberately installed into the neural network, and then accessed from within that network whenever that task needed to be done. The operation within the brain would be all brain, of course.

        Anyways, if we wanted to explain what the theist is describing as something not physical but nonetheless controlling the activity of the physical brain, that might be a way to do it.

        I have this underlying question as to whether everyone is actually referring to the same thing, but just describing it differently. Basically, we are all dealing with the same physical reality. So, in operating with that reality our actions are going to be similar. For example, my mother was a Christian all her life, but she took us to the doctor when needed, rather than relying on faith in miracles. And there should be a similar correspondence in behavior and attitude for the majority of moral problems as well.

        But I’m rambling now. Best to move on.

        1. I haven’t thought enough about this to give a detailed response. I should though, because it’s an interesting question. There are obviously ways to transfer information from brain to brain but as far as I know there isn’t any definitive philosophical account of how exactly that works.

          I’m doubtful whether the relationship between brains and the information they contain (and can output and install) is well described by the hardware-software dichotomy. The differences seem so great that the metaphor might hurt as much as it helps.
