R. Scott Bakker's Blog
May 13, 2025
No Whimpering Here
I think it’s fair to say it’s beginning in earnest now. A bit faster than I thought, and pretty damn close to as predicted. The plunging cost of reality is allowing the simulation of crisis events to cue sustained engagement, driving advertising revenue, fueling gushers of pollution. Herr Goering, the adorable clown, is now the leader. All the toothbrush moustaches lurk behind the stage, and Curtis Yarvin is growing himself a little Dugin beard. Welcome to the Age of the Fat Fascist, stage one of the Semantic Apocalypse.
I’d ask you to buy a ticket but I see you’ve already found your seat.
February 19, 2020
Egnor Confounded
Once again, in the midst of all the ad hominem nonsense coming from the Trump-Newscorp-Combine, we find another theorist, this time neuroscientist (and creationist) Michael Egnor, embracing an ad hominem dismissal of eliminativism on the Mind Matters podcast, which has been partially transcribed and posted under the title, “Why Eliminative Materialism Cannot Be A Good Theory of Mind.”
Where Trump hews to what is called the ‘abusive ad hominem,’ Egnor espouses the tu quoque, the argument that intentional eliminativism is self-refuting because eliminativists themselves use intentional terms. This is essentially the same argument my old high school girlfriend’s mother would use to refute my atheism: every time I uttered the word “God” she would cry, “See! You believe in Him!”
The same way God doesn’t have to exist for the term “God” to do a tremendous amount of work, terms like “beliefs” or “reasons” and so on don’t need referents to do a tremendous amount of work.
The story is a good deal more complicated than this when it comes to intentional idioms, of course: one needs to explain, among other things, why so many theorists run afoul of this particular confound. But the tu quoque, as applied against eliminativism, at least, is every bit as bankrupt.
Enter Egnor:
[Identity Theory has] been discarded because its logical nonsense. Every attribute of the mind, reason, emotion, perception, all of those things are completely different from matter. That is, one describes matter as extensions in space; one describes perceptions and reason and emotions in completely different ways. There’s no overlap between them so mental states can’t be the same thing as physical states. They actually don’t share any properties in common. They’re clearly related to one another in important ways but they’re not the same thing.
Eliminative materialists go one step further. They actually say that there are no mental states, that there is only the brain. Which is kind of an odd thing to say because what eliminative materialists are saying is that their ideas are mindless.
How can you have a proposition that the mind doesn’t exist? That means propositions don’t exist and that means you don’t have a proposition.
Let’s go through this sentence by sentence…
[Identity Theory has] been discarded because its logical nonsense. Simply not true. Identity Theory has fallen out of favour because, like Egnor, it possesses no compelling account of intentional phenomena. As we shall see, the “logical nonsense” here belongs entirely to Egnor.
Every attribute of the mind, reason, emotion, perception, all of those things are completely different from matter. Because, Egnor thinks, these things are exceptional, somehow distinct from the natural world as we have come to understand it. It’s important to keep in mind who’s making the more extraordinary claim here: The eliminativist is saying intentional properties only seem exceptional, much the same way celestial properties once seemed exceptional, because we lack perspective. Egnor is saying they really are exceptional.
That is, one describes matter as extensions in space; one describes perceptions and reason and emotions in completely different ways. Yes, heuristically, in source insensitive ways. How else are humans supposed to understand themselves and one another? Given the astronomically complicated nature of the systems involved, our ancestors had to rely on hacks to communicate facts pertaining to their brain states, which is to say, ways to report brain states absent any knowledge of brain states. Egnor, on the other hand, would have us ignore this rather obvious cognitive dilemma, and argue that in addition to brains, we also evolved this secondary, exceptional ontological order, the extension of our intentional vocabulary.
There’s no overlap between them so mental states can’t be the same thing as physical states. They actually don’t share any properties in common. There’s (almost) no overlap between them because intentional cognition is heuristic cognition, a system that neglects the high-dimensional facts of the systems involved, relying instead on cues systematically related to those systems. Those cues appear to possess an exceptional nature because we lack the metacognitive resources required to high-dimensionally source them, to intuit them as belonging to nature more generally. Given biocomplexity, it’s hard to imagine how it could be any other way.
They’re clearly related to one another in important ways but they’re not the same thing. And this, of course, is the million dollar question, the one that ecological eliminativism, at least, actually answers. Egnor would lead us into the exceptionalist labyrinth, and brick up all the exits with his fallacious tu quoque.
Eliminative materialists go one step further. They actually say that there are no mental states, that there is only the brain. Which is kind of an odd thing to say because what eliminative materialists are saying is that their ideas are mindless. Intentionalists are forever telling eliminativists what they “really mean.” Intentional cognition is mandatory: we simply have no way of reporting biological systems short its heuristic machinations. But one can agree that the hacks belonging to intentional cognition are mandatory without likewise asserting that intentional exceptionalism is mandatory. As with “God,” I can assert that “mind” is a useful hack in certain cognitive situations without automatically asserting that minds (as Egnor theorizes them) are real.
How can you have a proposition that the mind doesn’t exist? See above.
That means propositions don’t exist and that means you don’t have a proposition. No, that means I’m employing a hack that works quite well in certain problem-solving contexts. Cognitive neuroscience, unfortunately, isn’t one of them, as Egnor’s utter inability to solve any of the problems of consciousness and intentionality attests.
For me, the most egregious thing about the post lies with Mind Matters, not Egnor. They actually quote William Ramsey’s excellent SEP article on eliminativism, but they remain utterly mum on the devastating critique Ramsey provides of tu quoque counter-arguments such as Egnor’s. If I argue that intentional terms have no extension, that only various metacognitive confounds make it seem that way, then arguing that my position is absurd because I use intentional terms clearly begs the question. It is, to use Egnor’s phrase, logical nonsense.
February 10, 2020
Lollipop World
What are the odds that I would finish writing a near-future viral thriller (The Lollipop Factory) just as 2019-nCoV was becoming entrenched in Wuhan? So, as it turned out, the first facts I wanted to know when I caught wind of the outbreak were things like the resource requirements for treatment, the average transmission rate per person, and whether transmission was asymptomatic. The sudden rush to build new hospitals answered the first question: 2019-nCoV was a resource-intensive disease. As it turns out, some 18% of those with verified cases require intensive care. This fact became mind-boggling as more and more estimates of the transmission rate bubbled to the surface of the web: on average, investigators think 2019-nCoV is around twice as contagious as the seasonal flu. And if this weren’t bad enough, we now have solid evidence of asymptomatic transmission: as relieved as I was to learn that infected children weren’t getting sick, I understood the kind of epidemiological nightmare this represented. How do you contain a disease you can’t see?
So what’s the upshot?
2019-nCoV is more difficult to contain than the seasonal flu, and so likely beyond containment short of severe and sustained (i.e., economic-activity-killing) restrictions on face-to-face interaction. Either way, we are likely at the beginning of the wildfire season, not the middle, nor the end.
The lethality of 2019-nCoV will be a function of the resources available to treat critical cases. If this reaches influenza pandemic proportions, then 2019-nCoV will likely be more, not less, deadly than SARS (which killed, given the resources available at the time, around 10% of those infected).
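The compounding logic here — a transmission rate that doubles the flu's, an ~18% critical-care rate, and lethality that depends on whether ICU capacity keeps up — can be made concrete with a back-of-envelope sketch. This is not epidemiology, just naive unmitigated exponential growth; the reproduction number, serial interval, and seed count below are illustrative assumptions, with only the ~18% critical-care figure taken from the post.

```python
# Back-of-envelope sketch, NOT an epidemiological model: unmitigated
# exponential growth under an assumed reproduction number. All parameters
# except the ~18% critical-care rate cited above are hypothetical.

def naive_case_curve(r0: float, serial_interval_days: float,
                     seed_cases: int, days: int) -> list[float]:
    """New cases per generation under unmitigated exponential growth."""
    generations = int(days / serial_interval_days)
    return [seed_cases * r0 ** g for g in range(generations + 1)]

# Assumed: R0 = 2.5, one generation every 7 days, 40 seed cases, 8 weeks.
curve = naive_case_curve(r0=2.5, serial_interval_days=7, seed_cases=40, days=56)
icu_demand = [cases * 0.18 for cases in curve]  # ~18% of verified cases critical

print(f"cases in the 8th generation: {curve[-1]:,.0f}")
print(f"implied critical-care load:  {icu_demand[-1]:,.0f}")
```

Even with these toy numbers, eight generations turn a few dozen seed cases into tens of thousands, and the implied critical-care load dwarfs any plausible hospital-building program — which is the point about lethality being a function of available resources.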
Personally, given the way Chinese authorities bungled the outbreak at the start, and given the alarming tendency of the WHO and CDC to communicate only the most optimistic appraisals of 2019-nCoV, I think this will be the biggest thing to hit humanity since World War II.
For the critically minded, most of the estimates referenced above can be found aggregated here. There are countless caveats, of course, including the mutability of 2019-nCoV itself, which could, like SARS, become less lethal over time. But don’t be lulled by calm-at-all-costs bureaucrats or the nothing-to-see-here Wall Street Bulls: the stakes may not be apocalyptic, but they are civilizational.
January 6, 2020
Discontinuity Thesis: A ‘Birds of a Feather’ Argument Against Intentionalism*
“Scholars who study intentional phenomena generally tend to consider them as processes and relationships that can be characterized irrespective of any physical objects, material changes, or motive forces. But this is exactly what poses a fundamental problem for the natural sciences. Scientific explanation requires that in order to have causal consequences, something must be susceptible of being involved in material and energetic interactions with other physical objects and forces.” Terrence Deacon, Incomplete Nature, 28
“Exactly how are consciousness and subjective experience related to brain and body? It is one thing to be able to establish correlations between consciousness and brain activity; it is another thing to have an account that explains exactly how certain biological processes generate and realize consciousness and subjectivity. At the present time, we not only lack such an account, but are also unsure about the form it would need to have in order to bridge the conceptual and epistemological gap between life and mind as objects of scientific investigation and life and mind as we subjectively experience them.” Evan Thompson, Mind in Life, x
“Norms (in the sense of normative statuses) are not objects in the causal order. Natural science, eschewing categories of social practice, will never run across commitments in its cataloguing of the furniture of the world; they are not by themselves causally efficacious—no more than strikes or outs are in baseball. Nonetheless, according to the account presented here, there are norms, and their existence is neither supernatural nor mysterious. Normative statuses are domesticated by being understood in terms of normative attitudes, which are in the causal order.” Robert Brandom, Making It Explicit, 626
What I would like to do is run through a number of different discontinuities you find in various intentional phenomena as a means of raising the question: What are the chances? What’s worth noting is how continuous these alleged phenomena are with each other, not simply in terms of their low-dimensionality and natural discontinuity, but in terms of mutual conceptual dependence as well. I made a distinction between ‘ontological’ and ‘functional’ exemptions from the natural even though I regard them as differences of degree because of the way it maps stark distinctions in the different kinds of commitments you find among various parties of believers. And ‘low-dimensionality’ simply refers to the scarcity of the information intentional phenomena give us to work with—whatever finds its way into the ‘philosopher’s lab,’ basically.
So with regard to all of the following, my question is simply, are these not birds of a feather? If not, then what distinguishes them? Why are low-dimensionality and supernaturalism fatal only for some and not others?
.
Soul – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts of the Soul, you will find it consistently related to Ghost, Choice, Subjectivity, Value, Content, God, Agency, Mind, Purpose, Responsibility, and Good/Evil.
Game – Anthropic. Low-dimensional. Functionally exempt from natural continuity (insofar as ‘rule governed’). Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Game is consistently related to Correctness, Rules/Norms, Value, Agency, Purpose, Practice, and Reason.
Aboutness – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Aboutness is consistently related to Correctness, Rules/Norms, Inference, Content, Reason, Subjectivity, Mind, Truth, and Representation.
Correctness – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Correctness is consistently related to Game, Aboutness, Rules/Norms, Inference, Content, Reason, Agency, Mind, Purpose, Truth, Representation, Responsibility, and Good/Evil.
Ghost – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts of Ghosts, you will find it consistently related to God, Soul, Mind, Agency, Choice, Subjectivity, Value, and Good/Evil.
Rules/Norms – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Rules and Norms are consistently related to Game, Aboutness, Correctness, Inference, Content, Reason, Agency, Mind, Truth, and Representation.
Choice – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Embodies inexplicable efficacy. Choice is typically discussed in relation to God, Agency, Responsibility, and Good/Evil.
Inference – Anthropic. Low-dimensional. Functionally exempt (‘irreducible,’ ‘autonomous’) from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Inference is consistently related to Game, Aboutness, Correctness, Rules/Norms, Value, Content, Reason, Mind, A priori, Truth, and Representation.
Subjectivity – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Subjectivity is typically discussed in relation to Soul, Rules/Norms, Choice, Phenomenality, Value, Agency, Reason, Mind, Purpose, Representation, and Responsibility.
Phenomenality – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. Phenomenality is typically discussed in relation to Subjectivity, Content, Mind, and Representation.
Value – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Value discussed in concert with Correctness, Rules/Norms, Subjectivity, Agency, Practice, Reason, Mind, Purpose, and Responsibility.
Content – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Content discussed in relation with Aboutness, Correctness, Rules/Norms, Inference, Phenomenality, Reason, Mind, A priori, Truth, and Representation.
Agency – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Agency is discussed in concert with Games, Correctness, Rules/Norms, Choice, Inference, Subjectivity, Value, Practice, Reason, Mind, Purpose, Representation, and Responsibility.
God – Anthropic. Low-dimensional. Ontologically exempt from natural continuity (as the condition of everything natural!). Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds God discussed in relation to Soul, Correctness, Ghosts, Rules/Norms, Choice, Value, Agency, Purpose, Truth, Responsibility, and Good/Evil.
Practices – Anthropic. Low-dimensional. Functionally exempt from natural continuity insofar as ‘rule governed.’ Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Practices are discussed in relation to Games, Correctness, Rules/Norms, Value, Agency, Reason, Purpose, Truth, and Responsibility.
Reason – Anthropic. Low-dimensional. Functionally exempt from natural continuity insofar as ‘rule governed.’ Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Reason discussed in concert with Games, Correctness, Rules/Norms, Inference, Value, Content, Agency, Practices, Mind, Purpose, A priori, Truth, Representation, and Responsibility.
Mind – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Mind considered in relation to Souls, Subjectivity, Value, Content, Agency, Reason, Purpose, and Representation.
Purpose – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Purpose discussed along with Game, Correctness, Value, God, Reason, and Representation.
A priori – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One often finds the A priori discussed in relation to Correctness, Rules/Norms, Inference, Subjectivity, Content, Reason, Truth, and Representation.
Truth – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Truth discussed in concert with Games, Correctness, Aboutness, Rules/Norms, Inference, Subjectivity, Value, Content, Practices, Mind, A priori, and Representation.
Representation – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Representation discussed in relation with Aboutness, Correctness, Rules/Norms, Inference, Subjectivity, Phenomenality, Content, Reason, Mind, A priori, and Truth.
Responsibility – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Responsibility is consistently related to Game, Correctness, Aboutness, Rules/Norms, Inference, Subjectivity, Reason, Agency, Mind, Purpose, Truth, Representation, and Good/Evil.
Good/Evil – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Good/Evil consistently related to Souls, Correctness, Subjectivity, Value, Reason, Agency, God, Purpose, Truth, and Responsibility.
.
The big question here, from a naturalistic standpoint, is whether all of these characteristics are homologous or merely analogous. Are the similarities ontogenetic, the expression of some shared ‘deep structure,’ or merely coincidental? For me, this is one of the most significant questions that never gets asked in cognitive science. Why? Because everybody has their own way of divvying up the intentional pie (including interpretivists like Dennett). Some of these items are good, and some of them are bad, depending on whom you talk to. If these phenomena were merely analogous, then this division need not be problematic—we’re just talking fish and whales. But if these phenomena are homologous—if we’re talking whales and whales—then the kinds of discursive barricades various theorists erect to shelter their ‘good’ intentional phenomena from ‘bad’ intentional phenomena need to be powerfully motivated.
Pointing out the apparent functionality of certain phenomena versus others simply will not do. The fact that these phenomena discharge some kind of function somehow seems pretty clear. It seems to be the case that God anchors the solution to any number of social problems—that even Souls discharge some function in certain, specialized problem-ecologies. The same can be said of Truth, Rule/Norm, Agency—every item on this list, in fact.
And this is precisely what one might expect given a purely biomechanical, heuristic interpretation of these terms as well (with the added advantage of being able to explain why our phenomenological inheritance finds itself mired in the kinds of problems it does). None of these need count as anything resembling what our phenomenological tradition claims to explain the kinds of behaviour that accompany them. God doesn’t need to be ‘real’ to explain church-going, any more than Rules/Norms do to explain rule-following. Meanwhile, the growing mountain of cognitive scientific discovery looms large: cognitive functions generally run ulterior to what we can metacognize for report. Time and again, in context after context, empirical research reveals that human cognition is simply not what we think it is. As ‘Dehaene’s Law’ states, “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79). Perhaps this is simply what intentionality amounts to: a congenital ‘overestimation of awareness,’ a kind of WYSIATI or ‘what-you-see-is-all-there-is’ illusion. Perhaps anthropic, low-dimensional, functionally exempt from natural continuity, inscrutable in terms of natural continuity, source of perennial controversy, and possesses inexplicable efficacy are all expressions of various kinds of neglect. Perhaps it isn’t just a coincidence that we are entirely blind to our neuromechanical embodiment and that we suffer this compelling sense that we are more than merely neuromechanical.
How could we cognize the astronomical causal complexities of cognition? What evolutionary purpose would it serve?
What impact does our systematic neglect of those capacities have on philosophical reflection?
Does anyone really think the answer is going to be ‘minimal to nonexistent’?
* Originally posted June 16th, 2014
December 27, 2019
Eliminativist Interventions: Gallagher on Neglect and Free Will
In the previous post I tried to show how the shape of the free will debate can be explained in terms of the differences and incompatibilities between source-sensitive (causal) and source-insensitive (intentional) cognition. Rather than employ the overdetermined term, ‘free will,’ I considered the problem in terms of ‘choice-talk,’ the cognitive systems and language we typically employ when reporting behaviour. I then tried to show how this simple step sideways allows us to see the free will debate as a paradigmatic, intellectual ‘crash space,’ a point where the combination of heuristic neglect and intellectual innovation generates systematic cognitive illusions.
As it happened, I read Shaun Gallagher’s excellent Enactivist Interventions: Rethinking the Mind while picking at this piece, and lo, discovered that he too tackles the problem of free will. I wrote what follows as an inescapable consequence.
Gallagher’s approach to the question of free will is diagnostic, much like my own. First, he wants to characterize how the problem is canonically posed, the ‘common understanding of the question,’ then he wants to show how this characterization gets the problem wrong. Discussing Libet’s now infamous ‘free won’t’ experiment, he points out that “In the experimental situation we are asked to pay attention to all of the processes that we normally do not attend to, and to move our body in a way that we do not usually move it…” which is to say, precisely those processes choice-talk systematically neglects. As he writes:
“These experiments, however, and more generally the broader discussions of motor control, have nothing to tell us about free will per se. If they contribute to a justification of perceptual or epiphenomenal theories of how we control our movement, these are not theories that address the question of free will. The question of free will is a different question.”
But why is the free will question a different question? Gallagher offers two different reasons why the question of free will simply has no place in the neurophysiology of decision-making:
“The attempt to frame the question of free will in terms of these subpersonal processes—either to dismiss it or to save it—is misguided for at least two reasons. First, free will cannot be squeezed into the elementary timescale of 150–350 milliseconds; free will is a longer-term phenomenon and, I will argue, it involves consciousness. Second, the notion of free will does not apply primarily to abstract motor processes or even to bodily movements that make up intentional actions—rather it applies to intentional actions themselves, described at the most appropriate pragmatic level of description.”
Essentially, Gallagher’s offering his own level-of-description argument. The first reason choice-talk has no place in neurophysiological considerations is that it only applies to the time-scale of personal action, and not to the time-scale of neurophysiological processes. This seems a safe enough assumption, given the affinity of choice-talk with personal action more generally. The problem is that we already know that free will has no application in neurophysiology—that it is expunged. The question, rather, is whether source-talk applies to personal time-scales. And the problem, as we saw above, is that it most certainly does. We can scale up the consequences of Libet’s experiment, talk of the brain deciding before conscious awareness of our decisions. In fact, we do this whenever we use biomedical facts to assess responsibility. Certainly, we don’t want to go back to the days of condemning the character of kids suffering ADHD and the like.
Gallagher’s second reason is that choice-talk only applies to the domain of intentional actions. He introduces the activity of lizard hunting to give an example of the applicability and inapplicability of choice-talk (and hence, free will). What’s so interesting here, from a heuristic neglect standpoint, is the way his thinking continually skirts around the issue of source-insensitivity.
“I am not at all thinking about how to move my body—I’m thinking about catching the lizard. My decision to catch the lizard is the result of a consciousness that is embedded or situated in the particular context defined by the present circumstance of encountering the lizard, and the fact that I have a lizard collection. This is an embedded or situated reflection, neither introspective nor focused on my body. It is ‘a first-person reflective consciousness that is embedded in a pragmatically or socially contextualized situation.’”
Gallagher’s entirely right that we systematically neglect our physiology in the course of hunting lizards. Choice-talk belongs to a source-insensitive regime of problem solving—Gallagher himself recognizes as much. We neglect the proximal sources of behaviour and experience, focusing rather on the targets of those sources. Because this regime exhibits source-insensitivity, it relies on select correlations, cues, to the systems requiring solution. A face, for instance, is a kind of cuing organ, allowing others to draw dramatic conclusions on the basis of the most skeletal information (think happy faces, or any ‘emoticon’). The physiology of the expression-animating brain completely eludes us, and yet we can make striking predictions regarding what it will do next given things like ancestral biological integrity and similar training. A happy face on a robot, on the other hand, could mean anything. This ecological dependence is precisely why source-insensitive cognitive tools are so situational, requiring the right cues in the right circumstances to reliably solve select sets of problems—or problem ecologies.
So, Gallagher is right to insist that choice-talk, which is adapted to solve in source-insensitive or ‘shallow’ cognitive ecologies, has no application in source-sensitive or ‘deep’ cognitive ecologies. After all, we evolved these source-insensitive modes because, ancestrally speaking, biological complexity made source-sensitive cognition of living systems impossible. This is why our prescientific ancestors could go lizard hunting too.
Gallagher is also largely right to say that sourcing lizard-hunting à la neuroscience has nothing to do with our experience of hunting lizards—so long as everything functions as it should. Sun-stroke is but one of countless potential ‘choice-talk’ breakers.
But, once again, the question is whether source-talk applies to the nature of lizard hunting—which it certainly does. How could it not? Lizard hunting is something humans do—which is to say, biological through and through. Biology causes us to see lizards. Biology also causes us (in astronomically complicated, stochastic ways) to hunt them.
Gallagher’s whole argument hinges on an apple and orange strategy, the insistence that placing neurophysiological apples in the same bushel as voluntary oranges fundamentally mistakes the segregate nature of oranges. On my account both choice-talk and source-talk possess their respective problem-ecologies while belonging to the same high-dimensional nature. Choice-talk belongs to a system adapted to source-insensitive solutions, and as such, possesses a narrow scope of application. Source-talk, on the other hand, possesses a far, far broader scope of application, so much so that it allows us to report the nature of choice-talk. This is what Libet is doing. His findings crash choice-talk because choice-talk actually requires source-neglect to function happily.
On Gallagher’s account, free will and neurophysiology occupy distinct ‘levels of description,’ the one belonging to ‘intentional action,’ and the other to ‘natural science.’ As with the problem ecology of choice-talk, the former level is characterized by systematic source-neglect. But where this systematic neglect simply demarcates the problem-ecology of choice-talk from that of source-talk in my account, in Gallagher it demarcates an ontologically exceptional, low-dimensional ecology, that of ‘first-person reflective consciousness… embedded in a pragmatically or socially contextualized situation.’
This is where post-cognitivists, having embraced high-dimensional ecology, toss us back into the intractable lap of philosophy. Gallagher, of course, thinks that some exceptional twist of nature forces this upon cognitive science, one that the systematic neglect of sources in things like lizard hunting evidences. But once you acknowledge neglect, the way Gallagher does, you have no choice but to consider the consequences of neglect. Magicians, for instance, are masters at manipulating our intuitions via neglect. Suppress the right kind of information, and humans intuit exceptional entities and events. Is it simply a coincidence that we both suffer source-neglect and intuit exceptional entities and events when reflecting on our behaviour?
How, for instance, could reflection hope to distinguish the inability to source from the absence of sources? Gallagher agrees that metacognition is ecological—that there is no such thing as the ‘disembodied intellect.’ “Even in cases where we are able to step back,” Gallagher writes, “to detach ourselves from the demands of the immediate environment, and to engage in a second-order, conceptual deliberation, this stepping back does not make thinking any less of an embodied/intersubjective skill.” Stepping back does not mean stepping out, despite seeming that way. Human metacognition is radically heuristic, source-insensitive through and through. Deliberative reflection on the nature of experience cannot but systematically neglect sources. This is why we hallucinate ‘disembodied intellects’ in the first place! We simply cannot, given our radically blinkered metacognitive vantage, distinguish confounds pertaining to neglect from properties belonging to experience. (The intuition, in fact, cuts the other way, which is why the ball of discursive yarn needs to be unraveled in the first place, why post-cognitivism is post.)
Even though Gallagher relies on neglect to relativize choice-talk to a particular problem-solving domain (his ‘level of description’), he fails to consider the systematic role played by source-insensitivity in our attempts to cognize cognition. He fails, in other words, to consider his own theoretical practice in exhaustively ecological terms. He acknowledges that it has to be ecological, but fails to consider what this means. As a result, he trips into phenomenological and pragmatic versions of the same confounds he critiques in cognitivism. Disembodied intellects become disembodied embodied intellects.
To be embodied is to be high-dimensional, to possess nearly inexhaustible amounts of natural information. To be embodied, in other words, is to be susceptible to source-sensitive cognition. Except, Gallagher would have you believe, when it’s not, when the embodiment involves intentionality, in which case, we are told, source-talk no longer applies, stranding us with the low-dimensional resources of source-insensitive cognition (which is to say, perpetual disputation). ‘Disembodied intellects’ (one per theorist) are traded for irreducible phenomenologies (one per theorist) and/or autonomous normativities (one per theorist), a whole new set of explananda possessing natures that, we are assured, only intentional cognition can hope to solve.
Gallagher insists that intentional phenomena are embodied, ‘implicit,’ as he likes to say, in this or that high-dimensional ecological feature, only at a ‘level of description’ that only intentional cognition can solve. The obvious problem, of course, is that the descriptive pairing of low-dimensional intentional phenomena like ‘free will’ with high-dimensional ecologies amounts to no more than a rhetorical device short some high-dimensional account of intentionality. Terms such as ‘implicit,’ like ‘emergent’ or ‘autopoietic,’ raise far more questions than they answer. How is intentionality ‘implicit’ in x? How does intentionality ‘emerge’ from x? Short some genuine naturalization of intentionality, very little evidences the difference between Gallagher’s ‘embodiment’ and haunting—‘daimonic possession.’
The discursively fatal problem, however, is that intentional cognition, as source-insensitive, relies on strategic correlations to those natures—and thus has no application to the question of natures. These are ‘quick and dirty’ systems adapted to the economical solution of practical problems on the fly. Only neglect makes it seem otherwise. This is why post-cognitivism, like cognitivism more generally, cannot so much as formulate, let alone explain, its explananda in any consensus-commanding way. On Gallagher’s account, institutional philosophy remains firmly in charge of cognitive scientific theorization, and will remain so in perpetuity as a ‘philosophy of nature’ (and in this respect, he’s more forthright than Hutto and Myin, who rhetorically dress their post-cognitive turn as an ‘escape’ from philosophy).
Ecological eliminativism suffers neither of these problems. Choice-talk has its problem-ecology. Source-talk has its problem-ecology. The two evolved on separate tracks, but now, thanks to radical changes in human cognitive ecology, they find themselves cheek by jowl, causing the former to crash with greater and greater frequency. This crash occurs, not because people are confusing ‘ontologically distinct levels of description,’ one exceptional, the other mundane, but because the kind of source-neglect required by the former does not obtain the way it did ancestrally. We should expect, moreover, the frequency of these crashes to radically increase as cognitive science and its technologies continue to mature. Continued insistence on ontologically and/or functionally exceptional ‘levels of description’ all but blinds us to this looming crisis.
Having acknowledged the fractionate and heuristic nature of deliberative metacognition, having acknowledged source-neglect, Gallagher now needs to explain what makes his exceptionalism exceptional, why the intentional events and entities he describes cannot be explained away as artifacts of inevitable heuristic misapplication. He finds neglect useful, but only because he neglects to provide a fulsome account of its metacognitive consequences. It possesses a second, far sharper edge.
December 19, 2019
If Free-Will were a Heuristic…
Ecological eliminativism provides, I think, an elegant way to understand the free-will debate as a socio-cognitive ‘crash space,’ a circumstance where ecological variance causes the systematic breakdown of some heuristic cognitive system. What follows is a diagnostic account, and as such will seem to beg the question to pretty much everyone it diagnoses. The challenge it sets, however, is abductive. In matters this abstruse, it will be the power to explain and synthesize that will carry the theoretical morning if not the empirical day.
As hairy as it is, the free-will debate, at least in its academic incarnation, has a trinary structure: you have libertarians arguing the reality of how decision feels, you have compatibilists arguing endless ways of resolving otherwise manifest conceptual and intuitive incompatibilities, and you have determinists arguing the illusory nature of how decision feels.
All three legs of this triumvirate can be explained, I think, given an understanding of heuristics and the kinds of neglect that fall out of them. Why does the feeling of free will feel so convincing? Why are the conceptualities of causality and choice incompatible? Why do our attempts to overcome this incompatibility devolve into endless disputation?
In other words, why is there a free-will debate at all? As of 10:33 AM December 17th, 2019, Googling “free will debate” returned 575,000,000 hits. Looking at the landscape of human cognition, the problem of free will looms large, a place where our intuitions, despite functioning so well in countless other contexts, systematically frustrate any chance of consensus.
This is itself scientifically significant. So far as pathology is the royal road to function, we should expect that spectacular breakdowns such as these will hold deep lessons regarding the nature of human cognition.
As indeed they do.
So, let’s begin with a simple question: If free-will were a heuristic, a tool humans used to solve otherwise intractable problems, what would its breakdown look like?
But let’s take a step back for a second, and bite a very important, naturalistic bullet. Rather than consider ‘free-will’ as a heuristic, let’s consider something less overdetermined: ‘choice-talk.’ Choice-talk constitutes one of at least two ways for us humans to report selections between behaviours. The second, ‘source-talk,’ we generally use to report the cognition of high-dimensional (natural) precursors, whereas we generally use choice-talk to report cognition absent high-dimensional precursors.
As a cognitive mechanism, choice-talk is heuristic insofar as it turns a liability into an asset, allowing us to solve social problems low-dimensionally—which is to say, on the cheap. That liability is source insensitivity, our congenital neglect of our biological/ecological precursors. Human cognition is fundamentally structured by what might be called the ‘biocomplexity barrier,’ the brute fact that biology is too complicated to cognize itself high-dimensionally. The choice-talk toolset manages astronomically complicated biological systems—ourselves and other people—via an interactional system reliably correlated to the high-dimensional fact of those systems given certain socio-cognitive contexts. Choice-talk works given the cognitive ecological conditions required to maintain the felicitous correlation between the cues consumed and the systems linked to them. Undo that correlation and choice-talk, like any other heuristic mechanism, begins to break down.
Ancestrally, we had no means of discriminating our own cognitive constitution. The division of cognitive labour between source-sensitive and source-insensitive cognition is one that humans constitutively neglect: we have to be trained to discriminate it. Absent such discrimination, the efficacy of our applications turns on the continuity of our cognitive ecologies. Given biocomplexity, the application of source-sensitive cognition to intractable systems—and biological systems in particular—is not something evolution could have foreseen. Why should we possess the capacity to intuitively reconcile the joint application of two cognitive systems that, as far as evolution was concerned, would never meet?
As a source-insensitivity workaround, a way to cognize behaviour absent the ability to source that behaviour, we should expect choice-talk cognition to misfire when applied to behaviour that can be sourced. We should expect that discovering the natural causes of decisions will scuttle the intuition that those decisions were freely chosen. The manifest incompatibility between high-dimensional source-talk and low-dimensional choice-talk arises because the latter has been biologically filtered to function in contexts precluding the former. Intrusions of source-talk applicability, when someone suffers a head injury, say, could usefully trump choice-talk applicability.
Choice-talk, in fact, possesses numerous useful limits, circumstances where we suspend its application to better solve social problems via other tools. As radically heuristic, choice-talk requires a vast amount of environmental stage-setting in order to function felicitously, an ecological ‘sweet spot’ that’s bound to be interrupted by any number of environmental contingencies. Some capacity to suspend its application was required. Intuitively, then, source-talk trumps choice-talk when applied to the same behaviour. Since the biocomplexity barrier assured that each mode would be cued the way it had always been cued since time immemorial, we could, ancestrally speaking, ignore our ignorance and generally trust our intuitions.
The problem is that source-talk is omni-applicable. With the rise of science, we realized that everything biological can be high-dimensionally sourced. We discovered that the once-useful incompatibility between source-talk and choice-talk can be scotched with a single question: If everything can be sourced, and if sources negate choices, then how could we be free? An incompatibility that was once useful now powerfully suggests that choice-talk has no genuinely cognitive applicability anywhere. If choice-talk were heuristic, in other words, you might expect the argument that ‘choices’ are illusions.
The dialectical problem, however, is that human deliberative metacognition, reflection, also suffers source-insensitivity and so also consists of low-dimensional heuristics. Deliberative metacognition, the same as choice-talk, systematically neglects the machinery of decision making: reflection consistently reports choices absent sources as a result. Lacking sensitivity to the fact of insensitivity, reflection also reports the sufficiency of this reporting. No machinery is required. The absence of proximal, high-dimensional sources is taken for something real, ontologized, becoming a property belonging to choices. Given metacognitive neglect, in other words, reflection reports choice-talk as expressing some kind of separate, low-dimensional ontological order.
Given this blinkered report, everything depends on how one interprets that ontology and its relation to the high-dimensional order. Creativity is required to somehow rationalize these confounds, which, qua confounds, offer nothing decisive to adjudicate between rationalizations. If choice-talk were a heuristic, one could see individuals arguing, not simply that choices are ‘real,’ but the kind of reality they possess. Some would argue that choice possesses a reality distinct from biological reality, that choices are somehow made outside causal closure. Others would argue that choices belong to biological reality, but in a special way that explains their peculiarity.
If choice-talk were heuristic, in other words, you would expect that it would crash given the application of source-cognition to behaviours it attempts to explain. You would expect this crash to generate the intuition that choice-talk is an illusion (determinism). You would expect attempts to rescue choice to take the form of insisting on either its independent reality (libertarianism) or its secondary reality (compatibilism).
Two heuristic confounds are at work, the first a product of the naïve application of source-talk to human decision-making, cuing us to report the inapplicability of choice-talk tout court, the second the product of the naïve application of deliberative metacognition to human decision-making, cuing us to report the substantive and/or functional reality of ‘choice.’
If choice-talk were heuristic, in other words, you would expect something that closely resembles the contemporary free-will debate. You could even imagine philosophers cooking up cases to test, even spoof, the ways in which choice-talk and source-talk are cued. Since choices involve options, for instance, what happens when we apply source-talk to only one option, leaving the others to neglect?
If choice-talk were heuristic, in other words, you could imagine philosophers coming up with things like ‘Frankfurt-style counterexamples.’ Say I want to buy a pet, but I can’t make up my mind whether to buy a cat or a dog. So, I decide to decide when I go to the pet store on Friday. My wife is a neuroscientist who hates cats almost as much as she hates healthy communication. While I’m sleeping, she inserts a device at a strategic point in my brain that prevents me from choosing a cat and nothing else. None the wiser, I go to the pet store on Friday and decide to get a dog, but entirely of my own accord.
Did I choose freely?
These examples evidence the mischief falling out of heuristic neglect in a stark way. My wife’s device only interferes with decision-making processes to prevent one undesirable output. If the output is desirable, it plays no role, suggesting that the hacked subject chose that output ‘freely,’ despite the inability to do otherwise. On the one hand, surgical intervention prevents the application of choice-talk to cat buying. Source-talk, after all, trumps choice-talk. But since surgical intervention only pertains to cat buying, dog buying seems, to some at least, to remain a valid subject of choice-talk. Source neglect remains unproblematic. The machinery of decision-making, in other words, can be ignored the way it’s always ignored in decision-making contexts. It remains irrelevant. Choice-talk machinery seems to remain applicable to this one fork, despite crashing when both forks are taken together.
For some philosophers, this suggests that choice isn’t a matter of being able to do otherwise, but of arising out of the proper process—a question of appropriate ‘sourcing.’ They presume that choice-talk and the corresponding intuitions still apply. If the capacity to do otherwise isn’t definitive of choice, then provenance must be: choice is entirely compatible with precursors, they argue, so long as those precursors are the proper ones. Crash. Down another interpretative rabbit-hole they go. Short any inkling of the limits imposed by the heuristic tools at their disposal—blind to their own cognitive capacities—all they can do is pursue the intuitions falling out of the misapplications of those tools. They remain trapped, in effect, downstream the heuristic confounds described above.
Here we can see the way philosophical parsing lets us map the boundaries of reliable choice-talk application. Frankfurt-style counterexamples, on this account, are best seen as cognitive versions of visual illusions, instances where we trip over the ecological limits of our cognitive capacities.
As with visual illusions, they reveal the fractionate, heuristic nature of the capacities employed. Unlike visual illusions, however, they are too low-dimensional to be readily identified as such. To make matters worse, the breakdown is socio-cognitive: perpetual disputation between individuals is the breakdown. This means that its status as a crash space is only visible by taking an ecological perspective. For interpretative partisans, however, the breakdown always belongs to the ‘other guy.’ Understanding the ecology of the breakdown becomes impossible.
The stark lesson here is that ‘free-will’ is a deliberative confound, what you get when you ponder the nature of choice-talk without accounting for heuristic neglect. Choice-talk itself is very real. With the interactional system it belongs to—intentional cognition more generally—it facilitates cooperative miracles on the communicative back of less than fifteen bits per second. Impressive. Gobsmacking, actually. We would be fools not to trust our socio-cognitive reflexes where they are applicable, which is to say, where neglecting sources solves more problems than it causes.
So, yah, sure, we make choices all the bloody time. At the same time, though, ‘What is the nature of choice?’ is a question that can only be answered ecologically, which is to say, via source-sensitive cognition. The nature of choice involves the systematic neglect of systems that must be manipulated nevertheless. Cues and correlations are compulsory. The nature of choice, in other words, obliterates our intellectual and phenomenological intuitions regarding choice. There’s just no such thing.
And this, I think it’s fair to say, is as disastrous as a natural fact can be. But should we be surprised? The thing to appreciate, I think, is the degree to which we should expect to find ourselves in precisely such a dilemma. The hard fact is that biocomplexity forced us to evolve source-insensitive ways to troubleshoot all organisms, ourselves included. The progressive nature of science, however, ensures that biocomplexity will eventually succumb to source-sensitive cognition. So, what are the chances that two drastically different, evolutionarily segregated cognitive modes would be easily harmonized?
Perhaps this is a growing pain every intelligent, interstellar species suffers, the point where their ancestral socio-cognitive toolset begins to fail them. Maybe science strips exceptionalism from every advanced civilization in roughly the same way: first our exceptional position, then our exceptional origin, and lastly, our exceptional being.
Perhaps choice dies with the same inevitability as suns, choking on knowledge instead of iron.
November 18, 2019
Flies, Frogs, and Fishhooks*
[Revisited this the other day after reading Gallagher’s account of lizard catching in Enactivist Interventions (recommended to me by Dirk a ways back) and it struck me as worth reposting. But where Gallagher thinks the neglect characteristic of lizard catching speaks only to the inapplicability of neurobiology to the question of free-will, I think that neglect can be used to resolve a great number of mysteries regarding intentionality and cognition. I hope he finds this piece.]
So, me and my buddies occasionally went frog hunting when we were kids. We’d knot a string on a fishhook, swing the line over the pond’s edge, and bam! frogs would strike at them. Up, up they were hauled, nude for being amphibian, hoots and hollers measuring their relative size. Then they were dumped in a bucket.
We were just kids. We knew nothing about biology or evolution, let alone cognition. Despite this ignorance, we had no difficulty whatsoever explaining why it was so easy to catch the frogs: they were too stupid to tell the difference between fishhooks and flies.
Contrast this with the biological view I have available now. Given the capacity of Anuran visual cognition and the information sampled, frogs exhibit systematic insensitivities to the difference between fishhooks and flies. Anuran visual cognition not only evolved to catch flies, it evolved to catch flies as cheaply as possible. Without fishhooks to filter the less fishhook-sensitive from the more fishhook-sensitive, frogs had no way of evolving the capacity to distinguish flies from fishhooks.
Our old childhood theory is pretty clearly a normative one, explaining the frogs’ failure in terms of what they ought to do (the dumb buggers). The frogs were mistaking fishhooks for flies. But if you look closely, you’ll notice how the latter theory communicates a similar normative component only in biological guise. Adducing evolutionary history pretty clearly allows us to say the proper function of Anuran cognition is to catch flies.
Ruth Millikan famously used this intentional crack in the empirical explanatory door to develop her influential version of teleosemantics, the attempt to derive semantic normativity from the biological normativity evident in proper functions. Eyes are for seeing, tongues for talking or catching flies; everything has been evolutionarily filtered to accomplish ends. So long as biological phenomena possess functions, it seems obvious functions are objectively real. So far as functions entail ‘satisfaction conditions,’ we can argue that normativity is objectively real. Given this anchor, the trick then becomes one of explaining normativity more generally.
The controversy caused by Language, Thought, and Other Biological Categories was immediate. But for all the principled problems that have since belaboured teleosemantic approaches, the real problem is that they remain as underdetermined as the day they were born. Debates, rather than striking out in various empirical directions, remain perpetually mired in ‘mere philosophy.’ After decades of pursuit, the naturalization of intentionality project, Uriah Kriegel notes, “bears all the hallmarks of a degenerating research program” (Sources of Normativity, 5).
Now the easy way to explain this failure is to point out that finding, as Millikan does, right-wrong talk buried in the heart of biological explanation does not amount to finding right and wrong buried in the heart of biology. It seems far less extravagant to suppose ‘proper function’ provides us with a short cut, a way to communicate/troubleshoot this or that actionable upshot of Anuran evolutionary history absent any knowledge of that history.
Recall my boyhood theory that frogs were simply too stupid to distinguish flies from fishhooks. Absent all knowledge of evolution and biomechanics, my friends and I found a way to communicate something lethal regarding frogs. We knew what frog eyes and frog tongues and frog brains and so on were for. Just like that. The theory possessed a rather narrow range of application, to be sure, but it was nothing if not cheap, and potentially invaluable if one were, say, starving. Anuran physiology, ethology, and evolutionary history simply did not exist for us, and yet we were able to pluck the unfortunate amphibians from the pond at will. As naïve children, we lived in a shallow information environment, one absent the great bulk of deep information provided by the sciences. And as far as frog catching was concerned, this made no difference whatsoever, simply because we were the evolutionary products of numberless such environments. Like fishhooks with frogs, theories of evolution had no impact on the human genome. Animal behavior and the communication of animal behavior, on the other hand, possessed a tremendous impact—they were the flies.
Which brings us back to the easy answer posed above, the idea that teleosemantics fails for confusing a cognitive short-cut for a natural phenomenon. Absent any way of cognizing our deep information environments, our ancestors evolved countless ways to solve various, specific problems absent such cognition. Rather than track all the regularities engulfing us, we take them for granted—just like a frog.
The easy answer, in other words, is to assume that theoretical applications of normative subsystems are themselves ecological (as is this very instant of cognition). After all, my childhood theory was nothing if not heuristic, which is to say, geared to the solution of complex physical systems absent complex physical knowledge of them. Terms like ‘about’ or ‘for,’ you could say, belong to systems dedicated to solving systems absent biomechanical cognition.
Which is why kids can use them.
Small wonder then, that attempts to naturalize ‘aboutness’ or ‘forness’—or any other apparent intentional phenomena—cause the theoretical fits they do. Such attempts amount to human versions of mistaking fishhooks for flies! They are shallow information terms geared to the solution of shallow information problems. They ‘solve’—filter behaviors via feedback—by playing on otherwise neglected regularities in our deep environments, relying on causal correlations to the systems requiring solution, rather than cognizing those systems in physical terms. That is their naturalization—their deep information story.
‘Function,’ on the other hand, is a shallow information tool geared to the solution of deep information problems. What makes a bit of the world specifically ‘functional’ is its relation to our capacity to cognize consequences in a source-neglecting yet source-compatible way. As my childhood example shows, functions can be known independent of biology. The constitutive story, like the developmental one, can be filled in afterward. Functional cognition lets us neglect an astronomical number of biological details. To say what a mechanism is for is to know what a mechanism will do without saying what makes a mechanism tick. But unlike intentional cognition more generally, functional cognition remains entirely compatible with causality. This potent combination of high-dimensional compatibility and neglect is what renders it invaluable, providing the degrees of cognitive freedom required to tackle complexities across scales.
The intuition underwriting teleosemantics hits upon what is in fact a crucial crossroads between cognitive systems, where the amnesiac power of should facilitates, rather than circumvents, causal cognition. But rather than interrogate the prospect of theoretically retasking a child’s explanatory tool, Millikan, like everyone else, presumes felicity, that intuitions secondary to such retasking are genuinely cognitive. Because they neglect the neglect-structure of their inquiry, they flatter cunning children with objectivity, so sparing their own (coincidentally) perpetually underdetermined intuitions. Time and again they apply systems selected for brushed-sun afternoons along the pond’s edge to the theoretical problem of their own nature. The lures dangle in their reflection. They strike at fishhook after fishhook, and find themselves hauled skyward, manhandled by shadows before being dropped into buckets on the shore.
*Originally posted January 23rd, 2018
October 9, 2019
On the Death of Meaning
My copy of New Directions In Philosophy and Literature arrived yesterday…
The anthology features an introduction by Claire Colebrook, as well as papers by Graham Harman, Graham Priest, Charlie Blake, and more. A prepub version of my contribution, “On the Death of Meaning,” can be found here.
September 27, 2019
Exploding the Manifest and Scientific Images of Man*
This is how one pictures the angel of history. His face is turned toward the past. Where we perceive a chain of events, he sees one single catastrophe which keeps piling wreckage upon wreckage and hurls it in front of his feet. The angel would like to stay, awaken the dead, and make whole what has been smashed. But a storm is blowing from Paradise; it has got caught in his wings with such violence that the angel can no longer close them. The storm irresistibly propels him into the future to which his back is turned, while the pile of debris before him grows skyward. This storm is what we call progress. –Benjamin, Theses on the Philosophy of History
What I would like to do is show how Sellars’ manifest and scientific images of humanity are best understood in terms of shallow cognitive ecologies and deep information environments. Expressed in Sellars’ own terms, you could say the primary problem with his characterization is that it is a manifest, rather than scientific, understanding of the distinction. It generates the problems it does (for example, in Brassier or Dennett) because it inherits the very cognitive limitations it purports to explain. At best, Sellars’ take is too granular, and ultimately too deceptive to function as much more than a stop-sign when it comes to questions regarding the constitution and interrelation of different human cognitive modes. Far from a way to categorize and escape the conundrums of traditional philosophy, it provides yet one more way to bake them in.
Cognitive Images
Things begin, for Sellars, in the original image, our prehistorical self-understanding. The manifest image consists in the ‘correlational and categorial refinement’ of this self-understanding. And the scientific image consists in everything discovered about man beyond the limits of correlational and categorial refinement (while relying on these refinements all the same). The manifest image, in other words, is an attenuation of the original image, whereas the scientific image is an addition to the manifest image (that problematizes the manifest image). Importantly, all three are understood as kinds of ‘conceptual frameworks’ (though he sometimes refers to the original image as ‘preconceptual’).
The original framework, Sellars tells us, conceptualizes all objects as ways of being persons—it personalizes its environments. The manifest image, then, can be seen as “the modification of an image in which all the objects are capable of the full range of personal activity” (12). The correlational and categorial refinement consists in ‘pruning’ the degree to which they are personalized. The accumulation of correlational inductions (patterns of appearance) undermined the plausibility of environmental agencies and so drove categorial innovation, creating a nature consisting of ‘truncated persons,’ a world that was habitual as opposed to mechanical. This new image of man, Sellars claims, is “the framework in terms of which man came to be aware of himself as man-in-the-world” (6). As such, the manifest image is the image interrogated by the philosophical tradition, which, given the limited correlational and categorial resources available to it, remained blind to the communicative—social—conditions of conceptual frameworks, and so, the manifest image of man. Apprehending this would require the scientific image, the conceptual complex “derived from the fruits of postulational theory construction,” yet still turning on the conceptual resources of the manifest image.
For Sellars, the distinction between the two images turns not so much on what we commonly regard to be ‘scientific’ or not (which is why he thinks the manifest image is scientific in certain respects), but on the primary cognitive strategies utilized. “The contrast I have in mind,” he writes, “is not that between an unscientific conception of man-in-the-world and a scientific one, but between that conception which limits itself to what correlational techniques can tell us about perceptible and introspectable events and that which postulates imperceptible objects and events for the purpose of explaining correlations among perceptibles” (19). This distinction, as it turns out, only captures part of what we typically think of as ‘scientific.’ A great deal of scientific work is correlational, bent on describing patterns in sets of perceptibles as opposed to postulating imperceptibles to explain those sets. This is why he suggests that terming the scientific image the ‘theoretical image’ might prove more accurate, if less rhetorically satisfying. The scientific image is postulational because it posits what isn’t manifest—what wasn’t available to our historical or prehistorical ancestors, namely, knowledge of man as “a complex physical system” (25).
The key to overcoming the antipathy between the two images, Sellars thinks, lies in the indispensability of the communally grounded conceptual framework of the manifest image to both images. The reason we should yield ontological priority to the scientific image derives from the conceptual priority of the manifest image. Their domains need not overlap. “[T]he conceptual framework of persons,” he writes, “is not something that needs to be reconciled with the scientific image, but rather something to be joined to it” (40). To do this, we need to “directly relate the world as conceived by scientific theory to our purposes and make it our world and no longer an alien appendage to the world in which we do our living” (40).
Being in the ‘logical space of reasons,’ or playing the ‘game of giving and asking for reasons,’ requires social competence, which requires sensitivity to norms and purposes. The entities and relations populating Sellars’ normative metaphysics exist only in social contexts, only so far as they discharge pragmatic functions. The reliance of the scientific image on these pragmatic functions renders them indispensable, forcing us to adopt ‘stereoscopic vision,’ to acknowledge the conceptual priority of the manifest even as we yield ontological priority to the scientific.
Cognitive Ecologies
The interactional sum of organisms and their environments constitutes an ecology. A ‘cognitive ecology,’ then, can be understood as the interactional sum of organisms and their environments as it pertains to the selection of behaviours.
A deep information environment is simply the sum of difference-making differences available for possible human cognition. We could, given the proper neurobiology, perceive radio waves, but we don’t. We could, given the proper neurobiology, hear dog whistles, but we don’t. We could, given the proper neurobiology, see paramecia, but we don’t. Of course, we now possess instrumentation allowing us to do all these things, but this just testifies to the way science accesses deep information environments. As finite, our cognitive ecology, though embedded in deep information environments, engages only a select fraction of them. As biologically finite, in other words, human cognitive ecology is insensitive to almost all deep information. When a magician tricks you, for instance, they’re exploiting your neglect-structure, ‘forcing’ your attention toward ephemera while they manipulate behind the scenes.
Given the complexity of biology, the structure of our cognitive ecology lies outside the capacity of our cognitive ecology. Human cognitive ecology cannot but neglect the high dimensional facts of human cognitive ecology. Our intractability imposes inscrutability. This means that human metacognition and sociocognition are radically heuristic, systems adapted to solving systems they otherwise neglect.
Human cognition possesses two basic modes, one that is source-insensitive, or heuristic, relying on cues to predict behaviour, and one that is source-sensitive, or mechanical, relying on causal contexts to predict behaviour. The radical economies provided by the former are offset by narrow ranges of applicability and dependence on background regularities. The general applicability of the latter is offset by its cost. Human cognitive ecology can be said to be shallow to the extent it turns on source-insensitive modes of cognition, and deep to the extent it turns on source-sensitive modes. Given the radical intractability of human cognition, we should expect metacognition and sociocognition to be radically shallow, utterly dependent on cues and contexts. Not only are we blind to the enabling dimension of experience and cognition, we are blind to this blindness. We suffer medial neglect.
This provides a parsimonious alternative for understanding the structure and development of human self-understanding. We began in an age of what might be called ‘medial innocence,’ when our cognitive ecologies were almost exclusively shallow, incorporating causal determinations only to cognize local events. Given their ignorance of nature, our ancestors could not but cognize it via source-insensitive modes. They did not so much ‘personalize’ the world, as Sellars claims, as use source-insensitive modes opportunistically. They understood each other and themselves as far as they needed to resolve practical issues. They understood argument as far as they needed to troubleshoot their reports. Aside from these specialized ways of surmounting their intractability, they were utterly ignorant of their nature.
Our ancestral medial innocence began eroding as soon as humanity began gaming various heuristic systems out of school, spoofing their visual and auditory systems, knapping them into cultural inheritances, slowly expanding and multiplying potential problem-ecologies within the constraints of oral culture. Writing, as a cognitive technology, had a tremendous impact on human cognitive ecology. Literacy allowed speech to be visually frozen and carved up for interrogation. The gaming of our heuristics began in earnest, the knapping of countless cognitive tools. As did the questions. Our ancient medial innocence bloomed into a myriad of medial confusions.
Confusions. Not, as Sellars would have it, a manifest image. Sellars calls it ‘manifest’ because it’s correlational, source-insensitive, bound to the information available. The fact that it’s manifest means that it’s available—nothing more. Given medial innocence, that availability was geared to practical ancestral applications. The shallowness of our cognitive ecology was adapted to the specificity of the problems faced by our ancestors. Retasking those shallow resources to solve for their own nature, not surprisingly, generated endless disputation. Combined with the efficiencies provided by coinage and domestication during the ‘axial age,’ literacy did not so much trigger ‘man’s encounter with man,’ as Sellars suggests, as occasion humanity’s encounter with the question of humanity, and the kinds of cognitive illusions secondary to the application of metacognitive and sociocognitive heuristics to the theoretical question of experience and cognition.
The birth of philosophy is the birth of discursive crash space. We have no problem reflecting on thoughts or experiences, but as soon as we reflect on the nature of thoughts and experiences, we find ourselves stymied, piling guesses upon guesses. Despite our genius for metacognitive innovation, what’s manifest in our shallow cognitive ecologies is woefully incapable of solving for the nature of human cognitive ecology. Precisely because reflecting on the nature of thoughts and experiences is a metacognitive innovation, something without evolutionary precedent, we neglect the insufficiency of the resources available. Artifacts of the lack of information are systematically mistaken for positive features. The systematicity of these crashes licenses the intuition that some common structure lurks ‘beneath’ the disputation—that for all their disagreements, the disputants are ‘onto something.’ The neglect-structure belonging to human metacognitive ecology gradually forms the ontological canon of the ‘first-person’ (see “On Alien Philosophy” for a more full-blooded account). And so, we persisted, generation after generation, insisting on the sufficiency of those resources. Since sociocognitive terms cue sociocognitive modes of cognition, the application of these modes to the theoretical problem of human experience and cognition struck us as intuitive. Since the specialization of these modes renders them incompatible with source-sensitive modes, some, like Wittgenstein and Sellars, went so far as to insist on the exclusive applicability of those resources to the problem of human experience and cognition.
Despite the profundity of metacognitive traps like these, the development of our source-sensitive cognitive modes continued reckoning more and more of our deep environment. At first this process was informal, but as time passed and the optimal form and application of these modes resolved from the folk clutter, we began cognizing more and more of the world in deep environmental terms. The collective behavioural nexuses of science took shape. Time and again, traditions funded by source-insensitive speculation on the nature of some domain found themselves outcompeted and ultimately displaced. The world was ‘disenchanted’; more and more of the grand machinery of the natural universe was revealed. But as powerful as these individual and collective source-sensitive modes of cognition proved, the complexity of human cognitive ecology ensured that we would, for the interim, remain beyond their reach. Though an artifactual consequence of shallow ecological neglect-structures, the ‘first-person’ retained cognitive legitimacy. Despite the paradoxes, the conundrums, the interminable disputation, the immediacy of our faulty metacognitive intuitions convinced us that we alone were exempt, that we were the lone exception in the desert landscape of the real. So long as science lacked the resources to reveal the deep environmental facts of our nature, we could continue rationalizing our conceit.
Ecology versus Image
As should be clear, Sellars’ characterization of the images of man falls squarely within this tradition of rationalization, the attempt to explain away our exceptionalism. One of the stranger claims Sellars makes in this celebrated essay involves the scientific status of his own discursive exposition of the images and their interrelation. The problem, he writes, is that the social sources of the manifest image are not themselves manifest. As a result, the manifest image lacks the resources to explain its own structure and dynamics: “It is in the scientific image of man in the world that we begin to see the main outlines of the way in which man came to have an image of himself-in-the-world” (17). Understanding our self-understanding requires reaching beyond the manifest and postulating the social axis of human conceptuality, something, he implies, that only becomes available when we can see group phenomena as ‘evolutionary developments.’
Remember Sellars’ caveats regarding ‘correlational science’ and the sense in which the manifest image can be construed as scientific? (7) Here, we see how that leaky demarcation of the manifest (as correlational) and the scientific (as theoretical) serves his downstream equivocation of his manifest discourse with scientific discourse. If science is correlational, as he admits, then philosophy is also postulational—as he well knows. But if each image helps itself to the cognitive modes belonging to the other, then Sellars’ assertion that the distinction lies between a conception limited to ‘correlational techniques’ and one committed to the ‘postulation of imperceptibles’ (19) is either mistaken or incomplete. Traditional philosophy is nothing if not theoretical, which is to say, in the business of postulating ontologies.
Suppressing this fact allows him to pose his own traditional philosophical posits as (somehow) belonging to the scientific image of man-in-the-world. What are ‘spaces of reasons’ or ‘conceptual frameworks’ if not postulates used to explain the manifest phenomena of cognition? But then how do these posits contribute to the image of man as a ‘complex physical system’? Sellars understands the difficulty here “as long as the ultimate constituents of the scientific image are particles forming ever more complex systems of particles” (37). This is what ultimately motivates the structure of his ‘stereoscopic view,’ where ontological precedence is conceded to the scientific image, while cognition itself remains safely in the humanistic hands of the manifest image…
Which is to say, lost to crash space.
Are human neuroheuristic systems welded into ‘conceptual frameworks’ forming an ‘irreducible’ and ‘autonomous’ inferential regime? Obviously not. But we can now see why, given the confounds secondary to metacognitive neglect, they might report as such in philosophical reflection. Our ancestors bickered. In other words, our capacity to collectively resolve communicative and behavioural discrepancies belongs to our medial innocence: intentional idioms antedate our attempts to theoretically understand intentionality. Uttering them, not surprisingly, activates intentional cognitive systems, because, ancestrally speaking, intentional idioms always belonged to problem-ecologies requiring these systems to solve. It was all but inevitable that questioning the nature of intentional idioms would trigger the theoretical application of intentional cognition. Given the degree to which intentional cognition turns on neglect, our millennial inability to collectively make sense of ourselves, medial confusion, was all but inevitable as well. Intentional cognition cannot explain the nature of anything, insofar as natures are general, and the problem-ecology of intentional cognition is specific. This is why, far from decisively resolving our cognitive straits, Sellars’ normative metaphysics merely complicates them, using the same overdetermined posits to make new(ish) guesses that can only serve as grist for more disputation.
But if his approach is ultimately hopeless, how is he able to track the development in human self-understanding at all? For one, he understands the centrality of behaviour. But rather than understand behaviour naturalistically, in terms of systems of dispositions and regularities, he understands it intentionally, via modes adapted to neglect physical super-complexities. Guesses regarding hidden systems of physically inexplicable efficacies—’conceptual frameworks’—are offered as basic explanations of human behaviour construed as ‘action.’
He also understands that distinct cognitive modes are at play. But rather than see this distinction biologically, as the difference between complex physical systems, he conceives it conceptually, which is to say, via source-insensitive systems incapable of charting, let alone explaining our cognitive complexity. Thus, his confounding reliance on what might be called manifest postulation, deep environmental explanation via shallow ecological (intentional) posits.
And he understands the centrality of information availability. But rather than see this availability biologically, as the play of physically interdependent capacities and resources, he conceives it, once again, conceptually. All differences make differences somehow. Information consists of differences selected (neurally or evolutionarily) by the production of prior behaviours. Information consists in those differences prone to make select systematic differences, which is to say, feed the function of various complex physical systems. Medial neglect assures that the general interdependence of information and cognitive system appears nowhere in experience or cognition. Once humanity began retasking its metacognitive capacities, it was bound to hallucinate a countless array of ‘givens.’ Sellars is at pains to stress the medial (enabling) dimension of experience and cognition, the inability of manifest deliverances to account for the form of thought (16). Suffering medial neglect, cued to misapply heuristics belonging to intentional cognition, he posits ‘conceptual frameworks’ as a means of accommodating the general interdependence of information and cognitive system. The naturalistic inscrutability of conceptual frameworks renders them local cognitive prime movers (after all, source-insensitive posits can only come first), assuring the ‘conceptual priority’ of the manifest image.
The issue of information availability, for him, is always conceptual, which is to say, always heuristically conditioned, which is to say, always bound to systematically distort what is the case. Where the enabling dimension of cognition belongs to the deep environments on a cognitive ecological account, it belongs to communities on Sellars’ inferentialist account. As a result, he has no clear way of seeing how the increasingly technologically mediated accumulation of ancestrally unavailable information drives the development of human self-understanding.
The contrast between shallow (source-insensitive) cognitive ecologies and deep information environments opens the question of the development of human self-understanding to the high-dimensional messiness of life. The long migratory path from the medial innocence of our preliterate past to the medial chaos of our ongoing cognitive technological revolution has nothing to do with the “projection of man-in-the-world on the human understanding” (5) given the development of ‘conceptual frameworks.’ It has to do with blind medial adaptation to transforming cognitive ecologies. What complicates this adaptation, what delivers us from medial innocence to chaos, is the heuristic nature of source-insensitive cognitive modes. Their specificity, their inscrutability, not to mention their hypersensitivity (the ease with which problems outside their ability cue their application) all but doomed us to perpetual, discursive disarray.
Images. Games. Conceptual frameworks. None of these shallow ecological posits are required to make sense of our path from ancestral ignorance to present conundrum. And we must discard them, if we hope to finally turn and face our future, gaze upon the universe with the universe’s own eyes.
*Originally posted, April 2nd, 2018.
September 9, 2019
Postcards from Planet Analogue
So, I’m slowly emerging from my analogue cocoon. Imagine no internet interaction for almost a year… In quick succession, I turned 50, concluded my 33-year narrative obsession with the publication of The Unholy Consult, and achieved my 20-year theoretical goal with the publication of “On Alien Philosophy.” On the downside, my arthritis had worsened to the point where mowing the lawn became something I could only accomplish on ‘good days’—where taking four ibuprofens at a time was the rule, not the exception.
Change was upon me, whether I liked it or not. Only the form was in question.
At first, I started working on The End of Meaning, a non-fiction book attempting to sum the abstruse matters we’ve covered here in a manner that would be generally accessible. But my house is over 130 years old, so I also had a long list of renovation projects I wanted to complete. My arthritis lent a ‘now or never’ urgency to these projects—so I forced myself to persist despite the pain and my lifelong aversion to renovations. I grew up encircled by gutted walls. I’ve demolished. I’ve roofed. I’ve framed. I’ve spent entire afternoons straightening bent nails!
I was convinced that my appetite for construction would quickly peter out, and that my hunger to write would consume all—the way it always has. I replaced my rear screen door with a gorgeous glass one I got on clearance. Since parts were missing, I was forced to cut and hammer an old eavestrough nail into a spindle. So, there I was, pounding nails once again! The thing is, my youthful alienation was nowhere to be found. The feeling of accomplishment I got installing that door was nothing short of ridiculous.
Next on the list was repairing the roof of my 130-year-old barn. Certainly, that would send me scampering back to the computer screen!
No such luck. The job sucked ass, to be sure, but I felt… invigorated, I guess. Renewed. Taking four ibuprofens had become the exception once again.
I began rethinking things. All the time I’ve spent pondering ancestral neglect structures had made me nostalgic for the analogue cognitive ecologies of my youth. But were they so idyllic as I remembered?
So, every morning after delivering my wife to work and my daughter to school I set to work rebuilding my old barn from the inside out. I accessed the web only via my phone, and then only to do those things I could do in the analogue days: buy books, research how-to, check the news and weather. I neglected everything else—to my professional and interpersonal detriment I’m sure! There’s no way to sort the effects of physical labour from the effects of an analogue neglect structure, I know, but I’ll be damned if they didn’t seem to be of a piece. Working with your hands means working with brute matter. After a lifetime spent sculpting smoke, continually arguing the reality of my creations, the determinacy and the permanence of my work, let alone the immediate understanding it evoked in others, were blessed indeed. Nothing need be questioned. Nothing need be defended. For once, it was what it fucking was.
Matter has no voice. The tools we evolved to manage it run as deep as life itself, whereas the tools we evolved to manage one another only run as deep as we do. And man-o-man, does it show.
Now, I have a swank office in the loft of an antique barn. More importantly, I’m down to one or two ibuprofen a day—if I remember to take them at all. I feel ten years younger.
So, forgive me my absence, or my awkwardness crawling back into my old digital cockpit. Sometimes you need to go missing for a while, lest you go missing for good.