Few philosophical controversies better illustrate the French expression dialogue de sourds [dialogue of the deaf] than the “mind-brain problem.” The two camps are typically represented in Stephen Barr’s April 6 Weekly Standard review of Consilience: The Unity of Knowledge, Edward O. Wilson’s latest sociobiological treatise. In one corner, we have either the hard-headed scientist, Wilson or Steven Pinker, or the scientiste, the humanistic adept of scientism, say, Daniel Dennett. In the other, we have someone like the WS reviewer, who finds his adversary wanting but proposes no rational alternative beyond a metaphysical dualism in which the source of the second, non-material kind of reality is never explained. Such veiled appeals to the supernatural appropriately call forth Aesop’s well-known moral: “the gods help those who help themselves.” Theories are not “truths” but models; the complaint that a theory explains some things but not everything is valid only as a preliminary to a more inclusive theory.

In a world that can no longer accept the historical particularism of religious thought, “mind” is presumably the last obstacle in the path of the all-conquering juggernaut of biological reductionism. An optical instrument can measure the wavelength of the color red far more precisely than you or I, but it can’t have the “experience of red” that defines it as a “mind.” How many times, in how many ways have we heard this argument? John Searle‘s improved version, if I recall correctly, compares a computer processing Chinese to a room full of people mechanically placing and ordering signs in accordance with programmed instructions, with no one knowing the sense of the signs so as to be able to understand the result. No doubt the computer lacks our experience of language, but a subjective description of the difference between computers and people will not save us from objective bio-reductionism. A minimal definition of the specificity of the human should not be concerned with incommunicable entities like the “experience of red” or even the “experience of language.”

Although it is not apparent at first glance, the mind-brain controversy is a pseudo-empirical variant of the controversy over the “existence of God,” in which the idea of “existence,” which refers to worldly realities, is applied to the Being that stands behind the “other-worldly” realities that are the signs of language. There is no “other world” where the Ideas pass before the fire that projects their shadows on the wall of the cave. Signs are ideal entities that don’t “exist” at all. That such entities are not “natural” does not make them “supernatural,” which is only a mystified variety of “natural.” Signs are not things but relations within a network of human communication. The idea of the linguistic sign is the most inexpressible of all ideas because it is the simplest: the minimal idea. Which means that we should attempt to minimize the language with which we describe it, not create a whole new ontology sitting up there in the sky.

As Wittgenstein enjoyed noting, “experience” is a funny category. Whether I call out or say I’m hurting, not even Bill Clinton can “feel my pain.” You can never know what I feel; for example, you can never know how it feels to me to see the color red. Maybe when I see red I have exactly the same visual experience that you have when you see blue. But whatever my experience, I can communicate either color to you with a sign. Although signs are learned individually by individual minds, they don’t subsist in these minds, but in the sphere of their communication. The point of the sign isn’t to convey my or your experience, but to permit us to communicate about what is important to the linguistic community. We accept the verdict of the dictionary because we see it as the spokesman for this community; even when we disagree with it, it’s not because of what the word “means to us” but because of our perception of its common meaning.

Animals too have “experiences” and we may claim if we like that they have “minds.” But as Descartes’ cogito ergo sum suggests, the core of the mind-brain controversy is the sign. The self-conscious self is the sign-using self; we become conscious of our selves only when we can talk about them, and we talk about them in the first place to others. The yuppie who defines himself through his consumption of objects available to all is the heir of the self that constructs itself from language available to all. When we speak with Sartre of a “prereflexive cogito,” we lose sight of the essential connection between human self-consciousness and language. The human self is a user of signs in a community of like selves, with all the uncertainty that such an individual-collective entity suggests. Yet the arrogant dismissal of this truth by the overweening bourgeois Self should not lead us, however understandable the temptation, to affirm that we have no selves at all.

The brain as an organ cannot account for the mind because “mind” is not something physical contained within the brain, or even within the brains of the entire population. It is a virtual, interpersonal reality that subsists in human culture and in which we participate. Our certitude that we are thinking beings is not illusory; but the instruments of our thought are signs that we share with others and that have no meaning outside of this interaction. To use words, or any other form of representation, is to participate virtually in the communal scene of representation. We each have our own thoughts but, in contrast with an emotion, there is nothing “private” about a thought.

Computers can now beat grandmasters at chess; can they be taught to “talk”? Can they really “think”? Can they at some point acquire self-consciousness? Emotions have a reassuringly physiological component. Fear makes our skin contract and activates our sweat glands; for a “cyborg” to have a comparable reaction, it would have to have a mammal-like nervous system, at which point its distinction from a human would truly start to become problematic. But what about thinking? To the extent that thinking is the formal manipulation of symbols, computers can indeed “think” much faster and better than we do. But that is not what we mean by “thinking.” To think is not simply to perform logical operations; it is to seek to represent what in our experience, whether of the natural or of the human world, has not yet been satisfactorily represented. What is human in thinking is not “the experience of thinking” but, on the contrary, the reduction of experience to thought, to language. This is not a task a computer can perform in any but a trivial sense.

The much misunderstood cogito offers a minimal example. A computer can easily be programmed to deduce “I think, therefore I am”; it suffices that its database contain the notion that only existing things can perform such tasks as thinking. To think implies existence; only an existent being can think; I am thinking, therefore I am, QED. But that is not the point of the cogito. The point is that I understand my own existence only insofar as I conceive thoughts, representations in principle shared by others. My being, in other words, is not wholly contained in myself, but implies the existence of a human community. It is the implicit existence of this community that explains the apparent non sequitur in which Descartes asserts, following the cogito, that God is too good to provide him with senses that will betray him in the normal course of events. Had this idea occurred to Descartes before the reduction to the cogito, it would have saved him a good deal of trouble; if God is worthy of confidence, then there is no urgent need to doubt the evidence of our senses. Why then was this idea not previously available? Because the kernel of the self’s certitude of being is in fact a demonstration that the individual mind knows itself only in language, through the virtual mediation of the community of which God is the guarantor. If God were unreliable, the community would not exist, nor would the enunciator of the cogito.
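The trivial machine deduction described above can be sketched in a few lines of code. This is a hypothetical illustration, not any real theorem prover: a toy knowledge base containing the rule “whatever thinks, exists” and the fact “I think,” from which a mechanical forward-chaining loop derives “I exist” without, of course, anything resembling the cogito’s self-understanding.

```python
# Hypothetical sketch of the deduction the essay describes: a "database"
# holding the notion that only existing things can think, applied mechanically.

RULES = {("thinks", "exists")}   # rule: if x thinks, then x exists
FACTS = {("I", "thinks")}        # fact supplied to the program

def forward_chain(facts, rules):
    """Repeatedly apply every rule to every matching fact until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for subject, predicate in list(derived):
            for antecedent, consequent in rules:
                if predicate == antecedent and (subject, consequent) not in derived:
                    derived.add((subject, consequent))
                    changed = True
    return derived

print(("I", "exists") in forward_chain(FACTS, RULES))  # the machine "concludes" I am
```

The program reaches “I exist” by symbol manipulation alone; what it cannot represent is the communal, shared character of the representations on which, the essay argues, the real force of the cogito depends.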

The mysteries of philosophy, a.k.a. metaphysics, stem from its insistence on understanding its “clear and distinct” ideas as though they were givens in themselves accessible to the isolated individual instead of the derivative products of a collective revelatory event. Because metaphysics denies the ostensive source of language, it cannot conceive how language was born or how it continues to function. Nor, by the same token, can the scientistes, who have fetishized the objective scene of metaphysics into a reductionist dogma.

If all human brains were destroyed, the word “tree” would have no more meaning for any creature. But this does not mean that the meaning of the word “tree” subsists in a set of individual brains; it belongs to a virtual communal sphere of signs and meanings. To affirm that this sphere cannot be reduced to the material world is not to countenance supernatural beings. On the contrary, it is positivism that needs either to embrace or condemn the “supernatural” because it is unable to comprehend the anthropological function of the sign. The danger in dealing with the transcendental is that we cannot talk about it without substantializing it. The intellectual ethic consonant with the reduction of the violent arbitrariness of the sacrificial consists neither in seeking a new formulation that would avoid this danger nor in denying the transcendental and its danger altogether, but in minimizing it.

We need not reject the intuition that tells us that computers lack “mind,” but it should be understood as a consequence of the fact that computers are, for the moment at least, unacquainted with mimetic desire and its potentially violent consequences. What programmers do about the “experience of red” doesn’t bother me; I’ll begin to be concerned when they learn how to program resentment. As any reader of science fiction knows, cyborgs are a pretty resentful bunch.