The other day my and Anthropoetics’ old friend Andrew McKenna was sufficiently irritated by a piece in the March 4 New York Times magazine to send it to me as material for a Chronicle. The article in question, by Robin Marantz Henig, is listed in the table of contents as “Darwin’s God,” but on the cover of the magazine it is entitled “Why Do We Believe?” Its frame story is the intellectual odyssey of one Scott Atran, currently an anthropologist with the CNRS, who “entered Columbia as a precocious 17-year-old” in 1968 or so (I can’t resist pointing out that I entered Columbia as a not-so-precocious 16-year-old in 1957). The discussion, which cites a number of authorities (Daniel Dennett, Pascal Boyer, Justin Barrett, David Sloan Wilson…) in the field of what might be called “evolutionary religion (not ‘religious’) studies,” gives no definitive explanation for why “we” believe in God, but everyone seems to agree on the propositions that (1) belief in God is “hard-wired” into our brains, and (2) it is “irrational” and as such probably not directly adaptive but a “spandrel” (Stephen Jay Gould’s term) or accidental by-product of the genuinely adaptive human penchant for shared, reassuring ideas and our related inability to “conceive of ourselves as not existing.” This trait has a number of adaptive advantages, both in making the thought of death less painful and in reinforcing what Durkheim (unmentioned here, even by our CNRS anthropologist) called “solidarity.” For most of the authorities consulted, these advantages suffice to make the side-effect belief in God a necessary evil, although Richard Dawkins, whose Voltairean tract The God Delusion has been on the best-seller list for months, thinks it almost wholly pernicious, as does another best-selling atheist, Sam Harris, author of Letter to a Christian Nation. The article concludes, portentously:
No matter how much science can explain, it seems, the real gap that God fills is an emptiness that our big-brained mental architecture interprets as a yearning for the supernatural. The drive to satisfy that yearning, according to both adaptationists and byproduct theorists, might be an inevitable and eternal part of what Atran calls the tragedy of human cognition.
If I sometimes find it discouraging to observe that in twenty-five years, the new way of thinking that I call Generative Anthropology has been adopted only by a tiny handful of adventurous thinkers, pieces such as this one fill me with a sense of beatitude. On the subject of “why we believe,” if on no other, I am obliged to conclude that my “way of thinking” is not merely superior to but in a wholly different league than discussions like the one in the Times. No doubt Andrew sent it to me as a way of boosting my morale.
There is no point in attempting to refute the article’s tentative explanations for our “belief.” Anyone who begins from the same unexamined premises will quite reasonably arrive at similarly tentative conclusions. Not that it is altogether futile to examine the nature of religious beliefs from a “zero-based” standpoint that permits us to discover, for example, that they tend to be what Boyer calls “minimally counterintuitive.” But anyone with a sense of intellectual elegance will be skeptical of an a posteriori explanation that cannot retrodict from a huge mass of data more than a set of vaguely plausible assertions. Formulating the problem in this way guarantees that, given what we intuitively consider to be the fundamental human traits, we will find belief in God a very odd thing indeed. If we begin from the standpoint that knowledge of the world requires that we think logically, explaining the persistence of the God-spandrel requires all kinds of intellectual contortions. For even if shared irrational belief is a good thing (for instance, because by raising the price of entry to a given society it helps solve the “freeloader” problem), why exactly does God have “anthropomorphic” traits? Why wouldn’t a belief in alien abductions do the trick? And why must this “anthropomorphic” god be all-powerful and immortal? Why is the “minimally counterintuitive” not satisfied with green elephants or flying trees? Thus we must offer a different explanation for each trait: immortal and all-powerful to console us for mortality, irrational to promote “solidarity,” anthropomorphic because we tend to associate causality with human-like agency. . .
It never seems to strike these explorers of “the tragedy of human cognition” that their unexamined anthropological intuition may not be the ideal place from which to begin their journey. Perhaps instead of starting with the idea that the thoughts and actions connected with religion are regions of opacity in an otherwise transparent existence, we should extend our wonder at the human proclivity toward “belief” to the uniquely human ability to use language and other forms of representation. These cutting-edge researchers never get as far as Roy Rappaport, the late anthropologist of religion who postulated in the 1990s that religion and language were “coeval,” or Max Müller, who remarked somewhat less rigorously on the same coincidence in the 1870s–or Giambattista Vico, who noted it more imaginatively, albeit still less rigorously, in the 1740s.
Only once we come to see religion, art, and language as manifestations of a single as yet unexplained faculty of representation can we seek a unitary explanation for this faculty itself rather than beginning with Aristotle’s “rational animal” zoon logon echon and then expressing consternation at his or her perverse tendency to burden him- or herself with “irrational” beliefs.
Rather than expounding the originary hypothesis for the nth time, I will merely note that, contrary to what may still be the majority view of specialists of the problem (Terrence Deacon is the main exception here), “symbolic” human language is not an extension of animal signal systems but a radically new mode of communication. I would go still farther: language brings into being an entirely new kind of entity, the category or type–as in the type-token distinction fundamental to language–that is nowhere to be found in the real, material world.
No doubt one can cite analogies to this kind of entity in nature: the “natural kinds,” such as living species or chemical elements, each member or portion of which can be considered under certain constraints as equivalent incarnations of what Aristotle called a “form” (eidos), a dynamic variant of what his master Plato had called an Idea. But these analogies will take us only so far. Plato may have believed that the Idea of, say, a tree was more “real” than a living tree itself, but the “Idea” of a tree is inconceivable without the signified “tree” in human language. No doubt members of a species share a genotype that manifests itself in individuals much as a semiotic type is instantiated in its tokens, just as any batch of silicon is an instantiation indistinguishable from any other in a given chemical reaction. But all we find in the real world are members of various species, batches of various chemicals; there are in the world no types of human or tree or silicon existing independently of these individual instances. Even if, overlooking the fact that it would be impossible to trace all the beings we call “trees” to a common taxonomic root without including many plants we wouldn’t call trees (palm and banana trees are closer to onions than to oaks), we posit the existence of an Urbaum, the formal genetic pattern of this plant would exist only as incarnate within individual Urbäume in the chains of its DNA, not in an Idea or word such as only human communities can create.
No doubt language could not exist in human minds without a neuronal substrate; ideas do not grow in thin air. But the existence of neurons, even “mirror neurons,” does not explain the existence of language. On the contrary, it is the existence of language that explains the neuronal evolution of the species that uses it. It is a serious category error to affirm that the secrets of language or religion can be discovered by examining the structure and functioning of neurons. Language involves virtual beings of a new kind that “exist” nowhere but in the communal domain of language itself. Once we realize that the ontology of words and meanings, which must be “believed in” by a speech community to exist at all, is altogether different from that of worldly objects of any kind, we will find belief in God–who shares many characteristics of this ontology–less of a mystery. If there is a mystery to explore, it is rather the controversial nature of religious belief both in its particulars (in the often violently asserted incompatibility of different religions) and in general (in the atheistic critique of religion per se).
The nonbeliever claims that man created God, the believer that God created man. The apparent symmetry of these claims suggests that the truth is to be found in the center, in the simultaneous emergence of both. Yet this mutual creation is in fact radically asymmetrical. For the believer, real things are created; for his adversary, God is a fiction no more problematic than Hamlet or Superman. But just as the Times article asks “why we believe” in God without remarking that the language we use to ask the question is just as transcendental as the being of which it is skeptical, so the atheist calls God a “fiction” without remarking that our capacity for creating fictions is no more self-explanatory than that for using language.
Why is it less remarkable that we create “characters” we know never existed and attribute to them actions we know never took place than that we “believe in God”? The obvious answer is that in creating or following a story, we make no assertions about reality, whereas belief is precisely such an assertion. But even if we stipulate that the function of language is to provide manipulable models of reality, it follows only that religious discourse, which makes unprovable and apparently implausible assertions about reality, is more dysfunctional than fictional discourse that asserts nothing at all; we still have not explained the function of the latter. We should not confuse stories with thought experiments, elaborations of hypotheses about reality. Hamlet is not a thought experiment about medieval Denmark.
The simplest distinctions are often the most insightful. Fictions normally concern mortal beings like ourselves. Religious discourse, including what we call “myth,” is about “immortals,” and although these two categories interpenetrate in interesting ways–for example, in “demi-gods” and “dying gods”–their fundamental distinction remains. Gods, and all the more so the monotheistic God, have the attributes of signs as opposed to things. They are not subject to mortal decay; they inhabit “another world” and from this transcendental vantage point preside over our own. The notion of divine omnipotence reflects the all-powerfulness of the sign in the originary event, where the participants’ unanimous designation of the object of desire by means of the sign defers the violence attendant on attempting its appropriation. The projection of this omnipotence onto the natural world provides a cultural explanation for the events of nature. God is praised for good weather and blamed for earthquakes because he is in the first place understood as the arbiter of chaos or peace in the internal conflicts of the human world.
The anteriority of myth to fiction, of stories of gods to stories of humans, indicates that the total separation of the ontology of language from that of reality that permits us to use the former to create maps of the latter evolved from an earlier stage during which the beings that exist in language were presumed to share its ontology, the invulnerability of signs to the entropy of the material world that, when attributed to living beings, we call “immortality.” This is an immediate corollary of the originary hypothesis, according to which it was by surviving the scene of its first enunciation that the ostensive sign came to be understood as a linguistic type independent of its tokens. As the center of the scene remains after its occupant has been dismembered in the sparagmos, so the sign that refers to it remains. On this hypothesis, the being we call “God” is the permanently subsisting signified of the originary sign, the being whose permanence corresponds to the permanence of the sign itself.
Every human possesses the means of “immortalizing” his or her experience by representing it. It is this capacity that associates with our grief at death a sense of scandal that is more than a mere animal reaction to non-presence. The death of our representational capacity is the death we really care about; few things are sadder than a human being who can no longer use language. For the believer, this scandal is expressed and transcended in the contrast between the soul and its embodiment; our grief at the mortality of the one must be tempered by our faith in the immortality of the other. But the model for the immortal soul is nothing other than the immortal signs of language. Conversely, it is only once we have come to understand that this spiritual immortality belongs to us as well as to the gods that we can conceive of fictions in which the creatures we create with our signs share, in their spiritual but not their material being, the sign’s transcendental status.
The outcome of arguing over such things as whether God created the world in six days is not merely failure to reach agreement but the most dismal intellectual stagnation. We need to reason about the human rather than recycle formulas ad nauseam in the service of preset existential positions.
If atheists and believers, instead of facing off against each other to defend their turf, sought to find their common point of reference in the human self-consciousness that alone permits them to debate in the first place, they would find themselves obliged, not to put away their various beliefs, but to bracket all those that interfere with this new conversation. As a result, they would move to the boundaries of their respective belief-systems, to the one common point on which all humans can stand: the hypothesis of our descent from a single originary event of representation.
Only having reached that singular point would they be able to express in mutually communicable terms how much they do and do not share. It is through this conversation that we will learn the possibilities and limits of ecumenism–which anthropological truths all humans can accept, and conversely, about which the necessities of social order require them provisionally to disagree.
Before this can happen, however, not just the happy few of generative anthropology but the actual participants in the debate will have to become aware that there is indeed an alternative to their way of conducting it. I hope that, if not my mortal incarnation, then at least my immortal soul will hold out long enough to witness this moment of revelation.