There is something to be said for the pragmatist notion that no ideas are true or false in themselves, that what counts is whether they “work.” The real question is: what counts as “working”? Many ideas to which it doesn’t seem useful to assign a truth value nonetheless appear to “work” for large groups of people. An article in the July 23 Los Angeles Times described the Chinese government’s persecution of the Falun Gong sect, seen as a potential threat to the regime. This cult, whose leader Li Hongzhi resides in New York, has several million adherents in China. Is Li truly the “savior” of these people? His doctrine appears to bring its followers something of the inner peace traditionally granted by sacrificial ritual, in a world where such ritual is no longer practiced. The power of any cult over its adherents comes from its ability to translate these originally collective violence-deferring practices into terms accessible within their daily lives. If the idea of a Wheel of Law spinning in my abdomen is to bring me inner peace, it must project the assurance that I am not a potential victim of mimetic violence. Although it appears easy for outsiders to separate the content of such an idea from the anthropological truth it incarnates, this becomes more difficult in the “higher religions,” where a specific historical event, such as the preaching and crucifixion of Jesus, is the occasion for a much more complex anthropological vision.

How should we deal with this question at the very highest level, that of the minimal notion of the human as hypothesized by generative anthropology? Is this construction, however minimal, ultimately just a means of experiencing a more intellectual variant of the peace provoked in the Falun Gong by attending to the Wheel of Law in their bellies, or does it convey a useful anthropological truth? But how is the context of “useful” to be defined? If we think of ourselves as human scientists, then what is useful is what allows us to construct a logically consistent anthropology in the peaceful realm of “theory.” But we may equally well see ourselves as humanists whose vision of the human is subject to the paradox of culture, which defers violence without ever freeing itself altogether from it. I have always conceived of generative anthropology as standing at the interface between the scientific and the humanistic visions, making itself as nearly punctual as possible in order to avoid occupying territory on either side.

Why then do we need to postulate an “originary scene”? The emergence of modern Homo sapiens in the ancestral line that split off from the common ancestor of the chimpanzees some four million years ago comprises many stages, at any one of which we may declare that “humanity” has come into being: upright posture, permanent tools, fire, hunting… and, of course, language and ritual. And even if we consider language the essential quality of humanity, since fully developed language capacity probably emerged over some millions of years, why is it necessary to fix on some particular moment as “originary”?

An example of the latest scientific thinking about language is Sydney Lamb’s Pathways of the Brain: The Neurocognitive Basis of Language (John Benjamins, 1999), which constructs a detailed and plausible model of how the operations of language might be realized in the brain. In this “neurocognitive” model, language is a set of connections between sound-production and -detection systems on the one hand and conceptual meanings on the other. Lexical units or “lexemes” are not symbolic entities “stored” in memory and “retrieved” for use, but “nections” or nodes in a network, “recruited” by experiences of association; our linguistic memory, unlike that of a computer, has no existence outside the processing network itself. From the standpoint of this model, as Lamb affirms, to speak of an origin of language is naïve–the very notion of “language” is naïve. There are merely connections, and any “mutation” in which Darwinians might be tempted to locate the origin of language is just one among countless thousands through which the hominid brain has evolved to its current level of connectivity. For Lamb, the defining moment of origin that linguist Derek Bickerton calls the “magic moment” or the “Rubicon” (see Chronicle 167) and that brain researcher Terrence Deacon feels obliged to justify by a mutation of the protohuman social order (see Chronicle 168) is an illusion based on our reifying as “language” what is merely connectivity. As Lamb puts it, the human brain handles language not as the stomach digests food but as the right foot operates a car’s accelerator. Hence although language generally involves the left brain and phonology, it can be implemented by the right brain and by hand signals. Connectivity is all, and the qualitative difference between human and animal communication systems reflects a gradual, genetically selected increase in connectivity over thousands of generations.

No minimalist thinker can dismiss this radical attempt to abolish the reification of language and its origin. But can we meditate on the implications of language as a proliferation of connectivity without some idea of what drove our species toward this proliferation? Lamb would presumably reply that, in the study of brain function or the modeling of linguistic processes in the brain, such speculation is a distraction; to attempt to say exactly at what point the neural network becomes “human” is to set up an artificial boundary that corresponds to nothing in the network itself. Whatever motivations in hominid social structure there may have been for this cognitive development would be irrelevant. The challenge raised by Lamb’s model is not so much to find a place within it to situate our “originary scene” as to show how doing so is relevant to the model.

Let us then imagine the prelinguistic state of the brain. It contains conceptual links or associations between perceptual traces; it also contains links between such traces and vocal productions or “calls.” Although Lamb does not make the distinction, the brain pathways for calls must be distinguished from those for words. Humans have a few “calls” of their own–laughter, sobs, gasps–qualitatively different in both form and operation from the signs of language. Hence the question of the origin of human language is not evacuated by a description of the multiplicity of its connections in the brain. Even if Lamb’s gradualism is justified with respect to the conceptual nodes that exist in primate (and lower mammal) brains as well as ours, this cannot be the case for the lexical nodes that link the conceptual nodes to centers of phonological production. Lamb’s concern to establish human continuity with the prehuman on the “higher” conceptual plane seems to have led him to overlook the problem that the origin of language poses on the “lower” physical plane. A chimp is mentally equipped with concepts and, although lacking in human speech organs, physically capable of creating linguistic signs. But what is lacking in the chimp, and what he can only acquire at the most elementary level through intense special training by humans, is a set of “nections” between the signs, be they vocal or gestural, and the concepts.

Researchers have succeeded, on a modest scale, in teaching chimps and bonobos to use language. But the demonstration of this possibility, far from disproving the originary hypothesis (as some have claimed), only corroborates it. No one contests that the human brain is far better adapted to language than the bonobo’s, nor that this adaptation is the result of (at least) hundreds of thousands of years of evolution, at the outset of which our gifts for language must have been far more limited than they are now. If we assume that it was language that made humans diverge from the ancestral lines that eventually led to chimps and bonobos, then at the origin of this divergence, our language abilities must have been pretty much the same as theirs. Let us grant, for argument’s sake, that those of the protohuman line did not exceed those of the bonobo today. But bonobos don’t use language in their natural environment; they do so only when trained by humans. This implies that it was not their “natural endowment” that led humans to invent/discover language, but the particular circumstances in which they found themselves. If protohuman and bonobo did not differ in language ability, some other explanation must be found for the difference in their use of language. Teaching chimps to use signs only reminds us of this; it doesn’t tell us why we talk and chimps don’t.

I don’t pretend to have any particular insights into the evolution of the cerebral cortex. But it is clear a priori that a model of language that cannot discriminate between a call and a word or “lexeme” is inadequate. Whatever the nature of the linguistic sign, its use involves a new kind of withdrawal from the world of appetitive association–Bickerton’s “off-line processing”–that requires explanation.

The fact that we can now model specific operations of the brain where previous generations had to be content with the old metaphysical vocabulary of “thinking” and “willing” does not preclude the need for an originary hypothesis to explain the emergence of human language. On the contrary, by making more precise the nature of the brain’s linguistic function, recent research–as Deacon’s book suggests–makes such a hypothesis all the more plausible. But this research makes it incumbent on those of us who work with this hypothesis to take these latest discoveries into account. It may still be a bit premature to attempt to describe our internal scene of representation in terms of neurons and synapses, or even of “nections,” but we should prepare ourselves for the day when a hypothesis that lacks such a description will be dismissed as the product of an outmoded way of thinking.

I would not want to end on a negative note. Lamb’s book dismisses the origin of language, yet it is not uncongenial to GA’s minimalism: one explains all by mimesis, the other, by connectivity. The originary hypothesis describes the genesis of language in a mimetic crisis; how could this genesis have created the first linguistic “nection” in the brain? What makes such “nections” possible and necessary in the brain can only be what makes the linguistic sign possible and necessary in the originary scene: an inhibition of the “horizontal” appetitive associations of the sort possessed by calls, but not by words. Before there could have been “language areas” in the brain, Wernicke’s or Broca’s, a new kind of “nection” had to be created between a concept and a sign, one that suspended rather than provoked appetitive action. Whatever the phonic and/or gestural substance of the first sign, its connection with its “signified” or concept must have been mediated by the necessity of substituting for such action the communication of the signer’s renunciation of it to his fellows. I will leave it to brain researchers to tell us how this was implemented neurocognitively, but I find it hard to imagine what finding of theirs could persuade me that it was not.