I often get the impression that the longstanding principle of not multiplying unknown entities (call them “parameters”) beyond necessity is today considered an obsolete, pre-cybernetic idea. For without computers you can’t handle large numbers of these entities efficiently, but now that we have petaflops and exabytes, the more parameters the better. Could the effect of the computer in the social sciences be similar to the one we have lately discovered when we visit the doctor: to concentrate the professionals’ attention on the machine rather than on the subject they are supposed to be examining, whether humanity in general or the patient in particular?

This Chronicle is inspired by the September 2018 special issue of Scientific American entitled “Humans: Why we’re unlike any other species on the planet.” I must admit that it took me a few months to get to it, knowing that what I would find there would confirm my confidence in generative anthropology, but also my certainty that none of its insights can be assimilated into the current universe of the social/human sciences, given the latter’s systematic refusal to recognize any other than quantitative differences between the human and other animals.

1. “Two Key Features Created the Human Mind,” by Thomas Suddendorf (p. 42-47); the two features being (1) “nested scenario building,” which reminds us of the double-scope blending of Fauconnier & Turner’s The Way We Think (Basic Books, 2002—see Chronicle 528)—one wonders why there is no standard vocabulary to designate such things; and (2) “the urge to connect, the human drive to exchange thoughts with others.”

Concerning the second, anthropologists should be embarrassed to use this kind of flabby language. What is the “urge to connect” as a term of scientific art? How can it provide a data set? Is the “urge” an emotion like fear, or a psychological need like hunger? How can it be measured? But one thing is certain: characteristically and specifically, humans “connect” through language; animals, although they certainly have their own means of communication, do not.

No doubt the author feared that claiming that humans have something animals do not would make him guilty of “speciesism.” Yet this is a serious journal for non-scientists, where articles about physics and chemistry and biology give specific names to the objects they are discussing and tell you in lay terms what we have learned about them. It strikes me as one more demonstration of the invasion of our intellectual lives by victimary thinking that one cannot explicitly say that there is a qualitative and not merely quantitative distinction between animal and human communication.

Here is a characteristic passage from the text:

I . . . found that animals are smarter than widely thought. For instance, chimpanzees can solve problems through insight, console others in distress and maintain social traditions. Nevertheless, there is something profoundly distinct about human language, foresight, intelligence, culture and morality . . . (46)

It takes nothing away from animal intelligence to point out how inappropriate it is to describe in a scientific publication what separates us from the rest of the animal kingdom as “something profoundly distinct,” and then to include “human language” in a list of five “distinct” human qualities (Dr. Ockham, call your office) without the least suggestion of a dependence hierarchy. Are culture and morality possible without language?

Let us imagine a similar description of life: “Atoms are more interactive than widely thought. Nevertheless, there is something profoundly distinct about living creatures…”

How many more decades will we have to wait before the minimalistic clarity of the originary hypothesis outweighs the mutually reinforcing combination of fuzzy-mindedness and PC species-diplomacy that makes this kind of mush standard in the field? Nor is this an effect of dumbing-down for the lay public. If these scientists had hard-edged conclusions, you would hear about them. This is what they think. The Scientific American article only leaves out the data sets you will find in the professional literature. Like Daniel Everett (see Chronicle 567), they are less anxious to explicate the origin of language than to assure us that “language is not that difficult.”

2. Susan Blackmore’s “Decoding the Puzzle of Human Consciousness” (p. 48-53) begins with the pregnant question: “Might we humans be the only species on this planet to be truly conscious?” (50). She then goes on to cite Thomas Nagel’s famous question, “What is it like to be a bat?”

It is no doubt interesting to think about what it’s like to be a bat, but the irony implicit in the question comes from the fact that there is no way, even with the most sophisticated measurements of the bat’s nervous system, of putting in words “what it’s like.” We can say with some assurance that in certain circumstances the bat feels cold, or hot, or hungry, but that is not what Nagel is getting at: he wants us to feel the frustration of not being able to get inside the bat’s consciousness to the extent that we can “dialectically” get inside our neighbor’s. But this avoids the essential issue, which is that imagining whatever bats feel won’t help us at all to understand the mystery of human self-consciousness, which is dependent on language.

Blackmore describes two schools of thought: the “B team,” who “believe” in consciousness, but see it as a kind of side-effect, so that “zombies” may perfectly well exist, who act just like the rest of us but have no “subjective” experience at all, and the “A team,” who see consciousness as an epiphenomenon that may or may not accompany mental activity, presumably in all animals, but has no existence “in itself.” In neither case is the fact that human consciousness is suffused with the language by which we communicate its contents to one another ever mentioned.

Hence we should not be surprised that in an article devoted to consciousness, the word “language” is mentioned exactly twice—in the context of the theory of memes, a form of information that, unlike language, apparently has real objective status. Thus we learn that:

Because humans are capable of widespread generalized imitation [Girardians, take note!], we alone can copy, vary and select among memes, giving rise to language and culture. “Human consciousness is itself a huge complex of memes,” [Daniel] Dennett wrote in Consciousness Explained, and the self is a “benign user illusion.”

This illusory self, this complex of memes, is what I call the “selfplex.” An illusion that we are a powerful self that has consciousness and free will—which may not be so benign. Paradoxically, it may be our unique capacity for language, autobiographical memory and the false sense of being a continuing self that serves to increase our suffering. Whereas other species may feel pain, they cannot make it worse by crying, “How long will this pain last? Will it get worse? Why me? Why now?” (53)

The inane pretentiousness of this perhaps needs no comment. We, poor fools, think we have consciousness, but today’s social scientist and neo-philosopher, instead of seeking, as philosophers did not so long ago, to explicate the activities of this consciousness—as in the 900 pages of L’être et le néant—simply tell us it’s all a “benign user illusion.” You never realized, dupe as you are of the Western philosophical tradition, that it’s all just a selfplex, a complex of memes.

And language—well, this makes me want to take back everything I have said in criticism of Derek Bickerton and the “the food is over the hill” explanation of the origin of language. For Ms. Blackmore, language serves only to “increase our suffering.” Even to conceive that “crying,” as just about any vertebrate can do in its own way, might serve to bring others to succor you, à la the boy who cried “Wolf,” is beyond the comprehension of one who describes our use of language as a means by which we “select among memes.”

As with the distinguished philosopher I mentioned in last week’s Chronicle, but with an intellectual manque de sérieux that I am sure would shock him more than me, we are expected to find objective language in which to describe human consciousness, if indeed it exists at all, with no particular concern for that consciousness’s own dependence on language. But where the traditional philosophers seriously examined human self-consciousness, the scientists and neo-philosophers deny its very existence. They will presumably begin to take it seriously only if and when they find some kind of neuronal phlogiston that establishes its physical existence.

In the meantime, we must be content with the answer provided in the last sentence of the article: “We humans are unique because we alone are clever enough to be deluded into believing that there is a conscious ‘I’” (53). Well, that’s helpful. We can send rockets to land on asteroids, but our scientific reflection on ourselves is a poor imitation of Zeno of Elea. And, in case you hadn’t noticed, Zeno was being paradoxical. In Ms. Blackmore we have, in contrast, the sophomore’s earnest desire to surprise us with her new “knowledge.” I bet you weren’t conscious of the fact that consciousness is a delusion!

3. “What Makes Language Distinctly Human,” by Christine Kenneally (p. 54-59), author of The First Word: The Search for the Origins of Language (Viking, 2007), is a review of studies on the subject (not including mine) that reaches no conclusions of its own.

It has been pretty clearly established that human language is not continuous with animal communication systems; this was the outcome of Terrence Deacon’s researches as summarized in The Symbolic Species (1997), and was negatively confirmed by the failure of all those chimp experiments to teach their subjects more than an advanced signaling system. Yet Ms. Kenneally is ready once more to burst the bubble of the illusion of the “dramatic idea” of an “explosive” origin:

For a long time we have been in love with the idea of a sudden, explosive transformation that changed mere apes into us. The idea of metamorphosis has gone hand in hand with a list of equally dramatic ideas. For example: that language is a wholly discrete trait that has little in common with other kinds of mental activity; that language is the evolutionary adaptation that changed everything; and that language is wired into humanity’s DNA. . . .

. . . It [now] looks like language is not a brilliant adaptation. Nor is it encoded in the human genome or the inevitable output of our superior human brains. Instead language grows out of a platform of abilities, some of which are very ancient and shared with other animals and only some of which are more modern. (56-57)

And a little further down:

If creatures with different brains and different bodies can learn some humanlike communicative skills, it means that language should not be defined as wholly human and disconnected from the rest of the animal world. (57)

And in conclusion:

In the short time since the field of language evolution has been active, researchers may have not reached the holy grail: a definitive event that explains language. But their work makes that quest somewhat beside the point. To be sure, language is probably the most unique biological trait on the planet. But it is much more fragile, fluky and contingent than anyone might have predicted. (59)

Well, who said language should “be defined as . . . disconnected from the rest of the animal world”? Or that in order to be a “brilliant adaptation,” it cannot “grow out of a platform of abilities . . . ancient and modern”? Why call a “definitive event” in which language originated a “holy grail,” as though the very idea of an evenemential hypothesis is absurd on its face?

The quasi-religious language used in dismissal is not insignificant. Events, as I understand them, belong to history; otherwise it’s just matter moving around in various ways. The point of the originary hypothesis is not to discover “the” event of the origin of language, but to provide a model of how such an event could inaugurate the world of language and culture.

Culture intrinsically involves the collective memory of events, so that to claim that our every use of language “bears the imprint” of the originary event of this use is not to invent a myth, nor to point to some specific date on the calendar. It is a model, and its many repetitions need not all have been conscious of one another. But within a given community, barring complete dissolution and reformation, it is indeed meaningful to speak of an originary event. A second event cannot take place without recalling the first—in particular, by the linguistic gesture that marked it. If this idea is uncomfortable for human scientists, that is a shame, because that is how the human works. Other animals evolve; we have a history.

In contrast, to imagine language as a “biological trait” emerging through genetic evolution is absurd—which does not of course mean that the institutions of language and culture will not have biological foundations as well as consequences on both the body and the brain. Language is implemented in each individual brain, but its crucial manifestation is on an interpersonal cultural scene. Calling this scene “biological” simply obscures its distinction, along with the other elements of culture, from merely biological life processes. The common-sense difference between nature and culture is not a pre-scientific illusion.

The “most unique biological trait, . . . fragile, fluky, and contingent”: is that the sum total of what all this research on language has produced? Yet although, in this and the previous article, there is virtually no effort to tell us what consciousness or language are, a good deal of effort goes into telling us that they are not what we thought they were. Rather than proudly proclaim results, like the masters of the hard sciences, these researchers seem above all concerned to destroy our illusions, leaving in their wake only opportunities for new research grants.

4. John Gribbin’s “Alone in the Milky Way” (p. 94-99) is worthy of note for daring to tell us that there is almost certainly no other intelligent life in our galaxy—although the author insists we can’t be sure about the others, and is also certain that other (non-intelligent) life forms do exist in our galaxy. He adduces the relevant fact that life didn’t take very long to appear on Earth—which still doesn’t explain why we haven’t found any evidence of it elsewhere. Nonetheless, this bit of skepticism makes me wonder whether at least some of the scientists are less gung-ho on extraterrestrial life than they let on, but are afraid to reveal this in public for fear that their budgets might be cut.

I wish I could invite some observers from Sirius, like Voltaire’s friend Micromégas, to share my amazement that, for lack of a minimal hypothesis of the origin of human language and culture, rather than constructing models of human specificity, our anthropological theorists are consumed with deconstruction. They are anxious to discourage hopes of “holy grails”: sudden transformations, definitive events, “wholly discrete” traits. Rather than seeking to specify what is human, they are impatient to show that every human trait, language included (let’s not talk about religion), is shared with the animals, although no doubt “there is something profoundly distinct about us” and “language is probably the most unique biological trait on the planet.”

The simple truth is that the origin of language and culture as specifically human phenomena cannot be understood in their ontological specificity by either philosophical or natural-science modes of thought so long as both of these ways of thinking depend on the metaphysical fiction that language itself, in the mature propositional form in which we express ourselves, cannot be traced to a human, historical origin.

For philosophy, this is more a feature than a bug; philosophy in its wisdom has never taken seriously the efforts of such thinkers as Condillac and Herder to create hypothetical scenes of language origin. For natural science, the situation is more complex. Science is the child of metaphysics; its own language is necessarily composed of propositions. No doubt this necessity does not apply to language as an object of study. But science is also empirical, based on verifiable observation, and propositional languages are the only ones we know. Linguistics as a science wisely does not deal with speculative questions of origin, only of historically attested derivation. Likewise, the broader domain of anthropology is bound by the realia of empirical research. In Les formes élémentaires de la vie religieuse (1912), his most important anthropological text, Durkheim explicitly rejected the idea of speculating about forms of religion more primitive than those attested by the researches he presents, and the same would certainly apply to language.

This leads to an impasse in the study of language origin. It is inconceivable that the first human language consisted of propositions, declarative sentences. Yet in the absence of empirical evidence, anthropologists are highly resistant to the idea of postulating a pre-propositional language as the outcome of an originary hypothesis. Thus their research into language origin, to the extent that it can be so called, treats the idea of originary language in the vaguest manner; it is understood as having a collection of features and performing various functions, but without any attempt to reconstruct a path from what must have been the simplicity of its origin to the “mature” state in which we find all existing human languages.

Only the originary hypothesis as first enunciated in 1981 in The Origin of Language provides a way out of this impasse. Whence I have called GA a new way of thinking.

The minimal hypothesis I first drafted 40 years ago never claims that human language is “disconnected” from anything. It treats the realm of representation—language, religion, culture—as a new category of being, not because its material components were unknown in the animal world, but because the proto-human social configuration that led to our “urge to connect,” which Girard, following Aristotle, called mimesis, led to a hypothetical crisis of the old pecking-order system, requiring for its resolution a new social configuration centered on a radically new means of communication.

But such thinking is too speculative for our social scientists, who cannot deal with anything that is not broken down into separate physical pieces, whether strands of DNA or neurons. If life can be explained in physical terms, how can language be any different? Most of us remember Swift’s portrait of the academicians of Lagado, who had abolished words, and carried sacks on their backs containing the objects they wanted to talk about. But words too are just “physical things,” aren’t they?

With all due respect for the researches carried out by the authors of these articles and their colleagues, we cannot forbear to point out the category error inherent in any attempt at a “biological” understanding of what is by its essence super-biological. Calling the human “profoundly distinct” from other animals is merely a symptom that something is afoot here that is not being explained—that, without recognizing it, a qualitative difference is being disguised as a quantitative one.

The irony of the age of the Internet, which makes all our writings readily available throughout the world, is that they are a tiny drop in the ocean of other material, much of which has far greater access than ours to the mechanisms of publicity. Yet I retain my hope that the originary hypothesis will one day be recognized as a critical step in the process of self-understanding that has uniquely characterized humanity from the beginning.