This is a draft version of the first part of my GASC talk this June in Tokyo; the second part will deal with the Japanese philosopher Kitaro Nishida.
As the Western-centered world order established after WWII appears to be losing its grip, most significantly in the West itself, considerations of “scholarship” and “theory” can no longer be pursued as if in the context of a civilization confident of self-perpetuation. As GA makes clear, the underlying purpose of language, religion, and all the other trappings of human culture is simply to perpetuate humanity itself. The nobility we may attribute to this quest is that of a simple response to necessity: our ancestors invented language, religion, and the rest not out of some transcendental drive toward perfection, but in order to preserve their species’ existence. We may define our species’ “transcendental” nature by the fact that, having been driven by necessity to create language, it has likewise been driven at every stage to perfect its cultural functioning by the need to preserve itself, in what the French call a fuite en avant, a flight from the past toward the future. In past eras, this self-preservation was sought separately by communities, tribes, confederations, empires, and nations, but today, to an increasing extent, we feel the need to preserve humanity as a whole from self-destruction. Thus we combat, for example, the dangers of “climate change,” substituting them for the far greater dangers posed to us by humanity itself, which, as we have always done, we fear to face directly.
That since October 7 of last year the Jewish people have been intimately involved in the current rising tension is merely par for the course: whether we like it or not, Western civilization depends on the Hebrews’ originary monotheistic intuition, and the hate they still engender on the least pretext is the quintessential example of the scapegoating mechanism that Girard considered to be the origin of humanity. But the arbitrary choice of an “emissary victim” is just the opposite of the case of the Jews, whose scapegoat role, of which we are now witnessing an exemplary demonstration, will presumably last as long as Western civilization itself. If this “arbitrary” mechanism could indeed resolve our problems, we wouldn’t have to keep scapegoating the same group. No, it is precisely the lack of arbitrariness, the unforgettable historical Firstness of the Jews, that can only be “resolved” by exterminating their descendants and obliterating their writings, until even the fear of their revival has been forgotten. But is it not more likely that in its lust to destroy the Jews, humanity will simply destroy itself?
The human was born in the deferral of its appetites as a prelude to their communal satisfaction. But the boundaries of the human community are not predefined, and it becomes ever more evident that the West, in its aspiration to become the “boundless” or “global” community, can become so only through its own self-abolition. The Jews are the Western archetype, and their annihilation figures the sacrifice of the “white” world and ultimately of humanity as a whole.
For in the modern era there is one single world civilization, and its core is Western; the remaining differences among cultures are no more than costumes at a festival. China, the most “Eastern” of cultures, is governed today by principles elaborated by that old ex-Jew Karl Marx. And in contrast with the traditional Muslim civilization that resists the temptation of the “red-green” alliance, the world of today’s jihadist Islamism is a resentful inversion of Jewish firstness, hating the Jews all the more while seeking to persuade the non-Jewish “whites,” not long ago called “Aryans,” that the jihadi crocodile will devour them last.
If world civilization is to survive, it must indeed transcend the curse of antisemitism. Christianity has largely done this; the Pope now speaks of the Jews as his “elder brothers.” And there is good reason to hope that, as in the “Abraham Accords,” the Islamic world will do so as well. If the remnants of antisemitism in the West were firmly rejected, instead of being coddled and even encouraged as we see today, there would be every reason to believe that the problems of the Middle East, and of the rest of the world as well, could be settled peacefully rather than leading humanity to self-destruction.
From our “Western” perspective, in order to understand the point of GA as the solution to the quintessentially modern disillusion with metaphysics, we must ask ourselves how a minimalist understanding of language, and by extension of the human itself as a cultural phenomenon, can serve to give us a clearer chance of diagnosing our present state of incipient crisis and, hopefully, of increasing our species’ odds of survival.
The danger faced by the West in the cybernetic age is that the vast proliferation of the quintessentially human characteristic of “scenicity,” as embodied most visibly in our ubiquitous cellphones, facilitates the intensification of what I have called the epistemology of resentment, which since the French Revolution has generated ideologies that, on the one hand, deny legitimacy to hierarchical differences, while on the other facilitate the accumulation of vast divergences in wealth and status, as the billionaire class, all the while denying any justification of privilege, contents itself with profiting from allowing the public to blanket the world with expressions of this same poisonous epistemology. The coexistence of multi-billionaires with the “anti-racist” doctrine of diversity-equity-inclusion or DEI is a modernized version of Tory paternalism, but unlike its 19th-century prototype, it is deliberately hostile to the traditional values of civil society that maintain what increasingly seem the final remnants of what we can still call “liberal democracy.”
I have often spoken of the dominant tendency in post-Hegelian philosophy to seek to escape the prison-house of language, a phrase that became famous when Fredric Jameson made a book title of Erich Heller’s fanciful translation of what Nietzsche had simply called “linguistic constraint” (sprachlichen Zwang; see Emily Apter, “The Prison-House of Translation?” Diacritics 47.4 (2019), p. 54). What strikes me on reflection is that, as in one of those hippie drawings where the protagonist starts out enclosed in a cardboard box situated in a lovely landscape of which he knows nothing, this is a condition based on what I have defined as “the metaphysical,” one common to all of Western philosophy, even including its “existential” post-metaphysical stage. Which is to say that metaphysics is, very simply, the refusal to see that whenever we speak of thinking, we are speaking of language.
The only exception is the sort of reasoning that we can attribute to Köhler’s apes, who are capable of picking up a pole or building a pile of boxes in order to reach a bunch of bananas hanging from the ceiling. Humans too engage in such thinking; when we want to put salt on our food, we look for the salt shaker, and on finding it, satisfy our need, presumably without the use of even “internal” language. But when it comes to the kind of thinking that goes into philosophy books, or even newspapers, as soon as “thinking” means performing an act of reasoning or affirming a “truth,” which really means a true proposition, we are talking about the use of words. Without words, without signs whose meaning is in principle shared by our community, we cannot communicate our thoughts to others; and to point out that we can think without such communication is to overlook the fact that the words we use “privately” connect us willy-nilly to the community formed by their users, however broadly or narrowly defined.
However surprising it may seem, the simple idea that the human may be defined by the communication of signs on a scene seems never to have been thought of as sufficient unto itself. Yet it neatly and unambiguously distinguishes Homo sapiens from all other species. And although we all recognize that language in the human sense, even including pointing, is unique to our species, the idea that the scene is as well never seems to arise. (The bee-dance famously analyzed by Karl von Frisch is no exception: see Chronicle 724.) But all that is characteristically human is dependent on the cultural phenomenon of language, communication via non-instinctive signs, which can more specifically be described as gestures/sounds/inscriptions emitted deliberately, whose utterance can be traced to a uniquely human act of what Jacques Derrida called différance. Derrida understood the deferral of an utterance as permitting the speaker to choose within a paradigm of words, but the most fundamental relationship of language to deferral is that language (think again of pointing) begins with an at least half-conscious substitution of reference for appropriation. Thus I believe we can say with some degree of confidence that the first sign may be defined as an aborted gesture of appropriation: instead of grasping at something, we point to it, something which, surprisingly, animals, even chimps with hands, do not do as an act of communication.
The scene of human interaction is not, as some of my early formulations of the originary hypothesis may have suggested, entirely defined by its “interdicted” center. But this configuration is indeed the source of the sharp gradient of attention, or meaningfulness, between objects of interest and the empty space that surrounds them. I think Sartre well captured this aspect of the human scene in L’Etre et le Néant by speaking of a néant or nothingness (a term we shall encounter in Nishida) separating the human Self or pour-soi from its objects, a space which is, so to speak, an invisible barrier and consequently an enhancement to desire. But more broadly, the human scene is a space of cultural interaction. A degree of potential scenicity exists between any two human beings; in the normal circumstances of “civil society,” their interaction will take place through the mediation of signs.
The theatrical scene, which gave its name to this phenomenon, offers its audience a passive spectacle, but on the “scene” of normal human interaction, there need be no predesignated participants or spectators. Indeed, it is the explicit distinction between participants and spectators that defines, outside the norms of everyday human activity, the world of art, a basic definition whose essence is not modified by the postmodern violations found in spectator interactions with “installations” and the like.
What does permit such delimitation is what we call the sacred. The deferral of appropriative action that came to be interpreted as a sign reflected a sense of sacred interdiction surrounding the desired object, which we can do no more than designate, because we feel that the attempt to possess it is impossible. The simplest example of this experience, the simplest way for us to re-experience the originary event of human language, is when we hesitate to take the last cookie on the serving dish at a party, particularly if two people reach for it at the same time. This aborted gesture feels “instinctive,” but it is not a reflex action like pulling your hand back from a hot surface. For generative anthropology, this may be considered the minimal human experience, the germ of both language on the one hand (designating the cookie rather than grabbing it) and religion on the other (treating the cookie as an interdicted or sacred object, one that its “worshipers” must admire from a distance without attempting to possess it).
When I sometimes claim that there are no rival theories to GA, I do not mean that no one else has ever sought to provide an explanation for the origin of language. As I discussed in Chronicle 614, primatologist Richard Wrangham in The Goodness Paradox (Pantheon, 2019) emphasizes the human need to contain violence, and at one point suggests that the first use of language would have been to organize coalitions for the purpose of eliminating any member of the proto-human group found to be particularly prone to violence. Another explanation I have seen recently is that, having learned to use tools, proto-humans became able to inflict greater damage on each other than species fighting with bodily weapons alone, leading to the need for language to permit conscious negotiations that would defer conflict.
But such superficially attractive hypotheses are anything but minimalistic, and above all, they fail to take seriously the fundamental difference between reflexive signals and conscious (and “arbitrary”) signs, a difference that Terrence Deacon traces back to their control by different parts of the brain; such signs depend precisely on what I have been calling, after Derrida, deferral. Thus most recent writing about language origin either deals with the potential correlation of language with empirical details of primate interaction such as sociality or brain function, or seeks, in popular-science terms, to demonstrate that crossing the “Rubicon” of language wasn’t really all that great a problem.
The intellectual-historical reason for the lack of interest in a humanistic theory of language origin such as the originary hypothesis is the “Woke” turning-away, within what can broadly be called the humanities, from the ambition to understand the human at its root: to create what Girard called fundamental and I have called generative anthropology. This ambition, which was prevalent in the literary fields when I began my studies in the 50s and 60s, was greatly stimulated in the US by the 1966 conference on the Languages of Criticism and the Sciences of Man, organized at Johns Hopkins University by Girard himself, which brought Derrida and a number of postmodern French thinkers to the US for the first time.
Unlike the hard sciences, the fields of the “humanities” were not conceived as laboratory-based endeavors. Since the late 19th century, the study of literature or art had been seen as a privileged avenue toward understanding the human-in-general; 20th-century developments in this direction included Anglo-American “New Criticism” and Russian Formalism. Arguably the later French version of this movement, the post-WWII nouvelle critique, sometimes spoken of as “post-structuralism” and often referred to in the US as French Theory, was its last and most ambitious stage. Jacques Derrida’s rock-star status in the 1980s and 90s, scarcely believable today, along with the comparable stardom of several other critics (Foucault, Lacan, even Girard himself), reflected this intuition of broad intellectual importance.
Today, this tendency has all but vanished, and its aging survivors can only spout post-Marxist varieties of Wokism. As a consequence, GA is dismissed as a “last gasp” of French Theory, as simply passé. Yet we should recall that fashion is a poor indicator of what really matters in the course of humanity’s ongoing attempt to understand itself.