It had been seventeen years since my last talk at COV&R, and given the Zoom connection, the physical distance was ironically maintained, although my location in Sugar Creek, MO was only about 480 miles from the Purdue conference center in West Lafayette, IN.

After a scheduling mix-up, I was able to give my talk on Friday, July 9, followed by a response from Chris Fleming and well over an extra hour of discussion: the cordial and enjoyable conversation lasted over three hours. Virtually the only active participants were members of the GASC.

My paper was more a summary of my latest thoughts on GA than an original analysis of artificial intelligence or “algorithmic mimesis,” whose limits I do not feel qualified to define. But I hope it suggests something of what would be involved in endowing a human creation with a “soul.”


Three Kinds of Mimesis—and Two Projects

We are all familiar with John Searle’s “Chinese room,” in which a human follows instructions that allow him to copy out answers to questions in Chinese (I hope not in Chinese characters) without understanding the language. Well and good; but the human computer-clone doesn’t have to “understand” anything: if his/its algorithms produce the correct answers, then it has passed the “Turing Test” and demonstrated its functional equivalence to the human mind in performing this task.
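The room’s mechanism can be made concrete in a few lines of code. The rule table below is of course invented for illustration (and romanized, in deference to the parenthesis above); the point is that the responder contains rules and nothing else:

```python
# A toy "Chinese room": answers are produced by rule lookup alone, with no
# model of meaning anywhere in the program. The dialogue pairs are invented
# for illustration.
RULES = {
    "ni hao ma?": "wo hen hao, xiexie.",      # "How are you?" / "Fine, thanks."
    "ni jiao shenme?": "wo jiao xiao fang.",  # "What's your name?" / "Xiao Fang."
}

def chinese_room(question: str) -> str:
    """Follow the instructions; understand nothing."""
    # Unknown input gets a stock deflection: "Please say it again."
    return RULES.get(question.strip().lower(), "qing zai shuo yi bian.")

print(chinese_room("Ni hao ma?"))
```

To an examiner who sees only the answers, nothing distinguishes this lookup from comprehension; that is Searle’s point, and the reason the Turing Test measures function rather than understanding.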

As the years go by, computer simulations become increasingly capable of improving on the human nervous system. At one time, they were learning to play chess. Then they beat the best humans at chess, but not at Go, whose much larger board is not set up in advance and where each move is dictated by a global strategy. Then, a couple of years ago, a Go program was able to defeat the world’s best players.

Whatever our understanding of the human soul, I think we can stipulate as self-evident that it would be foolhardy to attempt to demonstrate at this early stage in the history of AI that any human behavior cannot be algorithmically simulated. The more profound questions posed by AI lie elsewhere.

As I have been claiming for the past forty years, human language emerged in the first place in order to defer the mimetic violence that threatened the proto-human community. Our “cool” metaphysical language of declarative propositions and truth-values must have been preceded by a sign of designation or pointing: an originary ostensive.

This was not a product of disinterested contemplation. On the contrary, our capacity for such contemplation is the result of a “sacred” imperative to defer the reflexive appropriation of a common object of desire. Thus I consider a humanistic anthropology to be one that refuses to reduce the specifically human to the biological, that recognizes human uniqueness by tacitly or explicitly accepting the originary equation of the sacred with the significant, of the human self with a conscience or soul.

What can such anthropological reflection suggest about AI’s potential for exploring the human world? I will leave aside its obvious uses in industry and science. Nor will I deal with the sci-fi nightmare scenarios in which our creations revolt against their human masters—we are already sufficiently dangerous to ourselves. Yet the Frankensteinian dream need not for all that be abandoned—perhaps.

AI is most simply understood as a mode of cybernetic simulation or mimesis, created by humans whose own mimetic capacity must itself be understood as a qualitative improvement on the biological mimesis that had evolved in plants and animals through natural selection.

This suggests that we should examine mimesis on three levels. In the first place, there is biological mimesis: butterflies that imitate natural backgrounds, predators whose tongues imitate worms, or even plants that imitate sources of pollen in order to devour the insects they attract. And needless to say, higher mammals have evolved much more flexible nervous systems that enable them to learn through imitation various techniques, calls, etc., even to engage in ritualized mimetic conflict, as when stags fight over a doe.

What then is it in human mimesis that cannot be understood in wholly biological terms? I need not persuade this group that the kernel of René Girard’s thought was his insight into human mimesis, whence the term “mimetic theory” applied to the ensemble of his work. His first and still most evocative analysis of mimesis is found in his 1961 study of the European novel, Mensonge romantique et vérité romanesque, and the key not only to that book’s great influence but to Girard’s entire intellectual career is its still largely implicit mimetic anthropology—to be revealed more fully in 1972 with La violence et le sacré, and in its definitive Christian form in the 1978 Des choses cachées depuis la fondation du monde—after the publication of which, Girard said to me, “maintenant je peux mourir” (“now I can die”). Girard’s anthropology, as we all know, emphasizes the qualitatively greater danger of mimetic conflict among humans than among other animal species, hence the need for new means to control it.

The key to humanity’s superior mimetic intelligence lies, paradoxically, in our sharing of a basic fact of animal biology: vulnerability to physical violence. All creatures have to be able to perpetuate themselves, and individuals must be able to survive long enough to procreate, even if this survival itself be brief. Our immediate predecessors, apes of superior intelligence, required a good deal of time to develop, and this means that the society within which they operated must have been particularly peaceful and cooperative.

Yet this situation leads to a pragmatic paradox. Given the benefits of higher levels of intelligence within a proto-human or hominin species in dealing with the stresses of its external environment, the increasing danger of intraspecific mimetic conflict imposes a limit on the capacity of biological evolution to continue to improve this intelligence.

Whence the minimal definition of the human as the species that has become a greater danger to its own survival than is the outside world—in a word, the species that is its own worst enemy. The emergence of such a creature is a turning point in evolution and in the ontology of living beings. It is also the source of the paradoxes that plague humanity, the most fundamental of which is the constant tension between love and resentment, good and evil—or in other terms, moral equality and firstness.

Generative anthropology’s originary hypothesis is a modification of the “emissary murder” that was La violence et le sacré’s version of the scene of human origin. In contrast to the latter’s emphasis on the discharge of violence, GA situates the origin of language, religion, and all of human culture in the deferral of violence, which has so far permitted us super-apes to keep one step ahead of the growing potential of conflict inherent in our mimetic intelligence.

GA’s originary hypothesis assumes that growing mimetic tension has brought our proto-human ancestors to a point of transition at which the Alpha-Beta social order is breaking down. Not only will Beta no longer defer to Alpha, but the entire group no longer obeys the inhibition obliging each to accept the serial priority of first one, then the next. After any number of violent encounters in which the food to be distributed—presumably a large animal scavenged or killed in the hunt—may well itself become a casualty of human violence, an anticipatory sense arises within the group that this central object is too dangerous for individuals to seek to appropriate. Such appropriation comes to appear interdicted by a sacred will, which protects the participants from their own potential violence while frustrating their individual appetites.

Hence what had begun as appropriative gestures come to be aborted, or to use Derrida’s term, deferred, and these abortive gestures of appropriation come to be mutually recognized as instances of a common sign—in the simplest case, an act of pointing, shared joint attention, which only humans perform—a sign which they understand as both expressing their mutual renunciation and designating its “sacred” object. The sign’s quasi-ritual repetition in what has become a scene replaces rivalry with cooperation, and subsequently leads to the division of the meat in an “equal feast,” such as those among Homeric heroes, Robertson Smith’s famous camel sacrifice—or a traditional American Thanksgiving dinner.

Thus the transition from animal to human is presided over by a sense of sacred interdiction that defers mimetic rivalry, a deferral that transforms animal appetite into human desire, appetite mediated by this mutually signified interdiction. This anthropological etiology of the sacred is independent of, though not contradictory to, a transcendental ontology that would attribute it to God as an independent object of faith.

What we call our soul or conscience derives from this experience, as a historical inheritance transmitted through the generations to each human individual. The conscience is experienced as functioning even in the absence of others, as if the sacred scene were always present within us as a reminder to obey its commands even in the absence of external constraint.

This scenario explains the paradoxical status of human desire. The product of the deferral of “instinctive” appetitive activity, it bears with it from the outset the temptation to reject the sacred interdiction which, by explicitly forbidding action, adds its mediating force to the original appetitive stimulation. Which is why so many of Girard’s admirers have difficulty in conceiving “good” mimetic desire!

Whether or not you are persuaded by this hypothesis, I can’t imagine that any serious student of Girard’s work would disagree with its premise that human language and culture cannot be understood without postulating a “sense of the sacred” of the kind I have described.

Sacred interdiction is not a “conditioned reflex.” We feel compelled to defer our appetitive urge via a conscious mechanism that we specifically call our conscience, in which we willingly subject our decision to a higher will—the basis of the Freudian Superego. This will is something that we sense within us yet experience as grounded beyond us—the trace of the originary deferral of appropriation, whether compelled by God or by our fear of and loyalty to the human community. To understand the essence of our being as ultimately determined by this will is to understand this essence as a soul.

This conception of the human soul is central to Girard’s thought as well as my own. Indeed, it is at the very core of his thinking, the point he makes at the end of Mensonge by quoting Alyosha Karamazov’s words to his “disciples” celebrating the triumph of Christian love. The book’s lesson, as its epigraph from Max Scheler states, is that “man possesses a God or an idol”: desire implies mediation, and we are mediated by one or the other, whether by the “real” sacred or by a worldly being falsely taken for it, God or the Devil.

What does cybernetic mimesis add to these categories? Recall Descartes’ characterization of animals as “machines.” We can forget his denial to animals of consciousness, pain, and the like. But insofar as mimesis is concerned, and even if the higher apes possess “mirror neurons,” their lack of a scene of representation on which to reflect on the matter of consciousness, a reflection mediated by the scene’s connection via language to the human community, allows us to understand prehuman mimesis as essentially mechanical, reflexive but unreflective. The evolution of animal intelligence is driven by improved reproductive fitness, its gains only marginally transmissible to future generations, rather than, like our own, by human society’s cumulative cultural intentionality.

Hence cybernetic mimesis can be understood as modeled not on human but on animal, reflexive mimesis, although equipped with feedback mechanisms wholly disproportionate to those permitted by biological processes. Computers defeat the best human players at games, not by “reflecting” on board configurations, but by learning the best strategies through trial and error over trillions of iterations.
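The trial-and-error mechanism can be sketched in miniature. The following toy is my own illustration, not anything from the Go programs mentioned above: it teaches itself the game of Nim (a pile of sticks, take one to three, taking the last stick wins) purely by playing against itself and averaging outcomes; every parameter is arbitrary.

```python
import random

random.seed(0)  # reproducibility only; the learning itself is stochastic

N, EPS, EPISODES = 12, 0.2, 50_000
Q = {}       # (pile, action) -> mean observed outcome for the player moving
visits = {}  # (pile, action) -> number of times tried

def moves(n):
    return [a for a in (1, 2, 3) if a <= n]

def best(n):
    # Greedy choice among legal moves; unvisited moves default to 0.0.
    return max(moves(n), key=lambda a: Q.get((n, a), 0.0))

for _ in range(EPISODES):
    n, history = N, []
    while n > 0:
        # Epsilon-greedy self-play: mostly exploit, sometimes explore.
        a = random.choice(moves(n)) if random.random() < EPS else best(n)
        history.append((n, a))
        n -= a
    outcome = 1.0                     # the player who took the last stick won
    for s, a in reversed(history):    # credit each move with its mover's result
        visits[(s, a)] = visits.get((s, a), 0) + 1
        Q[(s, a)] = Q.get((s, a), 0.0) + (outcome - Q.get((s, a), 0.0)) / visits[(s, a)]
        outcome = -outcome            # players alternate, so the sign alternates

print({n: best(n) for n in (1, 2, 3, 5, 6, 7)})
```

Given enough episodes, the greedy policy should rediscover the classical rule (from a pile with n mod 4 ≠ 0, take n mod 4 sticks) without the program ever “reflecting” on why it works: pure reinforcement, scaled down from trillions of iterations to fifty thousand.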

In contrast, the human soul is grounded in its belonging to the human community, whose shared representations inhabit us and make us willy-nilly members of this community. A child learning language is culturally inheriting this human belonging—and as “wild children” reveal, a child not provided this inheritance in its early years never becomes a fully functional human being.

Cybernetic devices are not initiated into language by any such experiences; “language” for them is merely a sequence of instructions. Even cyborgs with sensors that would provide the equivalents of pain and the five human senses would not acquire a “soul”—unless we found a way to program them to simulate, on their own, our discovery of language and the sacred.

What GA as a humanistic anthropology brings to the table in a discussion of AI is what I have called originary phenomenology. This is the use of our hypothetical worldly scene or scenario as a basis of reflection: one in which we can imagine ourselves sharing the reactions of the proto-humans who experience it. This is a feature shared with the less rigorous but far richer scenarios of the Bible and similar religious narratives—but not with the empirical sciences. Scientists cannot tolerate such hypothetical scenes, which they denounce as empirically unfalsifiable, although without them, we are unable to conceive the specificity of the human soul.

A phenomenology describes a scene of consciousness. We take scenes for granted, all the more for the past decade or so as we walk around all day with portable electronic screens to display them—thus I have called ours the screenic age. Animals have fields of perception, but the scene is a specifically human phenomenon—something we realize when we attempt to get a pet to watch a movie—that ultimately derives from the originary event, the ancestor of all scenic phenomena, such as artistic performances—even online conferences.

Cultural phenomena define their scene in advance, and can serve as models for the AI equivalent; but the scenes of everyday life have to be defined ad hoc by their human participants, and in most circumstances, their boundaries are not set in advance.

A scene is not merely a “frame” that includes all within it and excludes everything else; it is a locus of significance, where what happens impinges on our relationship with the human community and the sacred will that holds it together. Our love for fictions derives from the pleasure of immersing ourselves in a scene where the decisions have already been made for us, where the creator-God simulated by the author is charged with insuring its coherence. Unless, of course, he decides, as modernists often do, to make divine incoherence the lesson of the scene.

Thus beyond matters of simple calculation or the mimetic symmetry of zero-sum games, the challenge for cybernetic mimesis is not simply to simulate the result of a given human judgment, but its scenic basis, in which we appeal as it were to our sacred link to the human community in framing, that is, establishing the parameters of our decisions. It is only thus that we can call our choices moral—letting our conscience tell us what factors are relevant from the broadest human standpoint.

These considerations suggest to me two very different AI projects.

The first is to use AI and its dependent devices as we do today, as machines that we program to solve increasingly complex worldly problems. Beyond its obvious industrial and military applications, I would suggest an AI project that might help save our civilization from its current self-destructive malaise, as reflected in the increasingly pervasive “woke” social religion. What are these expressions of cultural self-hatred trying to tell us?

Humanity’s great ethical problem resides in the necessary contrast between, on the one hand, the symmetry of the originary scene and its ritual reproduction in our cultural forms, and on the other, the necessary firstness that is itself a product of this human symmetry. Each of us possesses his own human scene, a mental laboratory, separated from the world by a Sartrean néant, on which we perform what are felicitously known as thought experiments. Lacking such a scene, animals can create little more than what they have been programmed to create.

But in consequence, we must face the fact that although the originary event teaches us that we are all morally equivalent, in ethical terms, we have different talents in different degrees, and the advancement of society demands that these talents be maximally put to use, at the risk of losing out to other social groups that use their talents more efficiently. Nor is there any obviously “moral” way of going about this, although, as Steven Pinker does well to tell us, over the millennia human equity, as well as our “quality of life,” has progressed enormously.

However cruel or compassionate, the social manifestation of firstness conflicts with moral equality. The history of social organization, from the ancient empires based on slave labor through the feudal system and the advent of bourgeois society to today, is one of clear moral progress, yet it has all too often been marked by various degrees, sometimes outrageous ones, of immorality.

Today the liberal-democratic West, which had been all but persuaded by the downfall of the USSR a few decades ago that it had attained the “end of history,” as defined by competition among political systems, and which in addition thought it had done away with discriminatory practices of all kinds, has gone from overconfidence to despair, from a sense of achieved equality to one of “systemic racism”—and this, despite the fact that it remains the unrivaled domain of choice for any potential migrants with a chance to make it their home.

This malaise must be understood as pointing to a problem of real significance.

Today it is difficult to conceive the credibility that Marx’s “labor theory of value,” for which physical labor was the standard, still enjoyed in post-WWII Western economies. This was a “middle-class” era in which there was no clear-cut class distinction between skilled workers and professionals, or blue-collar and white-collar workers, as we called them in those days.

In sharp contrast, in today’s digital society, in which productive labor increasingly depends on sophisticated mathematical and/or linguistic symbol-manipulation, the need to measure individual aptitudes, that is, for a genuine meritocracy, is vastly increased over previous eras.

Under these circumstances, it is unhelpful to blame racial “privilege” for any lack of success among the descendants of those who have suffered inequities in the past, and even more so to seek to compensate these past inequities by non-merit-based advantages.

That woke ideas have spread without resistance among those most severely criticized by them, including virtually all our tech billionaires, demonstrates that wokeness reflects above all the anxieties generated by the increasingly competitive global economy, among both insecure beginners and successful performers impatient to signal their virtue. While the winners are glad to denounce their “privilege”—without renouncing it more than symbolically—those still in the fight, most especially the young, find in the denunciation and partial undoing of meritocracy a repository for their fears of inadequacy.

What this situation appears to be telling us is that, rather than abolishing merit-based requirements and opportunities, our society must generate new methods for bringing the less successful up to standard.

This is especially crucial for those children who do not enjoy the benefits of supportive family life. Childrearing among the professional class has become much more competitive over my lifetime. When I was in kindergarten 75 years ago, I was the only child in a class of over 40 who could read. Today, among 40 of my classmates’ great-grandchildren, I am certain that many if not most have begun learning to read—whereas in a comparable group of working-class children, this is probably not the case. This is not a racial matter; Charles Murray’s Coming Apart explicitly analyzes the severe new class divide between “Fishtown” and “Belmont” wholly among whites.

Thus it seems to me that a crucial focus for AI research should be an effort, in conjunction with cognitive science, to stimulate and supplement human intelligence from a very early age, perhaps eventually using such techniques as cyber-implants, in order to maximize the entire population’s ability to handle the demands of a global society that will almost certainly remain increasingly dependent on symbol-manipulation.

Afterword: On reflection, the project as described strikes me as insufficiently focused on the human community whose breakdown in the digital era it strives to repair. Thus the focus on improving individual aptitude for symbolic manipulation should be oriented in addition toward communication with others—necessarily mediated by individual reflection—rather than toward the simple enhancement of innate abilities.

The second AI project is more ambitious, but not, I believe, beyond human limitations. Its implementation would be of profound significance for the human self-understanding that is the central purpose of anthropology as I conceive it.

Given my claim that GA’s originary hypothesis provides a plausible model of human origin, would not the ultimate challenge to AI be to test this hypothesis by simulating the worldly conditions of this origin?

This would involve attempting to provoke the simultaneous discovery/invention, among a group of “cyborgs” driven by programmed “instincts,” including the capacity to arouse mimetic rivalry, of signs—that is, of a language of their own—as a means of deferring violence. We would then attempt to determine in what sense this does or does not entail the concomitant emergence of a collective sacred as well as the presence in each of these beings of something like a human “conscience” or “soul.”
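By way of a thought experiment rather than a serious design, the bookkeeping of such a simulation might look like the following sketch, in which every name, threshold, and update rule is invented. It models only the formal shape of the hypothesis (escalating mimetic appetite, the aborted gesture, the contagious sign, the equal division), not, of course, the emergence of an actual language or sacred:

```python
import random

random.seed(1)  # for reproducibility of this illustrative run

class Agent:
    """A proto-human participant; all parameters are invented for illustration."""
    def __init__(self, name, appetite):
        self.name = name
        self.appetite = appetite  # urge to appropriate the central object
        self.signing = False      # has the agent aborted its gesture into a sign?

def originary_scene(agents, danger_threshold=1.0, imitation_gain=0.25, max_rounds=100):
    """Each round, mimetic rivalry raises every agent's appetite in proportion
    to the others' (imitation of desire). An agent whose appetite crosses the
    danger threshold aborts its gesture, emitting a sign, and the sign itself
    is imitated; once all agents sign, the object is divided equally."""
    for rnd in range(1, max_rounds + 1):
        for a in agents:
            others = [b.appetite for b in agents if b is not a]
            a.appetite += imitation_gain * sum(others) / len(others)
            # The aborted gesture: renounce appropriation once the object feels
            # too dangerous, or imitate a sign already emitted by another.
            if a.appetite > danger_threshold or any(b.signing for b in agents if b is not a):
                a.signing = True
        if all(a.signing for a in agents):
            return rnd, 1.0 / len(agents)  # round of unanimity, equal share
    return None, 0.0                       # deferral failed: the scene collapses

group = [Agent(name, random.uniform(0.1, 0.3)) for name in "ABCD"]
step, share = originary_scene(group)
print(f"sign generalized at round {step}; equal share = {share}")
```

The interesting questions, needless to say, begin exactly where this sketch stops: whether agents not pre-programmed with a `signing` flag could discover anything like it on their own, and in what sense that discovery would constitute a sacred.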

If, as might be expected, efforts to carry out such a task ended in failure, the lessons learned could not help but provide new insights into the basis of our own nature. The eventual discovery of an “impossibility proof” would, somewhat on the analogy of Gödel’s proof of the incompleteness of arithmetic, tell us a great deal about the limits of algorithmic as opposed to biological—or perhaps divinely inspired—mimesis.

In the contrary case of success, we would have realized the ultimate science-fiction dream that we have cherished at least since Mary Shelley: the creation of beings with truly human-like rather than mechanical minds, with a language, a soul of their own.

Except that these minds would operate billions of times faster than ours…

On second thought, then, maybe we would do better to leave this project in the realm of science fiction.