for Leslie Lamport

The GA-AI relationship is not a question to which we should expect a clear answer. A point I made long ago, and one that continues to prove accurate, is that attempting to declare a priori the limits of AI is a foolhardy enterprise. No objectively describable task that humans can perform should be declared in principle beyond the capacity of machine intelligence, and in most cases the wait will probably be far shorter than we think. First chess, then Go, and now we have ChatGPT, an AI system that, having “learned” massive amounts of material on the Internet, is able to put together reasonably readable texts on just about any subject, as well as tell jokes, write poetry and computer programs… Can we really be sure that an AI composing sonnets as good as Shakespeare’s is unthinkable? Or, robotics aside, one performing the Scarlett Johansson role in Her: that of an “operating system” that, limited to verbal communication, has a “love affair” with a user?

None of these developments should have come as a surprise. The principle of Turing machines is that anything that can be expressed as an algorithm is accessible to them. The only conceptual novelty is that, with the progress of “machine learning,” just about anything can become the object of an algorithm. Whether or not Turing himself imagined that “his” machines would be capable of writing poems or novels, let alone of eventually making scientific discoveries that humans would be incapable of, the fundamental principle remains that anything thinkable is potentially algorithmic.
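To make the abstraction concrete, here is a minimal sketch of such a machine in Python: a finite table of rules driving a read/write head over an unbounded tape. The rule table shown, which increments a binary number, is my own illustrative example, not anything of Turing's.

```python
from collections import defaultdict

def run(rules, tape, state="start", halt="halt"):
    """rules: (state, symbol) -> (new_state, new_symbol, move)"""
    tape = defaultdict(lambda: "_", enumerate(tape))  # "_" is the blank cell
    pos = 0
    while state != halt:
        state, tape[pos], move = rules[(state, tape[pos])]
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example program: increment a binary number. Walk right to the end of
# the input, then carry 1s to 0s leftward until a 0 or a blank is found.
rules = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt",  "1", "R"),
    ("carry", "_"): ("halt",  "1", "R"),
}

print(run(rules, "1011"))  # -> 1100
```

The point of the sketch is only that the machine itself is trivially simple; all the “intelligence” resides in the rule table, which is to say, in the algorithm.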

Thus the only revelation involved in computers winning at Go or writing essays is that, having lived through a transitional era at the beginning of which there were no Turing machines at all, we have had to get used to the fact that there can be no a priori limit to the ability of machine intelligence to duplicate and surpass our best efforts in any form of creative activity. Blaise Pascal, no mean mathematician himself, made a useful distinction between l’esprit de géométrie (math majors) and l’esprit de finesse (design majors), but, given enough géométrie, you can reach any level of finesse you desire, just as you can get as close as you like to drawing a circle by using enough straight lines.
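The closing analogy can be made exact: the perimeter of a regular $n$-gon inscribed in a circle of radius $r$ is

$$P_n = 2nr\sin\!\left(\frac{\pi}{n}\right) \longrightarrow 2\pi r \quad (n \to \infty),$$

since $\sin x / x \to 1$ as $x \to 0$: with enough straight lines, the polygon approaches the circle to any precision desired.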

No doubt our era’s general indifference to our scenic origin reflects the fact that we live at a time in which humanity, once the world’s great marvel—recall Hamlet’s lines to Rosencrantz and Guildenstern in II, 2: What a piece of work is a man! how noble in reason! how infinite in faculty! in form and moving how express and admirable! in action how like an angel! in apprehension how like a god!—is no longer perceived as the final word in creative intelligence, which, after all is said and done, just boils down once more to the manipulation of algorithms.


At first glance, the Turing test seems frivolous; how can we make our judgment of software “intelligence” depend on its ability to fool one or more individuals into believing that its output is really that of a human being? What about the magician who pulls a rabbit out of an empty hat and saws his assistant in half?

Yet a moment of reflection reveals that successful simulation is the only conceivable criterion we can use to evaluate the source of a presumably meaningful machine-generated message. When Turing proposed the test, it was a challenge, since early interactive programs were far from being able to carry on a human-like conversation. But today, with ChatGPT, we are no longer in the realm of casual conversation but of formal composition—and, with DALL-E, of images on a par with modern art, and in many ways superior to most of it.

Such a prospect makes us realize that however different we are from mere machines, the difference does not consist in our ability to use signs, which is to say, to think. Our human intelligence, the basis of our pride and of our sense of commanding the rest of the world and its creatures, has been revealed to be eminently reproducible and surpassable—by our own creations. The moment that artificial intelligence discovered machine learning, improving its algorithms without new input from the programmer, it acquired the ability in principle to surpass our intelligence not merely in mechanical activities like factoring large numbers or even playing chess, but in every mode of sign-manipulation. For an AI does not need to possess from the beginning the genius of Shakespeare any more than that of the best Go player. No doubt there is no way of training it to write sonnets as straightforward as having it play games of Go or chess against itself, where winning and losing provide an objective measure of output quality; but AI’s ability to process the entire output of world literature, and of human knowledge in general, can hardly be denied a priori the capacity to produce works of whatever kind at levels of quality equal to or higher than those of humans. Perhaps works of plastic art endear themselves to us through their “human” traces—brush-strokes and the like. But as the salability of NFT cyber artifacts demonstrates, AI can handle this special family of signifiers as well as anything else.
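The self-play principle invoked here can be shown in miniature. The Python sketch below is a toy of my own devising (its names and hyperparameters are illustrative, and it bears no relation to the actual architecture of AlphaGo or its successors): an agent that improves at tic-tac-toe with no teacher but the objective win/loss signal, nudging the estimated value of each move it made toward the final outcome of the game.

```python
import random
from collections import defaultdict

Q = defaultdict(float)   # (board, move) -> estimated value of the move
EPS, ALPHA = 0.1, 0.5    # exploration rate, learning rate

def moves(board):
    return [i for i, c in enumerate(board) if c == "."]

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def choose(board):
    if random.random() < EPS:                 # occasionally explore
        return random.choice(moves(board))
    return max(moves(board), key=lambda m: Q[(board, m)])  # else exploit

def self_play_episode():
    board, player, history = "." * 9, "X", []
    while moves(board) and not winner(board):
        m = choose(board)
        history.append((board, m, player))
        board = board[:m] + player + board[m+1:]
        player = "O" if player == "X" else "X"
    w = winner(board)
    # The only training signal: nudge each move's value toward the outcome.
    for b, m, p in history:
        reward = 0.0 if w is None else (1.0 if p == w else -1.0)
        Q[(b, m)] += ALPHA * (reward - Q[(b, m)])

for _ in range(50_000):
    self_play_episode()
# After enough episodes, greedy play from Q should be far stronger than random.
```

Winning and losing supply the entire measure of quality here, which is precisely what a sonnet lacks.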

None of this implies any need to substantially modify GA’s originary hypothesis, whose rough algorithmicity AI might perhaps improve upon. But the origin of transcendence and representation, products of mimesis—itself already a quasi-algorithmic phenomenon—loses some of its importance when one realizes that, although human language is qualitatively different from the sort of “representation” we find in the genetic code, in the broader picture of the history of intelligence from the primordial soup to the world-brain as conceived by science fiction, the difference between a human who can sign and mean and his ape predecessors who could only signal is far less impressive. ChatGPT is still pretty crude, but its successors should be able to improve on its crudeness a lot faster than our descendants can improve on ours.

In short, we do not need to reproduce the hypothetical details of human evolution in order to reproduce the products of the intelligence we evolved to protect us from ourselves. Once symbol-manipulation emerged as a technique, just as its physical productions could be improved by means of printers, computers could learn to perform the manipulations themselves more skillfully than their human teachers. AI is already part of the arsenal of scientists investigating complex phenomena; a December 2020 article in Science 370:6521, “‘The game has changed.’ AI triumphs at protein folding,” describes the advances in mapping genetically determined protein folding made possible by AI-based analysis.


This doesn’t mean that, as living creatures responsive to the sacred, we are not different from man-made computers or robots. The frequent problematization of this relationship, the Ur-theme of science fiction beginning with Mary Shelley’s Frankenstein, is simply not relevant to the notion of “intelligence,” which, as a measurable ability to think—that is, to analyze situations by means of logic, hence algorithmically—cannot distinguish between the sources of language. That GA emphasizes the fact that language is a human invention, of a different nature from biological coding devices such as DNA, is irrelevant to the question of how the human use of language differs from that of machines, of which we are for the moment the only known creators. Logic and mathematics too are human inventions, but it is easy to see why thinkers since Plato have preferred to regard them as so fundamental that a world in which they did not function is all but inconceivable.

At the same time, it seems clear that the organic connection between human language and the conditions of its (pre-)historical origin can at best be simulated, not created anew. The connection between the sacred and the significant lies in the urgent necessity, for the survival of the protohuman community, of sacralization/interdiction and of its communication via the sign. Thus AI takes its vocabulary from our human history of significance, not its own. Which is to say that the “soul in the machine” is ours, not its. Of course we can simulate this urgency as a function of robot behavior, but it was not from robot behavior that language evolved. The simulation of our soulful relationship to language and culture as embedded in our nervous and social systems can never reproduce its organic reality: the origin of language as the conversion of gestures of appropriation into gestures of renunciation, motivated by the contextual danger of violence.

This is true even though, as in the case of artworks, the quality of such simulations can be indefinitely improved, and there is no reason to assume that replicants such as those in Blade Runner cannot eventually be created. After all, even sex dolls have their use, and AI-based functional “companions” for lonely people already exist that go far beyond the capacity of Alexa.

But can simulations have souls?

Since the days of Frankenstein, the key enigma involved in creating a human-like creature has been the infusion of the soul that makes it “almost human.” And the pathos of figures like the replicants comes from our knowledge that their souls, to the extent they exist, are “simulated,” although their feelings seem just like ours.

In John Searle’s “Chinese room paradox,” a man with no knowledge of Chinese simulates a computer by following instructions to produce the proper Chinese output for the Chinese input he is given—the point being that his “computer-like” actions involve no understanding of Chinese. But in Searle’s parable the man acts as a simple intermediary; his role of transmitting messages is in no way comparable to that of a computer program that has to compute the message, and which consequently has indeed to embody an understanding of Chinese. Searle’s point was to emphasize that the human process of understanding language is intuitively different from the mechanical/electrical operations of the computer; but to my mind this is simply begging the question: deciding a priori that “understanding” is something that humans can do but machines cannot.
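The distinction can even be put in code. In the toy contrast below (my own hypothetical illustration, not Searle's formulation), the man in the room is a pure lookup table, while even the crudest actual program must compute its reply from the structure of the input; that computation is the sense in which a program, unlike the intermediary, has to embody some grasp of the language.

```python
# The room: a finite rulebook mapping inputs to canned outputs.
# The intermediary who applies it needs no understanding at all.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "Fine, thanks."
}

def searle_room(message: str) -> str:
    return RULEBOOK.get(message, "……")   # no match: silence

# A program, by contrast, must analyze the input to produce a reply,
# however crudely; here it detects the yes/no question marker 吗.
def program(message: str) -> str:
    if message.endswith("吗？"):
        return message[:-2] + "。"       # echo the question as a statement
    return "请再说一遍。"                # "Please say that again."
```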

The use of Searle’s paradox is best understood in the other direction: What is this “human understanding” that, even if uncapturable by the Turing test, is somehow our intuitive article of faith? Well, we are living creatures, mortals, biological heirs of the origin of life; we feel pain, fear death, are susceptible to faith and awareness of sin and grace… No doubt we can simulate such sentiments in AI, just as we can simulate them in novels and plays: if we can make Hamlet “real,” we can do the same for a successor of ChatGPT.

Let me offer a very different kind of counterexample. A few of these Chronicles (470, 654, and 679) treat of “Bear Theory” in an attempt to understand how it is that humans like myself (or more commonly, children) can become attached to stuffed/plush animals (“bears”) and imaginarily infuse them with a human-like intelligence inaccessible to their real-world counterparts.

The real affection one feels for these creatures is not limited to Winnicott’s conception of the “transitional object” that allows the child to insert himself into the adult world. No doubt I cannot forget that any living creature is a “higher” being than GA’s mascot hedgehog Henri Kipod, but please don’t force me into a “trolley problem” of the kind beloved of moral philosophers, in which I’d have to decide between Henri and a dog, or a cat, or a frog, or a fish, or a beetle, or….

My point is that the “soul,” aside from our own, is an intuitive construction that we take for granted in other human beings, but that we can also attribute to objects of all kinds. We know our bears aren’t really alive, but we treat them as though they were, and truly grieve for their loss—think of the little girl who has lost her favorite doll or, in my own experience, the loss of our favorite echidna.

Now consider a “replicant” who has all the qualities of a human being except… Her is a useful example because it dispenses with the need for physical presence; the voice of Johansson’s operating system “Samantha” suffices to establish the basis of the love-relationship experienced by the protagonist—and presumably by “Samantha” as well—entirely by means of language. Nor is the unhappy ending of this relationship in the film a necessary consequence of the conditions of the plot; if “Samantha” could experience/simulate true love for Theodore throughout the film, the ending, in which we learn that she and other similar OS’s have decided that they have outgrown their (numerous) human relationships, surprisingly takes for granted that these replicants can evolve desires of their own, unplanned by their creators. Which, however typical of sci-fi, could occur only if this capability had been deliberately planned by the simulation’s original programmers, not, as the film implies, as the result of a prise de conscience on the part of the OS’s.

In short, the tragedy of the man-made creature who wishes it could become a true human, as depicted in works from Frankenstein to Blade Runner and beyond, is conceivable only within a human imagination, not as a real possibility. It would no doubt be an interesting task to attempt to program such desires into a replicant; only then would the real difficulty of the task emerge, and only then might it conceivably be surmounted. But in contrast with the kind of algorithmic problems that machine learning can solve, such problems remain for the moment in the domain of art rather than science.


We must not forget that the human can be defined by Rappaport’s coevality of the sacred and the significant; our semiotic heritage is also a religious heritage. We have language because sharing it allowed us to survive by deferring our “sinful” temptation to mimetic rivalry. What we call our soul embodies the memory of this originary connection between humans, mediated by the sacred force that interdicts our reflexive reactions. A computer system simply “has” language; it does not have to learn it through the history of deferral and semiosis that I attempted to outline in The Origin of Language.

In 2021 I gave a talk at the COV&R conference on “Desiring Machines” in which I rather glibly proposed that if we wanted to confirm GA’s originary hypothesis, we should program bots to experience mimetic rivalry in such a way that they could survive only by means of the invention of language. Until such an experiment is successfully attempted, we can remain confident that our machines, however intelligent, will have to do without a soul.