Every field of research has its own way of classifying its objects, according to the factors relevant to their components, their configuration in time, and their interactions with other objects. But from a human standpoint, that is, with respect to the scenic self-consciousness that we have observed only in our own species, the number of essentially distinct categories of beings that we encounter is very limited. “Animal, vegetable, or mineral” may be too simple, but it suggests how few practical alternatives we have with which to compare human self-consciousness.

In the first place, there are physical (“mineral”) objects, “things” that have no status as unified organisms, even if their place in the universe may lead them to acquire a certain level of complexity and structural integrity, allowing us to differentiate the structures and structural complexity of such things as electrons, atoms, molecules, rocks, planets, meteors, stars, galaxies, black holes…

Then there are organisms, examples of life, which maintain equilibrium with respect to their surroundings, employing for this maintenance specific channels of exchange of matter and energy with their environment. Their complexity is such that they can no longer depend on external material conditions to reproduce them, but can preserve their necessarily complex configurations only by generating near-copies from within themselves. Whether “animal” or “vegetable,” such self-reproducing living beings may be said to constitute the second level of our ontology.

With minor exceptions, animals can be differentiated from plants by their mobility, which makes their interaction with their environment qualitatively more complex; hence their acquisition of sense-organs of far greater discriminatory power than those of plants. What concerns us here in particular is their capacity to represent their environment by means of these organs, whereby they attain a higher level in our ontology. Where a plant has only the crudest “awareness” of the outside world, most animals depend for their survival, throughout their lives, on their ability to observe and react, often at a moment’s notice, to changes in their environment.

The more advanced animals also possess means of communication with their fellows: chemical, visual, tactile, or auditory. I have always insisted that such communication systems, whatever their complexity, are not at all comparable to human language, but are in effect signaling systems. Animals exchange signals, but they do not “converse” as humans do, for they do not share a scene of representation such as exists between any two users of language. How do we know this? We have only to observe the advantage of humans over other creatures in the domain of modification of their environment. Clearly the ability to represent the world to ourselves has led us continually to improve our means of interacting with it, whereas other animals produce even their most complex structures through the evolution of stereotyped actions, with only minimal possibilities of innovation transmitted across generations.

Given our millions of years of experience with “mineral” and “animal/vegetable” beings, it seems clear that in our world only humans are capable of dealing with their environment through representations on a shared scene that permits the accumulation and preservation of information concerning worldly beings; and this ability to accumulate knowledge begins with the act of communicating information to other members of our species. In the hypothesis of GA, the potential for intraspecific conflict, which in other creatures is regulated by the evolution of biological inhibitions, can never be fully eliminated; this potential is the primordial stimulus to our species’ creativity, as well as the source of a latent danger of self-destruction, a plausible scenario of which has existed since the creation of nuclear weaponry.


With the development of machines that can manipulate representations rather than simply worldly objects, the question arises of whether this allows us to conceive of a higher ontological level than the one we have attained. Within a few years of the creation of generative artificial intelligence, based on large language models and capable of “conversing” with the human user, our sense of ourselves as belonging permanently at the top of the hierarchy of beings has been deeply shaken. This poses a new kind of question to any anthropology: Is it then our “destiny,” by learning how to represent and how to manipulate representations, that is, to think, eventually to render our nature as naturally evolved, self-reproducing living creatures, capable of representation/communication and hence of “sentience,” no longer necessary to the attainment of this highest level of our general ontology?

Many fear that “intelligent” machines with circuits vastly more productive than our neurons will, in tandem with the development of robotics permitting the manipulation of natural objects, surpass humans within the next centuries, if not decades, in virtually all of our abilities, leaving us to wonder whether the machines will learn how to reproduce themselves without our aid and become our masters. What can we possibly do to prevent this, short of aborting the further development of AI? Given the competition between nations, that would require an implausible good-faith agreement to limit activities that already generate vast amounts of wealth and power. And unlike the danger of human self-annihilation, which at the very least cannot appeal to our instinct of self-preservation, we might well find the supersession of humans by intelligent machines seductive, at first at least, before the machines decide to relegate us to the sidelines as objects of study and amusement, or choose to eliminate us altogether…

Yet from the standpoint of our general ontology, a machine performing the same functions as a human, only far faster and more accurately, would still not inaugurate a truly new category of being, since the difference between “natural” and “manufactured” components would not change the fundamental character of its activity, any more than a computer program that has become champion at chess or Go is doing anything essentially different from the human whom it is now assured of defeating. Although AI can support an indefinite number of levels of recursion, these are merely extensions of the “recursion” inherent in representation itself: the ability to represent both the object of representation (signified) and the sign that represents it (signifier).
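This recursive structure can be made concrete in a minimal sketch, here in Python; the Thing and Sign types are purely illustrative conveniences of mine, not part of any actual AI system. A sign pairs a signifier with a signified, and since the signified may itself be a sign, representation can be stacked to any depth.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Thing:
    """A worldly object: the ultimate signified."""
    name: str

@dataclass
class Sign:
    """A representation: a signifier paired with a signified. Because the
    signified may itself be a Sign, representation recurses to any depth."""
    signifier: str
    signified: Union[Thing, "Sign"]

def depth(x: Union[Thing, Sign]) -> int:
    """Count the levels of representation above the worldly object."""
    return 0 if isinstance(x, Thing) else 1 + depth(x.signified)

fire = Thing("fire")            # level 0: the object itself
word = Sign("fire", fire)       # level 1: a sign for the object
mention = Sign("'fire'", word)  # level 2: a sign about the sign

assert depth(mention) == 2
```

The point is simply that nothing new is required to add a level: each further “level of recursion” in AI reuses the one operation already present in the elementary sign.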


The reader familiar with GA will no doubt ask what place the sacred can have in the world of artificial intelligence. After all, in our anthropology the sacred, in its origin the deferral/différance of potentially conflictive action, is the very source of the néant that creates the scenic space within which the representational sign alone connects the (human) subject to his (desired/deferred) object.

In Chronicle 703, the basis of a Zoom contribution to a session of the 2021 Girardian COV&R, I suggested that if “intelligent machines” were to compete with humans, they would have to be programmed to experience the equivalent of mimetic desire and its resulting resentful conflict. In the absence of such a configuration, the machine’s only “experience” of desire and resentment would be that accessible in texts and images, whose real-world signifieds it would encounter only as representations. The current AI chat- and imagebots illustrate this situation: they use words and/or images expressing emotions of all kinds that reflect no corresponding modifications of any internal state of the bots themselves.

Thus it seemed clear that programming such configurations as “desire” and “resentment” would involve a whole new order of simulation, of far greater complexity than, for example, creating the cyber-equivalent of pain to make a robot “reflexively” avoid touching hot surfaces. And even were this possible, it is not clear in what sense it would constitute an ontological advance beyond what humans already experience.
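The difference between the two orders of simulation can be suggested by a toy sketch, again in Python; the function names, thresholds, and weighting are my own illustrative assumptions, not a proposal for an actual implementation. The “reflex” is a direct mapping from sensor reading to action, whereas even the crudest model of mimetic desire must represent another agent’s valuation of the object and fold it into its own.

```python
# Order 1: a hard-wired "reflex," the cyber-equivalent of pain.
# A direct mapping from sensor reading to action; nothing is represented.
def reflex(surface_temp_c: float, pain_threshold_c: float = 60.0) -> str:
    return "retract" if surface_temp_c > pain_threshold_c else "proceed"

# Order 2: a toy version of mimetic desire. The agent's valuation of an
# object is no longer a function of the object's intrinsic properties
# alone, but of its model of ANOTHER agent's valuation of the same object;
# when the weighting is high, both agents converge on the same object and
# rivalry ("resentment") follows. All numbers are arbitrary illustrations.
def mimetic_value(intrinsic_value: float,
                  others_estimated_value: float,
                  mimetic_weight: float = 0.8) -> float:
    return ((1.0 - mimetic_weight) * intrinsic_value
            + mimetic_weight * others_estimated_value)

# An object of little intrinsic worth becomes desirable once the other is
# seen to desire it -- the germ of conflict that the reflex never contains.
print(reflex(75.0))             # retract
print(mimetic_value(0.1, 0.9))  # 0.74: desire borrowed from the other
```

The second function, unlike the first, presupposes a representation of the other’s inner state, which is precisely what the current bots, for all their emotive vocabulary, lack.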

No doubt it is foolhardy at this early stage to deny any conceivable future possibility to artificial intelligence; it is more reasonable to assume that if we can think of adding a feature to it, a means of doing so can be found, whether useful or not. Being able to perform the equivalent of mental operations billions of times faster than living creatures will surely permit manipulations of reality inconceivable at our current, still rudimentary, stage of cybernetics. And as to whether the enlarged universe that includes our AI inventions might come to posit a transcendent force or will on which they would feel obliged to rely in their attempts to defer internal violence, it seems much too early to speculate.