Around twenty years ago, to describe the originary hypothesis, I hit upon the notion of the “little bang”: a dialectical synthesis, so to speak, of gradualism and catastrophism. At the time, I was considering how to describe the originary event as, on the one hand, a small improvement that led to the simplest form of the sign, but on the other, a huge sea-change, because it opened the door to language, religion, and culture. This stood in contrast to Girard’s mimetic crisis cum emissary murder, which despite the sound and fury did not really explain the transformation of animal into human, as revealed by the fact that Girard never speaks of an originary event, only of a “mechanism” that somehow led, by dint of repetition, to the notion of the sacred.

The idea of the little bang can be made more precise. If we take a plane and elevate one point of it by a micrometer, we create a new dimension, even though the extension along that dimension is extremely small. This is neither the gradual expansion of the plane nor the catastrophic eruption of a whole new kingdom. It has characteristics of both, but the important one is that, however small, it is a bang, something qualitatively new.
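The image can be formalized in a minimal sketch (the mapping $f$ and the value of $\varepsilon$ are mine, offered only as illustration). Embed the plane in three-dimensional space and lift a single point:

$$
f(x, y) =
\begin{cases}
(x,\, y,\, \varepsilon) & \text{if } (x, y) = (x_0, y_0),\\[2pt]
(x,\, y,\, 0) & \text{otherwise,}
\end{cases}
\qquad \varepsilon = 10^{-6}\ \text{m}.
$$

However small $\varepsilon$ may be, the image of $f$ no longer fits in any plane: the third coordinate is now in play. The change is quantitatively negligible and qualitatively absolute.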

I imagine that the “little bang” formula might help clear up some of the ambiguities in Thomas Kuhn’s famous notion of the paradigm shift. In order to remain at the highest level of generality, Kuhn treats paradigms simply as different ways of analyzing the same data. Thus his system lends itself to the “postmodern” idea that there is no “truth,” that we just see things in different ways. Copernicus and Ptolemy could handle the same data, but the first is more efficient than the second, unless of course the progress of computing should one day make Ptolemy’s epicycles the more practical way of predicting planetary positions. The main point is that the “forces” that motivate the system are simply constructs.
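The epicycle remark is less whimsical than it sounds; a standard observation (mine, not Kuhn’s) is that a deferent with stacked epicycles is formally a truncated Fourier series in the complex plane:

$$
z(t) = \sum_{k=1}^{N} c_k\, e^{i \omega_k t},
$$

each term being one circle of radius $|c_k|$ turning at angular speed $\omega_k$. With enough terms, such a sum approximates any sufficiently regular closed path to arbitrary precision, so cheap computation could in principle make the Ptolemaic scheme a perfectly serviceable predictor, whatever its deficit in “truth.”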

But if the little bang idea can be applied to physics or astronomy, in those fields the notion of a new dimension is unproblematic, since in science dimensions have no ontological significance. Space can be described as three-dimensional, four-dimensional, or many-dimensional, as in string theory. All that matters is what works as a predictive system. I suspect that, from the point of view of Sirius, the same is true of the human, and that the success of computers, which are after all “mechanical” devices with no pour-soi, in simulating human accomplishments cannot be limited a priori by any of the usual caveats. When computers learned to play chess, we said yes, but they’ll never beat us at Go. Now that they beat us at Go, can we continue to say yes, but they’ll never write good poetry, or, a more terrifying prospect, never outdo us in anthropological theory? I find it futile to try to predict the limits of so powerful a mode of simulating human thought. This is not because computers will “become human” as in the sci-fi tales, but simply because they have no need of “becoming human” so long as they can simulate the human.

But the very existence of computer simulation would be inconceivable without the prior constitution of the human scene of representation, the result of the passage from en-soi to pour-soi through la différance of appetitive relation. From the “selfish gene” perspective, symbolic languages are the ultimate mode of the conservation of signs, since signs continue to “exist” even in the absence of instantiation; one might then say that the goal of the universe, and of the human in particular, is to create the notion of the symbolic sign, realized in a crude, physical way in genetic codes, but coming truly into its own only with humanity’s complex of intercommunicating minds. These implement a mature sign-system that allows for a clear and conscious distinction between type and token, langue and parole, so that the instance of a sign can be theorized as of a wholly different nature from the subsistence of the sign itself. This is one more way in which the notion of a transcendent Being that embodies this subsistence is the inevitable product of the emergence of such systems.

Deferral as such does not suffice to explain this result, which is what justifies Derrida’s ingenious neologism of la différance, fusing the two senses of différer: to defer and to differ. Mere delay does not create “difference.” It is a shame that Derrida never thought it worth his while to consider GA’s more fundamental interpretation of this phenomenon, and in retrospect, strange that he did not see that the hesitation between members of a paradigm is derivative of the fundamental “hesitation” that constitutes the sign in the first place, separating off its referent by deferring its connection to the appetitive world.

I have always accepted Chomsky’s famous 1959 review of Skinner’s Verbal Behavior, which pretty much ended the conditioned-reflex interpretation of language, as the final word on that subject. But Chomsky describes the originality of human language as its capacity for recursion. Recursion is the capacity of a function to perform an operation on the output of the previous iteration of the same operation, but this notion does not explain the human or linguistic pour-soi. In particular, the sign cannot be explained as a recursion of the animal operation of perception. However complex this operation may be in higher animals, there is no way in which the sign is a “perception of a perception.” Only once created as a representation is the sign susceptible of recursion, as a representation of a representation.
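The logical point can be put in a few lines of code (a sketch of my own; the names are illustrative). Recursion requires that an operation’s output be of a type the operation itself accepts, which is true of signs but not of percepts:

```python
# Recursion presupposes that output and input share a type,
# so that the operation can consume its own result.

def represent(expr: str) -> str:
    """A sign of a sign is still a sign: quoting a string yields a string."""
    return f'"{expr}"'

print(represent("dog"))             # "dog"   -- a representation
print(represent(represent("dog")))  # ""dog"" -- a representation of one

# Perception, by contrast, maps world-states to brain-states. Its output
# is not a stimulus of the kind it takes as input, so "perception of a
# perception" is a type error, not a higher-order percept:
#
#     perceive: Stimulus -> Percept
#     perceive(perceive(x))   # ill-typed
```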

As a general principle, we should understand the constitution of the human by events as proceeding by crises, but crises resolved by minimal means. If the Alpha-Beta system had continued to function, we would not have needed to invent language; necessity has always been the mother of invention. But the originary crisis, which may well have involved numerous deaths, need not and indeed could not have been resolved by the “maximal” ritual of human sacrifice. Within a small group, such actions would be counterproductive, and the hypothesis is belied by all those drawings on cave walls in which humans remain always on the periphery of the animal victim. Above all, human sacrifice, by its very maximality, would have been incapable of providing a useful substitute for the pecking-order system, whose primary purpose, particularly in hunting large animals, was the distribution of protein-bearing meat among the group.

And as I insist on pointing out, we still practice the equal feast centered on the sharing of meat, resorting to cannibalism only in exceptional circumstances. What works tends to be preserved because it solves our maximal problems in a minimal way. Talking about lynch mobs is exciting and gives the reader an aha! moment precisely because lynch mobs are rare and scapegoating is a powerful metaphor, whereas sharing food with a group of friends and relatives at a Christmas dinner or a seder, or simply with one’s family at home, is no big deal. But that is exactly why it provides the model solution. Each such meal, like each sentence, each greeting, and each gesture of politeness, is one more little bang that repeats the first little bang at the origin of the human.

A thought experiment

We think of the type-token relation within language as being of a different nature from the “codings” of DNA and other biological signals because linguistic types, unlike those found in biology, are “really” ideal entities, whereas in biology such types exist only in the minds of scientists. Thus we make much of the fact that whereas the word “dog” exists independently of any real dogs, there is no biological “dog” in nature apart from the specific dogs that exemplify the species. Even if we redefine “dog” as a certain DNA sequence, its “type” never exists independently of its “tokens,” whereas “ideas” are independent of their realizations and may indeed never be realized, whence the unicorn, Hamlet, etc.

But consider this. When we talk about unicorns, this is possible only by instantiating somewhere in someone’s brain a unicorn-idea, that is, some physical, neurological configuration—something neuroscientists have yet to pin down in any very specific way. Whether we speak of an individual dog Spot or of a generic dog, there must be somewhere in our brain a correlative neurological reality, just as we transmit this idea in writing or speech via correlative patterns of sound or image. To say that the “type” exists only abstractly is itself a thought, and that thought, too, exists only in specific neural or sonorous/typographical configurations. We cannot by definition point to a “purely abstract” existence. To speak of an abstract “type” of dog or book or whatever is to assert something we contradict in practice, since the “idea” manifests itself only in these concrete instantiations. Even if we only “think” the idea, either it is realized in our brain cells or we are not “thinking” it.
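The point is easy to dramatize in code (again a sketch of my own; the names are illustrative). Model the type as a class and the tokens as its instances; the “abstract” type turns out to be instantiated too, as bytes in a file and as an object in the interpreter’s memory:

```python
class Dog:
    """The 'type': a description, not an animal."""
    def __init__(self, name: str):
        self.name = name

spot = Dog("Spot")   # a 'token': one concrete instantiation
rex = Dog("Rex")     # another

print(spot is rex)              # False: tokens are distinct
print(type(spot) is type(rex))  # True: both instantiate one type

# Yet the type itself has no 'purely abstract' existence: the class is
# an object with an address in memory, just as the word 'dog' is realized
# in neurons, sound waves, or ink.
print(Dog)           # <class '__main__.Dog'>
print(hex(id(Dog)))  # its concrete location at runtime
```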

What then do human signs add to the “codes” of nature? In what sense does merely thinking that they are pure abstractions constitute an improvement over biological codes that do not “think” about themselves? Or is the mere ability to formulate the thought, even if ultimately false, nevertheless a victory over time?

Like the sound of the tree that falls in the forest, does the meaning of language subsist in the absence of anyone to understand it? But who can answer this question other than we, who “posit” the eternal significance of ideas? And given these reflections, is the “existence” of a sacred Being who guarantees the permanence of signs any more dubious than that of the signs themselves?

What holds this system together is the fact that the signs whose permanence we posit, or rather unthinkingly assume, are maintained not simply within our individual brains but within the “ether” of our language- and culture-communities—something we realize poignantly on those occasions when the “last speaker” of a language, who is presumably also the last practitioner of its culture, dies; the language and culture would disappear with him or her, save that other languages and cultures make it their business to preserve the skeletal remains of the dead. But we know that this “ether” exists nowhere but within the individual minds of speakers and participants.

Culture can thus be depicted as a fragile triumph over “materialism,” over the kind of “information” that physicists speak of when they say that information is never lost, unknowingly positing a Deist watchmaker who “remembers” every coordinate of every particle. At least so long as there is “intelligent life,” it will give itself the right to conceive of Ideas, of types existing prior to tokens, as the “supernatural” components of the world of our understanding.