In times like these, one hopes the pain will stimulate rethinking that will make the world better in a time-frame that makes sense to us as individuals. The destructive power of today’s weaponry increasingly makes the individual life-span a more realistic measure of that of the entire species: if a catastrophe blows me up, starves me, sickens me to death, there’s a good chance it will do so to everybody.
Times like these make us wonder what good GA can do. Yes, it offers an intuitively plausible solution to the central anthropological problem of the origin of language and the transcendence of the animal world of appetite. Before the post-modern era, such problems, precisely to the extent that they had not previously been solved within the bounds of metaphysical reasoning, but only in those of the transcendental-religious, were treated with the special prestige accorded to sacred things. Today, their transcendent nature is denied, or “bracketed,” and they are dissected into myriad research projects, each with its own granting and career opportunities, as is appropriate in the natural sciences. That this process obviously does not explain the origin of human language in its radical difference from animal communication is, in the current institutional perspective, actually a favorable outcome, for it stimulates an unending stream of empirical research. It cannot even be said to leave the basic problem unsolved; it simply denies its pertinence as a scientific problem, as in Michael Corballis’ The Truth about Language (Chicago, 2017) discussed in Chronicle 629, while at the same time, this origin’s intrinsic interest—and the funding it generates—continue to reflect our intuition of this pertinence. One more function of sacred paradox.
However each of us defines his “quest for truth,” the world operates pragmatically, and it is not altogether cynical to point out that agnosticism in reference to the “big questions” reflects the needs of the scientific community much better than attempts like ours to pare them down with Occam’s razor. In the digital big-data age, the philosophy of science that makes the best use of available resources is plus il y a de fous, plus on rit (roughly, “the more the merrier”). This situation explains the failure of once-fashionable “French Theory” to have a real effect on the social sciences. Beyond institutional considerations, the underlying cause of this failure was the inability of its dominant, “deconstructive” faction to take into account the complementary “constructive” insights of René Girard—whose intellectual leadership in putting together the 1966 Johns Hopkins conference on “The Language of Criticism and the Sciences of Man,” as we tend to have forgotten, was instrumental in creating “French Theory” in the first place. I view GA as having accomplished this task, but too late to take advantage of the cachet that had made Derrida, although not Girard, a household word.
From the minimalist standpoint of GA, the fundamental human problem was eloquently enunciated by the immortal Rodney King: “Can’t we all just get along?”
As a result of the crisis of animal hierarchy that led to the origin of deferral via communication through the sign, the first humans established the “moral model” of reciprocal exchange that is the basis of our universal sense of moral equality. Yet almost all of human history, or at least the part of this history that we can call history, because it coincides with the power to record it, does not emphasize this fundamental equality. For, with the emergence of sedentary agriculture, if not before, primitive egalitarianism proved itself an inefficient way of maximizing the fitness of the human community.
Whence the birth of human hierarchy, which in The End of Culture I attributed to the “big-man” in Marshall Sahlins’ Stone Age Economics, who was the first to acquire political power by virtue of his superior ability to produce food, hence to dominate the communal feasts previously hosted alternately throughout the year by a series of clans. The originary source of political hierarchy would thus have been the economic benefit to the community provided by one more productive than the others—and who, Sahlins insists, tended to be less well fed than his beneficiaries.
It is easy enough to see how this opening to hierarchy through individual ability, coupled with the rivalries among tribes for Lebensraum and other desirable objects, would lead in a few thousand years to the extreme differences of wealth and power that characterized the archaic empires. These differences have not so much been abolished as mediated by the institutions of governance that have evolved over time. That the coexistence in today’s most advanced societies of centi-billionaires with the penniless does not pose any real difficulty is a sign that “inequality” in itself is not a problem of hierarchical society.
I borrowed the term firstness from Adam Katz, who found it in C. S. Peirce (who uses it in a very different sense), to designate the originary human form of superiority, which consists not in being more powerful, as the Alpha animal had been in the previous social order, but in being first to grasp the advantage of converting the “aborted gesture of appropriation” into a sign—for himself, of course, but as a consequence of this discovery, for the group as a whole, which is to say, the human race.
By emphasizing temporal anteriority over competitive superiority, the term firstness expresses the essence of human as opposed to animal hierarchy. The animal world never evolves quickly enough for the relationship between those who first acquire an advantageous trait and their fellows to be other than a statistical guide to which of them will be shown, generations later, to have been the “fitter” perpetuators of the gene pool. Humans, and already some animal species, celebrate the permanence of norms in ritual competitions: all perform the same operation, and the winner does it faster, farther, more elegantly, etc. Yet what makes for human progress is the temporal competition for firstness in inventing new ways of doing things.
As opposed to preeminence, firstness is not simply a matter of rank; its temporal nature means that it can be passed on to others. Although being first in rank is of value to the society, this value and its “rarity” are attributes of the system itself, whereas firstness in innovation implies the possibility of spreading its effects throughout the social order. Such was the originary firstness of the inventor of the sign.
Political leaders are not generally inventors of new methods, but their effectiveness and ability to remain in power depend grosso modo on their ability to supervise more productive societies than their rivals—realizing of course that throughout history, “productivity” has often simply meant the ability to defeat these rivals and appropriate their production. Often, but not always.
Thus it is useful to say that, although humans are morally equal in the sense of being able to participate in the culture that allows us to live in peace together, and in particular, to use language, they are not ethically equal, given that they make different contributions to the public welfare.
The different modes of social organization that we have known since the Neolithic have followed each other in a series of “struggles for life,” with the might-makes-right relationship suggested by Darwin’s language being both enhanced and attenuated by the human capacity for “proactive” violence (Wrangham), which makes mastery far more than an attribute of the physically strongest.
Seen from afar, human history is filled with horrors, and we cannot help finding them horrible, but at the very least we must agree that, if you want a less horrible society, you had better acquire more might than the more horrible one you wish to defeat, or at the very least, enough to prevent it from defeating you.
There are no “fair fights” between societies. I don’t have to point out that we didn’t win WWII because we were better people than the Nazis and Japanese, although we would like to think that our social order gave us—with a little help from Stalin—a greater ability to harness the forces and talents of our societies. But there is obviously no iron law that links moral goodness with ethical efficacy.
Moral philosophy is a matter of ideology rather than “reason.” Kant’s insistence on the self-evidence of “moral law” is a consequence of the extreme consistency of his adherence to the credo of metaphysics. “Moral truths” are not notions we invent to manipulate the world—the concepts of the “understanding”—but concepts of “reason,” independent of empirical reality. There is no need to argue with Kant; he offers a model of human behavior that may well claim to be superior to any other. If you want to teach your child not to steal, you would have to use all of Socrates’ sophisms in arguing with Callicles in the Gorgias to attempt, no doubt unsuccessfully, to persuade him that his own “good” is not enhanced by stealing, even if he is not caught. It is far better to teach him simply that stealing is wrong—in effect, to invoke the sacred, even if by another name.
An anthropological ethic is necessarily a form of pragmatism, since focusing one’s concern on a biological species, however blessed or cursed by the transcendental, implies that the ultimate interest of this species be placed above all else. Slogans like “the greatest good for the greatest number” demonstrate a blindness to the very nature of human ethics, reducing it to the originary moral model, whose most profound contribution to ethics was rather to allow itself to be transcended for the benefit of the human community.
What is of genuine interest in John Rawls’ “original position” is how near and yet how far it is from the minimal originary event of GA. Rawls’ intuition rightly tells him that what is equal among all humans is more fundamental than their relative value to the community, that the moral precedes the ethical. Hence to get from one to the other, he sends us back to a time before hierarchy, when everyone was in principle really “equal.” So far, so good.
At this point, we are asked to imagine that the passage from this “original position” to our stratified modern society is made in such a way that each of us is arbitrarily assigned a place, with the qualities and level of privilege associated with it, and then to consider how we would judge the society, given the possibility that we, as his presumably privileged readers, might be destined to the lowest position in the society.
In this way, Rawls seeks to reconcile firstness with moral equality, not by crudely suggesting that the more or less meritocratic distribution of responsibilities and privileges be abolished, but by proposing that it be judged on the basis of this more fundamental egalitarian standard: if there were an equal chance that I might wind up like this person, what changes would I wish to make in the social order?
Rawls seeks a way to minimize the amount of violence we would need to wreak on moral equality in order to bring about a maximally “just” society. But in this instance, morality and ethics simply do not mix. No doubt his speculation’s thought-experiment status makes it vastly superior to the idea of applying “the greatest good for the greatest number” directly to social problems. But the idea of conceiving a society on the basis of each individual’s fear of being stripped of every personal characteristic to become the least favored citizen undoes the very sense of communal harmony that led to the “original position” in the first place. Such a construction of the “original position” could be imagined only as the act of a tyrannical divinity—or a professor speaking to his students. What is lacking in Rawls’ ingenious scheme is, very simply, an originary hypothesis capable of converting the thought-experiment into a model of human reality.
What ethical system then does GA propose in its place? It can only propose Churchill’s worst system except…, because it is the least systematic, and therefore of necessity the maximally self-correcting system: the one that gives the most latitude to human firstness to generate additional degrees of freedom.
So far, this system has clearly been liberal democracy, as invented and practiced in the Judeo-Christian West. But we cannot help wondering whether a society that combines political dictatorship with private ownership, like the inter-war fascist regimes, or China today, whether or not it begins by stealing other nations’ ideas, cannot sustain a higher rate of innovation that allows it to surpass them. The unflinching confidence in liberal democracy that we felt after WWII, and renewed after 1989, is no more.
Yet if indeed the pseudo-religion of totalitarian ideology is more suited to the cybernetic age than the freedom of thought permitted by our civilization, then we must work all the harder to develop our humanist anthropology while we still have the freedom to do so, as the gift of our own firstness to a future world that we hope will one day be able to benefit from it.