In the previous Chronicle, I cited, as a counter-example to the continued vigor of film, the sorry state of lyric poetry. The explanation I suggested is that there is a self-aware community of skilled film-makers who, however predictably victimary their political views, belong to an elite of world-wide influence, entry to which is highly competitive, given the combination of financing and organization required to produce a film for the market, whereas there is nothing comparable among poets.

As one of many examples of how this operates, Lars von Trier’s “documentary” The Five Obstructions (2003), made with his mentor, Jørgen Leth, is a piece of self-indulgent although often ingenious narcissism, whose real interest lies to my mind simply in showing the audience that they can get away with it—get funding for world travel and production, complacently indulge in intimate conversation presumably not intended for spectators, put together little incoherent films in response to obscure personal challenges—because they are not only “celebrities,” but above all creators capable of making real films that deserve our attention.

Such antics were common among poets when they played a similar role in the esthetic marketplace; they have been virtually unknown since the days of the Beats. On the contrary, the artificial outlets for poetry created by creative writing programs are just one more reflection of the lack of true market tests. Certainly there is an “elite” of poets, some of whom even earn money from their profession, but precisely because the production of poems requires neither skills nor financing, nor is their quality tested in anything like a genuine market, membership in this elite is less a measure of poetic talent than of harmony with the Zeitgeist—today, largely a matter of “identity”—and the poets’ public personas embody this role.

A similar elitism once existed, on a less exalted scale, in the academic world (see Chronicle 610). When I entered the profession, professors at major universities too belonged to an elite, not generally world-renowned, but self-respecting, which is the important thing. Faculty members in a given field all knew (of) each other, although they traveled far less than in recent decades to conferences, let alone corresponded by email or exchanged tweets and Facebook posts. Once you had joined the club, you were not obliged to toe any lines, let alone that of PC; simply being a member sufficed. In those days there were in fact many respected scholars who published very little, if at all.

This is the way things have been done in clubs ever since bourgeois society was invented; Winston Churchill’s lifelong “Other Club,” composed of members of the British elite, was a prime example. The point was not feeling superior to everyone else, as WC and his friends might have felt entitled to do, but simply a sense of shared worthiness, an experience of centered community of a sort that the modern Gesellschaft is too abstract to engender. As sociologists such as Robert Putnam have pointed out, this sense of shared worthy belonging in fact extended quite far down the economic scale, and its decline under the conditions of postmodern sociality is the source of much grief.

This Chronicle was prompted not by the woes of either my poetic avocation or my academic career (see Chronicle 140), but by those of my principal activity, that of generative anthropologist, or let’s just say, intellectual. From the mid-60s through the 80s, the world of literary and what might be called para-literary study in the US was dominated by a contingent of largely French theorists, whence the slightly pretentious term “French theory.” I knew or met a good number of these people, although I was rather too young to be accepted as their equal, and in any case no American was ever really an equal, merely a translator/adapter/cultural transmitter. Thus as chairman of the UCLA French Department in the 1970s and 80s, I considered it part of my job to invite a number of these theorists to Los Angeles, California being in any case one of their favorite destinations.

Where I think I misread the possibilities open to an American writing in the mode of “French Theory” was in believing that its apparent freedom of thought was not confined to form, but included content. The deconstructionists and Lacanians loved to mystify their readers by abstruse reasonings, or by intuitive rapprochements with little or no reasoning at all. But, as we should have realized, the bottom line for this family of thinkers (Lacan rather less than the others, if indeed one could attribute to him a clear-cut anthropology of any kind) was postwar victimary thought—the post-Marxist critique of firstness.

The edifice of Western reason had to be pulled down piece by piece to expose its exploitative foundations. That it all began with Barthes’ critique of Stalinist rhetoric in Le degré zéro de l’écriture is in retrospect a minor irony; in 1953, deconstruction was just warming up to take on the real enemy. But because Girard, whose concern for victims had Christian roots independent of, if not entirely divorced from, the victimary concerns of the Left, supplied what was to me a much more attractive model of “French theory,” I plunged into these intellectual waters unmindful that the fearless freedom of thought they appeared to permit derived the greatest part of its energy from political resentment—specifically, from the search for a foundational critique of the social order that would obviate Marxism’s embarrassing ambition of constructing an alternative to “capitalism,” now that the Soviet Union’s attempt at this was proving terminally unsuccessful. And it was above all this political agenda that was the source of its popularity among French Theory’s fast-radicalizing followers in American academia.

Generative Anthropology was not always a creature of the Internet, which it long antedated. Back in 1981, when I proposed a minimalist hypothesis that explained humanity’s specific difference in simple terms, one might have expected the scientific world to pay attention. And surely that was what Jack Miles thought—a scholar of religion, working at the time as a University of California Press editor, who would later achieve fame with God: A Biography—when he not only got The Origin of Language approved for publication, but got the Press to advertise it in the New York Review of Books. But by this time, the academic world was stiffening its positions as a mass rather than an elite institution, meaning that the elites that it did promote could no longer bask in the sentiment of belonging, but had to constantly reaffirm their status, like entertainers whose market value depends on remaining in the public eye—or in this case, the eyes of the other participants in the increasingly frequent conferences and journal issues that provided occasions for mutual backscratching.

In curious contrast to what we saw as the likelihood that a powerful, minimalistic thesis would sweep the field, what emerged was rather a consensus that any theory that did not derive from empirical research was ipso facto external to the “scientific community.” This was the world’s rather harsh way of teaching us that GA was indeed a new way of thinking, one that appeared just at a time when tolerance for individual eccentricities was disappearing.

This did not mean that the originary hypothesis lacked the potential of suggesting new avenues of neuroscientific research. Such research would indeed be of great value in refining and perhaps modifying the stages of the evolution that leads from the ostensive origin of language to the “mature” languages of the present, all centered on the declarative sentence. But neither the prestige of the University of California Press nor my professorial status could overcome the simple fact that, with a few personal exceptions, no one had any interest in promoting or even examining this radical new theory. The Origin of Language affirmed the necessity of conceiving the origin of the human faculty of representation in a hypothetical scenario incompatible with either the modest extrapolations of empirical anthropology or the abstract constructions of philosophy. Thus it is no surprise that The Origin of Language no longer appears in bibliographies of “serious” works on the subject, a number of which I have examined and found wanting in these Chronicles.

The reaction of incomprehension and indifference met with by the originary hypothesis in scientific circles—which today include philosophy as well, now that it has taken the linguistic turn along the analytic “Anglo-Saxon” path—is effectively the standoff between mutually incompatible belief systems. The inability of scientists to define man’s difference from his animal relatives in qualitative terms is taken not as a failure, but as a badge of honor, a proof of empirical objectivity, in contrast to illusions of humanity’s ontological difference, dismissed as just disguised forms of Creationism.

There are lots of blogs, and now vlogs, on the Internet, and I have no a priori grounds for claiming that my Chronicles belong to a superior genre. But given their provenance, I think I can justifiably say that they reflect a greater ambition. In the days when people were just beginning to create personal webpages, I began the Chronicles in the spirit of the intellectuals of an earlier generation, thrilled by the access provided by the Internet to the free movement of thought (see Chronicle 10). In those days there were no “platforms” like WordPress to handle web interfaces; you had to learn HTML, and knowing a few other languages as well was a good idea. Thus I thought—as it turned out, quite naively—that by taking advantage of my amateur facility with micro-computing, I could “scoop” the intellectual community and achieve visibility for the ideas of Generative Anthropology, ideas that by 1995, the year of the simultaneous founding of Anthropoetics and the Chronicles, were already some 15 years old.

Our website won a couple of awards in the beginning, when there were few comparable sites in competition. And for the first few years, the number of hits on our articles and chronicles increased geometrically. But as the Internet rapidly turned into a mass enterprise with billions and trillions of daily hits, the increase stopped. Blog sites and other platforms emerged to grant access to online publication to those with no computer qualifications beyond word-processing. Not only did we fail to keep up; we became less popular than before. The explanation is basically the same as that of the decline of lyric poetry, currently being produced in vast quantities by both “self-expressers” and students in classes and workshops, but very little read. Except that poetry, however significant it might be as art, is not essential to humanity’s anthropological self-understanding.

The academic world’s lack of attention to online journals like Anthropoetics—which in its nearly 25 years of continuous publication has never received a single word of publicity from UCLA—can be seen as an understandable reaction to a fear of lowered standards. For although our journal’s editors, and most of our contributors, have been qualified university instructors and holders of PhDs, such qualifications are clearly not required for online publications as a whole. Thus it is not surprising that Anthropoetics did not start a trend. As far as I know, I am still the only humanities professor at UCLA to have created an online journal, and the only editor of a journal that is not sponsored by a pre-existing professional organization.

Just as there is a Gresham’s law of poetry that explains why by now no one has any idea who the worthwhile poets are, so there is a Gresham’s law of Internet prose. I still fancy I am writing essays, serious works of thought that are not mere topical columns, let alone blogs, but how many out there have a stake in respecting this distinction?

Under these circumstances, there being no possibility of arguing my case in the halls of science, assuming I would be allowed admission, I have continued simply to present it in our usual venues, at times polemically, but never as a direct challenge to the “scientific community.” The latter, meanwhile, continues to produce much valuable paleontological and neuropsychological research, but is never troubled by the fact that all the knowledge it garners, whether from the study of the human brain, child language acquisition, early hominin anatomy, or Sapiens-Neanderthal-Denisovan genetics, never allows it to define human language as anything more than “something profoundly distinct.”

No wonder, when the emergence of human language is consistently described by contemporary scientists as, to quote Daniel Everett, “really not that difficult.” Yes, human language is clearly more powerful than animal signal systems, but elephants are more powerful than humans—does that make them “ontologically superior”? Is their strength “something profoundly distinct” from ours? It seems a foregone conclusion that, in an era when empirical measurements force us to accept the seemingly absurd propositions of quantum theory as the only set of equations that accord with the data, the idea of asserting as something absolute and not quantifiable the difference between humans and, say, bats—that we can ask how it is to be a bat, but bats cannot do the same for us—is simply taboo.

Indeed, there is no presently conceivable way to demonstrate the superiority of GA by anything other than intellectual constructions, plausible qualitative explanations of the phenomena of language and culture. Even if neuroscientists discovered tomorrow a structure in the brain they could call a “scene of representation,” they would no doubt be able to show that other animals too possess this structure, albeit in a less developed form.

Our ontological difference cannot be defined by physical structures alone, even neurological ones. For a neurological structure that exists primarily as a mode of communication, so that its presence in the individual brain is relevant only insofar as it can interact with fellow humans’ similar structures through the use of shared representations—primarily language, but also common ritual and esthetic activities—is not a wholly biological entity. Instead of seeing animal communication as itself primitively post-biological, albeit lacking the means to liberate itself from the world of instinct and to acquire the freedom, the néant, of the human pour-soi, natural science prefers to reduce human communication, like animal communication, to the merely mechanical.

Whether lyric poetry remains a viable genre is no doubt a worthy question, but presumably society can survive without it. As I have been saying, I have no doubt that genuine artistic creation is ongoing, and the fact that the cinema, our most powerful art-form, is also the most universally accessible, is anything but a coincidence. Once we had oral epic, now we have video; the media evolve, but art itself does not “progress.” Its fundamental cultural function remains the same—demonstrating the capacity of human representation to give meaning, in the two inextricable senses of the term, linguistic and spiritual, to the elements of human life, by situating them on our common scene of representation.

But if art does not progress, such is presumably not the case for knowledge, whether knowledge of the natural world, based on the analysis of empirical data, that enhances its future predictability, or knowledge of ourselves as not merely biological beings but participants in culture, that is, in the very enterprise of reflection in which our self-knowledge is enmeshed. Which is to say that at the root of such knowledge there must be an understanding of the specificity of human representation, language and other systems of cultural signs, that cannot simply be translated into quantitative measurements.

Religion and philosophy are not just holdovers from a pre-scientific past: they reveal truths about our human specificity that quantitative measurements of any kind cannot capture; the phenomenon of the human introduces parameters that are not part of the quantitative, natural universe. The sacred or the beautiful cannot be reduced to the secretions of hormones or the twitches of nerve fibers, because they depend on the scene of representation that we share virtually with our internalized yet indefinitely extensible sense of the human community.

But neither philosophy nor religion can reach a minimal comprehension of this scene. For the one, presence on it is defined by significance, for the other, by sacrality. Generative Anthropology offers for the first time an anthropological synthesis of these categories. And although, as a result of their deliberate epistemological narrowness, neither philosophy nor theology is altogether compatible with physical science, this handicap does not apply to the practice of GA.

I think that, despite the limitations of our humanistic backgrounds and the scientists’ indifference, we of the GA community should nonetheless consider ourselves delinquent for not proposing to scientists, not means of “testing” our hypothesis, which is necessarily a fiction, but ways of using it as a source of research projects. Were I a neuroscientist, I would certainly be doing my best to find what in the brain corresponds to the scene of representation that is the central element of our hypothesis. This scene belongs in the broadest sense to the community, but there is no ether in which “communal” realities subsist—their only material presence is in our brains. How these brains communicate with each other to preserve a cultural corpus that, before means of inscription evolved, had to be carried in them alone, is a subject of great interest, one that GA should make more of an effort to ask neuroscientists to explore with us. And similar projects can no doubt be conceived throughout the whole range of what are called in France les sciences humaines.

Very recently there has indeed been progress on a small scale among independent intellectuals, if not academic scientists. For the first time, largely as an effect of Adam Katz’s efforts on the GABlog, we are seeing interest in GA from not merely isolated individuals but whole groups of people, at more than one or two degrees of separation from our original group. I would note in particular Truediltom’s series of videos on YouTube discussing the new Origin of Language.

I’m not holding my breath. But I would rather conclude in this vein of prospective hope than by rehashing old and new disappointments. At some point, the current passion for “socialist” victimocracy will lose its charms, the anarchic sense of liberation unleashed by the proliferation of the social media will be calmed, and the need for our new way of conceiving the human will be recognized. If we seek signs of hope, a culture sufficiently authentic to teach us how to talk to girls at parties, to show Jane how to find a boyfriend, cannot continue indefinitely to ignore the significance of the originary hypothesis.