Adam provides here the clearest and most strategically plausible program I have seen for Generative Anthropology, not merely to survive but to contribute fundamentally to human self-knowledge. He proposes a collaborative intellectual ethic very much in the spirit of the deferral of violence at the core of the human. It’s up to us to make it happen. -EG
***
One possibility that I have never seen considered within discussions of the originary hypothesis is that thinking in terms of the hypothesis is simply incompatible with any other way of thinking. GA, in that case, would represent a new way of thinking repellent to old ways of thinking. For example, Eric Gans has proposed for many years that originary thinking provides a way of mediating, or establishing a dialogic space for, atheists and theists, given the formula, generated by his hypothetical account of the originary scene, that “man creates God/God creates man.” To this day, has anyone taken him up on it? Even among GA’s most devoted and sophisticated adherents, debates continue to proliferate regarding how to “reconcile” one’s particular faith with the hypothesis. I think the assumption behind Gans’s proposal is that every faith comes into being under specific conditions, requiring that certain anthropological insights be brought to the fore and framed ritually, while those insights can in turn be re-framed and deepened through an understanding of the originary scene. That sounds reasonable, but maybe only in “Enlightenment” terms, that is, only if you are already thinking about your faith primarily in terms of its “anthropological insights” (which would, of course, assume a readiness to exchange faiths if one with more profound or more necessary insights comes along). The same would hold, for that matter, for those invested in Enlightenment terms, which presuppose their own anthropological insights, ones which may not react well upon exposure to GA’s radically different ones. The resistance to any “penetration” by GA into the institutionalized and “hardened” disciplines (the Human Sciences) is, if anything, more complete: even leaving aside the victimary turn these disciplines have taken in recent decades, the founding assumptions of literary studies, psychology, anthropology, linguistics, economics, and even sociology may simply be too embedded in the daily practices of members of those disciplines to allow for GA to become visible, much less plausible.
The assumptions founding these disciplines (and all their sub-disciplines and inter-disciplines) are, I think it must be said, arbitrary, which is to say, articles of faith. I now make a point of reminding myself, in thinking about “ideas,” to go to Google’s Ngram Viewer and see the trajectory of the use (in books) of the words denoting those ideas over the past few decades (or centuries, if relevant). It’s a fascinating exercise. For example, pretty much all of the current catch-phrases in contemporary academic discourse (“multiculturalism,” “critical thinking,” “homophobia,” “white privilege,” among many others) started to take off between the very late 70s and the very early 80s. Something clearly happened around that time. “Sociology,” meanwhile, was introduced, apparently, sometime in the 1880s, became gradually more popular up until 1940, then, following a brief dip in the early years of World War II (why?), shot up, reached its peak in the mid-70s, then declined slowly but steadily (edged out by the victimary theories?), finally evening out in the mid-80s. There are enormous opportunities for young scholars to make their mark in innovative, data-driven forms of research that would employ far more nuanced searches. Needless to say, the meaning of this or that shift in usage is not transparent. But it’s always helpful to keep in mind that whatever words we use, and must use, just to conduct our most basic business as academics, teachers, and intellectuals were first used by someone, then used by others, and used instead of other words, all for reasons we could in principle learn much more about, and reasons that would not, to put it mildly, completely coincide with the conceptual coherence and power of the words in question. What I conclude from this is that the healthiest attitude we could take toward any of the disciplines is to assume there is an excellent chance that it needs a complete overhaul.
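For anyone who wants to automate this kind of trajectory-watching rather than eyeballing the Viewer, here is a minimal sketch. It relies on the undocumented JSON endpoint that the Ngram Viewer itself appears to use; the parameter names and corpus label are assumptions drawn from how the viewer’s own requests look, not an official API, and they may change without notice.

```python
# Minimal sketch, not an official API: fetch Ngram Viewer data via the
# JSON endpoint the viewer's charts appear to use. Parameter names and the
# corpus label are assumptions and may change without notice.
import requests

NGRAM_URL = "https://books.google.com/ngrams/json"

def ngram_series(phrases, start=1900, end=2019, corpus="en-2019", smoothing=3):
    """Fetch smoothed relative frequencies, by year, for each phrase."""
    params = {
        "content": ",".join(phrases),
        "year_start": start,
        "year_end": end,
        "corpus": corpus,        # assumed corpus label (the viewer has also used numeric ids)
        "smoothing": smoothing,
    }
    resp = requests.get(NGRAM_URL, params=params, timeout=30)
    resp.raise_for_status()
    # Each returned entry has an "ngram" name and a "timeseries" of yearly frequencies.
    return {item["ngram"]: item["timeseries"] for item in resp.json()}

if __name__ == "__main__":
    start = 1900
    series = ngram_series(["multiculturalism", "critical thinking", "sociology"], start=start)
    for phrase, ts in series.items():
        peak_year = start + max(range(len(ts)), key=ts.__getitem__)
        print(f"{phrase}: peak smoothed frequency around {peak_year}")
```

Run against a handful of disciplinary keywords, a script like this makes the “far more nuanced searches” mentioned above at least conceivable, though the interpretive work remains entirely ours.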
No mode of thought could have a greater interest in initiating such overhauls than GA. If all established discourses are, we might say, GA-proof, we must really go for broke and direct our efforts toward establishing GA as the One Big Discipline. A good starting point would be to inquire into the conditions under which GA might eventually be installed as that One Big Discipline. At the very least, we would then be acknowledging what every human science now insists on implicitly denying: that any mode of thought is historical, and enters the world as an event. (How and when did psychology “discover,” say, “cognition” as a human “faculty”? If man creates God/God creates man, “cognition,” in the sense of “cognitive science,” must both have revealed itself somewhere, at a specific point in time, and have been “constructed” as an object of inquiry. I’m sure a history of psychology could answer that question with greater or lesser precision, and perhaps one already has [my guess, from a quick look at Ngram, is that the concept was consolidated in the early 40s], but contemporary psychology is certainly not interested in “grounding” itself in any such event.) And what about the origin of the originary hypothesis itself? Eric Gans’s account of his articulation of Girard and Derrida (although I believe there is some musing over the esthetics of the French declarative sentence in there) points us in two extremely interesting directions: first, through Derrida, to the “linguistic turn” in 20th-century thought, that not quite conclusive blow to traditional metaphysics (“linguistic turn” also shoots upward on Ngram starting from the very early 80s), which imposed the recognition that all concepts are, in fact, words; second, through Girard, back through a history of inquiries (including Freud and early 20th-century anthropology) into constitutive violence, into what we might call that in the archaic world which our modernity leaves us least prepared to see. This yoking together, not of opposites but certainly of “disparates,” is an origin we can always return to as ground.
There are two possible paths toward one big disciplinarity. The most familiar one, attempted by virtually every “revolutionary” theory in the human sciences (historical materialism, psychoanalysis, even deconstruction in its way), is the establishment of a controlling meta-language. GA certainly has the materials for taking this path, in a set of “foundational” concepts (desire, resentment, deferral, center, scene, sign, etc.) defined precisely and contextually through a series of cultural, literary, and social analyses. A contest of meta-languages is the almost universal, indeed practically the only conceivable, mode of contestation among competing theories. You put forward your concepts, and I put forward mine; we then look at various objects of analysis and each of us provides an “explanation,” with the “better” or “stronger” one winning. Of course, this is a big problem, since there is no theory-free criterion for distinguishing between “stronger” and “weaker” analyses. We also know that there is no level playing field on which these meta-theory contests take place; rather, the process is historical, with the new theory taking aim at the dominant one. Post-structuralism hit North America in the 1980s with sophisticated analyses of the Romantic poets and canonical novelists like Melville, Hawthorne, and Poe because it was through its readings of such figures that the established New Criticism controlled the field. Which theory prevails might depend far more upon which provides more opportunity for novel PhD theses, grant proposals, and tenure-track positions; and what determines that? I’m not reducing everything to power relations, and both New Criticism and post-structuralism provided powerful and, in their time, innovative ways of reading texts; but it would be very hard to show that one theory displaced the other simply because it was “better.” The Kuhnian account, in which anomalies in the existing system multiply and generate increasingly strained attempts at resolution within that system, ultimately to be resolved within a new paradigm that includes the previous one as a limit case, might describe what happens in the natural sciences (I have my doubts). But when it comes to the human sciences, there’s no doubt that a far more complicated process, closer to the kinds of transformations described by Paul Feyerabend, in which economics, politics, and institutional imperatives weigh in heavily, determines the outcome. We will never be able to market our wares in a free and open marketplace before informed “consumers,” and it’s hard to see even a younger, leaner and meaner generation of originary thinkers (were such to emerge) engaging in the kind of marketing and politicking and just sheer ganging up that’s needed for a theory to prevail in the universities.
There’s another reason to be skeptical about the grand field of meta-theoretical contest. Such struggles encourage polemics, and polemics encourage the hardening of lines, the fetishization of the intellectual materials, and the introduction of coercion into matters where it has no rightful place. It was not only Marxists who turned conceptual differences into life-and-death organizational struggles (or vice versa); psychoanalysis almost immediately split into competing versions, each singling out a particular element of Freud’s analysis and all anathematizing the others. It was not very different with the Derridean, Lacanian, Deleuzean, etc., versions of post-structuralism. If there were to be an institutional stake in GA, enough to draw in “fresh blood,” we would quickly see lines drawn over “orthodox” and “heretical” understandings of “resentment,” the “moral model,” and every other concept. Of course there need to be discussions and there will always be disagreements over any theory, GA certainly included, and disagreements, properly conducted, are generative for any discipline. But the struggle for institutional mastery doesn’t provide the field where those kinds of disagreements could take place. I would also add that the days of grand theoretical battles in the university, at least the American university, are probably over: between victimary inquisitions, budgetary shortfalls, and the business model imposed on universities, which forces English Departments as much as anyone else to explain how they will provide students with the kind of “critical thinking skills” they need to get jobs, there is little zest left for genuine theoretical battles.
The other path toward “owning” the transdisciplinary field has never, as far as I know, been tried. That path is learning to speak the theoretical languages we wish to supplant. This is more like body snatching than planet smashing. Let’s take an example that has come up often lately, due to the importance of cognitive psychology to the latest GASC conference in Stockholm: the computer model of the mind. Obviously the computer model is an antagonist to the originary hypothesis, insofar as it takes a product of human thinking aimed at supplementing human thinking in certain areas, and retrojects that product back as the model for human thinking itself. Saying that the mind is a computer is not really all that different from saying our bodies are automobiles. It defines down the human essence to one of our tools, rather than engaging in inquiry into the human essence that enables us to create, use, and criticize computers and cars, and to do many other things as well. But once we make our arguments, then what? The computer model of mind enables the inquirer to describe in interesting and complex ways all kinds of things. If I say that as a result of “experience” (refining an already existing “feedback mechanism”) humans construct and continually revise “algorithms” for determining, automatically, specified responses to probable phenomena, can it really be asserted that nothing illuminating can emerge from such an approach? When I am faced with a “choice,” I can, we might say, run some probability calculations, based on available evidence and subject to time constraints, so as to continually enhance the “effectiveness” of my choices. At the very least, the computer model is a source of provocative metaphors and ways of subverting various sentimentalisms.
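To see how naturally that vocabulary generates definite descriptions, here is a deliberately toy sketch of “choice” as continually revised probability estimates, written in the computer model’s own idiom. Nothing in it is anyone’s actual model of the mind; the option names and the simple success/failure updating rule are illustrative assumptions, chosen only to show the kind of claim the model lets one state precisely enough to interrogate.

```python
# Toy illustration only: "choice" as a running estimate of each option's
# "effectiveness," revised by experience. The options and the Beta-Bernoulli
# updating rule are illustrative assumptions, not a claim about minds.
import random

class ExperienceDrivenChooser:
    def __init__(self, options):
        # One (successes, failures) tally per option: a crude "algorithm"
        # continually revised by a "feedback mechanism."
        self.tallies = {opt: [1, 1] for opt in options}

    def choose(self):
        # Thompson sampling: draw a plausible effectiveness for each option
        # and pick the best draw, balancing exploration and exploitation.
        draws = {opt: random.betavariate(s, f) for opt, (s, f) in self.tallies.items()}
        return max(draws, key=draws.get)

    def update(self, option, succeeded):
        # Experience adjusts the estimate.
        self.tallies[option][0 if succeeded else 1] += 1

if __name__ == "__main__":
    chooser = ExperienceDrivenChooser(["option_a", "option_b"])
    true_rates = {"option_a": 0.3, "option_b": 0.7}  # hidden from the chooser
    for _ in range(500):
        pick = chooser.choose()
        chooser.update(pick, random.random() < true_rates[pick])
    print(chooser.tallies)  # "option_b" should accumulate the most successes
```

The point is not that deliberation works this way, but that once the description is written out in the model’s terms it can be questioned on those terms, which is exactly the mode of engagement proposed below.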
We can certainly speak and think in these ways because the “computer model of mind” is a language, with its rules, idioms and tacit assumptions (to use a famous metaphor from Wittgenstein, it’s one of those newer suburbs built, in a planned and grid-like manner, around the more eccentric, improvised old city—perhaps an industrial park!), and we are language-using beings. I find it irritating and exhausting to repeat the same arguments over and over, which is what one has to do if one is determined to “refute” and “defeat” the “computer model of mind.” But we can speak to the computer model of mind by speaking within it. Asking an adherent of the computer model of mind to lay bare the algorithms he has followed in constructing an experiment so as to test the working of this model in some experimental subjects might be more instructive than hectoring him with its contradictions and dehumanizing consequences. The computer model of mind, like any discourse, has its origin and its originary structure (the iteration of its origin in its ongoing operations), and the way to discover this is not just by going back to the records of that conference in 1944 or whenever (unsurprisingly, Ngram has the “computer model of mind” shooting upward in the mid-80s), but by noticing what kinds of things must be said within the discipline and what kinds of things must not be said: by entering it as if it were just emerging and its terms needed to be learned by applying them to its emergence. By thus infiltrating the disciplines, we might get them to speak their own truth, which (we must have faith in the power of our own discourse here) must both iterate and evade its originary structure.
How do we remember our own origins while thus present undercover in the disciplines; how does One Big Discipline emerge from what appears to be dispersal? The real “proof” of GA as the “strongest” theoretical discourse will be that it can keep showing whatever disciplinary space it inhabits that this space needs the way of thinking only originary thinking can provide to address the anomalies in its own discourse, anomalies which the originary disciplinary inquirer will have trained himself to detect (yes, there is also a place to speak Kuhnian). The proof, that is, is in the way we will have clarified in a collaborative manner whatever those in that discipline have devoted themselves to studying—not in how we distract them by pointing out mistakes that might not seem such, or seem relevant, to them. If we find ways to “represent” so as to defer resentments in the course of any inquiry, we exemplify an originary intellectual “ethic,” which can in turn become a compelling topic of conversation. “Our” language, in other words, can take its place within the disciplinary language, and that is where something like genuine intellectual competition takes place, through the framings and counter-framings of intellectual collaborators. We would have to have faith that one day all might wake up and find themselves speaking generative anthropology; of course, along the way something like a crisis would have to take place in each of the various disciplines, and a substantial part of what we are studying now are the elements of that possible crisis.
This is also, I think, the best way to maintain GA, as we must, as an autonomous disciplinary space, where we try out new concepts and new formulations and compare results from the field. You could certainly say that other GA practitioners practice a lot more of what I preach here than I do. Keep in mind that working within a field is not the same as eliciting the originary structure of that field. Regardless, I do something different (in addition), which I will just point to as another possibility, complementary, I believe, to the one I’ve sketched out here. I’m trying to invent possible disciplinary languages out of the originary hypothesis. You could call it a kind of “skunkworks,” creating originary structures that might have various ramifications. (I am referring to what I call “originary grammar.”) On a “personal” level, I would rather fail at this than succeed in many other ventures. Whether there’s a place in our discipline for the idiosyncratic space I’ve carved out is ultimately for others to decide. I am attempting to insinuate the possibility of dismantling any utterance whatsoever into the succession of speech forms, along with the possibility of its subsequent reconstruction, now marked by its articulation of that succession. It could be seen and used as an all-out, orchestrated attack on commonplaces, grounded in the moral imperative of making our utterances new and ever more cognizant of their historicity. A grinding out of everything generic so as to reveal the originary in language: I hope it will be a portable laboratory for putting disciplinary languages under the microscope.
A final note: One Big Discipline would mean everything, “religion,” ritual, and theology included. It’s hard to imagine that what we now call “psychology,” “economics,” “education,” “communications,” and all the rest would become regions of originary thinking, but that somehow we’d still be talking about faith and ritual in the same ways. Religion is simply inquiry into some revelation of the sacred, and is in principle no different from the other disciplines, all of which rely on (while denying their reliance on) the “auto-probatory.” The same process of infiltration would occur within religious institutions and vocabularies. There’s nothing compromising about inhabiting opposing, at some points incompatible, vocabularies; that will always be the case, even in the One Big Discipline, which would itself be under constant review and revision. Remembering that concepts are words, and that there is that in the archaic which we need to bracket our modernity to see, provides a frame for conducting such conversations, within oneself and with others. In the One Big Discipline we would always be learning from the externalization of our resentments, which is probably what religion is best for, and which we would keep trying to make religions (as regions within the One Big Discipline) better at. And that’s something we can always begin to do.