Department of English
Quinnipiac University
Hamden, CT 06518


The virtual presence of human communication is not—today nor in the beginning—simply an “open channel,” but an essentially suspicious one. It is this suspicious nature of linguistic communication, its requirement of guarantees, that is at the basis of the severity of the norms which pervade language at all levels, from the phonetic to the logical and esthetic. Each speaker in proposing his linguistic model to one or more of his fellows in effect recreates a crisis in which the significance of the information conveyed provides the power of reconciliation. (Eric Gans, The Origin of Language, 78)

Paul Ricoeur, in his The Symbolism of Evil, traces the progression from defilement, to sin, to guilt as an index of the progression from the ethical to the moral:

“Guilt,” in the precise sense of unworthiness at the core of one’s personal being, is only the advanced point of a radically individualized and interiorized experience. The feeling of guilt points to a more fundamental experience, the experience of “sin,” which includes all men and indicates the real situation of man before God, whether man knows it or not…. But sin, in its turn, is a correction and even revolution with respect to a more archaic conception of fault—the notion of “defilement” conceived in the guise of a stain or blemish that infects from without. (7-8)

The defilement of the communal space occurs when some prohibition has been violated—the intention behind the violation is irrelevant (it could have been an accident, say spilling some liquid used in the ritual); the contamination or pollution must be cleansed, and this can also only be done through strictly prescribed ritual means. Meanwhile, one only sins when one deliberately violates some divine command, and one is only guilty when one can be judged (and judge oneself) according to standards of probity that are shared but also internalized within each individual.

From the standpoint of the responsible individual capable of guilt, “defilement” is an extremely primitive way of thinking about violation, one that can only be a source of violence itself, that we are fortunate to have transcended, and that, to the extent we must recognize it at all, should be reduced to arcane rituals to be interpreted allegorically or to neurotic quirks. All this is indicated by Ricoeur’s reference to the “correction” and “revolution” the concept of sin effected in our understanding of “fault.” But we hardly need Ricoeur to tell us that. How could Jesus’ claim that one is defiled not by what one puts into one’s mouth, but by what comes out of it, not have settled the matter? But if everything human can be found, albeit implicitly or potentially, on the originary scene, it follows that nothing found on the originary scene is ever lost. Defilement, or its possibility, is undoubtedly present on the originary scene: the sign has to be emitted “properly,” or recognizably, by all participants, and the failure to do so, even due to slowness or inadequate mimetic capacities, would pollute the scene, i.e., leave lurking unacceptable levels of menacing violence. It is easy to understand why Judaism, Christianity, and then modernity would want to eliminate all trace of the “irrationality” of defilement; but it should also be possible to understand that trying to do so has only reproduced in new forms that same sense of “pollution,” and can be implicated in the worst violence of the 20th century’s crisis of modernity.

Indeed, the investment by both Nazism and Communism in discourses of defilement might lead us to redouble our efforts to expunge its apparently indelible traces. We can recognize these traces in the tropes of “infection,” “pollution,” “contamination” and so on applied to both the race and class enemies of these regimes; and it would therefore be possible to reduce these regimes to a recrudescence of primitive, “compact” societies in revolt against the market order. But the power of White Guilt is also the power of defilement—otherwise, it would not have proven so “contagious” as to “contaminate” even those (Western) countries that defeated Nazism; nor would it continue to prove itself so impervious to attempts to direct attention back to more “rational” articulations of individual act, intent, and acknowledged social norms. I wouldn’t propose changing the name of the condition in question to “White Defilement,” but we could take the mysterious power of White Guilt to indicate that sinfulness and guilt are constructed with the materials of defilement, rather than by replacing them.(1)

A sense of defilement, of some derangement of the communal and even world order that implicates one even in crimes one not only didn’t commit but resisted with all one’s might, is a perfectly proper response to the crimes of genocide and totalitarianism—it is denying that sense, then, that is “irrational.” The mapping of the genocidal space by historian Raul Hilberg in terms of perpetrators, victims and bystanders (with this final, and novel, category implicitly extendible ad infinitum) suggests a series of circles of the “tainted,” with no one excused due to ignorance or incapacity. Even for someone born after the event the obligation to testify on behalf of the victims attaches, and no obligation to testify could ever be fulfilled once and for all. The perceived irrationality of that sense of defilement leads to the irrationality in the practices through which we seek to ward off the dreaded derangement of being, and which lead us to invest more and more in deferring ever more vague threats (we speak of this in terms of “slippery slopes”).

I would suggest that the most originary form of defilement, and the form in which it most readily presents itself for our inspection today, is that of error, and I will propose what I will call “originary mistakenness” as a new way of speaking about mimetic rivalry and crisis and the resources available for deferral. The one who makes a spelling, pronunciation, or grammatical mistake, or commits some solecism (or, for that matter, “misreads” a situation, “misunderstands” a text, “misses” a “hint,” and so on), is as “blameless” as the one who accidentally disrupts some ritual space; and the mistake evokes a very similar sense of unease and fragility—everyone around feels compelled to show that they would never make such a mistake, first of all by demonstrating some recognition of its mistakenness. I suggest that this is because the error, whether a grammatical one or a failure to satisfy any other convention, is a sign of infinite desire: making a mistake exposes one as imitating what one doesn’t know how to imitate, and therefore what one doesn’t understand, and the only reason for doing so is an “empty” and insatiable desire to be included in the very community one has just demonstrated oneself unsuitable to join. Naturally, not all mistakes have the same high stakes, but the possibility of granting entrance to one capable of merely adopting the required forms as a means of advancement turns the norms of the community themselves into an object of desire, and possible possession, and they can therefore no longer serve as reliable means of mediation. In that case, defilement can be seen as originary mistakenness.

While recognizing that we can speak of mistakes among animals (I have been informed many times by nature shows that the reason sharks attack humans is that they mistake them for seals or other marine mammals) and even in DNA replication, leading to the emergence of new species, I will suggest that mistakenness is specifically human. This is the case because only with humans does the mistake emerge simultaneously with the norm according to which we could judge it to be a mistake and therefore affect one’s relations with one’s fellows. We can see the simultaneity of norm and mistake on the originary scene. If we assume that one member of the group must have aborted his gesture first, then it follows that that individual could not really have known what he was doing. This makes sense if we speak, not of an aborted gesture (Eric Gans has himself spoken of the ambiguity of the phrasing here) but of what it actually is, an aborted act of appropriation. It only becomes a gesture with its imitation by the rest of the group. In this case, no member fully intends the meaning of the sign. There would be an emergent realization of what they have all done in the also emergent contrast between the grasping and the withholding that each can see in all the others. To use linguistic terms, anything that looks like a grasping motion is marked; anything that looks like a withholding motion is unmarked, as reflecting the power of the central object; or, to put it another way, the members of the group presence to each other.
To get to that point, though, an accentuation of the shape of the aborted act so as to convert it into gesture must have taken place in its circulation through the group, and we can only think of such an accentuation in contrast with an insufficiently shaped or distinguished gesture (one which had not convincingly separated itself from the act, perhaps because it was still too much like grasping, but perhaps because it looked too much like sheer fright)—the insufficiently distinguished gesture is the first mistake, and the shaped one the first sign. My argument will assume that this mistakenness accompanies normativity in each act of signification in the way (to borrow Saussure’s famous metaphor for the signifier-signified distinction) each side of a sheet of paper accompanies the other. There is always a position on the scene from which anyone can look more like they are grasping or routed than withholding. Mistakenness appears as a fully fledged mistake when we have a visible confrontation of rules extrapolated from the same model: when, pragmatically, the people involved cannot see themselves as following the same model.

It is worth noting that Gans, in his account of the evolution of the primary linguistic forms in The Origin of Language, proposes that the imperative, at least, began as a mistake: the first imperative was an “inappropriate ostensive”(2). But we need not stop there: the first interrogative was a prolonged, which is to say botched, imperative, diverted part way through by the uncertainty of its fulfillment. The first declarative, the negative ostensive, is, on the face of it, a ludicrous attempt to offer a word in place of a thing. In each case, the mistake is not corrected but “completed” and thereby changed into a new kind of sign: the “interlocutor” brings the object in response to the inappropriate ostensive, and ceases his command or demand in response to the negative ostensive, thereby complementing the other in creating the new speech form. The mistake, in each case, is engirded by a new norm, and it is instructive to look at what seems to happen here. The interlocutor, confronted with an initially unintelligible sign, simply remains on the scene with the other, and since sustaining the scene requires sense, that sense is supplied: the object is made present, or the sharing of the sign marks its absence. The grammarian, on the other hand, expels the other from the normative scene.

Gans, in his subsequent discussion of linguistic diversification (within the ostensive realm) following the emergence of the sign says that, in new potential crises,

[t]o take too much time in communicating a piece of information is to commit an error not linguistic but practical, the hearer’s potential for anger tending to increase with the duration of his subjection to the speaker’s linguistic model. Thus this time will tend to approximate the minimum required for the hearer to absorb the information conveyed. (78)

So, taking too much time would be an error because, far from deferring violence, it would intensify it. I think we can assume, then, a complementary tendency to err on the side of brevity, which is to say the abbreviation of the sign. This error might often be simply that, a gesture which doesn’t fulfill its aim, but it would become a new sign or “idiomatic” revision of the sign as soon as it “took.” The new sign might replace the old one for use in the most intense situations (brevity would then “mean” “urgent”), or it might take effect at a lower threshold of danger (brevity would then mean “trivial”). Here we would have the “two-place hierarchy of signs constitutive of the opposition between sacred and profane representations” (79) required if the sign is to refer to objects different from the original one, or one of equal desirability. Interestingly, Gans then goes on to say, regarding this dynamic of unequally significant signs, that “if we assume that the ‘profane’ sign attracts from its addressee an interest of a certain intensity, then this interest too can be deceived by a relatively insignificant referent, which will therefore tend to acquire for its designation a newly differentiated sign.” Again, it seems to me that error is driving the process here: in this case, the intrinsic interest in the other’s sign leads the addressee’s attention to an otherwise uninteresting object and hence a new word. The “gradual lowering” of the threshold of danger, and hence of significance, leads to diversity because of “the suspicious nature of linguistic communication” (78): the linguistic community’s “requirement of guarantees” “provides the impulse for a vocabulary richer in information” (80) because, I would suggest, that “suspicion” (the normativity of language) produces both error (abbreviations as well as prolongations) and the determination to locate the objects that would correspond to such errors.

Consider, as well, David Olson’s account, in his The World on Paper, of the emergence of writing, in the context of his argument that it was writing that made language visible as an object of analysis, and the “syntax” of graphic representations used for record keeping that made awareness of syntax in language possible. Olson writes:

Subsequent developments [following the earliest hieroglyphic writing systems] which gave rise, eventually, to the alphabet may be traced in large part to the consequences of borrowing. A shift in what a script “represents” is a consequence of adapting a script to a language other than that for which it was originally developed, an activity that led logographs to be taken as representations of syllables and later for syllables to be taken as representations of phonemes.

The first syllabary was the result of using Sumerian logographs to represent a Semitic language, Akkadian. To represent an Akkadian word such as “a-wi-lu-um,” man, with Sumerian logographs, the Akkadians simply took the Sumerian graphs which could be read as “a,” “wi,” “lu,” and “um,” ignoring the fact that in Sumerian each graph would represent a separate word… Reading Akkadian would then be a matter of pronouncing this series and the graphs would now be taken to represent syllables of Akkadian rather than words as they had done in Sumerian. (80-1)

Now, making use of a script in a new way, one which ignores its previous uses, is not itself a “mistake.” But the “consequences of borrowing” which Olson refers to must include the two central elements of any mistake: some understanding of what the sign to be appropriated is meant for; and the use of it in a way that would be marked as misuse by those fluent in the use of that sign. We could describe such appropriations as a knowing re-purposing of the sign in question—Olson’s description would allow us to do so. It seems to me more economical, though, to assume that an attempt to use another’s writing system, which is to say to acquire a capacity one does not have (as opposed, say, to considering replacing the alphabet one has with another), would involve a genuine attempt at imitation. Indeed, the shift from logographic to alphabetic writing is best understood as an example of what Michael Tomasello, in his Origins of Human Communication, calls the “drift to the arbitrary”:

Are certain obscene gestures “arbitrary” or are they iconic representations of real actions? Many such gestures were at one time iconic, and then they became more arbitrary over historical time—but they were conventional, in the sense of shared, throughout. In any case, our proposal here will be that first came shared conventions, and then there was a kind of “drift to the arbitrary” over historical time. (219)

Tomasello attributes the drift to the arbitrary to the entrance of learners and outsiders, who must imitate the sign before having acquired the shared intentionality of its original users, into the group: “outsiders, who are missing some common ground as a basis for ‘naturalness,’ may have a difficult time comprehending and parsing the communicative signs of others” (304). In that case, such a drift from the iconic to the arbitrary would result from a series of failed attempts to make an iconic gesture understood by someone from another community (or to understand, i.e., respond to, the gesture—if we can separate those two phenomena), followed by a re-norming along more purely conventional lines. In this case, all language change must be driven by much mistakenness and re-norming as each sign user sees the consequences of the other’s borrowing of his sign; and language is a process of change from the very beginning. (Although we should note that Gans, just as much as Derrida, would have us acknowledge that there is an element of arbitrariness in even the most iconic sign, since the sign emerges not naturally but in an event that need not have happened.)

The anthropologist and linguist Edward Sapir noted that in certain Native American tribes,

definite points of contact have been established between speech defects and “mocking-forms,” with consonantal play, on the one hand, and between the latter and myth-character forms with consonantal play, on the other. I am inclined to believe that the observation of consonant substitutions such as take place, with involuntarily humorous effect, in the speech of those that articulate incorrectly, has set the pace for the consciously humorous use of the same or similar substitutions in both mocking and, directly or indirectly, myth-character forms. (191)

So, mistakes in speech that mark individuals within the community are incorporated into idioms which in turn make their way into the representation of mythical characters. This provides a very interesting model for thinking about the process of idiom generation through the norming and re-norming of mistakes. As many teachers have noticed, singling out a mistake made by students in front of the classroom can easily, through sheer mimetic power, get it repeated rather than rejected by other students. One could attribute this effect to student inattention—perhaps they simply copy whatever is written on the board. I’m not sure how important the difference is—as human beings, mimetic beings, what else could overtake us while inattentive other than attraction to some powerful model or the habits inculcated through adherence to one? Either way, rather than mark it for exclusion, it may be better to have students make conscious use, in the creation of idioms, of incorrect articulations. While not all mistakes may be equally viable candidates, for the more “striking” ones a strategy of incorporation rather than extirpation is the more productive approach.

All idioms are built upon the cornerstone of mistakenness. I should note that I am using mistakenness conceptually and heuristically here—one could note, of course, that creative writers generate idioms all the time and know exactly what they are doing. But there is much that they don’t know about what they are doing, and their highly conscious activity implies more, not less, tacit knowledge and unconscious mental activity. To put it another way, I am making an argument about what is entailed in “creativity”: applying some sign to some new domain (a helpful definition of intellectual creation) involves responding to the sign from the position of an unintended addressee (almost as if an eavesdropper were to suddenly answer a question). The process, I am proposing, is no different from that which, in Gans’s hypothesis, led to the co-invention of the imperative: in working with a sign that finds nothing in the real or imaginary scene to complete, some kind of external supplementation is provided—the creator, then, may not be making a mistake, but he or she certainly allows him or herself to draw upon reserves of mistakenness.

The ethical consequences, it seems to me, are as follows. If the mistaken is marked, and we unmark ourselves by enforcing the norm against the polluting mistake, then rather than intensify the marking of the error of which we are rid, we can allow ourselves to be marked by deliberate innovations that risk being mistaken and unmark others by constructing idioms around their mistakes. Such an ethics of language would break with metaphysical normativity, grounded in well formed propositions and the presumed model of a well ordered reality. We can never be certain of understanding each other but we can be certain of misunderstanding each other—there will be mistakes in every exchange of signs. We can try to reduce misunderstandings, but with a few exceptions the law of diminishing returns sets in quickly as each marks the other as the source of the misunderstanding. But if error is simply the application of a rule in some situation where another rule claims it doesn’t belong, mistakes can be treated as reciprocal interference in each other’s freedom—rather than striving for the transparency of traditional notions of dialogue, we would embrace a more digressive model of discourse in which I try to follow, refine and enhance the rules I take the other to be following. Mistakes, in this case, can be revelatory, as they put forward other rules, which can always be articulated in some way.

An acknowledgement of our originary mistakenness would radically transform our attitude toward risk, and there is little today that is in more desperate need of transformation. It would be very easy to see White Guilt as an indemnification policy against the risks involved in resentments held by anyone not firmly invested in the existing system, and the increasing terror of risk of any kind can be seen across all our institutions, to the point where it is nearly paralyzing us. The realization that everything is interconnected intensifies the fear that any mistake can bring everything down, but it could just as easily lead us to notice all kinds of redundancies and back-ups that are also part of our interconnectedness. The real threats to the market system are the desperate attempts to avoid its breakdown—I would go so far as to assert that no such thing would have happened if the government had just stayed out of the financial meltdown in September 2008. Enormous amounts of wealth would have been lost, but before too long people would have been buying and selling, lending and borrowing, saving and investing again, perhaps first of all on the margins of the current system dominated by the alliances between the government regulators and the huge financial institutions. Accepting our originary mistakenness will eliminate the terror of contagion, contamination and defilement, in its contemporary form of various “domino theories,” which tell us that if one crucial piece goes down it will bring everything else down with it. Even if it does, something will get up again, and we can put our energies into that inevitably risky something.

The insight that we are fundamentally mimetic beings should put the question of error or mistakenness at the top of our concerns. When, after all, is imitation error free? When does one ever get the model “right”? On the other hand, who determines whether I have done so or not? If you are too close to the model, you risk unwitting parody, or a shameful failure to grasp the setting within which the model has acted, a setting different from the one in which you are to act. If you are too far then the model won’t be recognizable in your actions at all, and if your desire is set by the model, you will fail to gratify it. But determining what is just right must be left to the averaging out of subsequent imitations and representations of imitations until a norm emerges which enables us to share in the resentment towards mistakenness. Needless to say, mistakenness will corrode that normativity from the very beginning.

But it is not so obvious that it is meaningful to say that imitations can be mistaken, because there is no natural norm against which imitation can be measured, and hence mistakenness must be located on the originary scene as well. The constitutive mistake of imitation, I would suggest, is taking the model as origin—a mistake, of course, because the model is himself just imitating someone else. The more completely we emulate the model, constructing a formal unity out of his approach to the object, the more his relation to the object appears self-generating and self-contained. This mistake is what makes the sign both necessary and possible, because only then could the model’s abdication of his claim to the object become an act for me to imitate as well.

Mistaking the model as origin can take two forms: one, assuming the model is blithely unaware of you; two, assuming the model is dead set against you. In the first case, blithe unawareness, the self-contained model seems not to attend to the arrangement of attention amongst the others—their attention simply automatically follows his own, including everyone but yourself. In the second case, that of being dead set against you, everyone’s attention, following the model’s, is directed towards the means of your exclusion. The two possibilities represent the extremes of an absence of attention and a dangerous intensity of attention. The imitator oscillates between these two attitudes toward the model, but they crystallize into the aim of interposing oneself between model and object—that would both force the model’s attention, and turn his antagonism toward you back towards him, in such a way that you get the drop on him. The form of the imminent confrontation on the originary scene in this construal, then, entails each figure getting a little in between the other and the object, and that is what would bring things to a standstill: there’s no way you can get in between the other and the object if he is positioning himself in between you and the object. But this standstill would involve ongoing adjustment, as each makes the mistake of seeing the other move toward the object and of appearing to move toward the object while really trying to block the other. And the only way of ending the stand-off is for someone to mistake the gesture for an invitation to partake of the object, and initiate the sparagmos. This is originary mistakenness and our originary defilement: to be irremediably in between, tainting the origin that being in between places us at.

Accepting originary mistakenness implies accepting normativity as well. Indeed, a mistake is only a mistake once it has been disclosed as such to a normative intelligence. Norm and error emerge simultaneously. The mistake is the sign of infinite desire and therefore also of incalculable danger—the norm is the resentment of the center that contains this danger by marking the error as such and refining the sign so that it fixes the mistake: just as someone might exaggerate the correct pronunciation of a word in response to someone’s mistake. All idioms, however informal or idiosyncratic, distinguish between norm and error. If anything, the “grammarians” defending certain forms of slang may be more demanding than the worst martinet in the classroom: think of the likely consequences for a gang member unable to use the group’s jargon properly or any normal teenager who doesn’t deploy words like “whatever” or “dude” properly or use the exclamation “really?” with the right intonation.

I don’t see the recuperation of “defilement” as a way of displacing the ethical advances made by concepts like “shame,” “sin” and “guilt.” But those concepts are rather problematic, and the introduction of defilement as mistakenness might help us to see why. We assert that some should feel ashamed of themselves, that they are right to feel guilty, that they have sinned, in each case imputing free will to the doer. It rarely makes sense, though, to assert that someone “shouldn’t” have made a mistake—it’s such a tautology that the statement seems obvious but trivial and question-begging. Even more, the various rules of human interaction are so multifarious, complex and shifting that, if we trained our attention upon all conceivable mistakes, we would hardly be able to attend to anything else. That is why the match between free will and shame, sin or guilt is invariably imperfect: people often experience these feelings even when there is no good reason for them (not to mention not experiencing them when they have good reason to do so). On the other hand, what is a good reason, and how do we know? We end up attributing originarity, or infinity, to such emotions—we are more guilty, more steeped in shame, than we can ever know or begin to make amends for. There is something “there” before we transgress, even in our hearts: our mistakenness regarding appropriate regard for the center, our confusion of our desire with the resentment of the center.

It is interesting that people today often speak about making mistakes when previously they would have spoken about doing wrong. (Consider the by now familiar political locution: “Mistakes were made.”) On the individual level it’s probably an attempt to minimize responsibility—after all, we all make mistakes! At the same time, though, it does open one’s conduct to a more thorough inspection—if I have done wrong, I am expected to suffer the consequences and make it right, and if I do, it’s no one else’s concern and, more broadly, we have well established institutions and procedures for dealing with wrongdoing in the form of shame, sin and guilt; if I’ve made mistakes, an inspection of the entire scene is called for, because the consequences of those mistakes may have spread without limit. Noting the mistakes underlying the sinful, shameful or guilty behavior provides a means, then, for reintegrating the “offender” back into the community by re-staging his relation to the model he “mistook,” and even if punishment and restitution are necessary to fix the mistakes, a study of the mistakenness at root will help construct the idioms of correction. If we sense that some feel “too” guilty, or ashamed for no good reason, perhaps it is because their mistakenness has been insufficiently attended to—in other words, we may not have asked what model they take themselves to be following, and how. And noting the mistakenness that just happened to broach the threshold of visibility might further make visible other mistakings of the shared model—and one kind of mistake, of course, may be failing to insist sufficiently upon normativity at the “right” time.

Originary mistakenness provides an alternative heuristic to what I have been calling the “grammarian” one, which presupposes a shared model and measures and punishes deviations from it: something will be mistaken in any utterance or gesture (some context overlooked, some shifting of emphasis askew, some possible response unanticipated), and if we train ourselves to attend to that mistakenness then much that is invisible in the utterance or gesture becomes visible. What has become visible is the idiomatic character of all semiosis: we can identify what is formulaic in a free expression, and free in a formulaic one: mistakes break up the cliché and the commonplace. I have seen students speak of “loosing their focus”; of being “weary [i.e., wary] of nationalistic excesses”; of some “point” being a “mute” one; of two functions “complimenting” each other and much more (any teacher can no doubt generate many examples of their own). I recently heard Sarah Palin say that the excitement in conservative circles regarding the upcoming elections was “palatable.” In these cases, where near homonyms occupy overlapping semantic domains (if your focus is loosened, aren’t you in danger of losing it; aren’t we wary of nationalistic excess because we are weary of its consequences; if a point is moot, in the sense of no longer practically relevant, isn’t it for all intents and purposes—or, to cite another mistake, “for all intensive purposes”—”mute”; wouldn’t something that complements you be worthy of being complimented; and, isn’t “palpable” excitement also quite “palatable”?) we have the articulation of the iconic and arbitrary, oral and written, dimensions of language, and in these articulations we can locate the generation of new idioms.

Mistakenness raises the question of pedagogy, and any cultural problem, being a question of the transmission of models, is a problem of ensuring the rough equivalence of the new generation to the models put forward for them; but also, of course, of providing them with the capacity to mold those models so as to address novelties. We can think about the “drift toward arbitrariness” as an ongoing process, one that simultaneously involves the re-creation of spheres of iconicity in grammatical, phonetic and gestural forms and that is cultural as well as linguistic, since the drift is determined by the entrance of outsiders into the sign community (including the new generation of outsiders, or children). The forms of mimetic pedagogy we have inherited, ultimately from the Greeks, relied upon an intensified iconicity carved out of the spread of arbitrariness: you become like the teacher, who models a particular mode of inquiry or attentiveness, and thereby extricate yourself from the chaotic world of desire and undirected resentments. I don’t think that mode of pedagogy, which treats mistakes as shameful because they introduce drift into the pedagogical arena, works anymore. Perhaps a pedagogy that doesn’t know where the student is going, that simply places before the students the basic questions, practices and materials constitutive of a disciplinary space, and takes mistakes, the ever-generative mismatch between model and pupil, as its point of departure, will be able to accept more arbitrariness and create more minimal forms of iconicity. Such a pedagogy would be interested in the most minimal conditions under which the mistaken and the normative could share the same object and co-regulate their practices so as to keep it in view; and such a pedagogy might be able to move through less institutionalized cultural spaces and conjoin attentions in new ways.

More specifically, it might be noted that all the mistakes I pointed to a paragraph back involved the “infection” of writing by speech: as linguists like Roman Jakobson and Dwight Bolinger(3) have argued, this is a very common feature of language: words that sound alike tend to converge on similar meanings, and words with similar meanings tend to become closer in sound. Mistakenness, then, is both the drift toward arbitrariness and the restoration of the iconic dimension of language in the face of that drift. The question of error pervades language from the very beginning, I have suggested, but becomes much more explicit once languages become written and standardized. The mistakes I have mentioned, whose occurrence is, perhaps, accelerated by the use of spell check, reverse the process of standardization, which involves the protection of the written language from the contamination of speech—resisting constantly changing punctuation, slowing the drifts of meaning, marginalizing slang, and so on.

As Eric Havelock, Walter Ong and David Olson have shown, print culture institutes ways of thinking radically opposed to those of oral culture. Writing segments language—into sounds, syllables, words and sentences—and in doing so also shows us that the reality made available by language can also be segmented—and therefore analyzed, studied, modified and recombined. As can the mental habits that enable us to treat reality. Education, in print culture, means the protection of the habits of print from the encroachment of the habits of orality: the formulaic, the repetitious, the additive, the literalistic and overly metaphorical, and so on. But there is no way of systematically presenting and familiarizing students with all the possible ways in which orality can contaminate writing—the successful student is the one who immerses himself in the culture of writing by closely following the models presented to him by the educational system and the culture at large.

But what if the period in Western history in which one could assume that an ever-increasing number of young people desired such immersion has ended—in part because the basic minimum of literacy needed for most career paths is readily available to most, in part because of the proliferation of electronic technologies that render traditional literacy less attractive and in some cases obsolete? Rather than mounting futile defenses against the rising tide of neo-oral barbarism, we might reify the boundary between orality and literacy as a way of preserving and enhancing the intellectual habits indebted to the latter. As Michael Polanyi argues, a better guide to the generation of knowledge-creating spaces (disciplines) than abstract principles and concepts (which we seek to define, clarify and calcify) is the maxim, a quintessentially oral mode of conserving experience and knowledge. Unlike concepts, which presuppose privileged members of the discipline entrusted with the purity of conceptual applications (method), maxims, or “rules, the correct application of which is part of the art they govern,” include a rule-of-thumb, collaborative component that encourages innovation: “maxims can function only . . . within a framework of personal knowledge” (31). The encrusted maxims of traditional society, of course, have little credibility today; what might be productive, though, is the construction of maxims out of the confusion and provisional restoration of the boundary between oral and literate. My little takes on the mistakes I listed above were already embryonic maxims: say, “loosen it or lose it”; “compliment and you will find you have complements”; “if you don’t moot the point it will become moot and you will find yourself mute”; etc.
Such maxims are the basis of new idioms of inquiry that establish disciplinary spaces, and it is in and through such spaces that long chains of declarative sentences emerge and issue in shared imperatives and ostensives—and it is through such articulations of declaratives with imperatives and ostensives that culture is created and sustained.

If we were to treat mistakes as anomalies around which disciplinary spaces, idioms and maxims were to be created, it follows that we would notice a lot more mistakes (rather than politely overlooking them most of the time, as we do now in social settings, or ridiculing or punishing them in more competitive environments); indeed, one could even imagine the perverse result that mistakes, and a proliferation of them, would constitute marks of privilege. This final result, though, is not really possible, because deliberate mistakes would no longer be mistakes—they would, rather, be a kind of rule following that would generate its own distinctive modes of mistake making and fixing. If deliberate mistakes are impossible, even unthinkable, what is in fact easily conceivable is the setting of tasks in such a way that mistakes are far more likely. If you ask someone (in a school assignment, in an interpersonal confrontation, in a piece of performance art, or, as publics ask of their representatives) to act simultaneously according to two different rules, they are bound to err according to at least one of them. What they will also do, though, is generate at least the preliminaries of a new, idiomatic, rule that would govern a new set of practices. I suspect emergent, post-millennial cultures, if we live to see them, will be able to tolerate a great deal of planned mistakenness, even if complete tolerance is no more desirable than the saturation of all social sites with ritual sacrality.

Beyond the most primitive and stereotyped acts, when we speak of mistakes we are speaking of unintentionally violating rules. Rules are notoriously hard to define and describe—for one thing, they always presuppose certain conditions and involve exceptions, so you need meta-rules for determining when the rules apply; for another, much of our knowledge of rules is how-to, tacit knowledge that can’t be made explicit. But this just means that rules, at least the rules of language and everyday interaction (as opposed, say, to the artificial rules of games), operate, to a great extent, below the level of declarative statements, in the area of ostensives and imperatives. The beginnings of a rule would lie in the extension of an ostensive into an imperative: something one points to along with actual or possible others “orders” one to carry out some act. The most economical way of thinking about such an imperative is that it is a command to preserve the possibility of issuing that ostensive another time. When we say “that ostensive,” though, we don’t just mean the same object, but the same aspect of the object in the same kind of critical situation—but all of that can’t be reproduced, so “that ostensive” will be progressively refined with each instance of obedience to the imperative (and the same qualifications apply to “the imperative” as to “that ostensive”). This “refinement,” in turn, must mean that we assume a confirmatory ostensive following the obeyed imperative, confirming, that is, that the imperative has in fact been obeyed.

A rule, then, is the iterable articulation of an ostensive, a derived imperative, and a confirmatory ostensive. If we didn’t share the same ostensives we wouldn’t recognize each other’s signs anyway, so our focus should be on the imperative and concluding ostensive: mistaking the ostensive-imperative link would be making an obscene joke at a family gathering, in front of the kids; mistaking the imperative-ostensive link would be laughing at the joke. In both cases, we have what we might call a “dangling” imperative—an acted-upon imperative without ostensive grounding or confirmation (one person laughing at the obscene joke doesn’t confirm the imperative to make such a joke—the general silence confirms the lack of confirmation). The originary grammarian expels such an imperative from the scene by issuing an unquestionably grounded and confirmable replacement. Those who acknowledge our originary mistakenness, though, seek to supply the ostensives that might ground and confirm the imperative—for example, by integrating the obscene joke into family lore, while perhaps categorizing it with analogous, anomalous instances so that its mistakenness would not simply be erased; or, more riskily, by joining in the laughter at the joke and turning it into a challenge to the bounds of propriety on that occasion. These kinds of moves are modes of deferral operating at the most preliminary cultural level—deferring the punishment customarily issued and the purging ordinarily demanded for mismatched imperatives and ostensives.

The actions of one can contaminate or defile the whole to the extent that we all share the same “thing,” as, for example, one infected person will poison a shared food or water source—that is why the notion of defilement loses its power in a market society where we exchange things in increasingly indirect ways. White Guilt, our contemporary mode of defilement, derives from our sharing the non-sacrificial, which is to say, normalized, removal of the Jews from the scene during the Holocaust—according to the Auschwitz theology which at the very least shadows all contemporary thought, if we aren’t victims or perpetrators we are bystanders, by however many degrees. To put it crudely, the Nazis offered a solution to the problem of our modernity(4), and so we all share complicity in the crime—except to the extent that we actively and overtly repel all the “slippery slopes” that led to the crime: not just anti-Semitism (ultimately not even that, because anti-Semitism was just a “symptom”), but nationalism, individualism, institutional loyalty, normativity, or any other way in which I take a piece for myself or some “we” at the however indirect expense of some “them” and, even worse, justify it by some appeal to nature or necessity. White Guilt is an experience of defilement because it assumes that only a universal sharing of some common substance will prevent the splitting up into individuals and groups that would renew the process of scapegoating.

Universal mistakenness respects the anthropological intuition embodied in White Guilt: not only in spite of, but because of the advanced market economy, there are collective crimes, resulting from shared fantasies about the common good as de-contamination, that we all participate in; even more, those fantasies are of a scene in which everyone circulates the sign with equal efficacy, a kind of fantasy perpetuated by metaphysics and the Enlightenment faith in shared understanding through unhindered linguistic exchange. If we all spoke with each other freely and frequently, our discourse would approximate each other’s—we’d pronounce alike, we would recognize the same commonplaces and gestures, we would approach a shared sense of correctness and refinement. But there would still be lots and lots of mistakes, because that’s how signs circulate, and it is precisely as discourse becomes freer and more frequent across accustomed boundaries that we will be choosing whether to enforce grammar or to participate in new idioms by undergirding dangling imperatives.

In other words, aside from the post-genocidal collective guilt we all share, we all partake of language. No one can ever have more than a piece of language, under strictly regulated conditions; at the same time, language change results from what everyone does with their little piece. I can’t decide whether and how nouns and verbs agree in a sentence any more than any member of the group can decide upon the size of the kill brought home for dinner, or the number of the group itself—but language also reaches into the gift and market economies and offers innumerable possibilities for innovation. An ethics or cultural politics of originary mistakenness would seize on mistakes as possibilities for innovative turns, for, we might say, husbanding our resources for deferral; while, at the same time, proposing new idioms that would carry with them norms that, in their collisions with other normative centers, would create new mistakes. In a sense, originary mistakenness points us to a practice that I believe is unprecedented: taking responsibility for the historicity of our linguistic being through deliberate strategies of inclusion and renewal.

Works Cited

Bauman, Zygmunt. Modernity and the Holocaust. Ithaca, NY: Cornell University Press, 1989.

Bolinger, Dwight. “The Sign Is Not Arbitrary.” Thesaurus: Boletín del Instituto Caro y Cuervo 5 (1949): 52-62.

Gans, Eric. The Origin of Language: A Formal Theory of Representation. Berkeley, Los Angeles: University of California Press, 1981.

Jakobson, Roman, and Linda Waugh. The Sound Shape of Language. Berlin: Mouton de Gruyter, 1987.

Magnus, Margaret. Gods of the Word: Archetypes in the Consonants. Kirksville, MO: Thomas Jefferson University Press, 1999.

Olson, David. The World on Paper: The Conceptual and Cognitive Implications of Writing and Reading. Cambridge: Cambridge University Press, 1996.

Polanyi, Michael. Personal Knowledge: Toward a Post-Critical Philosophy. New York and Evanston: Harper & Row, 1958.

Ricoeur, Paul. The Symbolism of Evil. New York: Harper & Row, 1967.

Sapir, Edward. Selected Writings in Language, Personality and Culture. Edited by David G. Mandelbaum. Berkeley: University of California Press, 1985.

Tomasello, Michael. Origins of Human Communication. Cambridge, MA: MIT Press, 2008.


1. I am working with Eric Gans’s discussions of White Guilt as the “guilt of the unmarked toward the marked”: see, in particular, his Chronicles of Love & Resentment 310, 311, 313, 316, 320 and 323.

2. See Gans, 98-188.

3. See, in particular, Jakobson and Waugh, 1987, and Bolinger, 1949. For a more radical argument for “phonosemantics,” see Magnus.

4. Zygmunt Bauman’s Modernity and the Holocaust offers a powerful account of totalitarianism and genocide as implicated in modern concerns with orderliness and (political and physical) cleanliness—in other words, in the desire to remain “unmarked.”